The Evolution of Machine Learning and its Future in Healthcare

Machine learning has continued to develop in recent years and to spread within existing industries and into new ones. In the healthcare industry, machine learning management in clinical care continues to be explored and to grow. But machine learning hasn’t always been around, so let’s take a peek into the past and see how it evolved.

1763: This year marked the earliest foundation of what would become machine learning. Thomas Bayes’ work An Essay towards solving a Problem in the Doctrine of Chances, the origin of Bayes’ Theorem, was published two years after his death, having been amended and edited by his friend Richard Price.

1805: In this year, Adrien-Marie Legendre describes the “méthode des moindres carrés”, known in English as the least squares method. To this day, the least squares method is widely used in data fitting.

1812: In this year, Pierre-Simon Laplace publishes Théorie Analytique des Probabilités, in which he expands upon the work of Bayes and defines what is now known as Bayes’ Theorem.

1950: In 1950, Alan Turing proposed building a machine that could learn, a “learning machine”. This may be where the phrase “machine learning” originates.

1951: In 1951, the first neural network machine was built, and it was able to learn.

1952: In 1952, Arthur Samuel, working at IBM’s Poughkeepsie Laboratory, began writing some of the very first machine learning programs, starting with programs that played checkers.

1957: In this year Frank Rosenblatt invented the perceptron while working at the Cornell Aeronautical Laboratory. The invention of the perceptron generated a great deal of excitement and was widely covered in the media.

Fast-forward to the 21st century:

2010: In 2010, Kaggle, a website that serves as a platform for machine learning competitions, was launched.

2011: In 2011, IBM’s Watson beat human champions at the game show Jeopardy!. The machine used a combination of machine learning and natural language processing to play the game and win.

2012: In 2012, Google created a neural network that learned to recognize cats by watching unlabeled images from YouTube videos.

2014: In 2014, Facebook published a breakthrough in a project called DeepFace. The system used neural networks to identify faces with roughly 97% accuracy, rivaling human performance. Also this year, Google presented its work on a platform called Sibyl, a proprietary system for massively parallel machine learning used internally by Google to make predictions about user behavior and provide recommendations.

2016: In 2016, Google’s AlphaGo program became the first computer program to beat a professional player at the game of Go. The program used a combination of machine learning and tree search techniques. It was later improved as AlphaGo Zero and then, in 2017, generalized to chess and shogi with AlphaZero.

From 2017 to now, there have been more and more advancements in machine learning and artificial intelligence, such as the development of self-driving cars by Tesla, Google, and Apple. There has also been progress in applying machine learning and AI to robots, like the celebrity (and quite creepy) robot Sophia.

Machine Learning Engineering

I bet a lot of people wonder what machine learning entails and what kind of engineering goes into creating a machine learning program. The process of building one takes place in three stages: data processing, model building, and deployment and monitoring.

The second stage of building a machine learning program centers on the model of the pipeline. The model is the machine learning algorithm that learns to make predictions from input data. It is a very important aspect of machine learning and therefore a very important stage of the process. The model is also where deep learning lives. Deep learning is a subcategory of machine learning algorithms that uses multi-layered neural networks to learn complex relationships between inputs and outputs. The more layers a neural network has, the more complex the relationships it can capture.
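To make that concrete, here is a minimal sketch of the model-building stage using scikit-learn (a library mentioned later in this piece): a small multi-layered neural network wrapped in a pipeline. The synthetic dataset, layer sizes, and other settings are illustrative assumptions, not a recipe.

```python
# A minimal sketch of the model-building stage: a small multi-layer neural
# network trained inside a pipeline. Dataset and hyperparameters are assumed.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for real input data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers; adding layers lets the network capture more complex
# input-output relationships, at the cost of harder training.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```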

Despite the focus on deep learning at big tech companies’ AI research labs, most applications of machine learning at those same companies do not rely on neural networks and instead use traditional machine learning models. The most common models include linear/logistic regression, random forests, and boosted decision trees. These are the models behind friend suggestions, ad targeting, user interest prediction, supply/demand simulation, and search result ranking, among other services tech companies run.
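As a rough illustration, here is how those traditional models might be compared on the same task with scikit-learn. The synthetic dataset and hyperparameters are assumptions made for the example.

```python
# A hedged sketch comparing the traditional models named above on the same
# synthetic data; settings are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "boosted trees": GradientBoostingClassifier(random_state=0),
}

# Cross-validated accuracy for each model on the same task.
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")
```

In practice the winner depends on the data, which is one reason these simple, fast-to-evaluate models remain the default choice for so many production systems.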

There are other forms of machine learning that don’t use deep neural networks. These are often referred to as traditional statistical models. They have a more limited capacity to capture information about the training data, but despite being simpler, they work well for many applications where deep learning’s extra capacity would add nothing. Many machine learning engineers still use these traditional models for some tasks even when they have access to deep learning tools.

Even though the middle stage of machine learning is very important, the first and last stages matter just as much. A lot of time in machine learning is spent preparing data for models and monitoring the models that have been built. The first stage involves cleaning and formatting vast amounts of data to be fed into the model. The last stage involves careful deployment and monitoring of the model.
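For a sense of what that first stage looks like, here is a small sketch of cleaning and formatting data with pandas before it reaches the model. The file name and column names are hypothetical, chosen only to illustrate the kind of work involved.

```python
# A minimal sketch of the first stage: cleaning and formatting raw records
# before they reach the model. File and column names are hypothetical.
import pandas as pd

raw = pd.read_csv("patient_visits.csv")  # hypothetical raw data export

# Basic cleaning: drop duplicate rows, fill missing numeric values,
# and encode a categorical column as numbers the model can consume.
clean = (
    raw.drop_duplicates()
       .assign(age=lambda df: df["age"].fillna(df["age"].median()))
)
clean = pd.get_dummies(clean, columns=["department"], drop_first=True)

# The cleaned frame is what gets handed to the model-building stage.
print(clean.shape)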

And some of the tools engineers use to train these models are similarly well-worn. One of the most commonly used machine learning libraries is scikit-learn, which was released a decade ago. Another is Google’s TensorFlow, which is becoming more popular.

There are good reasons to use simpler models over deep learning. Deep neural networks are hard to train. They require more time and computational power, and they usually require different hardware, specifically GPUs. Getting deep learning to work is hard: it still requires extensive manual fiddling, a combination of intuition and trial and error. With traditional machine learning models, the time engineers spend on model training and tuning is relatively short. Ultimately, if the accuracy improvements that deep learning can achieve are modest, the need for scalability and development speed outweighs their value.
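To get a feel for that tradeoff, here is a rough sketch that times a simple linear model against a small neural network on identical data. The setup is an assumption for illustration, and the actual numbers will depend entirely on the machine and the dataset.

```python
# A rough sketch of the tradeoff above: wall-clock training time of a simple
# linear model versus a small neural network on the same synthetic data.
import time

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=20000, n_features=50, random_state=0)

for name, clf in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("neural network", MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=200)),
]:
    start = time.perf_counter()
    clf.fit(X, y)
    print(f"{name}: {time.perf_counter() - start:.1f}s to train")
```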