
The different types of Machine Learning: The basics you need to know

by Sanjeev Kapoor 19 Oct 2018

Machine Learning (ML) and Artificial Intelligence (AI) are undoubtedly two of the greatest trends of our time, even though they have been around for over two decades. The recent surge of interest in ML and AI is largely due to advances in computing and storage, which facilitate the fast and cost-effective processing of arbitrarily large amounts of data, including real-time data streams with very high ingestion rates. Such advances enable the development and deployment of highly efficient deep learning algorithms, which allow machines to reason in ways similar to the human mind. This is how Google’s AI engine recently managed to defeat a Chinese champion at the game of Go. However, despite these capabilities, the basic models by which machines learn have remained largely the same over the last few decades.

In principle, a machine can learn by observation, given a sufficient amount of data. This is not much different from humans, who also learn by observation. Machines are able to identify recurring patterns within the datasets given to them, which they can later associate with rules and other forms of knowledge representation. There are three main ways by which machines are trained to produce such rules and knowledge: supervised learning, unsupervised learning and reinforcement learning.


Supervised Learning

Supervised learning relies on known examples of input and output patterns. Based on these examples, it trains a machine to learn how to map the inputs to the outputs. In most cases, an input object is represented as a vector comprising several parameters, and is mapped to an output vector of one or more values. Using the training examples, a supervised learning algorithm can infer rules or other forms of mathematical functions that effectively map the inputs of the examples to their corresponding outputs. This function can then be applied to new examples, i.e. to identify the output for a new, previously unseen instance. In this way, supervised learning functions generalize and can handle new situations in a plausible manner, based on the past experience captured in the training samples. In several cases, this is also called “concept learning”.



The most prominent example of a supervised learning technique is a classification engine, which learns to categorize input instances (e.g., vectors) into one of several given output classes. A set of training examples containing various instances and the classes to which they belong is typically used to train a classifier and extract a proper classification function. In this case, the training examples are labeled, i.e. each contains the class to which the given instance is assigned.
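The idea of learning a mapping from labeled examples can be sketched with one of the simplest classifiers, a 1-nearest-neighbour rule: a new vector receives the label of the closest training vector. The data points and labels below are invented purely for illustration.

```python
# Minimal supervised classification sketch: 1-nearest-neighbour over
# labeled training vectors (toy data, for illustration only).

def nearest_neighbour(train, labels, query):
    """Return the label of the training vector closest to `query`."""
    def dist(a, b):
        # Squared Euclidean distance between two vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train)), key=lambda i: dist(train[i], query))
    return labels[best]

# Labeled training examples: each input vector comes with its class.
train = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.2), (4.8, 5.1)]
labels = ["small", "small", "large", "large"]

print(nearest_neighbour(train, labels, (1.1, 1.0)))  # small
print(nearest_neighbour(train, labels, (5.1, 5.0)))  # large
```

The classifier never sees the query points during training; it generalizes from the labeled examples, which is exactly the input-to-output mapping described above.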


Based on this general description of supervised learning, we can make the following important inferences:

  • Data Overfitting Challenges: The training examples cannot always cover the entire spectrum of real-world cases. Rather, they capture a mini-world that is a subset of the real world in a specific timeframe and for a limited set of use cases. Supervised learning is therefore susceptible to problems associated with the quality and completeness of the data, notably the so-called overfitting problem, where a model fits its training data too closely and fails to generalize. To alleviate this problem, data scientists exploit domain knowledge when building their mapping functions, i.e. they rely on experts who validate the soundness of the mapping functions for use in the real world.
  • Amount of Training Data: The more training examples are available, the higher the likelihood of building effective supervised learning systems. You may have wondered why and how IT giants like Google, Apple, and Amazon are able to build very effective ML and AI systems. One of the primary reasons is that they work with massive amounts of data, usually far larger than the training datasets available to competitors.
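The overfitting trap in the first point can be made concrete with a deliberately extreme "model" that simply memorises its training set (toy data, invented for illustration): it scores perfectly on the examples it has seen, yet has no ability to generalize to anything else.

```python
# Overfitting in the extreme: a lookup-table "model" that memorises its
# labeled training examples (toy data, for illustration only).

train = {(0, 0): "A", (0, 1): "A", (1, 1): "B"}

def memoriser(x):
    # Perfect recall on seen inputs; gives up on anything unseen.
    return train.get(x, "unknown")

# Accuracy on the training set is a perfect 1.0 ...
train_accuracy = sum(memoriser(x) == y for x, y in train.items()) / len(train)
print(train_accuracy)     # 1.0

# ... but the model cannot handle a new, unseen input at all.
print(memoriser((1, 0)))  # unknown
```

Real models fail less obviously than a lookup table, but the lesson is the same: performance on the training data alone says little about performance in the real world.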


Unsupervised Learning

Unsupervised learning is another type of machine learning technique, which learns from “unlabeled” training examples, i.e. data that have not been labeled or assigned to a specific category. Instead of relying on labels to identify a mapping of inputs to outputs, unsupervised learning techniques attempt to identify data items that look similar to each other. In this way, they assign new instances to the same group to which other instances with similar characteristics belong.

While unsupervised learning is used in many applications (notably summarization and exploratory analysis), its most classical use is the clustering of instances into groups based on their commonality. Clustering involves grouping a set of objects in such a way that objects in the same group (a “cluster”) are more similar to each other, according to some metric, than to those in other groups (other “clusters”). In essence, clustering techniques group items by similarity, without knowing the groups in advance from any “labeling” of the training examples.

There are many popular unsupervised learning techniques, such as K-means clustering, self-organizing maps (SOM) and adaptive resonance theory (ART) algorithms. In general, unsupervised learning algorithms perform more complex processing tasks than supervised learning ones. As such, they are more appropriate for identifying and classifying complex patterns, such as the objects seen by a self-guided vehicle.
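The K-means algorithm mentioned above can be sketched in a few lines for one-dimensional points (toy data, chosen for illustration): alternate between assigning each point to its nearest centroid and recomputing each centroid as the mean of its cluster.

```python
import random

# Bare-bones K-means clustering sketch on 1-D points (toy data).

def kmeans(points, k, iterations=10, seed=0):
    random.seed(seed)
    # Initialise centroids from k distinct data points.
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[i].append(p)
        # Update step: each centroid moves to its cluster's mean
        # (an empty cluster keeps its previous centroid).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

points = [1.0, 1.1, 0.9, 8.0, 8.2, 7.9]
print(kmeans(points, k=2))  # two centroids, near 1.0 and 8.0
```

Note that no labels are involved anywhere: the two groups emerge purely from the similarity (here, distance) between the points, which is the defining trait of unsupervised learning.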

Nevertheless, unsupervised learning presents several challenges, one of them being the difficulty of determining whether an algorithm is producing proper and acceptable results. This is largely because unsupervised learning lacks concrete metrics of a model’s accuracy, and therefore tangible insights into whether the model is appropriate. By contrast, in supervised learning data scientists use concrete accuracy metrics (e.g., precision, recall, classification error rates) that guide them in deciding whether a model is acceptable or whether further development is required. Hence, the unsupervised learning process can in several cases be subjective, which makes the integration of unsupervised learning into real-life applications even more challenging. In most cases, there is a need to design and execute A/B tests as a means of validating the usefulness of unsupervised learning algorithms in real deployments.
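The supervised-learning metrics mentioned above are easy to compute once predictions can be compared against true labels. The sketch below computes precision and recall from invented toy labels: precision is the fraction of predicted positives that were truly positive, recall the fraction of true positives that were found.

```python
# Computing precision and recall from true vs. predicted labels
# (toy data, for illustration only).

def precision_recall(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fp), tp / (tp + fn)

y_true = [1, 1, 1, 0, 0, 0]   # ground-truth labels
y_pred = [1, 1, 0, 1, 0, 0]   # a classifier's predictions
precision, recall = precision_recall(y_true, y_pred)
print(precision, recall)      # both 2/3 here
```

It is precisely this kind of objective score that unsupervised learning lacks: without ground-truth labels there is no `y_true` to compare against, which is why A/B tests are often the only practical validation.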


Reinforcement Learning

Reinforcement learning (RL) is the third prominent type of machine learning, based on developing software programs that behave in a way that maximizes some cumulative incentive or reward. It has its roots in cooperative algorithms from other disciplines, such as game theory, multi-agent simulation, operations research and genetic algorithms. For example, the popular dynamic programming techniques in operations research aim at optimizing multiple parameters at the same time.

The main difference between reinforcement learning and supervised learning is that reinforcement learning focuses on overall performance rather than on matching specific inputs to specific outputs based on training data. In doing so, reinforcement learning attempts to find a balance between exploration (of uncharted territory) and exploitation (of current knowledge). Reinforcement learning is used in specialized areas such as neuroscience, bidding and advertising, as well as in many resource allocation problems across industries. However, its overall potential has in several cases been questioned.
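The exploration-versus-exploitation trade-off can be sketched with the classic two-armed bandit, using an epsilon-greedy agent: with small probability epsilon it explores a random arm, otherwise it exploits the arm with the best reward estimate so far. The reward probabilities below are invented for illustration.

```python
import random

# Epsilon-greedy agent on a two-armed bandit: a minimal reinforcement
# learning sketch (reward probabilities are toy values).

def run_bandit(arm_probs, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(arm_probs)     # how often each arm was pulled
    values = [0.0] * len(arm_probs)   # running reward estimates
    for _ in range(steps):
        if rng.random() < epsilon:    # explore: try a random arm
            arm = rng.randrange(len(arm_probs))
        else:                         # exploit: use current knowledge
            arm = max(range(len(arm_probs)), key=values.__getitem__)
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        # Incremental update of the arm's average reward.
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = run_bandit([0.2, 0.8])
print(counts)  # the better arm (index 1) ends up pulled far more often
```

Note that no input/output pairs are ever provided; the agent discovers the better arm purely from the cumulative reward signal, which is the defining contrast with supervised learning described above.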


For simpler problems, a single supervised or unsupervised learning algorithm is often sufficient. Even basic classification models (e.g., models based on Bayesian statistics) can work very well and yield useful results. However, for more complex problems, data scientists have to combine techniques and algorithms from all three ML types in complex pipelines in order to achieve the expected results.

2020 IT Exchange, Inc