Ensemble Modeling: how to improve Machine Learning

Written by Alleantia Guru | Aug 13, 2021 12:43:26 PM

Ensemble Modeling is a technique used to improve the results of Machine Learning projects. 

Through a series of ensemble methods that use multiple models, Ensemble Modeling achieves better predictive performance than any of the individual models it is made up of.

A focus on the Ensemble Modeling technique

An example is the recognition and analysis of a certain type of data according to a given identification, using a set of simple rules that differ from one another.
Individually, no single rule can carry out our analysis process; only by applying them all together (as an ensemble) do we obtain the desired result.

Similarly, Ensemble Modeling puts together the predictions of different models with the aim of increasing their predictive power.

Ensemble Modeling Techniques:

  1. Bagging: this technique creates a set of classifiers that all have the same importance. At classification time each model casts a vote on the prediction, and the result is the class that receives the most votes. The classification results obtained on the real and test data are put together to draw a single conclusion about the classification (see the bagging sketch after this list).
  2. Boosting: unlike bagging, each classifier affects the final result with a certain weight. This weight is calculated from the accuracy error that each model commits during the learning phase. An iterative process is thus triggered, adding classifiers until the desired accuracy is reached or a maximum number of models is hit (see the boosting sketch after this list).
    Boosting tends toward overfitting, that is, it works well on the training and test data but less well on real data; in general, however, it performs better than bagging.
  3. Stacking: while in bagging the output was the result of a vote, stacking introduces a further classifier (called a meta-classifier) that uses the predictions of the other sub-models to carry out an additional round of learning (see the stacking sketch after this list).

    It is often possible to combine multiple models from the same algorithm, but better results can be obtained by using predictions provided by different algorithms.
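
As a minimal bagging sketch (not from the article), the example below trains many decision trees on bootstrap samples of a synthetic toy dataset and lets them vote with equal importance; it assumes scikit-learn 1.2 or later is available:

```python
# Minimal bagging sketch: equally weighted decision trees vote on the class.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=42)  # toy data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

bagging = BaggingClassifier(
    estimator=DecisionTreeClassifier(),  # base model; every copy has the same importance
    n_estimators=50,                     # number of voting classifiers
    random_state=42,
)
bagging.fit(X_train, y_train)
print("Bagging accuracy:", bagging.score(X_test, y_test))  # majority-vote result
```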
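
For boosting, a minimal sketch with AdaBoost (one common boosting algorithm, chosen here purely for illustration) shows the iterative process in which each added classifier is weighted by its accuracy in the learning phase, up to a maximum number of models:

```python
# Minimal boosting sketch: models are added iteratively and weighted
# according to the error each one commits in the learning phase.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)  # toy data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

boosting = AdaBoostClassifier(
    n_estimators=100,  # maximum limit of models for the iterative process
    random_state=42,
)
boosting.fit(X_train, y_train)
print("Boosting accuracy:", boosting.score(X_test, y_test))
```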
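
Finally, a minimal stacking sketch: sub-models built with different algorithms produce predictions, and a meta-classifier (a logistic regression here, as an illustrative choice) performs a further round of learning on top of them:

```python
# Minimal stacking sketch: a meta-classifier learns from the
# predictions of sub-models built with different algorithms.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=42)  # toy data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

stacking = StackingClassifier(
    estimators=[  # sub-models from different algorithms
        ("tree", DecisionTreeClassifier(random_state=42)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # the meta-classifier
)
stacking.fit(X_train, y_train)
print("Stacking accuracy:", stacking.score(X_test, y_test))
```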

Once we decide to use Ensemble Modeling to solve our problem, a question immediately arises:

Should we give all models the same weight in the prediction or not?

Actually, this is itself a machine learning problem: assuming a linear model, each weight multiplies the output of a particular model, and the goal is to find the weights that minimize the overall error. A sketch of this search follows below.
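
As a minimal sketch of that search (not from the article), the example below builds a soft-voting ensemble of two models and tries a small grid of candidate weights on validation data, keeping the pair that minimizes the error; the choice of models and grid is an illustrative assumption:

```python
# Minimal sketch of learning ensemble weights: try candidate weight pairs
# and keep the one that minimizes the error (maximizes validation accuracy).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=42)  # toy data
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

models = [
    ("tree", DecisionTreeClassifier(random_state=42)),
    ("logreg", LogisticRegression(max_iter=1000)),
]

best_weights, best_score = None, -1.0
for w in np.linspace(0.0, 1.0, 11):  # candidate weight for the first model
    ensemble = VotingClassifier(models, voting="soft", weights=[w, 1.0 - w])
    ensemble.fit(X_train, y_train)
    score = ensemble.score(X_val, y_val)
    if score > best_score:
        best_weights, best_score = [w, 1.0 - w], score

print("Best weights:", best_weights, "validation accuracy:", best_score)
```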

In conclusion, we can say that the two major benefits of Ensemble Modeling are better predictions and greater model stability, because diversification reduces the "noise" in the data.