4.3.5. Boosting: AdaBoost

Given N training data points and M classifiers, the AdaBoost algorithm proceeds as follows:

  1. Initialize weights w_i = \frac{1}{N}

  2. For m=1..M:

    1. Fit the classifier G_m to the training data using weights w_i
    2. Compute the classifier error

    \text{err}_m = \frac{ \sum\limits_{i=1}^N w_i I\{ y_i \neq G_m(x_i)\} }{ \sum\limits_{i=1}^N w_i}

    3. Compute the classifier weight

    \alpha_m = \log \left( \frac{ 1-\text{err}_m }{ \text{err}_m } \right)

    4. Recompute all data weights w_i

    w_i = w_i \exp{ \left( \alpha_m I\{ y_i \neq G_m(x_i) \} \right) }

  3. The combined classifier is

G(x) = \text{sign} \left( \sum\limits_{m=1}^M \alpha_m G_m(x) \right)

Interfaces

class ailib.fitting.AdaBoost

Bases: ailib.fitting.model.Model

AdaBoost algorithm.

Given M (weak) classifiers, AdaBoost iteratively trains each of them, emphasizing the samples misclassified so far. For a formal description, see the algorithm above.
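The training loop can be sketched in plain NumPy. Decision stumps on one-dimensional inputs stand in for the weak classifiers here; all function names (`fit_stump`, `adaboost`, and so on) are illustrative and are not part of the ailib API.

```python
import numpy as np

def fit_stump(X, y, w):
    """Fit a weighted decision stump: predict pol for x >= thr, else -pol."""
    best = (np.inf, None, None)
    for thr in np.unique(X):
        for pol in (1, -1):
            pred = np.where(X >= thr, pol, -pol)
            err = np.sum(w * (pred != y)) / np.sum(w)
            if err < best[0]:
                best = (err, thr, pol)
    return best  # (weighted error, threshold, polarity)

def stump_predict(X, thr, pol):
    return np.where(X >= thr, pol, -pol)

def adaboost(X, y, M):
    N = len(X)
    w = np.full(N, 1.0 / N)                    # 1. uniform initial weights
    models = []
    for m in range(M):
        err, thr, pol = fit_stump(X, y, w)     # 2.1/2.2: fit G_m, compute err_m
        err = np.clip(err, 1e-10, 1 - 1e-10)   # guard the log against err = 0 or 1
        alpha = np.log((1 - err) / err)        # 2.3: classifier weight alpha_m
        pred = stump_predict(X, thr, pol)
        w = w * np.exp(alpha * (pred != y))    # 2.4: upweight misclassified points
        models.append((alpha, thr, pol))
    return models

def predict(X, models):
    # 3. sign of the alpha-weighted vote
    agg = sum(alpha * stump_predict(X, thr, pol) for alpha, thr, pol in models)
    return np.sign(agg)
```

Note that the data weights are left unnormalized between rounds; the division by the weight sum inside `fit_stump` makes the error computation invariant to the overall scale.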

As the algorithm shows, the classifiers are required to support weighted samples. The classifiers have to be linked to the instance of this class before training; this is achieved through calls to the AdaBoost.addModel() method.


addModel(m)

Add a trainable model to the set of boosted classifiers.

The models are required to support weighted samples.

Parameter: m (Model) – The model to add.
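To illustrate the weighted-sample requirement, here is a minimal weak learner that could serve as such a model. The class and method names are hypothetical; ailib's actual Model interface may differ.

```python
import numpy as np

class WeightedStump:
    """Hypothetical weak learner that accepts per-sample weights when fitting."""

    def fit(self, X, y, w):
        # Choose the threshold/polarity pair with the lowest weighted error.
        best_err = np.inf
        for thr in np.unique(X):
            for pol in (1, -1):
                pred = np.where(X >= thr, pol, -pol)
                err = np.sum(w * (pred != y)) / np.sum(w)
                if err < best_err:
                    best_err, self.thr, self.pol = err, thr, pol
        return self

    def eval(self, X):
        return np.where(X >= self.thr, self.pol, -self.pol)
```

The weights genuinely change the fit: on the non-separable labels `[1, -1, 1, -1]` over `X = [0, 1, 2, 3]`, a uniform-weight fit sacrifices the point x = 2, while heavily upweighting that point makes the stump classify it correctly at the expense of others. This is exactly the mechanism AdaBoost relies on between rounds.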
err((x, y))

Return the distance between the target y and the model prediction at the point x.

Evaluate the model at data point x.

The outcome is the majority vote over the outcomes of all classifiers, each weighted by its coefficient \alpha_m (see the formal representation above).
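A small worked example with hypothetical numbers shows that the weighted vote can overrule a simple count majority:

```python
import numpy as np

# Hypothetical weights and votes of three classifiers at a single point x:
# two classifiers vote -1, but the first one carries most of the weight.
alphas = np.array([1.2, 0.4, 0.4])          # classifier weights alpha_m
votes = np.array([1, -1, -1])               # individual outcomes G_m(x)
decision = np.sign(np.sum(alphas * votes))  # 1.2 - 0.4 - 0.4 = 0.4, so sign is +1
```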


Apply the AdaBoost algorithm to the presented data set.

Parameter: data – Training data.
Returns: self
