Gradient Boosting
Reviewing Gradient Boosting before going over the xgboost documentation.
Notes
Gradient boosting is a machine learning technique based on boosting in a functional space, where the target is pseudo-residuals instead of residuals as in traditional boosting. It gives a prediction model in the form of an ensemble of weak prediction models, i.e., models that make very few assumptions about the data, which are typically simple decision trees. When a decision tree is the weak learner, the resulting algorithm is called gradient-boosting trees; it usually outperforms random forest. As with other boosting methods, a gradient-boosted trees model is built in stages, but it generalizes the other methods by allowing optimization of an arbitrary differentiable loss function.
Gradient boosting combines weak "learners" into a single strong learner iteratively. It is easiest to explain in the least-squares regression setting, where the goal is to "teach" a model $F$ to predict values of the form $\hat{y} = F(x)$ by minimizing the mean squared error $\tfrac{1}{n}\sum_i (\hat{y}_i - y_i)^2$, where $i$ indexes over some training set of size $n$ of actual values of the output variable $y$:
- $\hat{y}_i$ = the predicted value $F(x_i)$
- $y_i$ = the observed value
- $n$ = the number of samples in $y$
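The objective above can be checked numerically. A minimal sketch with NumPy; the arrays are made-up numbers purely for illustration:

```python
import numpy as np

# Mean squared error between predictions y_hat_i = F(x_i)
# and observed values y_i over n samples.
y = np.array([3.0, -0.5, 2.0, 7.0])      # observed y_i (made up)
y_hat = np.array([2.5, 0.0, 2.0, 8.0])   # predicted y_hat_i (made up)
n = len(y)
mse = ((y_hat - y) ** 2).sum() / n
print(mse)  # 0.375
```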
If the algorithm has $M$ stages, then at each stage $m$ ($1 \le m \le M$) suppose some imperfect model $F_m$. In order to improve $F_m$, our algorithm should add some new estimator, $h_m(x)$. Thus

$$F_{m+1}(x_i) = F_m(x_i) + h_m(x_i) = y_i$$

or, equivalently,

$$h_m(x_i) = y_i - F_m(x_i).$$
Therefore, gradient boosting will fit $h_m$ to the residual $y_i - F_m(x_i)$. As in other boosting variants, each $F_{m+1}$ attempts to correct the errors of its predecessor $F_m$. The generalization to other differentiable loss functions comes from observing that these residuals are proportional to the negative gradient of the squared-error loss with respect to $F(x_i)$; fitting the negative gradient (the "pseudo-residuals") instead works for any differentiable loss.
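The stagewise procedure above can be sketched from scratch. This is a minimal illustration, not a reference implementation: as an assumption beyond the text, it uses depth-1 regression stumps on a single feature as the weak learners and adds a shrinkage factor `lr` (a standard practical refinement the derivation above omits):

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression tree (stump) to residuals r by
    exhaustively searching split thresholds on a 1-D feature x."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue  # skip splits that leave one side empty
        pred = np.where(x <= t, left.mean(), right.mean())
        sse = ((r - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def gradient_boost(x, y, n_stages=50, lr=0.1):
    """Least-squares gradient boosting: each stage fits a stump h_m to
    the current residuals y - F_m(x) and adds it to the ensemble."""
    F = np.full_like(y, y.mean(), dtype=float)  # F_0: predict the mean
    stumps = []
    for _ in range(n_stages):
        h = fit_stump(x, y - F)   # h_m fit to residuals y_i - F_m(x_i)
        stumps.append(h)
        F = F + lr * h(x)         # F_{m+1} = F_m + lr * h_m
    return lambda z: y.mean() + lr * sum(h(z) for h in stumps)
```

For example, fitting `gradient_boost(x, y)` to a noisy 1-D curve drives the training residuals toward zero as stages accumulate, which is exactly the error-correcting behavior described above.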
Many supervised learning problems involve an output variable $y$ and a vector of input variables $x$, related to each other by some probability distribution. The goal is to find some function $\hat{F}(x)$ that best approximates the output variable from the input variables.
Gradient boosting is typically used with decision trees of a fixed size as base learners. Gradient boosting can be used in the field of learning to rank. The commercial web search engines Yahoo and Yandex use variants of gradient boosting in their machine-learned ranking engines.