Ensembles on Random Patches
Looking for more research papers to read, I scanned my Hands-On Machine Learning notes for the papers referenced there; this is one of them. These papers are mainly on machine learning and deep learning topics.
Reference: Ensembles on Random Patches Paper
Introduction
This paper considers supervised learning under the assumption that the available memory is small compared to the dataset size. This general framework is relevant in the context of big data, distributed databases and embedded systems. The paper investigates a very simple, yet effective, ensemble framework that builds each individual model of the ensemble from a random patch of data obtained by drawing random subsets of both instances and features from the whole dataset. Using decision-tree based estimators, the paper shows that the proposed method provides on-par accuracy while lowering memory needs, and attains significantly better performance when memory is severely constrained.
Notes
This paper considers supervised learning problems for which the dataset is so large that it cannot be loaded into memory. A previous paper proposed the Pasting method to tackle this problem by learning an ensemble of estimators individually built on random subsets of the training examples, alleviating the memory requirements since the base estimators are built on only small parts of the whole dataset. Another paper proposed learning an ensemble of estimators individually built on random subspaces, i.e., random subsets of the features, which can likewise be seen as a way to reduce the memory requirements of building individual models. This paper proposes to combine and leverage both approaches at the same time: learn an ensemble of estimators on random patches, i.e., on random subsets of both the samples and the features. The paper shows that this approach preserves accuracy comparable to other ensemble approaches that build base estimators on the whole training set, while drastically lowering the memory requirements and allowing an equivalent reduction of the global computing time.
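To make the distinction between the three sampling schemes concrete, here is a small numpy sketch (my own illustration, not from the paper): Pasting draws rows only, Random Subspaces draws columns only, and a random patch draws both.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20))  # toy dataset: 1000 samples, 20 features

def draw_subset(X, p_s=1.0, p_f=1.0):
    """Restrict X to a random fraction p_s of rows and p_f of columns."""
    rows = rng.choice(X.shape[0], size=int(p_s * X.shape[0]), replace=False)
    cols = rng.choice(X.shape[1], size=int(p_f * X.shape[1]), replace=False)
    return X[np.ix_(rows, cols)]

print(draw_subset(X, p_s=0.1).shape)           # Pasting:          (100, 20)
print(draw_subset(X, p_f=0.5).shape)           # Random Subspaces: (1000, 10)
print(draw_subset(X, p_s=0.1, p_f=0.5).shape)  # Random Patch:     (100, 10)
```

Only the drawn subset needs to reside in memory while its estimator is trained, which is where the memory savings come from.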
The Random Patches algorithm in this work is a wrapper ensemble method that can be described in the following terms: let $\mathcal{R}(p_s, p_f, D)$ be the set of all random patches of size $p_s N_s \times p_f N_f$ that can be drawn from the dataset $D$, where $N_s$ (resp. $N_f$) is the number of samples in $D$ (resp. the number of features in $D$) and where $p_s \in [0, 1]$ (resp. $p_f \in [0, 1]$) is a hyperparameter that controls the number of samples in a patch (resp. the number of features). That is, $\mathcal{R}(p_s, p_f, D)$ is the set of all possible subsets containing $p_s N_s$ samples (among $N_s$) with $p_f N_f$ features (among $N_f$). The method then works as follows:
1. Draw a patch $r \sim \mathcal{U}(\mathcal{R}(p_s, p_f, D))$ uniformly at random
2. Build an estimator on the selected patch $r$
3. Repeat steps 1-2 for a preassigned number $T$ of estimators
4. Aggregate the predictions by voting (in the case of classifiers) or averaging (in the case of regressors) the predictions of the $T$ estimators (a minimal sketch of this loop follows)
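Below is a minimal Python sketch of this procedure, assuming decision-tree base estimators and majority voting for classification; the function names and the iris example are my own, not the authors' code:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def fit_random_patches(X, y, n_estimators=50, p_s=0.5, p_f=0.5):
    """Steps 1-3: draw a random patch, fit a tree on it, repeat T times."""
    n, k = X.shape
    ensemble = []
    for _ in range(n_estimators):
        rows = rng.choice(n, size=max(1, int(p_s * n)), replace=False)
        cols = rng.choice(k, size=max(1, int(p_f * k)), replace=False)
        tree = DecisionTreeClassifier().fit(X[np.ix_(rows, cols)], y[rows])
        ensemble.append((tree, cols))  # remember which features this tree saw
    return ensemble

def predict_random_patches(ensemble, X):
    """Step 4: aggregate the T predictions by majority vote."""
    votes = np.stack([tree.predict(X[:, cols]) for tree, cols in ensemble])
    return np.apply_along_axis(
        lambda v: np.bincount(v.astype(int)).argmax(), axis=0, arr=votes
    )

X, y = load_iris(return_X_y=True)
ensemble = fit_random_patches(X, y)
print("training accuracy:", (predict_random_patches(ensemble, X) == y).mean())
```

For practical use, scikit-learn's `BaggingClassifier` implements the same scheme when both instances and features are sampled, e.g. `BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, max_samples=0.5, max_features=0.5, bootstrap=False, bootstrap_features=False)`, where `bootstrap=False` draws patches without replacement as in the paper.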
Results
The results show, first and foremost, that ensembles of randomized trees nearly always beat ensembles of standard decision trees. The paper therefore advocates that, as off-the-shelf methods, ensembles of randomized trees should be preferred to ensembles of standard decision trees. Overall, this study validates the Random Patches approach.