Pasting Small Votes for Classification in Large Databases and On-Line
Looking for more research papers to read, I scanned my Hands-On Machine Learning notes for the many papers referenced there; this is one of them. These papers are mainly on machine learning and deep learning topics.
Reference: Pasting Small Votes for Classification in Large Databases and On-Line (Paper)
Introduction
Many databases have grown to the point where they cannot fit into the fast memory of even large-memory machines, to say nothing of current workstations. If what we want to do is use these databases to construct predictions of various characteristics, then, since the usual methods require that all data be held in fast memory, various work-arounds have to be used. This paper studies one such class of methods, which give accuracy comparable to what could have been obtained if all the data could have been held in core, and which are computationally fast. The procedure takes small pieces of the data, grows a predictor on each small piece, and then pastes these predictors together. A version is given that scales to terabyte data sets.
Suppose that the database is a large collection of examples (x, y), where the x are input vectors and the y are class labels. What we want to do is use this data to construct a classifier that can accurately assign a class label to future inputs x. But the database is too large to hold in the memory of any available computer. What is shown in this paper is that the aggregation of many classifiers, each grown on a training set of modest size selected from the database, can achieve almost optimum classification accuracy. The procedure is referred to as pasting votes together, and it has two key items in its implementation:
- Suppose that up to the present, k predictors have been constructed. A new training set of small, fixed size is selected from the database, either by random sampling or by importance sampling. The (k+1)st predictor is grown on this new training set and aggregated with the previous k. The aggregation is by unweighted plurality voting. If random sampling is used to select the training set, the training set is called an Rprecinct and the procedure is called pasting Rvotes (R = random). If selection is done by importance sampling, the training set is called an Iprecinct and the procedure pasting Ivotes (I = importance). The importance sampling used samples more heavily from the instances more likely to be misclassified.
- An estimate e_k of the generalization error of the kth aggregation is updated. The pasting stops when e_k stops decreasing. The estimate of e_k can be obtained using a test set, but the implementation in the paper uses out-of-bag estimation (a rough code sketch of both items follows this list).
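To make the two items above concrete, here is a minimal Python sketch of pasting Ivotes, assuming scikit-learn decision trees as the base classifiers. The accept/reject rule (always keep instances the current aggregate misclassifies, keep correctly classified ones with probability roughly e/(1-e)) follows my reading of the paper; the held-out validation slice below merely stands in for the paper's out-of-bag error estimate, and all function names, parameters, and default values are illustrative, not taken from the paper.

```python
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def vote(models, X_query):
    """Unweighted plurality vote of the current aggregate of classifiers."""
    votes = np.array([m.predict(X_query) for m in models])
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])

def paste_ivotes(X, y, precinct_size=800, patience=5, seed=0):
    rng = np.random.default_rng(seed)
    # A held-out slice stands in for the paper's out-of-bag estimate of e_k.
    perm = rng.permutation(len(X))
    val, pool = perm[:len(X) // 10], perm[len(X) // 10:]
    trees, err, best_err, since_best = [], 0.5, 1.0, 0

    while since_best < patience:
        # Item 1: draw an Iprecinct by importance sampling.  Instances the
        # current aggregate misclassifies are always kept; correctly
        # classified ones are kept with probability about e/(1-e).
        accept_p = max(err / max(1.0 - err, 1e-12), 1e-3)  # floor avoids a stall at err ~ 0
        idx = []
        while len(idx) < precinct_size:
            i = rng.choice(pool)
            wrong = (not trees) or vote(trees, X[i:i + 1])[0] != y[i]
            if wrong or rng.random() < accept_p:
                idx.append(i)
        trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

        # Item 2: update the error estimate e_k and stop once it plateaus.
        err = float(np.mean(vote(trees, X[val]) != y[val]))
        if err < best_err:
            best_err, since_best = err, 0
        else:
            since_best += 1
    return trees
```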
The simplest version of pasting is to select each training set by random sampling from the database, grow the classifier, repeat a preassigned number of times, stop, and aggregate the classifiers by voting. If, after aggregation, accuracy is checked on a test set, then further runs can be used to optimize the training-set size and the number of classifiers grown. A more sophisticated version estimates e_k after the kth aggregation and stops once e_k stops decreasing.
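For comparison, here is a sketch of that simplest pasting Rvotes version, again assuming scikit-learn trees; the precinct size and the number of classifiers are fixed up front and are illustrative values, not the paper's.

```python
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def paste_rvotes(X, y, n_classifiers=100, precinct_size=800, seed=0):
    """Simplest pasting: grow one tree per randomly drawn Rprecinct."""
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_classifiers):
        idx = rng.choice(len(X), size=precinct_size, replace=False)  # one Rprecinct
        trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return trees

def predict_by_plurality(trees, X_new):
    """Aggregate the pasted classifiers by unweighted plurality voting."""
    votes = np.array([t.predict(X_new) for t in trees])  # shape (n_trees, n_samples)
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])
```

Checking the aggregate's accuracy on a held-out test set after a run, and then re-running with different values of `n_classifiers` and `precinct_size`, mirrors the tuning step described in the paragraph above.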
The prime conclusion is that pasting small Ivotes together is a promising approach to constructing classifiers for large data sets and for on-line learning.