Ensemble Learning and Random Forests Exercises Answers

This chapter goes over Bagging Classifiers, Voting Classifiers, Random Forests, Extra-Trees, and Boosting algorithms.


Question 1

If you have trained five different models on the exact same training data, and they all achieve 95% precision, is there any chance that you can combine these models to get better results? If so, how? If not, why?

Yes, combining these models can give better results; this is the essence of ensemble learning. The improvement is largest when the models make different kinds of errors (ideally they are as independent from each other as possible), which is more likely if they use different algorithms, even when trained on the same data. For classification you can use hard voting (each model votes for a class and the majority wins) or soft voting (average the predicted class probabilities and pick the class with the highest average probability); for regression you can simply average the predictions. The resulting ensemble usually has noticeably lower variance than the individual models while keeping a similar bias.
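
As a rough sketch of what this looks like in practice (the moons dataset and the specific models below are just illustrative choices, not part of the exercise), scikit-learn's VotingClassifier can combine a few diverse classifiers:

from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_moons(n_samples=1000, noise=0.3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

voting_clf = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(random_state=42)),
        ("rf", RandomForestClassifier(random_state=42)),
        ("svc", SVC(probability=True, random_state=42)),  # probabilities are needed for soft voting
    ],
    voting="soft",  # average predicted probabilities; "hard" would use majority vote instead
)
voting_clf.fit(X_train, y_train)
print("Ensemble accuracy:", accuracy_score(y_test, voting_clf.predict(X_test)))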

Question 2

What is the difference between hard and soft voting classifiers?

A hard voting classifier simply counts votes: each model in the ensemble predicts a class, and the class with the most votes wins. A soft voting classifier averages the predicted class probabilities across all the models and predicts the class with the highest average probability. Soft voting often performs better than hard voting because it gives more weight to highly confident votes, but it requires every model in the ensemble to be able to estimate class probabilities.
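
Here is a tiny made-up example (the probabilities are invented purely to illustrate the mechanics) showing how the two schemes can disagree on the same sample:

import numpy as np

# Predicted probabilities for classes [A, B] from three classifiers on one sample
probas = np.array([
    [0.45, 0.55],  # classifier 1: barely prefers B
    [0.48, 0.52],  # classifier 2: barely prefers B
    [0.95, 0.05],  # classifier 3: very confident it is A
])
classes = np.array(["A", "B"])

hard_votes = classes[probas.argmax(axis=1)]                      # ['B', 'B', 'A']
hard_winner = max(set(hard_votes), key=list(hard_votes).count)   # majority vote: 'B'
soft_winner = classes[probas.mean(axis=0).argmax()]              # averaged probabilities: 'A'
print("hard voting:", hard_winner, "| soft voting:", soft_winner)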

Question 3

Is it possible to speed up training of a bagging ensemble by distributing it across multiple servers? What about pasting ensembles, boosting ensembles, random forests, or stacking ensembles?

Yes, you can speed up training of a bagging ensemble by distributing it across multiple servers, since each predictor in the ensemble is trained independently of the others; this is part of why bagging scales so well. The same is true of pasting ensembles and of Random Forests, which are essentially ensembles of decision trees trained via bagging (or pasting). Boosting ensembles, on the other hand, train their predictors sequentially, each one trying to correct its predecessor, so their training cannot be parallelized across servers. For stacking ensembles, the predictors within a given layer are independent of each other and can be trained on different servers, but the blender in the next layer can only be trained once all the predictors in the previous layer have produced their predictions.
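
A minimal sketch of the parallelism point (toy data assumed): bagging, pasting, and Random Forests all accept n_jobs because every predictor is trained independently, and that same independence is what lets training be spread across machines.

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, random_state=42)

# Bagging: each tree trains on its own bootstrap sample, so all cores can work at once
bag_clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=200,
                            bootstrap=True,  # bootstrap=False would give pasting instead
                            n_jobs=-1, random_state=42)
bag_clf.fit(X, y)

# Random Forest: same idea, the trees are independent of each other
rnd_clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=42)
rnd_clf.fit(X, y)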

Question 4

What is the benefit of out-of-bag evaluation?

Out-of-bag evaluation lets you evaluate a bagging ensemble using only the training set: with bagging, some training instances are sampled several times for a given predictor while others are never sampled at all, and those unused (out-of-bag) instances act as a held-out set for that predictor. The benefit is that you get a roughly unbiased estimate of generalization performance without having to set aside a separate validation set, so more data remains available for training.
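
A small sketch of out-of-bag evaluation (toy data assumed): setting oob_score=True makes the ensemble score each predictor on the bootstrap samples it never saw.

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, random_state=42)

bag_clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=200,
                            bootstrap=True, oob_score=True,  # evaluate on out-of-bag instances
                            n_jobs=-1, random_state=42)
bag_clf.fit(X, y)
print("OOB accuracy estimate:", bag_clf.oob_score_)  # no separate validation set needed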

Question 5

What makes Extra-Trees more random than regular Random Forests? How can this extra randomness help? Are Extra-Trees slower or faster than regular Random Forests?

Extra-Trees are more random than regular Random Forests because, at each node, they use random thresholds for each candidate feature rather than searching for the best possible threshold (a regular Random Forest only randomizes which subset of features is considered for splitting; Extra-Trees additionally randomize the split threshold within that subset). This extra randomness trades a bit more bias for lower variance. Extra-Trees are also much faster to train, because searching for the best threshold for each feature at every node is the most time-consuming part of growing a tree; prediction speed is about the same.
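
As a rough illustration of the speed difference (synthetic data, so the exact timings are meaningless), the two classifiers can be timed side by side with the same settings:

import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

X, y = make_classification(n_samples=10000, n_features=50, random_state=42)

for Model in (RandomForestClassifier, ExtraTreesClassifier):
    clf = Model(n_estimators=100, n_jobs=-1, random_state=42)
    start = time.time()
    clf.fit(X, y)
    print("{}: {:.2f}s".format(Model.__name__, time.time() - start))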

Question 6

If your AdaBoost ensemble underfits the training data, what hyperparameters should you tweak and how?

If AdaBoost underfits, you can increase the number of base estimators (n_estimators), increase the learning rate (learning_rate) so each predictor contributes more aggressively, or reduce the regularization of the base estimator (for example, allow deeper trees or use a more complex base model).
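
For example (the hyperparameter values below are just illustrative starting points, not tuned):

from sklearn.datasets import make_moons
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=1000, noise=0.3, random_state=42)

ada_clf = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=2),  # less constrained than the default depth-1 stump
    n_estimators=500,      # more boosting rounds
    learning_rate=1.0,     # learn more aggressively from each round
    random_state=42)
ada_clf.fit(X, y)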

Question 7

If your Gradient Boosting ensemble overfits the training set, should you increase or decrease the learning rate?

You should decrease the learning rate. You could also use early stopping to find the right number of predictors, since you probably have too many.

The learning_rate hyperparameter scales the contribution of each tree. If you set it to a low value, such as 0.1, you will need more trees in the ensemble to fit the training set, but the predictions will usually generalize better. This is a regularization technique called shrinkage.

The image below shows two gradient boosted regression tree (GBRT) ensembles trained with different learning rates and numbers of estimators.

GBRT Learning Rate and n_estimators
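
A small sketch of the shrinkage and early-stopping idea (toy regression data assumed): train with plenty of trees and a modest learning rate, then use staged_predict to find the number of trees where the validation error bottoms out.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, noise=10.0, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

gbrt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.1, random_state=42)
gbrt.fit(X_train, y_train)

# staged_predict yields predictions after each additional tree is added
errors = [mean_squared_error(y_val, pred) for pred in gbrt.staged_predict(X_val)]
best_n = int(np.argmin(errors)) + 1
print("Best number of trees on the validation set:", best_n)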

Question 8

Load the MNIST data (introduced in Chapter 3), and split it into a training set, a validation set, and a test set (e.g., use 50,000 instances for training, 10,000 for validation, and 10,000 for testing). Then train various classifiers, such as a Random Forest classifier, an Extra-Trees classifier, and an SVM. Next, try to combine them into an ensemble that outperforms them all on the validation set, using a soft or hard voting classifier. Once you have found one, try it on the test set. How much better does it perform compared to the individual classifiers?

from sklearn.datasets import fetch_openml 

mnist = fetch_openml('mnist_784', version=1)
data, target, description = mnist["data"], mnist["target"], mnist["DESCR"]
print(description)
data = data.to_numpy()
target = target.to_numpy()
X_train, X_val, X_test, y_train, y_val, y_test = data[:50000], data[50000:60000], data[60000:], target[:50000], target[50000:60000], target[60000:]
print("X_train Shape: {}".format(X_train.shape))
print("X_val Shape: {}".format(X_val.shape))
print("X_test Shape: {}".format(X_test.shape))
print("y_train Shape: {}".format(y_train.shape))
print("y_val Shape: {}".format(y_val.shape))
print("y_test Shape: {}".format(y_test.shape))
out[10]

**Author**: Yann LeCun, Corinna Cortes, Christopher J.C. Burges
**Source**: [MNIST Website](http://yann.lecun.com/exdb/mnist/) - Date unknown
**Please cite**:

The MNIST database of handwritten digits with 784 features, raw data available at: http://yann.lecun.com/exdb/mnist/. It can be split in a training set of the first 60,000 examples, and a test set of 10,000 examples

It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image. It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting. The original black and white (bilevel) images from NIST were size normalized to fit in a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels as a result of the anti-aliasing technique used by the normalization algorithm. the images were centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.

With some classification methods (particularly template-based methods, such as SVM and K-nearest neighbors), the error rate improves when the digits are centered by bounding box rather than center of mass. If you do this kind of pre-processing, you should report it in your publications. The MNIST database was constructed from NIST's NIST originally designated SD-3 as their training set and SD-1 as their test set. However, SD-3 is much cleaner and easier to recognize than SD-1. The reason for this can be found on the fact that SD-3 was collected among Census Bureau employees, while SD-1 was collected among high-school students. Drawing sensible conclusions from learning experiments requires that the result be independent of the choice of training set and test among the complete set of samples. Therefore it was necessary to build a new database by mixing NIST's datasets.

The MNIST training set is composed of 30,000 patterns from SD-3 and 30,000 patterns from SD-1. Our test set was composed of 5,000 patterns from SD-3 and 5,000 patterns from SD-1. The 60,000 pattern training set contained examples from approximately 250 writers. We made sure that the sets of writers of the training set and test set were disjoint. SD-1 contains 58,527 digit images written by 500 different writers. In contrast to SD-3, where blocks of data from each writer appeared in sequence, the data in SD-1 is scrambled. Writer identities for SD-1 is available and we used this information to unscramble the writers. We then split SD-1 in two: characters written by the first 250 writers went into our new training set. The remaining 250 writers were placed in our test set. Thus we had two sets with nearly 30,000 examples each. The new training set was completed with enough examples from SD-3, starting at pattern # 0, to make a full set of 60,000 training patterns. Similarly, the new test set was completed with SD-3 examples starting at pattern # 35,000 to make a full set with 60,000 test patterns. Only a subset of 10,000 test images (5,000 from SD-1 and 5,000 from SD-3) is available on this site. The full 60,000 sample training set is available.

Downloaded from openml.org.
X_train Shape: (50000, 784)
X_val Shape: (10000, 784)
X_test Shape: (10000, 784)
y_train Shape: (50000,)
y_val Shape: (10000,)
y_test Shape: (10000,)

from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, BaggingClassifier
from sklearn.svm import SVC
from sklearn.experimental import enable_halving_search_cv
from sklearn.model_selection import GridSearchCV, HalvingGridSearchCV
from sklearn.metrics import accuracy_score
import numpy as np 
import math
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def percent_diff(val1,val2):
    return abs(val1-val2)/((val1 + val2 ) / 2)
out[11]
# Shared hyperparameter grid for both tree ensembles
trees_param = [
    { "max_depth": [3,4,8], "criterion": ["gini","entropy"], "n_estimators": [100,200]}
]
extra_trees = ExtraTreesClassifier()
e_tree_clf = HalvingGridSearchCV(extra_trees,param_grid=trees_param,cv=5,verbose=3,refit=True)
random_forest = RandomForestClassifier()
rand_forest_clf = HalvingGridSearchCV(random_forest,param_grid=trees_param,cv=5,verbose=3,refit=True)
e_tree_clf.fit(X_train,y_train)
rand_forest_clf.fit(X_train,y_train)
out[12]

n_iterations: 3
n_required_iterations: 3
n_possible_iterations: 3
min_resources_: 5555
max_resources_: 50000
aggressive_elimination: False
factor: 3
----------
iter: 0
n_candidates: 12
n_resources: 5555
Fitting 5 folds for each of 12 candidates, totalling 60 fits
[CV 1/5] END criterion=gini, max_depth=3, n_estimators=100;, score=(train=0.730, test=0.716) total time= 0.6s
[CV 2/5] END criterion=gini, max_depth=3, n_estimators=100;, score=(train=0.734, test=0.713) total time= 0.4s
[CV 3/5] END criterion=gini, max_depth=3, n_estimators=100;, score=(train=0.703, test=0.690) total time= 0.5s
[CV 4/5] END criterion=gini, max_depth=3, n_estimators=100;, score=(train=0.707, test=0.685) total time= 0.5s
[CV 5/5] END criterion=gini, max_depth=3, n_estimators=100;, score=(train=0.744, test=0.746) total time= 0.4s
[CV 1/5] END criterion=gini, max_depth=3, n_estimators=200;, score=(train=0.732, test=0.721) total time= 1.0s
[CV 2/5] END criterion=gini, max_depth=3, n_estimators=200;, score=(train=0.727, test=0.712) total time= 1.0s
[CV 3/5] END criterion=gini, max_depth=3, n_estimators=200;, score=(train=0.731, test=0.705) total time= 0.9s
[CV 4/5] END criterion=gini, max_depth=3, n_estimators=200;, score=(train=0.735, test=0.716) total time= 0.9s
[CV 5/5] END criterion=gini, max_depth=3, n_estimators=200;, score=(train=0.727, test=0.726) total time= 1.0s
[CV 1/5] END criterion=gini, max_depth=4, n_estimators=100;, score=(train=0.796, test=0.795) total time= 0.6s
[CV 2/5] END criterion=gini, max_depth=4, n_estimators=100;, score=(train=0.810, test=0.779) total time= 0.6s
[CV 3/5] END criterion=gini, max_depth=4, n_estimators=100;, score=(train=0.788, test=0.762) total time= 0.6s
[CV 4/5] END criterion=gini, max_depth=4, n_estimators=100;, score=(train=0.804, test=0.765) total time= 0.6s
[CV 5/5] END criterion=gini, max_depth=4, n_estimators=100;, score=(train=0.805, test=0.788) total time= 0.6s
[CV 1/5] END criterion=gini, max_depth=4, n_estimators=200;, score=(train=0.802, test=0.808) total time= 1.1s
[CV 2/5] END criterion=gini, max_depth=4, n_estimators=200;, score=(train=0.801, test=0.778) total time= 1.1s
[CV 3/5] END criterion=gini, max_depth=4, n_estimators=200;, score=(train=0.778, test=0.749) total time= 1.1s
[CV 4/5] END criterion=gini, max_depth=4, n_estimators=200;, score=(train=0.799, test=0.774) total time= 1.1s
[CV 5/5] END criterion=gini, max_depth=4, n_estimators=200;, score=(train=0.811, test=0.802) total time= 1.1s
[CV 1/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.963, test=0.913) total time= 0.9s
[CV 2/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.967, test=0.905) total time= 0.8s
[CV 3/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.964, test=0.910) total time= 0.9s
[CV 4/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.963, test=0.878) total time= 0.9s
[CV 5/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.962, test=0.897) total time= 0.9s
[CV 1/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.965, test=0.917) total time= 1.7s
[CV 2/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.966, test=0.904) total time= 1.6s
[CV 3/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.965, test=0.904) total time= 1.7s
[CV 4/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.964, test=0.886) total time= 1.7s
[CV 5/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.962, test=0.893) total time= 1.8s
[CV 1/5] END criterion=entropy, max_depth=3, n_estimators=100;, score=(train=0.716, test=0.706) total time= 0.4s
[CV 2/5] END criterion=entropy, max_depth=3, n_estimators=100;, score=(train=0.718, test=0.696) total time= 0.5s
[CV 3/5] END criterion=entropy, max_depth=3, n_estimators=100;, score=(train=0.710, test=0.683) total time= 0.5s
[CV 4/5] END criterion=entropy, max_depth=3, n_estimators=100;, score=(train=0.694, test=0.670) total time= 0.5s
[CV 5/5] END criterion=entropy, max_depth=3, n_estimators=100;, score=(train=0.700, test=0.698) total time= 0.5s
[CV 1/5] END criterion=entropy, max_depth=3, n_estimators=200;, score=(train=0.711, test=0.702) total time= 0.9s
[CV 2/5] END criterion=entropy, max_depth=3, n_estimators=200;, score=(train=0.732, test=0.718) total time= 1.0s
[CV 3/5] END criterion=entropy, max_depth=3, n_estimators=200;, score=(train=0.710, test=0.682) total time= 0.9s
[CV 4/5] END criterion=entropy, max_depth=3, n_estimators=200;, score=(train=0.712, test=0.687) total time= 0.9s
[CV 5/5] END criterion=entropy, max_depth=3, n_estimators=200;, score=(train=0.730, test=0.727) total time= 0.9s
[CV 1/5] END criterion=entropy, max_depth=4, n_estimators=100;, score=(train=0.791, test=0.783) total time= 0.5s
[CV 2/5] END criterion=entropy, max_depth=4, n_estimators=100;, score=(train=0.801, test=0.778) total time= 0.6s
[CV 3/5] END criterion=entropy, max_depth=4, n_estimators=100;, score=(train=0.784, test=0.738) total time= 0.6s
[CV 4/5] END criterion=entropy, max_depth=4, n_estimators=100;, score=(train=0.786, test=0.761) total time= 0.5s
[CV 5/5] END criterion=entropy, max_depth=4, n_estimators=100;, score=(train=0.796, test=0.784) total time= 0.5s
[CV 1/5] END criterion=entropy, max_depth=4, n_estimators=200;, score=(train=0.800, test=0.788) total time= 1.1s
[CV 2/5] END criterion=entropy, max_depth=4, n_estimators=200;, score=(train=0.812, test=0.782) total time= 1.0s
[CV 3/5] END criterion=entropy, max_depth=4, n_estimators=200;, score=(train=0.781, test=0.747) total time= 1.0s
[CV 4/5] END criterion=entropy, max_depth=4, n_estimators=200;, score=(train=0.791, test=0.769) total time= 1.1s
[CV 5/5] END criterion=entropy, max_depth=4, n_estimators=200;, score=(train=0.799, test=0.783) total time= 1.1s
[CV 1/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.967, test=0.915) total time= 0.9s
[CV 2/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.970, test=0.909) total time= 0.9s
[CV 3/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.965, test=0.908) total time= 0.9s
[CV 4/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.969, test=0.886) total time= 0.9s
[CV 5/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.967, test=0.901) total time= 0.9s
[CV 1/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.966, test=0.914) total time= 2.0s
[CV 2/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.971, test=0.905) total time= 1.8s
[CV 3/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.967, test=0.906) total time= 1.8s
[CV 4/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.968, test=0.887) total time= 1.8s
[CV 5/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.968, test=0.904) total time= 1.8s
----------
iter: 1
n_candidates: 4
n_resources: 16665
Fitting 5 folds for each of 4 candidates, totalling 20 fits
[CV 1/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.933, test=0.914) total time= 6.2s
[CV 2/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.938, test=0.903) total time= 6.3s
[CV 3/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.942, test=0.914) total time= 6.1s
[CV 4/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.942, test=0.905) total time= 6.3s
[CV 5/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.943, test=0.909) total time= 6.7s
[CV 1/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.932, test=0.910) total time= 3.3s
[CV 2/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.933, test=0.895) total time= 3.4s
[CV 3/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.939, test=0.913) total time= 3.5s
[CV 4/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.938, test=0.904) total time= 3.4s
[CV 5/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.940, test=0.901) total time= 3.7s
[CV 1/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.934, test=0.914) total time= 8.3s
[CV 2/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.940, test=0.899) total time= 8.1s
[CV 3/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.943, test=0.916) total time= 6.2s
[CV 4/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.942, test=0.905) total time= 7.0s
[CV 5/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.944, test=0.907) total time= 6.5s
[CV 1/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.931, test=0.911) total time= 3.6s
[CV 2/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.935, test=0.896) total time= 3.2s
[CV 3/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.939, test=0.912) total time= 3.4s
[CV 4/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.940, test=0.905) total time= 3.3s
[CV 5/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.942, test=0.910) total time= 3.1s
----------
iter: 2
n_candidates: 2
n_resources: 49995
Fitting 5 folds for each of 2 candidates, totalling 10 fits
[CV 1/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.922, test=0.917) total time= 22.7s
[CV 2/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.922, test=0.911) total time= 20.9s
[CV 3/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.924, test=0.911) total time= 21.6s
[CV 4/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.924, test=0.912) total time= 22.0s
[CV 5/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.923, test=0.912) total time= 22.3s
[CV 1/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.923, test=0.919) total time= 22.9s
[CV 2/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.923, test=0.913) total time= 23.6s
[CV 3/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.922, test=0.913) total time= 23.1s
[CV 4/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.925, test=0.908) total time= 21.0s
[CV 5/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.924, test=0.910) total time= 20.4s
n_iterations: 3
n_required_iterations: 3
n_possible_iterations: 3
min_resources_: 5555
max_resources_: 50000
aggressive_elimination: False
factor: 3
----------
iter: 0
n_candidates: 12
n_resources: 5555
Fitting 5 folds for each of 12 candidates, totalling 60 fits
[CV 1/5] END criterion=gini, max_depth=3, n_estimators=100;, score=(train=0.757, test=0.715) total time= 0.6s
[CV 2/5] END criterion=gini, max_depth=3, n_estimators=100;, score=(train=0.752, test=0.739) total time= 0.6s
[CV 3/5] END criterion=gini, max_depth=3, n_estimators=100;, score=(train=0.774, test=0.743) total time= 0.5s
[CV 4/5] END criterion=gini, max_depth=3, n_estimators=100;, score=(train=0.736, test=0.717) total time= 0.5s
[CV 5/5] END criterion=gini, max_depth=3, n_estimators=100;, score=(train=0.763, test=0.734) total time= 0.5s
[CV 1/5] END criterion=gini, max_depth=3, n_estimators=200;, score=(train=0.750, test=0.713) total time= 1.2s
[CV 2/5] END criterion=gini, max_depth=3, n_estimators=200;, score=(train=0.753, test=0.741) total time= 1.2s
[CV 3/5] END criterion=gini, max_depth=3, n_estimators=200;, score=(train=0.766, test=0.740) total time= 1.2s
[CV 4/5] END criterion=gini, max_depth=3, n_estimators=200;, score=(train=0.744, test=0.738) total time= 1.1s
[CV 5/5] END criterion=gini, max_depth=3, n_estimators=200;, score=(train=0.761, test=0.743) total time= 1.2s
[CV 1/5] END criterion=gini, max_depth=4, n_estimators=100;, score=(train=0.823, test=0.793) total time= 0.7s
[CV 2/5] END criterion=gini, max_depth=4, n_estimators=100;, score=(train=0.819, test=0.788) total time= 0.7s
[CV 3/5] END criterion=gini, max_depth=4, n_estimators=100;, score=(train=0.831, test=0.796) total time= 0.7s
[CV 4/5] END criterion=gini, max_depth=4, n_estimators=100;, score=(train=0.826, test=0.807) total time= 0.7s
[CV 5/5] END criterion=gini, max_depth=4, n_estimators=100;, score=(train=0.830, test=0.810) total time= 0.7s
[CV 1/5] END criterion=gini, max_depth=4, n_estimators=200;, score=(train=0.829, test=0.793) total time= 1.4s
[CV 2/5] END criterion=gini, max_depth=4, n_estimators=200;, score=(train=0.827, test=0.800) total time= 1.4s
[CV 3/5] END criterion=gini, max_depth=4, n_estimators=200;, score=(train=0.844, test=0.815) total time= 1.5s
[CV 4/5] END criterion=gini, max_depth=4, n_estimators=200;, score=(train=0.828, test=0.807) total time= 1.4s
[CV 5/5] END criterion=gini, max_depth=4, n_estimators=200;, score=(train=0.838, test=0.814) total time= 1.4s
[CV 1/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.978, test=0.905) total time= 1.3s
[CV 2/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.976, test=0.911) total time= 1.3s
[CV 3/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.974, test=0.909) total time= 1.2s
[CV 4/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.971, test=0.901) total time= 1.2s
[CV 5/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.977, test=0.904) total time= 1.2s
[CV 1/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.978, test=0.905) total time= 2.5s
[CV 2/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.977, test=0.914) total time= 2.5s
[CV 3/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.976, test=0.911) total time= 2.6s
[CV 4/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.974, test=0.905) total time= 2.5s
[CV 5/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.975, test=0.912) total time= 2.5s
[CV 1/5] END criterion=entropy, max_depth=3, n_estimators=100;, score=(train=0.727, test=0.713) total time= 0.8s
[CV 2/5] END criterion=entropy, max_depth=3, n_estimators=100;, score=(train=0.726, test=0.722) total time= 0.8s
[CV 3/5] END criterion=entropy, max_depth=3, n_estimators=100;, score=(train=0.734, test=0.711) total time= 0.8s
[CV 4/5] END criterion=entropy, max_depth=3, n_estimators=100;, score=(train=0.733, test=0.718) total time= 0.8s
[CV 5/5] END criterion=entropy, max_depth=3, n_estimators=100;, score=(train=0.761, test=0.746) total time= 0.8s
[CV 1/5] END criterion=entropy, max_depth=3, n_estimators=200;, score=(train=0.744, test=0.705) total time= 1.6s
[CV 2/5] END criterion=entropy, max_depth=3, n_estimators=200;, score=(train=0.755, test=0.739) total time= 1.6s
[CV 3/5] END criterion=entropy, max_depth=3, n_estimators=200;, score=(train=0.748, test=0.729) total time= 1.6s
[CV 4/5] END criterion=entropy, max_depth=3, n_estimators=200;, score=(train=0.727, test=0.708) total time= 1.6s
[CV 5/5] END criterion=entropy, max_depth=3, n_estimators=200;, score=(train=0.751, test=0.731) total time= 1.6s
[CV 1/5] END criterion=entropy, max_depth=4, n_estimators=100;, score=(train=0.813, test=0.787) total time= 1.0s
[CV 2/5] END criterion=entropy, max_depth=4, n_estimators=100;, score=(train=0.818, test=0.797) total time= 1.0s
[CV 3/5] END criterion=entropy, max_depth=4, n_estimators=100;, score=(train=0.818, test=0.789) total time= 1.0s
[CV 4/5] END criterion=entropy, max_depth=4, n_estimators=100;, score=(train=0.813, test=0.788) total time= 1.0s
[CV 5/5] END criterion=entropy, max_depth=4, n_estimators=100;, score=(train=0.826, test=0.793) total time= 1.0s
[CV 1/5] END criterion=entropy, max_depth=4, n_estimators=200;, score=(train=0.810, test=0.781) total time= 2.1s
[CV 2/5] END criterion=entropy, max_depth=4, n_estimators=200;, score=(train=0.819, test=0.801) total time= 2.1s
[CV 3/5] END criterion=entropy, max_depth=4, n_estimators=200;, score=(train=0.829, test=0.798) total time= 2.1s
[CV 4/5] END criterion=entropy, max_depth=4, n_estimators=200;, score=(train=0.821, test=0.809) total time= 2.1s
[CV 5/5] END criterion=entropy, max_depth=4, n_estimators=200;, score=(train=0.841, test=0.811) total time= 2.1s
[CV 1/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.982, test=0.915) total time= 2.0s
[CV 2/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.980, test=0.908) total time= 2.0s
[CV 3/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.981, test=0.912) total time= 2.0s
[CV 4/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.977, test=0.905) total time= 2.0s
[CV 5/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.984, test=0.915) total time= 2.0s
[CV 1/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.984, test=0.907) total time= 4.0s
[CV 2/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.983, test=0.914) total time= 4.0s
[CV 3/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.982, test=0.914) total time= 4.0s
[CV 4/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.978, test=0.907) total time= 4.0s
[CV 5/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.982, test=0.915) total time= 4.1s
----------
iter: 1
n_candidates: 4
n_resources: 16665
Fitting 5 folds for each of 4 candidates, totalling 20 fits
[CV 1/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.949, test=0.920) total time= 3.8s
[CV 2/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.953, test=0.920) total time= 3.8s
[CV 3/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.950, test=0.920) total time= 3.8s
[CV 4/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.951, test=0.913) total time= 3.8s
[CV 5/5] END criterion=gini, max_depth=8, n_estimators=100;, score=(train=0.950, test=0.916) total time= 3.8s
[CV 1/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.953, test=0.918) total time= 7.4s
[CV 2/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.953, test=0.919) total time= 7.6s
[CV 3/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.951, test=0.921) total time= 7.6s
[CV 4/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.951, test=0.913) total time= 8.6s
[CV 5/5] END criterion=gini, max_depth=8, n_estimators=200;, score=(train=0.952, test=0.918) total time= 9.6s
[CV 1/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.953, test=0.922) total time= 5.8s
[CV 2/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.955, test=0.919) total time= 7.7s
[CV 3/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.953, test=0.920) total time= 7.0s
[CV 4/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.954, test=0.911) total time= 5.7s
[CV 5/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.954, test=0.921) total time= 5.8s
[CV 1/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.955, test=0.923) total time= 11.7s
[CV 2/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.957, test=0.919) total time= 11.9s
[CV 3/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.954, test=0.923) total time= 12.3s
[CV 4/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.956, test=0.920) total time= 11.6s
[CV 5/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.957, test=0.922) total time= 12.6s
----------
iter: 2
n_candidates: 2
n_resources: 49995
Fitting 5 folds for each of 2 candidates, totalling 10 fits
[CV 1/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.936, test=0.927) total time= 18.0s
[CV 2/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.936, test=0.924) total time= 18.2s
[CV 3/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.938, test=0.924) total time= 18.5s
[CV 4/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.937, test=0.921) total time= 16.3s
[CV 5/5] END criterion=entropy, max_depth=8, n_estimators=100;, score=(train=0.937, test=0.921) total time= 16.6s
[CV 1/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.937, test=0.928) total time= 33.8s
[CV 2/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.938, test=0.926) total time= 33.1s
[CV 3/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.938, test=0.923) total time= 32.3s
[CV 4/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.938, test=0.921) total time= 32.5s
[CV 5/5] END criterion=entropy, max_depth=8, n_estimators=200;, score=(train=0.938, test=0.921) total time= 34.5s

HalvingGridSearchCV(estimator=RandomForestClassifier(),
                    param_grid=[{'criterion': ['gini', 'entropy'],
                                 'max_depth': [3, 4, 8],
                                 'n_estimators': [100, 200]}],
                    verbose=3)

print("""I kind of regret setting the max_depth below 9. I should have set the max_depth to None and increased the number of estimators, which is what I will do below. This will allow different trees in the ensemble to over fit the data, and then the ensemble can take the most confident of the predictions.  
""")

e_tree_clf_2 = ExtraTreesClassifier(n_estimators=1000,criterion="gini")
random_forest_clf_2 = RandomForestClassifier(n_estimators=1000,criterion="entropy")
e_tree_clf_2.fit(X_train,y_train)
random_forest_clf_2.fit(X_train,y_train)
out[13]

I kind of regret setting the max_depth below 9. I should have set the max_depth to None and increased the number of estimators, which is what I will do below. This will allow different trees in the ensemble to overfit the data, and then the ensemble can take the most confident of the predictions.

RandomForestClassifier(criterion='entropy', n_estimators=1000)

# probability=True is needed so the SVC can supply class probabilities for soft voting (I initially forgot to set it)
svc = Pipeline( steps=[
    ("scale",StandardScaler()),   
    ("predict",SVC(kernel="rbf",C=10,probability=True,verbose=3))
])
svc.fit(X_train,y_train)


svm_pred_test = svc.predict(X_test)
e_tree_pred_test = e_tree_clf_2.predict(X_test)
rand_forest_pred_test = random_forest_clf_2.predict(X_test)
print("SVM Accuracy Score (Test): {}".format(accuracy_score(svm_pred_test,y_test)))
print("Extra Trees Accuracy Score (Test): {}".format(accuracy_score(e_tree_pred_test,y_test)))
print("Random Forest Accuracy Score (Test): {}".format(accuracy_score(rand_forest_pred_test,y_test)))

from sklearn.ensemble import VotingClassifier

## Soft Voting
soft_v_clf = VotingClassifier([('svc',svc),('e_tree',e_tree_clf_2),('rf',random_forest_clf_2)],voting="soft",n_jobs=-1,verbose=True)
soft_v_clf.fit(X_train,y_train)
soft_v_clf_pred_test = soft_v_clf.predict(X_test)

## Hard Voting
hard_v_clf = VotingClassifier([('svc',svc),('e_tree',e_tree_clf_2),('rf',random_forest_clf_2)],voting="hard",n_jobs=-1,verbose=True)
hard_v_clf.fit(X_train,y_train)
hard_v_clf_pred_test = hard_v_clf.predict(X_test)


print("Soft Voting Accuracy Score (Test): {}".format(accuracy_score(soft_v_clf_pred_test,y_test)))
print("Hard Voting Accuracy Score (Test): {}".format(accuracy_score(hard_v_clf_pred_test,y_test)))
out[14]

[LibSVM]SVM Accuracy Score (Test): 0.9717
Extra Trees Accuracy Score (Test): 0.9742
Random Forest Accuracy Score (Test): 0.97
Soft Voting Accuracy Score (Test): 0.9796
Hard Voting Accuracy Score (Test): 0.9751

Question 9

Run the individual classifiers from the previous exercise to make predictions on the validation set, and create a new training set with the resulting predictions: each training instance is a vector containing the set of predictions from all your classifiers for an image, and the target is the image’s class. Train a classifier on this new training set. Congratulations, you have just trained a blender, and together with the classifiers they form a stacking ensemble! Now let’s evaluate the ensemble on the test set. For each image in the test set, make predictions with all your classifiers, then feed the predictions to the blender to get the ensemble’s predictions. How does it compare to the voting classifier you trained earlier?

# Level-1 training set for the blender: each row is the vector of the three
# classifiers' predicted classes on a validation image, and the target is the true label.
svc_pred = svc.predict(X_val)
e_tree_clf_pred = e_tree_clf_2.predict(X_val)
random_forest_clf_pred = random_forest_clf_2.predict(X_val)
training_data = np.hstack((svc_pred.reshape(-1,1),e_tree_clf_pred.reshape(-1,1),random_forest_clf_pred.reshape(-1,1)))
target = y_val 
blender = ExtraTreesClassifier()
blender.fit(training_data,target)
test_svc_pred = svc.predict(X_test)
test_e_tree_clf_pred  = e_tree_clf_2.predict(X_test)
test_random_forest_clf_pred = random_forest_clf_2.predict(X_test)
test_data = np.hstack((test_svc_pred.reshape(-1,1),test_e_tree_clf_pred.reshape(-1,1),test_random_forest_clf_pred.reshape(-1,1)))
blender_pred = blender.predict(test_data)
print("SVC Accuracy Score:",accuracy_score(test_svc_pred,y_test))
print("Extra Trees Accuracy Score:",accuracy_score(test_e_tree_clf_pred,y_test))
print("Random Forest Accuracy Score:",accuracy_score(test_random_forest_clf_pred,y_test))
print("Blender Accuracy Score:",accuracy_score(blender_pred,y_test))
out[16]

SVC Accuracy Score: 0.9717
Extra Trees Accuracy Score: 0.9742
Random Forest Accuracy Score: 0.97
Blender Accuracy Score: 0.9747