Bagging, also known as Bootstrap Aggregating (Breiman, 1996), is an ensemble method.
Bagging Reduces Variance
Remember the Bias / Variance decomposition:
$$\underbrace{\mathbb{E}[(h_D(x)-y)^2]}_{\text{Error}} = \underbrace{\mathbb{E}[(h_D(x)-\bar{h}(x))^2]}_{\text{Variance}} + \underbrace{\mathbb{E}[(\bar{h}(x)-\bar{y}(x))^2]}_{\text{Bias}} + \underbrace{\mathbb{E}[(\bar{y}(x)-y(x))^2]}_{\text{Noise}}$$
Our goal is to reduce the variance term: $\mathbb{E}[(h_D(x)-\bar{h}(x))^2]$.
For this, we want $h_D \to \bar{h}$.
Weak law of large numbers
The weak law of large numbers says (roughly) that for i.i.d. random variables $x_i$ with mean $\bar{x}$, we have
$$\frac{1}{m}\sum_{i=1}^{m} x_i \to \bar{x} \quad \text{as } m \to \infty$$
Apply this to classifiers: Assume we have $m$ training sets $D_1, D_2, \dots, D_m$, each drawn from $P^n$. Train a classifier on each one and average the results:
$$\hat{h} = \frac{1}{m}\sum_{i=1}^{m} h_{D_i} \to \bar{h} \quad \text{as } m \to \infty$$
We refer to such an average of multiple classifiers as an ensemble of classifiers.
Good news: If $\hat{h} \to \bar{h}$, the variance component of the error must also vanish, i.e. $\mathbb{E}[(\hat{h}(x)-\bar{h}(x))^2] \to 0$.
Problem: We don't have $m$ data sets $D_1, \dots, D_m$; we only have $D$.
Solution: Bagging (Bootstrap Aggregating)
Simulate drawing from $P$ by drawing uniformly with replacement from the set $D$.
i.e. let $Q((X,Y)|D)$ be a probability distribution that picks a training sample $(x_i, y_i)$ from $D$ uniformly at random. More formally, $Q((x_i,y_i)|D) = \frac{1}{n} \;\; \forall\, (x_i,y_i) \in D$, with $n = |D|$.
We sample the set $D_i \sim Q^n$, i.e. $|D_i| = n$, where $D_i$ is picked with replacement according to $Q(\cdot|D)$.
Q: What is $\mathbb{E}[|D \cap D_i|]$?
Bagged classifier: $\hat{h}_D = \frac{1}{m}\sum_{i=1}^{m} h_{D_i}$
Notice: $\hat{h}_D = \frac{1}{m}\sum_{i=1}^{m} h_{D_i} \nrightarrow \bar{h}$ (we cannot use the W.L.L.N. here, because it only holds for i.i.d. samples and the $D_i$ are not independent). However, in practice bagging still reduces variance very effectively.
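The bootstrap sampling step itself is a one-liner. Below is a small numpy sketch (the arrays `X` and `y` are just a toy stand-in for $D$) that draws one set $D_i \sim Q^n$ and, as a side note on the question above, empirically checks what fraction of $D$ ends up in $D_i$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset standing in for D (the names X, y, n are just for illustration).
n, d = 1000, 5
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)

# Draw one bootstrap set D_i ~ Q^n: n indices sampled uniformly *with* replacement.
idx = rng.integers(0, n, size=n)
X_i, y_i = X[idx], y[idx]

# Fraction of the original n points that actually appear in D_i.
print(len(np.unique(idx)) / n)   # typically around 1 - (1 - 1/n)^n, i.e. about 0.632
```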
Analysis
Although we cannot prove that the new samples are i.i.d., we can show that they are drawn from the original distribution $P$.
Assume $P$ is discrete, with $P(X = x_i) = p_i$ over some set $\Omega = \{x_1, \dots, x_N\}$ ($N$ very large)
(let's ignore the labels for now, for simplicity)
$$Q(X = x_i) = \sum_{k=1}^{n} \underbrace{\binom{n}{k} p_i^k (1-p_i)^{n-k}}_{\substack{\text{probability that there are} \\ k \text{ copies of } x_i \text{ in } D}} \; \underbrace{\frac{k}{n}}_{\substack{\text{probability to pick} \\ \text{one of these copies}}} = \frac{1}{n} \underbrace{\sum_{k=1}^{n} \binom{n}{k} p_i^k (1-p_i)^{n-k}\, k}_{\substack{\text{expected value of a Binomial} \\ \text{distribution: } \mathbb{E}[B(p_i,n)] = np_i}} = \frac{1}{n} n p_i = p_i \quad \longleftarrow \text{TATAAA!!}$$
Each data set $D_i$ is drawn from $P$, but not independently.
There is a simple intuitive argument why $Q(X = x_i) = P(X = x_i)$. So far we assumed that you draw $D$ from $P^n$ and then $Q$ picks a sample from $D$. However, you don't have to do it in that order. You can also view sampling from $Q$ in reverse order: consider that you first use $Q$ to reserve a "spot" in $D$, i.e. a number $i$ from $1, \dots, n$, meaning that you sampled the $i$-th data point in $D$. So far you only have the slot $i$, and you still need to fill it with a data point $(x_i, y_i)$. You do this by sampling $(x_i, y_i)$ from $P$. It is now obvious that which slot you picked doesn't really matter, so we have $Q(X = x) = P(X = x)$.
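If you prefer a numerical sanity check over this argument, here is a small simulation sketch (with an arbitrary toy choice of $P$, $N$, and $n$) that estimates $Q(X = x_i)$ by repeatedly drawing $D \sim P^n$ and then picking one element of $D$ uniformly; the estimate should match $p_i$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete distribution P over N outcomes (an arbitrary choice for this demo).
N, n, trials = 5, 50, 100_000
p = rng.random(N)
p /= p.sum()

counts = np.zeros(N)
for _ in range(trials):
    D = rng.choice(N, size=n, p=p)   # draw D ~ P^n
    counts[rng.choice(D)] += 1       # Q: pick one element of D uniformly at random

print(np.round(counts / trials, 3))  # empirical Q(X = x_i)
print(np.round(p, 3))                # true p_i -- the two should roughly match
```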
Bagging summarized
Sample $m$ data sets $D_1, \dots, D_m$ from $D$ with replacement.
For each $D_j$ train a classifier $h_j(\cdot)$.
The final classifier is $h(x) = \frac{1}{m}\sum_{j=1}^{m} h_j(x)$.
In practice, a larger $m$ results in a better ensemble; however, at some point you will see diminishing returns. Note that setting $m$ unnecessarily high will only slow down your classifier, but will not increase its error.
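As a concrete illustration, here is a minimal sketch of the three steps above, assuming scikit-learn decision trees as base classifiers, integer class labels, and a majority vote to aggregate the predictions (for regression you would average instead).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, m, rng):
    """Train m classifiers, each on its own bootstrap sample of (X, y)."""
    n = len(X)
    ensemble = []
    for _ in range(m):
        idx = rng.integers(0, n, size=n)          # sample D_j from D with replacement
        ensemble.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return ensemble

def bagging_predict(ensemble, X):
    """Aggregate by majority vote (assumes integer class labels 0, 1, ...)."""
    votes = np.stack([h.predict(X) for h in ensemble]).astype(int)   # shape (m, n_test)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Toy usage:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
ensemble = bagging_fit(X, y, m=50, rng=rng)
print((bagging_predict(ensemble, X) == y).mean())   # training accuracy of the ensemble
```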
Advantages of Bagging
Easy to implement
Reduces variance, so it has a strong beneficial effect on high-variance classifiers.
As the prediction is an average of many classifiers, you obtain a mean score and its variance. The latter can be interpreted as the uncertainty of the prediction. Especially in regression tasks, such uncertainties are otherwise hard to obtain. For example, imagine the prediction of a house price is $300,000. If a buyer wants to decide how much to offer, it would be very valuable to know whether this prediction has a standard deviation of ±$10,000 or ±$50,000.
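Here is a small sketch of that idea for regression, on placeholder data with bagged scikit-learn regression trees: the spread of the individual tree predictions serves as the uncertainty estimate.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                       # placeholder features
y = X @ rng.normal(size=8) + rng.normal(size=500)   # placeholder targets

# Bag m regression trees.
m, n = 100, len(X)
trees = []
for _ in range(m):
    idx = rng.integers(0, n, size=n)
    trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))

# For a query point, report the ensemble mean and its standard deviation.
x_query = X[:1]
preds = np.array([t.predict(x_query)[0] for t in trees])
print(f"prediction: {preds.mean():.2f} +/- {preds.std():.2f}")
```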
Bagging provides an unbiased estimate of the test error, which we refer to as the out-of-bag error. The idea is that each training point is left out of some of the data sets $D_k$ (each bootstrap sample misses it with probability $(1-\frac{1}{n})^n \approx \frac{1}{e} \approx 37\%$). If we average the classifiers $h_k$ of all such data sets, we obtain a classifier (an ensemble with a slightly smaller $m$) that was never trained on $(x_i, y_i)$, so for this classifier the point is equivalent to a test sample. If we compute the error of all these classifiers on their respective left-out points, we obtain an estimate of the true test error. The beauty is that we can do this without reducing the training set: we just run bagging as intended and obtain this so-called out-of-bag error for free.
More formally, for each training point $(x_i, y_i) \in D$ let $S_i = \{k \mid (x_i, y_i) \notin D_k\}$, i.e. $S_i$ is the set of indices of all training sets $D_k$ that do not contain $(x_i, y_i)$. Let the averaged classifier over all these data sets be
$$\tilde{h}_i(x) = \frac{1}{|S_i|}\sum_{k \in S_i} h_k(x).$$
The out-of-bag error is then simply the average error/loss that these classifiers yield:
$$\epsilon_{\text{OOB}} = \frac{1}{n}\sum_{(x_i,y_i) \in D} \ell\big(\tilde{h}_i(x_i), y_i\big).$$
This is an estimate of the test error, because for each training point we only use the subset of classifiers that never saw that point during training. If $m$ is sufficiently large, the fact that we leave out some classifiers has no significant effect, and the estimate is quite reliable.
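The out-of-bag computation is easy to add to the bagging sketch from above. The version below assumes a regression setting with squared loss (for classification you would use the 0/1 loss on a majority vote); it simply remembers, for each tree, which points were left out of its bootstrap sample.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagging_with_oob(X, y, m, rng):
    """Bagged regression trees plus the out-of-bag estimate of the squared-loss test error."""
    n = len(X)
    trees, oob_masks = [], []
    for _ in range(m):
        idx = rng.integers(0, n, size=n)       # bootstrap sample D_k
        left_out = np.ones(n, dtype=bool)
        left_out[idx] = False                  # True where (x_i, y_i) is NOT in D_k
        trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
        oob_masks.append(left_out)

    # For each point, average only the trees that never saw it (the set S_i).
    pred_sum, pred_cnt = np.zeros(n), np.zeros(n)
    for tree, left_out in zip(trees, oob_masks):
        if left_out.any():
            pred_sum[left_out] += tree.predict(X[left_out])
            pred_cnt[left_out] += 1

    seen = pred_cnt > 0                        # points left out by at least one tree
    oob_error = np.mean((pred_sum[seen] / pred_cnt[seen] - y[seen]) ** 2)
    return trees, oob_error
```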
Random Forest
One of the most famous and useful bagged algorithms is the Random Forest! A Random Forest is essentially nothing but bagged decision trees, with a slightly modified splitting criterion.
The algorithm works as follows:
Sample $m$ data sets $D_1, \dots, D_m$ from $D$ with replacement.
For each $D_j$ train a full decision tree $h_j(\cdot)$ (max-depth $= \infty$) with one small modification: before each split, randomly subsample $k \leq d$ features (without replacement) and only consider these for your split. (This further increases the variance of the trees.)
The final classifier is $h(x) = \frac{1}{m}\sum_{j=1}^{m} h_j(x)$.
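Here is one possible sketch of these steps with scikit-learn decision trees; the per-split feature subsampling is delegated to the tree's `max_features` parameter, which considers only that many randomly chosen features at each split.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def random_forest_fit(X, y, m, rng):
    """Bagged, fully grown trees that consider only k = sqrt(d) random features per split."""
    n, d = X.shape
    k = max(1, int(np.sqrt(d)))
    forest = []
    for _ in range(m):
        idx = rng.integers(0, n, size=n)                 # bootstrap sample D_j
        tree = DecisionTreeClassifier(max_features=k)    # random feature subset at each split, full depth
        forest.append(tree.fit(X[idx], y[idx]))
    return forest

def random_forest_predict(forest, X):
    """Majority vote over the trees (assumes integer class labels)."""
    votes = np.stack([t.predict(X) for t in forest]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```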
The Random Forest is one of the best, most popular, and easiest-to-use out-of-the-box classifiers.
There are two reasons for this:
The RF only has two hyper-parameters, $m$ and $k$. It is extremely insensitive to both of these. A good choice for $k$ is $k = \sqrt{d}$ (where $d$ denotes the number of features). You can set $m$ as large as you can afford (see the usage sketch below).
Decision trees do not require a lot of preprocessing. For example, the features can be of different scale, magnitude, or slope. This can be highly advantageous in scenarios with heterogeneous data, for example in medical settings where features could be things like blood pressure, age, gender, ..., each of which is recorded in completely different units.
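In practice you would rarely code this yourself. A minimal usage sketch with scikit-learn's ready-made `RandomForestClassifier` on placeholder data: `n_estimators` plays the role of $m$, `max_features` the role of $k$, and `oob_score=True` additionally returns the out-of-bag estimate discussed above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                 # placeholder data
y = (X[:, 0] + X[:, 3] > 0).astype(int)

# n_estimators corresponds to m, max_features to k (sqrt(d) here).
rf = RandomForestClassifier(n_estimators=500, max_features="sqrt", oob_score=True)
rf.fit(X, y)
print(rf.oob_score_)   # out-of-bag accuracy, the "free" test-error estimate from above
```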
Useful variants of Random Forests:
Split each training set $D_l$ into two partitions, $D_l = D_l^A \cup D_l^B$ with $D_l^A \cap D_l^B = \emptyset$. Build the tree on $D_l^A$ and estimate the leaf labels on $D_l^B$. You must stop splitting if a leaf contains only a single point of $D_l^B$. This has the advantage that each tree, and also the RF classifier, becomes consistent.
Do not grow each tree to its full depth; instead, prune based on the left-out samples. This can further improve your bias/variance trade-off.