Stable and Unstable Predictors
We have discussed the role that error measures play in the Supervised Learning problem and in assessing the predictors we work with. We now draw a distinction between two classes of predictors, stable and unstable. An unstable predictor is one that has a strong dependency on its training data: the hypothesis it forms during learning depends to a large degree on exactly which data it received. Examples of unstable predictors are decision trees [34]
and neural networks (see section 2.1.4). A stable predictor is one that has no such strong
dependency on the training data; examples of stable predictors are k-nearest neighbour
classifiers and the Fisher linear discriminant [34]. Unstable classifiers exhibit what is known
as high variance, while stable classifiers exhibit low variance (see [13] for more information).
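To make the distinction concrete, the following minimal sketch (assuming scikit-learn and NumPy are available; the synthetic dataset, model choices, and resampling counts are all illustrative) retrains each kind of predictor on bootstrap resamples of the same training set and measures how often its test-set predictions change between retrainings. Under this setup one would expect the decision tree to show noticeably more disagreement across resamples than the k-nearest neighbour classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Illustrative synthetic dataset: 500 training points, 100 held-out test points.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, y_train, X_test = X[:500], y[:500], X[500:]

def prediction_instability(make_model, n_rounds=30):
    """Fraction of (round, test point) pairs where the prediction disagrees
    with the majority vote over all rounds -- a crude proxy for variance."""
    preds = []
    for _ in range(n_rounds):
        # Bootstrap resample: same size as the training set, drawn with replacement.
        idx = rng.integers(0, len(X_train), size=len(X_train))
        model = make_model().fit(X_train[idx], y_train[idx])
        preds.append(model.predict(X_test))
    preds = np.array(preds)  # shape (n_rounds, n_test)
    majority = (preds.mean(axis=0) >= 0.5).astype(int)
    return float((preds != majority).mean())

# An unstable predictor (decision tree) vs. a stable one (5-nearest neighbours).
print("decision tree instability:", prediction_instability(DecisionTreeClassifier))
print("5-NN instability:         ",
      prediction_instability(lambda: KNeighborsClassifier(n_neighbors=5)))
```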
It is this high-variance property that leads us to choose unstable classifiers as the focus of this thesis. Variance is one half of the well-known bias-variance decomposition of an error function, which we will now discuss.
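As a preview of that discussion, for squared-error loss the decomposition takes its standard form. Writing $\hat{f}_D$ for the hypothesis learned from a training set $D$, $f$ for the target function, and $\sigma^2$ for the irreducible noise in the labels (notation introduced here for illustration, not taken from earlier sections):
\[
\mathbb{E}_{D,\varepsilon}\bigl[(y - \hat{f}_D(x))^2\bigr]
  = \underbrace{\bigl(f(x) - \mathbb{E}_D[\hat{f}_D(x)]\bigr)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\bigl[(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)])^2\bigr]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{noise}},
\]
where $y = f(x) + \varepsilon$ with $\mathbb{E}[\varepsilon] = 0$ and $\operatorname{Var}(\varepsilon) = \sigma^2$. Unstable predictors are those for which the variance term dominates.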