In this chapter, we investigate the hypothesis that StackingC is the most stable
ensemble learning scheme. We define the stability of a learner as its continued
good performance when the amount of training data is reduced, i.e., its capability
to find the best, or at least a reasonably good, model from less training data and,
in the case of insufficient training data, to degrade gracefully in performance.
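To make this notion of stability concrete, the following sketch (not the experimental setup of this chapter) retrains an ensemble on progressively smaller fractions of the training data and records its test accuracy; the dataset, base learners, and fractions are arbitrary illustrative choices, and scikit-learn's StackingClassifier performs standard stacking on class probabilities rather than StackingC's reduced meta-feature set.

    # Stability probe: refit on shrinking training subsets and watch how
    # accuracy degrades. A stable scheme should degrade gracefully.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Standard stacking with probability meta-features; this approximates,
    # but is not identical to, the StackingC scheme under study.
    model = StackingClassifier(
        estimators=[("nb", GaussianNB()),
                    ("dt", DecisionTreeClassifier(random_state=0))],
        final_estimator=LogisticRegression(),
        stack_method="predict_proba",
    )

    for frac in (1.0, 0.5, 0.25, 0.125):  # progressively less training data
        n = max(int(frac * len(X_train)), 10)
        model.fit(X_train[:n], y_train[:n])
        print(f"{frac:5.3f} of training data -> "
              f"accuracy {model.score(X_test, y_test):.3f}")

A learner whose accuracy falls only slowly as the fractions shrink would count as stable under the above definition; a sharp collapse at small fractions would not.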