In Section 3.5 we address both meta classifier choice and type of meta data
to be used, on two subsets of base classifiers. We show that MLR is indeed the
best classifier for preds meta data, among those we considered. We notice that
the performance differences of the variants using preds meta data are much
smaller than those of the variants using class-probs meta data, which indicates that
the learning problem for preds is easier for most classifiers. NaiveBayes seems a
reasonable if somewhat arbitrary choice for preds meta data. Finally, we conclude
that Stacking with predictions meta data is competitive with using probability
distribution meta data. We point out that Ting & Witten (1999) may have
used a variation of MLR similar in spirit to StackingC in their experiments,
which would yield a biased comparison and may explain why their conclusion as to the
merits of the different meta data types differs from ours.
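To make the distinction between the two meta data types concrete, the following sketch (our own illustration, not code from the experiments; classifier outputs and class labels are hypothetical) builds both kinds of meta-level attributes from the probability distributions predicted by the base classifiers for a single instance:

```python
# Illustrative sketch: constructing the two kinds of meta-level
# attributes for one instance of a three-class problem, given the
# predicted class probability distributions of two base classifiers.
# All concrete values here are hypothetical.

def preds_meta(prob_dists, classes):
    """'preds' meta data: one attribute per base classifier,
    namely the predicted class label (argmax of its distribution)."""
    return [classes[max(range(len(d)), key=d.__getitem__)]
            for d in prob_dists]

def class_probs_meta(prob_dists):
    """'class-probs' meta data: one attribute per class per base
    classifier, i.e. the concatenated probability distributions."""
    return [p for d in prob_dists for p in d]

classes = ["a", "b", "c"]
# Hypothetical outputs of two base classifiers for one instance.
dists = [[0.7, 0.2, 0.1],
         [0.1, 0.3, 0.6]]

print(preds_meta(dists, classes))   # 2 attributes: ['a', 'c']
print(class_probs_meta(dists))      # 6 attributes
```

Note that class-probs meta data yields a meta-level attribute space whose size grows with the number of classes, whereas preds meta data has one attribute per base classifier regardless of class count.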
In Section 3.6, Related Research, we give a short overview of relevant research. We will now proceed to briefly characterize our experimental setup.