(Brazdil, Gama & Henry, 1994) have investigated meta-level learning to predict
the best classifier for a given dataset. They use a confidence interval around
the best accuracy to define applicable and inapplicable classifiers for each
dataset. Our approach uses a statistical t-test instead. While their approach
has to integrate possibly conflicting applicability rules, which makes the
evaluation quite complex, our approach can predict significant differences directly.
Furthermore, their restriction to decision trees and rules derived from them
may have led to inferior results, since we found that a variety of machine learning
techniques was needed to obtain the best results. They also considered only one-against-all comparisons between candidate classifiers, whereas we investigated pairwise comparisons.