All the learners used were part of the MLC++ [11] package1, apart from the complex Bayesian learner, which was part of the Hugin tool2; the Hugin tool was also used to run the expert-constructed BN.
The different models do not all provide the same sort of
prediction. The MC4 and KNN learners usually give a prediction
in the form of an unqualified value from the range of
possible values. BNs do not make predictions in the same
format as the MC4 or KNN learners: rather than supplying a
single answer, they supply a probability for each of the
possible outcomes. This allows for greater sensitivity of
prediction; the BN not only makes a prediction, but also
provides some indication of confidence in that prediction.
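The contrast between the two output formats, and one way of reducing a probabilistic prediction to a single result, can be sketched as follows (a minimal illustration; the outcome labels, probabilities, and function name are ours, not taken from the study's data):

```python
def definite_prediction(probs, tol=1e-9):
    """Reduce a BN-style distribution over outcomes to a definite result:
    the most probable outcome, or None when two or more outcomes are
    (near-)equally most likely, i.e. a tie."""
    best = max(probs.values())
    top = [outcome for outcome, p in probs.items() if abs(p - best) <= tol]
    return top[0] if len(top) == 1 else None

# MC4/KNN-style prediction: a single unqualified value.
point_prediction = "win"

# BN-style prediction: a probability for each possible outcome.
bn_prediction = {"win": 0.55, "draw": 0.25, "lose": 0.20}

print(definite_prediction(bn_prediction))                            # win
print(definite_prediction({"win": 0.4, "draw": 0.4, "lose": 0.2}))  # None
```

Returning `None` on a tie mirrors the scoring choice described below, where a tied BN prediction is counted as incorrect.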
To make a direct comparison with the learners we had
to interpret the BN prediction as a definite result (win, lose,
or draw). Our approach was to choose the result with the
highest predicted probability, irrespective of how close two
or more results might be. In cases where two or more of the
BN's outcomes were equally likely, we deemed the
prediction incorrect (even if the actual result was one
of the equally likely outcomes). This approach clearly treats BNs
harshly in the analysis. In reality, a prediction involving
equal (or nearly equal) probabilities would be useful. For