This paper reviews five statistical tests for determining whether one learning algorithm outperforms
another on a particular learning task. These tests are compared experimentally to
determine their probability of incorrectly detecting a difference when no difference exists (Type
I error). Two widely-used statistical tests are shown to have high probability of Type I error in
certain situations and should never be used. These tests are (a) a test for the difference of two
proportions and (b) a paired-differences t test based on taking several random train/test splits.
A third test, a paired-differences t test based on 10-fold cross-validation, exhibits somewhat
elevated probability of Type I error. A fourth test, McNemar's test, is shown to have low Type
I error. The fifth test is a new test, 5x2cv, based on 5 iterations of 2-fold cross-validation.
Experiments show that this test also has acceptable Type I error. The paper also measures the power
(ability to detect algorithm differences when they do exist) of these tests. The cross-validated t
test is the most powerful. The 5x2cv test is shown to be slightly more powerful than McNemar's
test. The choice of the best test is determined by the computational cost of running the learning
algorithm. For algorithms that can be executed only once, McNemar's test is the only test with
acceptable Type I error. For algorithms that can be executed ten times, the 5x2cv test is
recommended, because it is slightly more powerful and because it directly measures variation
due to the choice of training set.
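
The sketch below illustrates the two recommended tests in Python. It is a minimal, hedged
rendering, not the paper's reference implementation: it assumes the usual continuity-corrected
form of McNemar's test and the commonly cited 5x2cv statistic (the first fold difference of the
first replication divided by the root mean of the per-replication variances, referred to a t
distribution with 5 degrees of freedom); the exact definitions are given in the body of the
paper. The function names are illustrative, and numpy/scipy are assumed dependencies.

    import numpy as np
    from scipy import stats

    def mcnemar_test(n01, n10):
        # n01: test examples misclassified by algorithm A but not by B
        # n10: test examples misclassified by B but not by A
        # Continuity-corrected chi-square statistic with 1 degree of freedom.
        chi2 = (abs(n01 - n10) - 1) ** 2 / (n01 + n10)
        return chi2, stats.chi2.sf(chi2, df=1)

    def five_by_two_cv_t_test(diffs):
        # diffs: 5x2 array; diffs[i, j] is the difference in error rates
        # between the two algorithms on fold j of replication i of 2-fold CV.
        diffs = np.asarray(diffs, dtype=float)
        means = diffs.mean(axis=1, keepdims=True)       # per-replication mean difference
        variances = ((diffs - means) ** 2).sum(axis=1)  # per-replication variance estimate
        t = diffs[0, 0] / np.sqrt(variances.mean())     # assumed 5x2cv statistic
        return t, 2 * stats.t.sf(abs(t), df=5)          # two-sided p-value, 5 d.f.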