A closer look at the probability distribution of the testing examples reveals why this is the case. Naive Bayes (and C4.5 with CF) does not produce true probabilities. However, with an equal class distribution in the training set, the probability estimates of the testing examples are spread fairly evenly between 0 and 1, which reduces ranking mistakes. If the class distribution is unequal, the probability estimates may be compressed into a narrow range at one end of the scale. Errors in probability estimation then perturb the ranking more easily, resulting in a lower lift index.
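The effect can be illustrated with a small simulation (not from the paper; the setup, score ranges, and noise level below are illustrative assumptions). Two sets of scores with the same relative ordering are perturbed by the same amount of estimation noise: scores spread over [0, 1] and scores compressed into a narrow band near 1. The compressed scores suffer far more pairwise rank inversions, which is what degrades the lift index.

```python
import random

def rank_disruption(scores, noise_sd, trials=200, seed=0):
    """Average fraction of pairwise orderings flipped when Gaussian
    estimation noise is added to the scores."""
    rng = random.Random(seed)
    n = len(scores)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
             if scores[i] != scores[j]]
    total = 0.0
    for _ in range(trials):
        noisy = [s + rng.gauss(0, noise_sd) for s in scores]
        flips = sum(1 for i, j in pairs
                    if (scores[i] - scores[j]) * (noisy[i] - noisy[j]) < 0)
        total += flips / len(pairs)
    return total / trials

n = 100
wide = [i / (n - 1) for i in range(n)]      # estimates spread over [0, 1]
narrow = [0.9 + 0.1 * s for s in wide]      # same ordering, squeezed into [0.9, 1.0]

noise = 0.05  # identical estimation noise in both cases
print(rank_disruption(wide, noise))
print(rank_disruption(narrow, noise))
```

With the same noise, the gaps between adjacent scores in the narrow case are one tenth of those in the wide case, so the same estimation error flips many more pairwise orderings.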