However, it seems undesirable to use only 1% of the negative examples, throwing away the information contained in the rest. One possibility is to oversample the existing positive examples with replacement to several multiples (2x, 5x, 10x, 20x) of their original number, while drawing an equal number of negative examples from the full negative pool. In this way we naturally bring more negative examples into the training set while still keeping the numbers of positive and negative examples equal. Table 4 lists the average lift index obtained with boosted Naive Bayes on the three datasets.
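As a minimal sketch of this resampling step, the following Python fragment oversamples the positives with replacement by a given factor and draws the same number of distinct negatives; the function name, the NumPy-array representation of the examples, and the random-seed handling are illustrative assumptions, not the implementation used in the experiments.

```python
import numpy as np

def oversample_balanced(pos, neg, factor, seed=None):
    """Build a balanced training set: positives duplicated with replacement
    to `factor` times their original count, negatives drawn without
    replacement (assumes factor * len(pos) <= len(neg))."""
    rng = np.random.default_rng(seed)
    n = factor * len(pos)
    pos_idx = rng.choice(len(pos), size=n, replace=True)   # duplicated positives
    neg_idx = rng.choice(len(neg), size=n, replace=False)  # more distinct negatives
    X = np.concatenate([pos[pos_idx], neg[neg_idx]])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y
```

For example, with 1,000 positive and 100,000 negative examples and factor=2, the resulting training set contains 2,000 (duplicated) positive examples and 2,000 distinct negative examples, so the classes remain balanced while twice as many negatives contribute information.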