are two common methods of ensembling classifiers. We use AdaBoost applied to Naive Bayes (Elkan, 1997) and to C4.5 with CF as our learning algorithms. AdaBoost maintains a sampling probability distribution over the training set and modifies this distribution after each classifier is built: the probability of examples misclassified by the previous classifier is increased, so that these examples are sampled more heavily in the next round of boosting and are more likely to be learned correctly.
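As a point of reference, the reweighting scheme just described can be sketched as follows. This is a minimal AdaBoost.M1-style sketch, not the implementation used in this work; a scikit-learn decision stump stands in for the weak learner (the actual learners here are Naive Bayes and C4.5 with CF), and the function and parameter names are illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def adaboost_m1(X, y, n_rounds=10):
    """Sketch of AdaBoost.M1 reweighting (weak learner assumed, not the paper's)."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # sampling distribution over training examples
    classifiers, betas = [], []

    for _ in range(n_rounds):
        clf = DecisionTreeClassifier(max_depth=1)
        clf.fit(X, y, sample_weight=w)   # train on the current distribution
        wrong = clf.predict(X) != y
        eps = float(np.sum(w[wrong]))    # weighted training error
        if eps == 0 or eps >= 0.5:       # standard AdaBoost.M1 stopping condition
            break
        beta = eps / (1.0 - eps)
        w[~wrong] *= beta                # shrink weights of correctly classified examples,
        w /= w.sum()                     # so misclassified ones gain sampling probability
        classifiers.append(clf)
        betas.append(beta)
    return classifiers, betas
```

At prediction time the ensemble combines the stored classifiers by weighted vote, each classifier weighted by log(1/beta), so rounds with lower weighted error contribute more to the final decision.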