In the third experiment, we again executed all the classification
algorithms with tenfold cross-validation, this time on the training
files rebalanced with the SMOTE algorithm and restricted to the 15
best attributes. The results of re-executing the 10 classification
algorithms are summarized in Table VI. Comparing this table with
Tables IV and V, we can observe that more than half of the algorithms
improve on all the evaluation measures, and some of them also set new
best values in almost every measure except accuracy. The algorithms
that obtain the best results are again Prism, OneR and ADTree.
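Since the experiment hinges on SMOTE rebalancing, the following is a minimal sketch of the interpolation step that SMOTE performs: each synthetic minority sample is placed at a random point on the segment between an existing minority sample and one of its k nearest minority-class neighbours. The function name `smote`, the toy data, and the parameter values are illustrative assumptions, not the authors' actual setup (which was most likely a toolkit's built-in SMOTE filter).

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize n_new samples by interpolating
    between minority samples and their k nearest minority neighbours.
    Illustrative only; not the implementation used in the experiments."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    k = min(k, n - 1)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a sample is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]      # k nearest neighbours per sample
    base = rng.integers(0, n, size=n_new)                 # seed samples
    neigh = nn[base, rng.integers(0, k, size=n_new)]      # one neighbour each
    gap = rng.random((n_new, 1))                          # interpolation fraction
    return X_min[base] + gap * (X_min[neigh] - X_min[base])

# Toy imbalanced setting: four minority points in 2-D, six synthetic samples.
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_syn = smote(X_min, n_new=6, rng=0)
print(X_syn.shape)  # (6, 2)
```

Because every synthetic point is a convex combination of two real minority samples, the oversampled class grows without simply duplicating instances, which is why SMOTE tends to help the minority-sensitive measures more than plain accuracy.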