This paper presents a careful analysis of arguments
for both methods. It also introduces a new feature-selection
method based on the concept of boosting from computational
learning theory, combining the advantages of filter and
wrapper methods. Like filters, it is very fast and general;
at the same time, it uses knowledge of the learning algorithm
to inform the search and to provide a natural stopping criterion.
We present empirical results using two different
wrappers and three variants of our algorithm. The experiments
use six datasets and three different learning
algorithms, namely Naive Bayes (NB), ID3 with χ²
pruning (ID3), and k-Nearest Neighbors (k-NN).
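To make the boosting connection concrete, the following is a minimal illustrative sketch (not the paper's actual algorithm) of how AdaBoost-style rounds can drive feature selection: each round fits a one-feature decision stump, keeps the feature whose stump has the lowest weighted error, and reweights the examples; selection stops naturally when no remaining stump beats chance (weighted error ≥ 1/2). The function name and stump construction are assumptions for illustration.

```python
import numpy as np

def boosting_feature_selection(X, y, max_features=10):
    """Illustrative sketch: greedy feature selection via AdaBoost-style
    rounds with one-feature decision stumps as weak learners.
    Assumes labels y are in {-1, +1}."""
    n, d = X.shape
    w = np.ones(n) / n                      # uniform example weights
    selected = []
    for _ in range(max_features):
        best_feat, best_err, best_pred = None, 0.5, None
        for j in range(d):
            if j in selected:
                continue
            # Crude stump: threshold feature j at its mean (for illustration)
            thr = X[:, j].mean()
            for sign in (1, -1):
                pred = np.where(X[:, j] > thr, sign, -sign)
                err = w[pred != y].sum()    # weighted error of this stump
                if err < best_err:
                    best_feat, best_err, best_pred = j, err, pred
        if best_feat is None:               # no stump beats chance: stop
            break
        selected.append(best_feat)
        # AdaBoost reweighting: upweight examples the stump got wrong
        alpha = 0.5 * np.log((1 - best_err) / max(best_err, 1e-12))
        w *= np.exp(-alpha * y * best_pred)
        w /= w.sum()
    return selected
```

Because the stopping test compares each stump against the current example weights rather than a fixed relevance score, it plays the role of the "natural stopping criterion" the text attributes to using knowledge of the learning algorithm.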