Abstract. Removing irrelevant or redundant attributes helps us make decisions and analyze data efficiently. Feature selection is one of the most important and frequently used techniques in data preprocessing for data mining. In this paper, special attention is paid to feature selection for classification with labeled data. An algorithm is used that ranks attributes by their importance according to two independent criteria. The ranked attributes are then used as input to a simple and powerful decision-tree construction algorithm (oblivious tree). Results indicate that a decision tree built from the features selected by the proposed algorithm outperforms a decision tree built without feature selection. The experimental results show that this method generates a smaller tree with acceptable accuracy.
Keywords: Decision Tree, Feature Selection, Classification Rules, Oblivious Tree
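The abstract does not name the two ranking criteria, so the following is only a minimal sketch of the ranking step under assumptions: information gain is used as one criterion (a common choice for decision trees), the `criteria` list and the rank-averaging combination are hypothetical stand-ins for whatever the paper's two independent criteria and combination rule actually are.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def info_gain(rows, labels, attr_index):
    """Information gain of attribute attr_index (one assumed criterion)."""
    base = entropy(labels)
    # Partition labels by the attribute's value.
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(y)
    remainder = sum(len(g) / len(rows) * entropy(g) for g in groups.values())
    return base - remainder

def rank_attributes(rows, labels, criteria):
    """Rank attributes by combining the ranks from several criteria.

    Averaging ranks is a hypothetical combination rule, not necessarily
    the one used in the paper.
    """
    n_attrs = len(rows[0])
    rank_lists = []
    for crit in criteria:
        scores = [crit(rows, labels, i) for i in range(n_attrs)]
        order = sorted(range(n_attrs), key=lambda i: -scores[i])
        ranks = [0] * n_attrs
        for pos, i in enumerate(order):
            ranks[i] = pos
        rank_lists.append(ranks)
    combined = [sum(r[i] for r in rank_lists) for i in range(n_attrs)]
    # Best (lowest combined rank) attributes first.
    return sorted(range(n_attrs), key=lambda i: combined[i])
```

The returned ordering can then be fed to an oblivious-tree builder, which tests one attribute per level; ranking the attributes up front decides which attribute occupies each level.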