Evaluation
We evaluated decision trees by their success rate, defined as the ratio of correctly classified methods to the total number of methods [16]. Evaluating a decision tree (or any machine learning classifier) requires partitioning the data set into two disjoint subsets, one for training and one for testing; cross-validation repeats this train-test process over several such partitions and reports the average success rate. Under this procedure, decision trees predicted high or low coverage with success rates of 82% to 94%.
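As an illustration of this evaluation procedure (not the authors' implementation), the following sketch estimates a decision tree's success rate with 10-fold cross-validation using scikit-learn; the feature matrix X and the high/low coverage labels y are hypothetical placeholders.

```python
# Sketch: success rate of a decision tree via k-fold cross-validation.
# X and y below are synthetic stand-ins for per-method features and
# high/low coverage labels; they are not from the study's data set.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 5))                     # hypothetical per-method feature vectors
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # hypothetical high (1) / low (0) coverage labels

clf = DecisionTreeClassifier(random_state=0)

# Each fold trains on the remaining folds and tests on the held-out fold;
# "accuracy" is the fraction of correctly classified methods, i.e. the success rate.
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"mean success rate: {scores.mean():.2%} (std: {scores.std():.2%})")
```

The reported figure is the mean accuracy across the held-out folds, which corresponds to the averaged success rate described above.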