Several of the algorithms also prune subtrees of the generated decision tree
to reduce overfitting: A subtree is overfitted if it has been so highly tuned to the
specifics of the training data that it makes many classification errors on other
data. A subtree is pruned by replacing it with a leaf node. There are different
pruning heuristics; one heuristic uses part of the training data to build the tree
and reserves the rest as a test set. It prunes a subtree if it finds that
misclassification of the test instances would be reduced if the subtree were
replaced by a leaf node.
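This heuristic (commonly known as reduced-error pruning) can be sketched as follows. The `Node` class, the categorical feature encoding, and the bottom-up traversal are illustrative assumptions, not a specific implementation from the text; the leaf that replaces a subtree predicts that subtree's majority class.

```python
class Node:
    """A decision tree node: a leaf if `feature` is None, otherwise an
    internal node that splits on `feature`, with children keyed by value."""
    def __init__(self, prediction, feature=None, children=None):
        self.prediction = prediction    # majority class of the training data at this node
        self.feature = feature          # index of the feature tested here (None for a leaf)
        self.children = children or {}  # feature value -> child Node

    def is_leaf(self):
        return self.feature is None

def classify(node, x):
    """Follow the splits; fall back to the majority class for unseen values."""
    while not node.is_leaf():
        child = node.children.get(x[node.feature])
        if child is None:
            break
        node = child
    return node.prediction

def errors(node, test_set):
    """Number of (x, label) pairs in test_set that the tree misclassifies."""
    return sum(1 for x, y in test_set if classify(node, x) != y)

def prune(node, test_set):
    """Bottom-up reduced-error pruning: replace a subtree with a leaf
    predicting its majority class whenever doing so reduces the number
    of misclassified test instances."""
    if node.is_leaf():
        return node
    # First prune the children, each on the test instances that reach it.
    for value, child in node.children.items():
        subset = [(x, y) for x, y in test_set if x[node.feature] == value]
        node.children[value] = prune(child, subset)
    leaf = Node(prediction=node.prediction)
    if errors(leaf, test_set) < errors(node, test_set):
        return leaf
    return node

# An overfitted tree (hand-built for illustration): the split on
# feature 1 fits noise in the training data.
tree = Node('A', feature=0, children={
    0: Node('A'),
    1: Node('A', feature=1, children={0: Node('A'), 1: Node('B')}),
})
test_set = [((0, 0), 'A'), ((1, 0), 'A'), ((1, 1), 'A')]

pruned = prune(tree, test_set)
print(pruned.children[1].is_leaf())   # True: the noisy subtree became a leaf
print(errors(pruned, test_set))       # 0
```

Pruning the children before testing the parent matters: a subtree that looks worth keeping may become redundant once its own overfitted branches have been collapsed.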