In (Gama & Brazdil, 1999) and (Gama, 1999), a decision tree learner named
LTree is introduced that computes new attributes as linear, quadratic, or logistic
discriminant functions of the existing attributes at each node; these constructed
attributes are then also passed down the tree.
The leaf nodes remain essentially majority classifiers, although the
class probability distributions along the path from the root are taken into account
via smoothing techniques.
Results indicate that this approach yields significant improvements,
comparable to those obtained by Stacking, while retaining some comprehensibility.
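The constructive-induction idea above can be illustrated with a minimal sketch (not the authors' implementation): at each node a linear discriminant of the current attributes is fitted, its score is appended as a new attribute, and the augmented data is passed down to the children; leaves predict the majority class. The Fisher direction used here is one concrete choice of linear discriminant, and the path-based smoothing of class distributions is omitted for brevity.

```python
import numpy as np

class Node:
    def __init__(self):
        self.w = None          # discriminant weights fitted at this node
        self.feat = None       # index of the attribute used for the split
        self.thresh = None     # split threshold
        self.left = self.right = None
        self.label = None      # majority class, used at leaves

def fisher_direction(X, y):
    """Two-class Fisher discriminant direction (lightly regularised)."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
    Sw += 1e-6 * np.eye(X.shape[1])
    return np.linalg.solve(Sw, m1 - m0)

def build(X, y, depth=0, max_depth=3, min_size=5):
    node = Node()
    node.label = int(np.bincount(y).argmax())
    if depth == max_depth or len(y) < min_size or len(set(y)) == 1:
        return node                          # leaf: majority classifier
    # construct a new attribute: projection onto the Fisher direction
    node.w = fisher_direction(X, y)
    score = X @ node.w
    Xa = np.column_stack([X, score])         # augmented data, passed down
    node.feat = Xa.shape[1] - 1              # split on the new attribute
    node.thresh = (score[y == 0].mean() + score[y == 1].mean()) / 2
    mask = Xa[:, node.feat] <= node.thresh
    if mask.all() or (~mask).all():          # degenerate split: stay a leaf
        node.w = None
        return node
    node.left = build(Xa[mask], y[mask], depth + 1, max_depth, min_size)
    node.right = build(Xa[~mask], y[~mask], depth + 1, max_depth, min_size)
    return node

def predict_one(node, x):
    while node.left is not None:
        x = np.append(x, x @ node.w)         # recompute constructed attribute
        node = node.left if x[node.feat] <= node.thresh else node.right
    return node.label
```

Because each internal node appends its discriminant score before splitting, deeper nodes can build discriminants over both the original and the constructed attributes, which is the essence of passing the new attributes down the tree.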