Sklansky and his students developed several piecewise linear discriminants based on the principle of locally opposed clusters of objects. Wassel and Sklansky [368,335] suggested a procedure for training a linear split to minimize the error probability. Using this procedure, Sklansky and Michelotti [334] developed a system to induce a piecewise linear classifier. Their method identifies the closest opposed pairs of clusters in the data and trains each linear discriminant locally. The final classifier produced by this method is a piecewise linear decision surface, not a tree. Foroutan [107] discovered that the resubstitution error rate of optimized piecewise linear classifiers is nearly monotonic with respect to the number of features. Based on this result, Foroutan and Sklansky [108] suggested an effective feature selection procedure for linear splits that uses zero-one integer programming. Park and Sklansky [280,281] describe methods to induce linear tree classifiers and piecewise linear discriminants. The main idea in these methods is to find hyperplanes that cut a maximal number of Tomek links. A Tomek link connects an opposed pair of data points (i.e., points belonging to different classes) whose circle of influence contains no other points; equivalently, the two points are each other's nearest neighbors.
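The Tomek-link criterion above can be sketched in a few lines of code. The following is a minimal illustration (the function name and the brute-force pairwise-distance approach are our own choices, not from the cited papers): it finds mutual nearest-neighbor pairs of opposite-class points, which is equivalent to the circle-of-influence definition.

```python
import numpy as np

def tomek_links(X, y):
    """Find Tomek links in a labeled data set.

    A pair (i, j) forms a Tomek link when j is i's nearest neighbor,
    i is j's nearest neighbor, and y[i] != y[j] -- i.e. no other point
    lies inside the circle of influence between the two points.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    # Pairwise Euclidean distances; mask the diagonal so a point
    # is never its own nearest neighbor.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)  # index of each point's nearest neighbor
    links = []
    for i, j in enumerate(nn):
        # i < j reports each mutual pair only once.
        if i < j and nn[j] == i and y[i] != y[j]:
            links.append((i, j))
    return links

# Small example: points 1 and 2 are mutual nearest neighbors of
# opposite classes, so they form the only Tomek link.
X = [[0.0, 0.0], [1.0, 0.0], [1.2, 0.0], [5.0, 5.0]]
y = [0, 0, 1, 1]
print(tomek_links(X, y))  # [(1, 2)]
```

A hyperplane that separates many such linked pairs cuts through the locally opposed cluster boundaries, which is what the Park and Sklansky methods exploit when placing linear splits.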