This dissertation presents a family of inductive learning systems that derive general rules from specific examples. These systems combine the benefits of neural networks, ASOCS, and symbolic learning algorithms. The systems presented here learn incrementally with good speed and generalization. They are based on a parallel architectural model that adapts to the problem being learned. Learning does not require user adjustment of sensitive parameters, and noise is tolerated with graceful degradation in performance. The systems described in this work are based on features, which are subsets of the input space. One group of learning algorithms begins with general features and specializes them to match the problem being learned. Another group creates specific features and then generalizes them. The final group combines the approaches of the first two to gain the benefits of both. The algorithms have O(m log m) time complexity, where m is the number of nodes in the network and the numbers of inputs and output values are treated as constants. An enhanced network topology reduces time complexity to O(log m). Empirical results show that the algorithms generalize well and that learning converges in a small number of training passes.
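
To make the feature-based framing concrete, the following Python sketch gives one plausible reading of "features as subsets of the input space" for binary inputs, together with specialize and generalize operations. It is illustrative only: the class and method names are assumptions, and the dissertation's actual representations and operators may differ.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Feature:
        """A feature over binary inputs: each position is 0, 1, or None ('don't care').
        The feature covers every input vector that matches all constrained positions."""
        constraints: tuple  # e.g. (1, None, 0) covers 1x0, i.e. inputs 100 and 110

        def covers(self, x):
            return all(c is None or c == xi for c, xi in zip(self.constraints, x))

        def specialize(self, index, value):
            """Narrow the covered subset by constraining one more input position."""
            c = list(self.constraints)
            c[index] = value
            return Feature(tuple(c))

        def generalize(self, index):
            """Widen the covered subset by dropping the constraint on one position."""
            c = list(self.constraints)
            c[index] = None
            return Feature(tuple(c))

    # The most general feature covers the entire input space; specializing it moves
    # toward individual examples, while generalizing moves back toward the whole space.
    general = Feature((None, None, None))
    specific = general.specialize(0, 1).specialize(2, 0)
    assert specific.covers((1, 1, 0)) and not specific.covers((0, 1, 0))

Under this reading, the general-to-specific algorithms would start from something like the all-don't-care feature and constrain it as examples arrive, while the specific-to-general algorithms would start from features matching individual examples and relax constraints; the combined group would mix both directions.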