Connectionist networks and lookup tables lie at opposite ends of the stability-sensitivity
spectrum. While the latter are completely stable in the presence of new information, they
lack the crucial ability to generalize on new input or to function effectively with degraded
input. By contrast, standard backpropagation networks are extremely sensitive to new input
because of their highly distributed, overlapping internal representations. Internal
representations of this kind are responsible for these networks’ much-touted ability to
generalize on previously unseen input, but they are also responsible for the radical loss of old
information when these networks learn new information.
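The trade-off can be made concrete with a toy experiment. The sketch below is a hypothetical illustration, not drawn from any particular study: the network size, learning rate, and pattern sets are all assumptions. A small backpropagation network is trained to low error on one set of random binary associations, then trained only on a second, disjoint set; measuring its error on the first set again shows how sequential learning over shared, overlapping weights erodes the old associations.

```python
# Minimal NumPy sketch of catastrophic interference in a standard
# backpropagation network (architecture and hyperparameters are
# illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """One-hidden-layer network trained with plain online backpropagation."""
    def __init__(self, n_in, n_hid, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0, 0.5, (n_hid, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)
        self.y = sigmoid(self.h @ self.W2)
        return self.y

    def train_pattern(self, x, t):
        y = self.forward(x)
        # Deltas for squared-error loss with sigmoid units.
        d_out = (y - t) * y * (1 - y)
        d_hid = (d_out @ self.W2.T) * self.h * (1 - self.h)
        # Every pattern adjusts the same shared weights, so new learning
        # overwrites the traces that encoded earlier patterns.
        self.W2 -= self.lr * np.outer(self.h, d_out)
        self.W1 -= self.lr * np.outer(x, d_hid)

def train(net, patterns, epochs=2000):
    for _ in range(epochs):
        for x, t in patterns:
            net.train_pattern(x, t)

def mean_error(net, patterns):
    return np.mean([np.mean((net.forward(x) - t) ** 2) for x, t in patterns])

# Two disjoint sets of random binary associations ("old" and "new" knowledge).
n_in, n_out = 8, 4
task_a = [(rng.integers(0, 2, n_in).astype(float),
           rng.integers(0, 2, n_out).astype(float)) for _ in range(5)]
task_b = [(rng.integers(0, 2, n_in).astype(float),
           rng.integers(0, 2, n_out).astype(float)) for _ in range(5)]

net = MLP(n_in, 16, n_out)
train(net, task_a)
print(f"error on old patterns after learning them:      {mean_error(net, task_a):.4f}")
train(net, task_b)
print(f"error on old patterns after learning new ones:  {mean_error(net, task_a):.4f}")
```

A lookup table subjected to the same regime would show no such loss: adding new entries leaves old entries untouched, but at the cost of any ability to respond sensibly to inputs it has never stored.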