This paper addresses an important concern about the consolidation of information in
long-term memory. A number of authors have shown that many current
connectionist models of memory suffer from catastrophic forgetting. To avoid this
problem, connectionist networks in general, and backpropagation networks in particular, must
be trained in a highly implausible manner. When new patterns must be incorporated into an
already-trained network, all of the previously learned patterns must be available and must be
re-presented to the network along with the new patterns to be learned. If this is not done, the
previously learned patterns may be overwritten completely by the new patterns, resulting in
catastrophic forgetting of the old patterns. It is unrealistic to suppose that we can continually
refresh long-term memory with previously learned patterns. In many instances, these patterns
simply are not available or, if they are available, they are only encountered very occasionally.
(Think about how long it has been since you last saw a live giraffe; even so, you would
recognize one without a moment’s hesitation.) Yet in spite of this lack of memory
refreshing from actual instances in the world, we are able to maintain memory traces of these
instances, or at least prototypes of them. The long and the short of it is that we
humans can do sequential learning; connectionist models cannot. And unless the problem of
catastrophic forgetting in connectionist models is overcome, true sequential learning will
remain impossible.
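
To make the failure mode concrete, here is a minimal sketch, assuming a small one-hidden-layer backpropagation network trained on toy random binary associations (all names, sizes, and patterns below are illustrative assumptions, not taken from any particular study). Trained on a new pattern set alone, the network's error on the old set typically collapses; interleaving the old patterns with the new ones preserves it.

```python
# Illustrative sketch of catastrophic forgetting in a tiny
# backpropagation network (sigmoid units, squared-error loss).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Net:
    """One-hidden-layer network trained with plain backpropagation."""
    def __init__(self, n_in=8, n_hid=16, n_out=8, lr=0.5):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0.0, 0.5, (n_hid, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)       # hidden activations
        return sigmoid(self.h @ self.W2)

    def train_step(self, x, t):
        y = self.forward(x)
        dy = (y - t) * y * (1.0 - y)                     # output deltas
        dh = (dy @ self.W2.T) * self.h * (1.0 - self.h)  # hidden deltas
        self.W2 -= self.lr * np.outer(self.h, dy)
        self.W1 -= self.lr * np.outer(x, dh)

def train(net, patterns, epochs=2000):
    for _ in range(epochs):
        for x, t in patterns:
            net.train_step(x, t)

def mse(net, patterns):
    return float(np.mean([np.mean((net.forward(x) - t) ** 2)
                          for x, t in patterns]))

def random_patterns(n, dim=8):
    return [(rng.integers(0, 2, dim).astype(float),
             rng.integers(0, 2, dim).astype(float)) for _ in range(n)]

old, new = random_patterns(5), random_patterns(5)

net = Net()
train(net, old)
print("old-set error after learning old set:    ", mse(net, old))

train(net, new)                   # sequential learning: new set only
print("old-set error after learning new set:    ", mse(net, old))

net2 = Net()
train(net2, old)
train(net2, old + new)            # rehearsal: old patterns re-presented
print("old-set error with interleaved rehearsal:", mse(net2, old))
```

The third run corresponds to the implausible training regime described above: all of the previously learned patterns must remain available and be re-presented alongside the new ones for the network to retain them.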