In summary, to avoid severe interference in long-term memory, new input must be
mixed with some approximation of the originally learned patterns. The better the
approximation, the less the interference. In the ideal case where the originally learned
patterns are still available for interleaved presentation, forgetting is eliminated altogether. As
we have seen, sometimes the original patterns are available (as in the case of “car” or “child”)
but, more often, internally generated approximations of the original patterns must suffice.
This paper presents an implementation of a long-term memory model that uses internally
generated pseudopatterns as the means of mixing old and new information. This model is
shown to be capable of effective sequential pattern learning and produces gradual forgetting
rather than catastrophic forgetting. The use of pseudopatterns to alleviate catastrophic
forgetting in connectionist networks was first proposed by Robins (1995), and their
plausibility has been further explored in Frean & Robins (1996) and Robins (1996).
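To make the mechanism concrete, the following is a minimal sketch of pseudopattern rehearsal under stated assumptions: the one-hidden-layer network, its dimensions, the plain backpropagation training loop, and the ratio of pseudopatterns to new patterns are all illustrative choices, not the architecture implemented in this paper. Pseudopatterns are formed by feeding random inputs through the already-trained network and recording its outputs; these input-output pairs approximate the stored function and are interleaved with the new patterns during further training.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SimpleNet:
    """A one-hidden-layer network trained by plain backpropagation
    (illustrative; not the model described in the paper)."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    def forward(self, x):
        h = sigmoid(x @ self.W1)
        return h, sigmoid(h @ self.W2)

    def train(self, X, Y, epochs=500, lr=0.5):
        for _ in range(epochs):
            h, y = self.forward(X)
            err = y - Y                          # squared-error gradient
            d2 = err * y * (1 - y)               # output-layer delta
            d1 = (d2 @ self.W2.T) * h * (1 - h)  # hidden-layer delta
            self.W2 -= lr * h.T @ d2
            self.W1 -= lr * X.T @ d1

def make_pseudopatterns(net, n, n_in):
    """Generate pseudopatterns: random binary inputs paired with the
    trained network's own outputs, approximating the stored function."""
    X = rng.integers(0, 2, (n, n_in)).astype(float)
    _, Y = net.forward(X)
    return X, Y

# Learn an initial set of (randomly generated, illustrative) patterns.
X_old = rng.integers(0, 2, (8, 10)).astype(float)
Y_old = rng.integers(0, 2, (8, 5)).astype(float)
net = SimpleNet(10, 20, 5)
net.train(X_old, Y_old)

# Before learning new patterns, sample pseudopatterns from the network
# and interleave them with the new input so the old function is rehearsed.
X_new = rng.integers(0, 2, (4, 10)).astype(float)
Y_new = rng.integers(0, 2, (4, 5)).astype(float)
X_pseudo, Y_pseudo = make_pseudopatterns(net, 32, 10)
net.train(np.vstack([X_new, X_pseudo]), np.vstack([Y_new, Y_pseudo]))
```

Because the pseudopatterns only approximate the originally learned mapping, performance on the old patterns in a sketch like this degrades gradually rather than collapsing, which is the qualitative behavior the model presented here is shown to exhibit.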