Catastrophic forgetting is a fundamental consequence of a
plastic representing medium, and most ANNs are vulnerable
to this problem. Consequently, most learning algorithms
require that all information of interest be learned concurrently. By
constraining changes to the function to be local to the new
item(s), pseudorehearsal provides a useful solution to this
problem that allows new information to be learned at any
time (without needing access to old information to rehearse).
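The mechanism can be sketched in a few lines: random "pseudoitems" are generated by probing the current network with random inputs and taking the network's own outputs as targets, and new items are then trained alongside these pseudoitems so that the learned function is preserved away from the new items. The network architecture, learning rate, and toy target function below are illustrative assumptions, not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in=4, n_hidden=8, n_out=1):
    # Small two-layer network with tanh hidden units.
    return {"W1": rng.normal(0, 0.5, (n_in, n_hidden)),
            "W2": rng.normal(0, 0.5, (n_hidden, n_out))}

def forward(net, X):
    h = np.tanh(X @ net["W1"])
    return h @ net["W2"], h

def train(net, X, Y, epochs=200, lr=0.05):
    # Plain batch gradient descent on squared error.
    for _ in range(epochs):
        out, h = forward(net, X)
        err = out - Y
        net["W2"] -= lr * h.T @ err / len(X)
        dh = (err @ net["W2"].T) * (1 - h ** 2)
        net["W1"] -= lr * X.T @ dh / len(X)
    return net

def pseudorehearsal_items(net, n_items=32, n_in=4):
    # Random inputs; the network's OWN current outputs become the
    # targets, approximating the function the network represents.
    Xp = rng.uniform(-1, 1, (n_items, n_in))
    Yp, _ = forward(net, Xp)
    return Xp, Yp

# Learn an initial population of items (toy target function).
X_old = rng.uniform(-1, 1, (20, 4))
Y_old = np.sin(X_old.sum(axis=1, keepdims=True))
net = train(make_net(), X_old, Y_old)

# Learn a new item at a later time: no access to X_old/Y_old is
# needed, only pseudoitems sampled from the current network.
X_new = rng.uniform(-1, 1, (1, 4))
Y_new = np.sin(X_new.sum(axis=1, keepdims=True))
Xp, Yp = pseudorehearsal_items(net)
net = train(net, np.vstack([X_new, Xp]), np.vstack([Y_new, Yp]))
```

Because the pseudoitems pin down the network's existing input-output function at randomly sampled points, gradient descent on the mixed batch is pushed toward solutions that accommodate the new item while changing the function as little as possible elsewhere.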
We have previously claimed [Robins, 1996] that, unlike
other proposed solutions to the catastrophic forgetting problem,
pseudorehearsal has no detrimental effect on the ability of
the network to generalise. That claim, however, was based on large
populations, where generalisation performance is generally good. In this
paper we have explored generalisation in more detail,
considering a range of population sizes. We have shown that
localising changes to the function does in fact have a slightly
detrimental impact on generalisation, especially for small
populations. It remains to be seen whether this proves to be a
significant problem in practical applications.