A pseudoitem is constructed by generating a new input
vector at random (setting input unit values randomly in the
range 0 to 1), and passing it forward through the network in
the standard way. Whatever output vector this input
generates becomes the associated target output. For a given
network we can construct a population of pseudoitems (input/output pairs) of any size in this way. Pseudorehearsal is the use of a population of pseudoitems, rather than the actual previously learned items themselves, in a rehearsal process. In other words, rehearsal relearns the old population data points to preserve the shape of the learned function during subsequent learning, whereas pseudorehearsal samples that function at random points to construct new data points, which are then relearned in the same way to preserve the original function.
The pseudoitems “approximate” the old population and “map
out” the function learned by the network. Relearning the
pseudoitems restricts any change in the function to be local
to the area of the new item(s) being learned. Frean and
Robins [1997] extend this account, providing a more
technical description of pseudorehearsal and a formal
analysis of a simple case for a linear network.
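To make the mechanism concrete, the following is a minimal sketch in Python (using NumPy) of pseudoitem construction and a pseudorehearsal-style training loop. The network class, the function names make_pseudoitems and learn_with_pseudorehearsal, and all hyperparameters are assumptions chosen for illustration rather than the implementations used in the studies cited; the sketch assumes a single-hidden-layer sigmoid network trained by plain backpropagation on one item at a time.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Net:
    """A small single-hidden-layer sigmoid network (illustrative only)."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)
        return sigmoid(self.h @ self.W2)

    def train_step(self, x, t):
        # One step of plain backpropagation on a single (input, target) pair.
        y = self.forward(x)
        delta_out = (y - t) * y * (1.0 - y)
        delta_hid = (delta_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * np.outer(self.h, delta_out)
        self.W1 -= self.lr * np.outer(x, delta_hid)

def make_pseudoitems(net, n_items, n_in, rng):
    # Construct pseudoitems: inputs drawn uniformly from [0, 1], with the
    # target for each input taken from the network's current output.
    inputs = rng.uniform(0.0, 1.0, size=(n_items, n_in))
    targets = np.array([net.forward(x) for x in inputs])
    return list(zip(inputs, targets))

def learn_with_pseudorehearsal(net, new_items, pseudoitems, epochs, rng):
    # Relearn the pseudoitems alongside the new items, so that changes to
    # the previously learned function stay local to the new items.
    for _ in range(epochs):
        batch = list(new_items) + list(pseudoitems)
        for i in rng.permutation(len(batch)):
            x, t = batch[i]
            net.train_step(x, t)

In this sketch the pseudoitems are generated once, before the new learning begins, and reused on every epoch; they could equally be regenerated at intervals during learning. Plain rehearsal corresponds to passing the actual old items in place of the pseudoitems.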
The effectiveness of pseudorehearsal at reducing
catastrophic forgetting has been demonstrated using a range of
populations, including: randomly constructed autoassociative
and heteroassociative data sets [Robins, 1995]; the Iris data
set [Robins, 1996]; a classification task using the Mushroom
data set [French, 1997]; and an alphanumeric character set
using a Hopfield-type network [Robins and McCallum,