Ideally the representations developed by a learning system
should be stable enough to preserve important information
over time, but plastic enough to incorporate new information
when necessary. The use of variable connection weights as a
medium for encoding information leads most ANNs to err on
the side of excessive plasticity – new learning changes the
weights and thus disrupts any old information (patterns
previously learned by the network). Grossberg [1987]
suggests the analogy of a human trained to recognise the
word “cat”, and subsequently to recognise the word “table”,
being then unable to recognise “cat”. This effect has been
identified in many guises in the ANN literature under
headings such as catastrophic forgetting, catastrophic
interference, or the serial learning problem. It is because of
catastrophic forgetting that most ANN learning algorithms rely
on “concurrent” learning: the whole population of interest is
presented and trained as a single, complete entity. Training is
then “finished” and no further information is learned by the
network.
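The interference described above can be reproduced in miniature. The sketch below (an illustration of the phenomenon, not any specific model from the literature) trains a single sigmoid unit by gradient descent on one binary pattern standing in for “cat”, then trains it only on an overlapping “table” pattern; because both patterns are hypothetical encodings sharing a feature, the second phase rewrites a weight the first phase relied on, and performance on “cat” degrades:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, x, y, lr=1.0, epochs=500):
    """Plain gradient descent on the cross-entropy loss for one pattern."""
    for _ in range(epochs):
        w = w + lr * x * (y - sigmoid(w @ x))
    return w

# Hypothetical binary encodings: the two "words" share feature 0,
# so learning the second pattern disturbs a weight the first uses.
x_cat   = np.array([1.0, 1.0, 0.0]);  y_cat   = 1.0
x_table = np.array([1.0, 0.0, 1.0]);  y_table = 0.0

w = np.zeros(3)
w = train(w, x_cat, y_cat)             # phase 1: learn "cat"
err_before = abs(sigmoid(w @ x_cat) - y_cat)

w = train(w, x_table, y_table)         # phase 2: learn "table" alone
err_after = abs(sigmoid(w @ x_cat) - y_cat)

print(err_before, err_after)           # error on "cat" grows in phase 2
```

Concurrent learning avoids this by interleaving both patterns in every training pass, at the cost of requiring the whole training population up front.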