How do these models achieve stable, sensitive performance in the presence of new input?
All of them, in one way or another, rely on two principles: i) representational separation and
ii) explicit use of previously stored representations to influence the course of new learning.
In the case of convolution-correlation models, representational separation is achieved by
orthogonal recoding of the input; SDM achieves it through sparse coding. In either case,
representational overlap between newly arriving and previously stored patterns is reduced.
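To see why sparse coding reduces overlap, consider the following minimal sketch (illustrative only, not taken from either model's specification): for two random binary codes over n units with k active units each, the expected number of shared active units is k²/n, so lowering the activity level k sharply reduces the expected overlap.

```python
# Illustrative sketch: sparse random binary codes overlap far less than
# dense ones. Function names and parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def random_binary(n_units: int, n_active: int) -> np.ndarray:
    """Binary vector with exactly n_active randomly placed 1s."""
    v = np.zeros(n_units, dtype=int)
    v[rng.choice(n_units, size=n_active, replace=False)] = 1
    return v

def mean_overlap(n_units: int, n_active: int, trials: int = 1000) -> float:
    """Average number of units active in both of two random codes."""
    return float(np.mean([
        random_binary(n_units, n_active) @ random_binary(n_units, n_active)
        for _ in range(trials)
    ]))

n = 1000
print("dense  (50% active):", mean_overlap(n, 500))  # ~250 shared units (k^2/n)
print("sparse ( 2% active):", mean_overlap(n, 20))   # ~0.4 shared units
```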
But in order to produce the desired ability to generalize, previously stored information is
used to affect the memory trace of incoming information. For example, in both CHARM and
SDM the new input vector is “folded into” an internal representation of the previously
learned input. These two principles form the basis of the pseudo-recurrent architecture
proposed in this paper.
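The "folding in" of new input can be made concrete with a minimal sketch of the convolution-correlation scheme underlying models such as CHARM (the code below is an assumed illustration in the style of holographic memories, not the paper's implementation): each new cue/associate pair is folded by circular convolution into a single composite trace, and circular correlation with a cue later recovers a noisy copy of its associate.

```python
# Illustrative sketch of convolution-correlation storage and retrieval.
# Vector elements are drawn i.i.d. from N(0, 1/n), the usual
# normalization for holographic memories; all names are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 2048  # dimensionality; larger n gives cleaner retrieval

def convolve(a, b):
    """Circular convolution (association) computed via the FFT."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def correlate(a, c):
    """Circular correlation (retrieval) computed via the FFT."""
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(c)).real

def random_vec():
    return rng.normal(0.0, 1.0 / np.sqrt(n), size=n)

# Fold two cue/associate pairs into one composite memory trace:
# new associations are superimposed on the existing trace.
cue1, item1 = random_vec(), random_vec()
cue2, item2 = random_vec(), random_vec()
trace = convolve(cue1, item1) + convolve(cue2, item2)

# Correlating a cue with the trace yields a noisy copy of its associate.
retrieved = correlate(cue1, trace)
cos = retrieved @ item1 / (np.linalg.norm(retrieved) * np.linalg.norm(item1))
print(f"cosine(retrieved, item1) = {cos:.2f}")  # well above chance (~0)
```

Because every new pair is added to the same composite trace, the trace that shapes retrieval of a new item is itself a function of everything stored before it, which is the sense in which previously stored information influences the encoding of incoming information.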