In this experiment, we trained the network to autoassociate on the training set of cats.
Several pseudo-recurrent networks were tested, each with a different number of
pseudopatterns. As the number of pseudopatterns increases, the network’s internal
representation of the concept “cat” undergoes the same type of compaction observed with the
node-sharpening algorithm. The crucial difference here, however, is that no explicit
algorithm was designed to produce these representations: they arise naturally from the
continual interaction of the two areas of the memory.
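The pseudopattern mechanism described above can be illustrated with a minimal sketch. The following code is not the original simulation; it assumes a single-layer sigmoid autoassociator (the actual networks may differ in architecture, size, and training details) and shows only the core idea: random probe inputs are fed to an already-trained "final storage" area, the probe/response pairs serve as pseudopatterns, and a second "early processing" area is trained on a new pattern interleaved with those pseudopatterns, so old knowledge is transferred without any explicit representation-shaping algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(W, inputs, targets, epochs=300, lr=0.5):
    # Delta-rule training of a single-layer sigmoid network.
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = sigmoid(W @ x)
            W += lr * np.outer((t - y) * y * (1 - y), x)
    return W

def make_pseudopatterns(W, n_pseudo, dim):
    # Random binary probes plus the network's responses to them:
    # an approximation of what the network has already stored.
    xs = rng.integers(0, 2, size=(n_pseudo, dim)).astype(float)
    ys = sigmoid(xs @ W.T)
    return xs, ys

dim = 8
# "Final storage" area, trained to autoassociate previously learned patterns
# (random binary stand-ins here; the experiment used cat patterns).
old = rng.integers(0, 2, size=(3, dim)).astype(float)
W_store = train(np.zeros((dim, dim)), old, old)

# "Early processing" area learns a new pattern interleaved with
# pseudopatterns drawn from the storage area.
new = rng.integers(0, 2, size=(1, dim)).astype(float)
px, py = make_pseudopatterns(W_store, n_pseudo=16, dim=dim)
W_early = train(np.zeros((dim, dim)),
                np.vstack([new, px]), np.vstack([new, py]))

# Old patterns were never shown to the early area directly, yet its
# recall of them improves over chance (mean error 0.5 for zero weights)
# purely through the pseudopattern interaction.
recall = sigmoid(old @ W_early.T)
err = np.abs(recall - old).mean()
print(f"mean recall error on old patterns: {err:.3f}")
```

Increasing `n_pseudo` tightens the early area's approximation of the storage area's function, which is the knob varied across the networks in this experiment.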