When Hebbian learning was used in a feedforward network, the direction of the weight vector converged to the eigenvector corresponding to the largest eigenvalue of the autocorrelation matrix of the input data, although its magnitude grew without bound. When Oja's or Sanger's rule, extensions of the basic Hebb's rule, were used instead, the weights remained bounded and the network was able to extract the top K eigenvectors of the autocorrelation matrix of the input data.
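The extraction of the top K eigenvectors can be sketched with Sanger's rule (the generalized Hebbian algorithm). The sketch below uses hypothetical toy data with a diagonal covariance, so the true eigenvectors are known in advance; the dimensions, learning rate, and epoch count are illustrative choices, not prescribed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: zero-mean Gaussian with a known diagonal covariance, so the
# eigenvectors of the autocorrelation matrix are the coordinate axes.
variances = np.array([5.0, 3.0, 1.0, 0.5, 0.2])
n_samples, n_features, K = 5000, 5, 2
X = rng.normal(size=(n_samples, n_features)) * np.sqrt(variances)

# Sanger's rule (generalized Hebbian algorithm):
#   dW = eta * (y x^T - LT[y y^T] W),  with y = W x,
# where LT[.] keeps the lower triangle. The rows of W converge to the
# eigenvectors of the input autocorrelation matrix, ordered by
# decreasing eigenvalue.
W = rng.normal(scale=0.1, size=(K, n_features))
eta = 1e-3
for epoch in range(20):
    for x in X:
        y = W @ x
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Compare against the top-K eigenvectors of the empirical
# autocorrelation matrix.
C = X.T @ X / n_samples
eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, ::-1][:, :K]  # columns in decreasing eigenvalue order

for k in range(K):
    w = W[k] / np.linalg.norm(W[k])
    print(f"component {k}: |cosine with eigenvector {k}| = {abs(w @ top[:, k]):.3f}")
```

With only the lower-triangular term changed to a full normalization by `y @ y`, the single-output (K = 1) case reduces to Oja's rule, which recovers just the first principal component.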