The perceptron learning algorithm is an example of supervised learning. This
kind of approach does not seem very plausible from the biologist’s point of
view, since a teacher is needed to accept or reject the output and adjust
the network weights if necessary. Some researchers have proposed alternative
learning methods in which the network parameters are determined as a result
of a self-organizing process. In unsupervised learning, corrections to the network
weights are not performed by an external agent, because in many cases
we do not even know what solution we should expect from the network. The
network itself decides what output is best for a given input and reorganizes
accordingly.
We will make a distinction between two classes of unsupervised learning:
reinforcement and competitive learning. In the first method, each input produces
a reinforcement of the network weights in such a way as to enhance the
reproduction of the desired output. Hebbian learning is an example of a reinforcement
rule that can be applied in this case. In competitive learning, the
elements of the network compete with each other for the “right” to provide the
output associated with an input vector. Only one element is allowed to answer
the query, and this element simultaneously inhibits all other competitors.
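The contrast between the two rules can be sketched in a few lines of NumPy. The following is a minimal illustration, not a definitive implementation: the function names, the learning rate, and the use of Euclidean distance to pick the winner are our own assumptions for the sake of the example. The Hebbian step strengthens weights in proportion to the correlation of input and output, while the competitive step lets a single winning unit move toward the input and leaves all others unchanged.

```python
import numpy as np

def hebbian_step(w, x, lr=0.1):
    """Reinforcement-style update: strengthen the weight vector in
    proportion to the correlation between input x and output y = w.x.
    (Illustrative form of the Hebbian rule; lr is an assumed constant.)"""
    y = w @ x
    return w + lr * y * x

def competitive_step(weights, x, lr=0.1):
    """Winner-take-all update: the unit whose weight vector lies closest
    to the input x wins and moves toward x; every other unit stays put."""
    # distance from each unit's weight vector to the input
    distances = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(distances))  # only one element answers the query
    weights[winner] += lr * (x - weights[winner])
    return winner

# Toy usage: two competing units, two-dimensional input.
weights = np.array([[0.0, 0.0],
                    [1.0, 1.0]])
x = np.array([0.9, 1.0])
competitive_step(weights, x, lr=0.5)  # unit 1 wins and moves toward x
```

Repeating the competitive step over a stream of inputs makes each winning unit drift toward the center of the input region it responds to, which is the self-organizing behavior developed in the rest of this chapter.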
This chapter deals with competitive learning. We will show that we can
conceive of this learning method as a generalization of the linear separation
methods discussed in the previous two chapters.