As shown in Fig. 2(a), the rival penalized mechanism
tries to push the rival far away from the cluster towards
which the winner is moving, thus implicitly producing a force
that tends to ensure that each cluster is learned by
only one weight vector. This force counterbalances the
force generated by the conscience strategy of FSCL, which
encourages both weight vectors to share one cluster. This
balancing role can be seen more clearly in Fig. 2(b).
Assuming that three weight vectors have already been brought
somewhere between two classes, the rival penalized force will
gradually drive the weight vector $\vec{w}_3$ away along a zig-zag
path as the input samples come randomly and alternately
from both classes. Similarly, we can imagine that the two
disturbing units shown in Fig. 1(c) can be driven away by this
force.
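To make the mechanism concrete, here is a minimal sketch of one rival penalized update step in Python; the winner's learning rate alpha_c, the much smaller de-learning rate alpha_r, and the FSCL-style win counts used for the conscience factor are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def rpcl_step(x, W, wins, alpha_c=0.05, alpha_r=0.002):
    """One rival penalized update: the winner learns toward the input,
    its rival is de-learned (pushed away). Rates are illustrative."""
    # Conscience factor in the FSCL spirit: frequent winners are handicapped.
    gamma = wins / wins.sum()
    scores = gamma * np.sum((W - x) ** 2, axis=1)
    order = np.argsort(scores)
    c, r = order[0], order[1]        # winner and its rival (runner-up)
    W[c] += alpha_c * (x - W[c])     # winner moves toward the input
    W[r] -= alpha_r * (x - W[r])     # rival is pushed away from the input
    wins[c] += 1
    return W, wins
```

Because alpha_r is kept much smaller than alpha_c, the de-learning acts as the gentle, persistent force described above rather than disrupting the winner's convergence.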
The key point of the rival penalized mechanism, then, is that
when a competitive learning net contains more units than there
are clusters in the input data set, the appropriate number of
units is selected automatically to represent the data set: the
extra units are gradually driven far away from the distribution
of the data. Thus the crucial problem described in Section I can
be tackled. In addition, the extra units now become spare units
that stand ready to learn new clusters if additional data are
input in the future, as the toy run sketched below illustrates.
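As a rough illustration, the following toy run reuses rpcl_step and numpy from the sketch above; the cluster centers, noise level, unit count, and iteration budget are all assumed for the example. It starts with four units and only two Gaussian clusters: two units should settle near the cluster centers while the extra units drift outside the data.

```python
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 5.0]])   # two well-separated clusters
W = rng.normal(2.5, 1.0, size=(4, 2))          # four units: two too many
wins = np.ones(4)
for _ in range(5000):
    x = centers[rng.integers(2)] + rng.normal(0.0, 0.3, size=2)
    W, wins = rpcl_step(x, W, wins)
print(W)  # expect two rows near the centers, the extras pushed outward
```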
Another important point is that the rival penalized mechanism may sometimes speed up the learning process: as shown
in Fig. 2(c), the de-learning of $\vec{w}_3$ caused by the learning of
$\vec{w}_1$ and $\vec{w}_2$ will push $\vec{w}_3$ toward its correct cluster.