• Type I: after obtaining $w_1$, the second component is found by adding a second linear neuron to the first layer, keeping $w_1$ fixed during training and training only $w_2$ and the weights of the other layers. In this case the input continues to be $x$, since the components are not independent. The objective is to find the second discrimination component best suited to working in cooperation with the first. The same procedure is used to obtain the third, fourth and $m$th components, keeping $w_2, w_3, \ldots, w_{m-1}$ fixed as well. One way to accelerate the process is to use $x$ as the input of the first neuron and $x_{j-1}$ as the input of the $j$th linear neuron of the first layer. This is possible because, at the input of each neuron of the hyperbolic tangent type in the second layer, the excitation is a linear combination of the inputs of the network.
• Type II: the two components can also be obtained by training them simultaneously, that is to say, the two cooperate with each other during network training. The same can be done for 3, 4 or $m$ components in a similar way. In this case, a basis is formed for the reduced input space, optimized for classification. A minimal sketch of both training schemes is given after this list.
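As an illustrative sketch only (the paper does not specify an implementation), the two schemes could be contrasted as follows, assuming a Fig. 2 style network: a first layer of linear component neurons, a hyperbolic-tangent second layer and a linear output layer. The framework (PyTorch), the layer sizes, the toy data and all identifiers are assumptions.

```python
# Hypothetical sketch of the Type I and Type II training schemes.
import torch
import torch.nn as nn

class PCDNet(nn.Module):
    def __init__(self, n_in, m, n_hidden, n_classes):
        super().__init__()
        # One linear neuron per component, so each weight vector w_j
        # can be frozen independently (needed for Type I).
        self.w = nn.ModuleList(nn.Linear(n_in, 1, bias=False) for _ in range(m))
        self.hidden = nn.Linear(m, n_hidden)        # tanh second layer
        self.out = nn.Linear(n_hidden, n_classes)   # output layer

    def forward(self, x):
        p = torch.cat([wj(x) for wj in self.w], dim=1)  # components p_1..p_m
        return self.out(torch.tanh(self.hidden(p)))

def train(net, x, y, epochs=200, lr=0.05):
    # Error backpropagation with a momentum term; only unfrozen weights move.
    params = [p for p in net.parameters() if p.requires_grad]
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

x = torch.randn(200, 10)         # toy patterns, 10-dimensional input
y = torch.randint(0, 4, (200,))  # four pattern classes

# Type I: obtain w_1 with a one-component network, copy it into a
# two-component network, freeze it, and train only w_2 and the other layers.
net1 = PCDNet(10, m=1, n_hidden=8, n_classes=4)
train(net1, x, y)
net2 = PCDNet(10, m=2, n_hidden=8, n_classes=4)
net2.w[0].load_state_dict(net1.w[0].state_dict())
net2.w[0].weight.requires_grad_(False)   # w_1 stays fixed
train(net2, x, y)

# Type II: train both components simultaneously so they cooperate.
net_joint = PCDNet(10, m=2, n_hidden=8, n_classes=4)
train(net_joint, x, y)
```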
In a data set with many dimensions it is difficult to visualize the class-separation problem; however, through the use of two principal nonlinear discrimination components it is possible to obtain an excellent view of the layout of the pattern classes. In this way, the two PCDs obtained by independent action were used to produce the separation graphs of four and five classes treated together. These components were obtained from a neural network as shown in Fig. 2, trained using error backpropagation with momentum and a variable learning rate [20,21].
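Continuing the hypothetical sketch above, such a separation graph could be produced by projecting the patterns onto the two trained components and plotting one colour per class; the extraction and plotting code below is an assumption, not the authors' procedure.

```python
# Scatter plot of the two component values, one colour per pattern class.
import matplotlib.pyplot as plt

with torch.no_grad():
    p = torch.cat([wj(x) for wj in net_joint.w], dim=1)  # (N, 2) component values

for c in range(4):
    mask = y == c
    plt.scatter(p[mask, 0], p[mask, 1], s=10, label=f"class {c}")
plt.xlabel("$p_1$")
plt.ylabel("$p_2$")
plt.legend()
plt.show()
```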
The component $p_1$, as well as the pair $(p_1 + p_2)$ obtained by independent action and the two components obtained by the two types of training with cooperative action, were used as nonlinear classifier input vectors to evaluate performance.
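To make this comparison concrete, a sketch is given below; the classifier architecture and the training-set accuracy used here are illustrative assumptions, not the experiment reported in the paper.

```python
# Feed each set of extracted components into a small nonlinear classifier
# and compare accuracies (on the training data, for illustration only).
def components(net, x):
    with torch.no_grad():
        return torch.cat([wj(x) for wj in net.w], dim=1)

def evaluate(feats, y, epochs=300):
    clf = nn.Sequential(nn.Linear(feats.shape[1], 8), nn.Tanh(), nn.Linear(8, 4))
    opt = torch.optim.SGD(clf.parameters(), lr=0.05, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(clf(feats), y).backward()
        opt.step()
    return (clf(feats).argmax(dim=1) == y).float().mean().item()

for name, net in [("Type I", net2), ("Type II", net_joint)]:
    print(f"{name} components: accuracy = {evaluate(components(net, x), y):.2f}")
```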