The first score is $t_1 = \mathbf{w}_1^{\mathrm{T}}\mathbf{x}$, where $\mathbf{w}_1^{\mathrm{T}}$ is the transpose of $\mathbf{w}_1$, the vector representing the principal direction of discrimination. The representation of $\mathbf{x}$ seen through only one principal component is $\hat{\mathbf{x}} = t_1\mathbf{w}_1$. The error of this representation, $\mathbf{x}_1$, is such that $$\mathbf{x}_1 = \mathbf{x} - t_1\mathbf{w}_1 = \mathbf{x} - \mathbf{w}_1\mathbf{w}_1^{\mathrm{T}}\mathbf{x}.$$ The vector $\mathbf{x}_1$ represents the information of $\mathbf{x}$ not projected in the direction of $\mathbf{w}_1$. The breakdown of the vector $\mathbf{x}$ into $m$ components can be expressed as $$\mathbf{x} = \sum_{j=1}^{m} t_j\mathbf{w}_j + \boldsymbol{\varepsilon},$$ where $\boldsymbol{\varepsilon}$ is the residual error vector [23] for representation using $m$ principal components.
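The projection, deflation, and reconstruction steps above can be checked numerically. The sketch below uses assumed example vectors (`w1`, `w2` are hypothetical unit-norm, mutually orthogonal directions; `x` is a random sample) purely to illustrate the arithmetic:

```python
import numpy as np

# Notation from the text: t_j = w_j^T x is the score on component j,
# and the residual x_j = x - t_j w_j is the information not projected
# onto w_j.

rng = np.random.default_rng(0)
x = rng.normal(size=4)

# Hypothetical unit-norm, mutually orthogonal principal directions.
w1 = np.array([1.0, 0.0, 0.0, 0.0])
w2 = np.array([0.0, 1.0, 0.0, 0.0])

t1 = w1 @ x           # score on the first component
x1 = x - t1 * w1      # residual after removing component 1
t2 = w2 @ x1          # the second component works on the residual x1
x2 = x1 - t2 * w2     # residual after removing component 2

# Reconstruction with m = 2 components plus the residual error vector.
x_hat = t1 * w1 + t2 * w2
eps = x - x_hat
```

By construction the residual `x1` carries no projection along `w1`, and the reconstruction plus residual recovers `x` exactly.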
The first component of nonlinear discrimination may be obtained by training a three-layer network with error backpropagation, as shown in Fig. 2. The first layer is made up of only one neuron of the linear type, and the remaining neurons are of the hyperbolic tangent type. The synaptic vectors of the three layers are adjusted by the gradient-descent method, using the mean square error as the objective function [20,21]. After training, the vector $\mathbf{w}_1$ represents the principal direction of nonlinear discrimination of the pattern classes studied. A special case of the development of the remaining components is to make them orthogonal, that is, $\mathbf{w}_1 \perp \mathbf{w}_2 \perp \mathbf{w}_3 \perp \cdots \perp \mathbf{w}_m$. To do so, after obtaining $\mathbf{w}_1$ the network is retrained in the same way, but the new input used is $\mathbf{x}_1$, obtained from Eq. (5); this proceeds successively up to component $m$ (normally $m < n$). These are classed here as the principal components of independent action, since each component works with the residual $\mathbf{x}_j$, the information not used by the previous ones:
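The train-then-deflate procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, learning rate, class targets ($\pm 1$), and the explicit Gram-Schmidt step that enforces the orthogonality condition are all assumptions made here for the sketch.

```python
import numpy as np

def train_component(X, y, hidden=5, lr=0.05, epochs=500, seed=0):
    """Train the three-layer net sketched in the text: one linear input
    neuron (weight vector w), then tanh neurons, by gradient descent on
    the mean square error. Returns the learned direction w, unit-norm."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    w = rng.normal(scale=0.1, size=n)       # single linear neuron (the direction)
    V = rng.normal(scale=0.1, size=hidden)  # hidden tanh layer weights
    b = np.zeros(hidden)
    u = rng.normal(scale=0.1, size=hidden)  # output tanh neuron weights
    c = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            t = w @ x                    # linear neuron output (the score)
            h = np.tanh(V * t + b)       # hidden tanh layer
            o = np.tanh(u @ h + c)       # output tanh neuron
            # Backpropagate the squared-error gradient through the net.
            d_o = (o - target) * (1.0 - o**2)
            d_h = d_o * u * (1.0 - h**2)
            w -= lr * (d_h @ V) * x
            V -= lr * d_h * t
            b -= lr * d_h
            u -= lr * d_o * h
            c -= lr * d_o
    return w / np.linalg.norm(w)

def principal_components(X, y, m):
    """Extract m orthogonal discrimination directions by deflation: each
    new component is trained on the residual left by the previous ones."""
    Xr, ws = X.copy(), []
    for j in range(m):
        w = train_component(Xr, y, seed=j)
        for prev in ws:                      # Gram-Schmidt: enforce w_j
            w = w - (w @ prev) * prev        # orthogonal to earlier ones
        w = w / np.linalg.norm(w)
        ws.append(w)
        Xr = Xr - np.outer(Xr @ w, w)        # deflate: remove projection on w
    return np.array(ws)
```

A quick use on a toy two-class problem shows the mechanics: each call to `train_component` sees only the information the earlier components left behind, and the returned directions are unit-norm and mutually orthogonal.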