Principal component analysis (PCA) is a well-known technique for reducing the dimensionality of multivariate data sets. Originally [20], it was a linear technique that maps multidimensional data into a lower-dimensional space while minimizing the loss of information. Being a linear mapping, however, it is inadequate for many engineering problems, which are nonlinear; moreover, the minor components may still carry important information and therefore should not be discarded [23]. Consequently, nonlinear principal component analysis [23] was adopted.
One way to extract principal components is with artificial neural networks, as described by Haykin [20] for the autoassociative case. Such networks can be used both to reduce the dimensionality of the representation and to discriminate between pattern classes.
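The autoassociative idea can be sketched as a small autoencoder trained with backpropagation: a nonlinear bottleneck of size m learns a reduced representation of n-dimensional patterns. This is only a minimal illustration of the principle; the layer sizes, activations, learning rate, and synthetic data below are assumptions, not values from the present work.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, m, hidden=8, lr=0.1, epochs=2000):
    """Train an n -> hidden -> m -> hidden -> n autoassociative network
    with plain gradient descent on the mean squared reconstruction error."""
    n = X.shape[1]
    W1 = rng.normal(0, 0.5, (n, hidden));  b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, m));  b2 = np.zeros(m)
    W3 = rng.normal(0, 0.5, (m, hidden));  b3 = np.zeros(hidden)
    W4 = rng.normal(0, 0.5, (hidden, n));  b4 = np.zeros(n)
    for _ in range(epochs):
        # forward pass (tanh hidden layers, linear output)
        h1 = np.tanh(X @ W1 + b1)
        z  = np.tanh(h1 @ W2 + b2)      # bottleneck: nonlinear components
        h2 = np.tanh(z @ W3 + b3)
        y  = h2 @ W4 + b4
        # backward pass: error backpropagation through the tanh layers
        d4 = (y - X) / len(X)
        d3 = (d4 @ W4.T) * (1 - h2**2)
        d2 = (d3 @ W3.T) * (1 - z**2)
        d1 = (d2 @ W2.T) * (1 - h1**2)
        W4 -= lr * h2.T @ d4;  b4 -= lr * d4.sum(0)
        W3 -= lr * z.T  @ d3;  b3 -= lr * d3.sum(0)
        W2 -= lr * h1.T @ d2;  b2 -= lr * d2.sum(0)
        W1 -= lr * X.T  @ d1;  b1 -= lr * d1.sum(0)
    # the encoder half maps a pattern to its reduced representation
    return lambda A: np.tanh(np.tanh(A @ W1 + b1) @ W2 + b2)

# toy data: 4 correlated features lying near a curved 2-D manifold
t = rng.uniform(-1, 1, (200, 2))
X = np.c_[t[:, 0], t[:, 1], t[:, 0]**2, t[:, 0] * t[:, 1]]
X += rng.normal(0, 0.01, X.shape)

encode = train_autoencoder(X, m=2)
Z = encode(X)
print(Z.shape)  # each pattern reduced to two nonlinear components
```

The bottleneck activations z play the role of the nonlinear principal components: the decoder half forces them to retain enough information to reconstruct the input.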
In the present work, nonlinear principal components were used for discrimination, implemented with neural networks trained by the error backpropagation algorithm. Three forms of extracting the principal components were studied: components that act independently, and components that act cooperatively, the latter being divided into two types.
Since the initial input vector consists of four components (four defect features), the two independent nonlinear discriminant principal components were used to visualize, in two dimensions, the problem of separating the defect classes. The principal components obtained with the three forms were also used as inputs to the nonlinear classifier, in order to evaluate its performance in the reduced dimension. The methodology developed is described below.
Consider a pattern classification system whose input is a vector x of size n. To reduce the input to a vector z of size m, with m < n, containing only the most relevant information in the data set, the reduction is represented as follows:
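As a point of comparison for the nonlinear case, the reduction x (size n) to z (size m < n) can be illustrated with ordinary linear PCA. The choice of n = 4 and m = 2 below mirrors the four defect features and the two components used here, but the data is synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))            # 100 patterns, n = 4 features
Xc = X - X.mean(axis=0)                  # center the data

# eigen-decomposition of the sample covariance matrix
eigval, eigvec = np.linalg.eigh(Xc.T @ Xc / len(Xc))
order = np.argsort(eigval)[::-1]         # sort components by variance
W = eigvec[:, order[:2]]                 # keep the m = 2 leading directions

Z = Xc @ W                               # reduced representation, size m = 2
print(Z.shape)
```

The linear map W is exactly what the neural-network approach generalizes: there the projection is replaced by a nonlinear function learned with backpropagation.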