3.2.3. Principal components of nonlinear discrimination
Fig. 6 shows the two two-dimensional graphs of the two principal nonlinear discrimination components acting independently. Fig. 6a refers to the separation of the four classes and shows that the classes UC and LP occupy well-defined regions and are consequently easily separated by nonlinear separators. The classes PO and SI present a region of confusion in the graph, with some observations of SI positioned in the PO region. This had already been detected with the linear classifiers, since the misclassifications involved these classes. In Fig. 6b, for five classes, the confusion among NLSI, LSI and PO is even greater, but this is easily explained: distinguishing the two SI classes is complicated by their similar features. At this point, the high success rates may seem questionable given the overlap of the class domains in the graph. However, the graph shows the problem in only two dimensions, whereas the classifier uses three or four dimensions to separate the classes, which increases its discrimination capacity. Since nonlinear classifiers were now used, the separation of these classes became viable: neural networks are capable of developing extremely complex separation surfaces and, as was shown, achieve high success rates on the training data. Returning to the generalization problem of the network [20], the distributions obtained for the nonlinear classifier outputs show that the success probability for data not used during training is high. Nevertheless, for the results to be even more reliable, test patterns will have to be used, which will certainly be done in future work once new patterns are acquired.
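The capacity of a nonlinear surface to separate classes that no linear boundary can split may be illustrated with the classic XOR configuration; the points and feature names below are a generic sketch, not the weld-defect data of this study:

```python
# Sketch: two classes arranged as XOR cannot be split by any single
# linear function of x1 and x2, but adding the nonlinear product term
# x1 * x2 (as a hidden unit of a neural network effectively does)
# makes them separable at score = 0.
points = [((0, 0), 0), ((1, 1), 0), ((0, 1), 1), ((1, 0), 1)]

def nonlinear_score(x1, x2):
    # Class 1 points give +0.5, class 0 points give -0.5.
    return x1 + x2 - 2 * x1 * x2 - 0.5

for (x1, x2), label in points:
    predicted = 1 if nonlinear_score(x1, x2) > 0 else 0
    assert predicted == label
```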
It should be noted that the lack of a large number of samples (with statistical representativeness), needed for greater classifier reliability, is a frequent problem in this area of research. It can also be seen in the work of Liao [13], who had a total of 147 samples covering six defect classes and therefore resorted to the bootstrap technique to produce training and test data.
Such principal-component graphs are very useful for high-dimensional systems, as in the work of Mery [26], who used 71 features to separate the defect and regular-structure classes in radioscopy of aluminum wheels.
The graph in Fig. 7 presents the performance obtained when using only the first component p1 as the input vector of the classifier, as well as with (p1 + p2) independent (two-dimensional vector) and (p1 + p2) cooperative (for both types). The results show that, using only the first component, the success rates reached 92.0% for the training data with four classes, dropping, as expected, for the more complex five-class situation (66.4%). With two components, the rates come very close to those obtained with three or four features. It was also confirmed that there is no significant difference in performance between the types of components used. These results demonstrate the efficiency of principal components in reducing the dimension of the original data while maintaining a high classification success rate. For the case studied, reducing the dimension from three or four to two was not justified. However, the results are encouraging for similar studies on larger systems, such as the work of Mery [25], because the number of calculations performed by the neural network could be significantly smaller. The same applies to the gray-level profiles of weld beads, as in the work of Liao [6,7], where the input vectors of the classifier can contain 500 or more components. These techniques for developing discrimination components are recent and are not found, at present, in other research on automatic systems for the recognition of weld defects in radiographic images.
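The dimensionality reduction discussed above can be sketched with an ordinary linear principal component analysis via the singular value decomposition; the data here are synthetic stand-ins, since the nonlinear discrimination components of this study are produced by the network itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 100 observations with 4 features, where the last
# two features are noisy copies of the first two (redundant dimensions).
base = rng.normal(size=(100, 2))
X = np.hstack([base, base + 0.05 * rng.normal(size=(100, 2))])

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the first two principal components (p1, p2), which would
# then serve as the reduced input vector of the classifier.
scores = Xc @ Vt[:2].T          # shape (100, 2)

# Fraction of total variance retained by the two components.
retained = (s[:2] ** 2).sum() / (s ** 2).sum()
assert retained > 0.95  # near-duplicate features make two components enough
```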