minimum of error with the training data for 17/18 neurons in
the intermediate layer (Fig. 3a). For the four classes, the
maximum performance was reached (100.0%) with 10
neurons. The confusion Tables 1 and 2 show the numerical
quantity and percentage of success obtained for the
conditions: without criterion of reclassification and with
criterion of reclassification for the five classes, respectively.
It was noted that only one observation of nonlinear inclusion
(NLSI) was not classified initially, being later confused with
the lack of penetration class (LP). For the four classes, the
performance of 100.0% obtained was managed without the
reclassification criterion and so considered unnecessary to
be presented.
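The confusion tables referred to above tabulate, for each true class, how the classifier distributed its predictions. A minimal sketch of how such a table and the per-class success percentages can be computed is given below; the label arrays are hypothetical placeholder data, not the observations from this study, and the class indices merely stand in for the five defect classes.

```python
import numpy as np

# Hypothetical true and predicted labels for a five-class problem
# (assumed data; index values are arbitrary placeholders).
true = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 4, 1])
pred = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 1, 1])

n_classes = 5
conf = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(true, pred):
    conf[t, p] += 1  # row = true class, column = predicted class

# Per-class success rate (%): correct predictions / number of
# observations of that class (the row total).
success = 100.0 * np.diag(conf) / conf.sum(axis=1)
print(conf)
print(success)
```

Off-diagonal entries locate the confusions directly: in the placeholder data, one class-4 observation predicted as class 1 appears as `conf[4, 1]`, mirroring how the single NLSI/LP confusion would show up in Table 1.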
The classic way to check the generalization of training
is to use a test set [20]. However, in our case the
number of observations was too small to be divided into
training and test sets while maintaining statistical
significance. Therefore, the tools of statistical
inference were once more resorted to.
In the region where errors are most likely to occur, i.e.
when the input module of the neuron is very small, it can be
justified that the best choice for the P(U) distribution is a
Gaussian. In this case, this was confirmed by evaluating the
outputs obtained from the classifier with the chi-square and
Kolmogorov-Smirnov tests [24], which confirmed that they
followed a normal distribution. Then, having been approved
by the normal distribution tests, it was possible to calculate
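The normality check described above can be sketched as follows with SciPy: a Kolmogorov-Smirnov test against a normal distribution fitted to the sample, and a chi-square goodness-of-fit test on binned counts. The synthetic `outputs` array is an assumed stand-in for the classifier outputs analysed in this study, and the bin count is an arbitrary choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical classifier outputs (assumed data, drawn normal on purpose).
outputs = rng.normal(loc=0.5, scale=0.1, size=200)

mu, sigma = outputs.mean(), outputs.std(ddof=1)

# Kolmogorov-Smirnov test against the fitted normal distribution.
ks_stat, ks_p = stats.kstest(outputs, "norm", args=(mu, sigma))

# Chi-square goodness-of-fit: compare observed bin counts with the
# counts expected under the fitted normal distribution.
bins = np.linspace(outputs.min(), outputs.max(), 11)
observed, edges = np.histogram(outputs, bins=bins)
cdf = stats.norm.cdf(edges, loc=mu, scale=sigma)
expected = np.diff(cdf) * len(outputs)
expected *= observed.sum() / expected.sum()  # match totals for the test

# ddof=2 accounts for the two parameters (mu, sigma) estimated from data.
chi2_stat, chi2_p = stats.chisquare(observed, expected, ddof=2)

print(f"KS p-value: {ks_p:.3f}, chi-square p-value: {chi2_p:.3f}")
```

A large p-value in both tests means the normality hypothesis is not rejected, which is the condition the text relies on before proceeding with Gaussian-based inference. Note that fitting the parameters from the same sample makes the plain KS p-value optimistic; a Lilliefors correction would be stricter.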