The decision tree algorithm has been applied to the problem
under discussion. The input to the algorithm is the set of statistical features
computed from the eighth-scale Morlet coefficients of the vibration signatures of
all twenty-four classes. The top node of the tree is the best
node for classification, and the features in the remaining nodes appear
in descending order of importance. It should be stressed that only features that contribute to the classification appear
in the decision tree; the others do not. Features with low
discriminating capability can be deliberately discarded by choosing
a suitable threshold, and this idea is used to select good features.
The algorithm identifies the features best suited for
classification from the given training data set, and thus reduces the
domain knowledge required to select good features for a pattern
classification problem. Fig. 8 shows the sample decision tree obtained
for good-dry-no load vs. GTB, GTC, and TFW-dry-no load.
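The paper does not list code for this step. As a minimal sketch of how a decision-tree criterion ranks features, the snippet below scores each feature by the information gain of its best single-threshold split and discards features below a threshold; the data, threshold value, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_info_gain(x, y):
    """Largest information gain over all single-threshold splits of feature x."""
    base = entropy(y)
    best = 0.0
    for t in np.unique(x)[:-1]:  # candidate thresholds (exclude max: right side stays non-empty)
        left, right = y[x <= t], y[x > t]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        best = max(best, gain)
    return best

def rank_features(X, y, threshold=0.0):
    """Rank features by information gain, dropping those at or below threshold."""
    gains = [best_info_gain(X[:, j], y) for j in range(X.shape[1])]
    order = np.argsort(gains)[::-1]
    return [(j, gains[j]) for j in order if gains[j] > threshold]
```

The feature at the top of the ranking corresponds to the top node of the tree; raising `threshold` mimics consciously discarding features with less discriminating capability.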
Based on the decision trees obtained for all twenty-four classes, it
was found that the statistical features standard error, kurtosis,
sample variance, and minimum value play a dominant role in
classification using Morlet coefficients. These four predominant
features were given as input for training and testing the multiclass
proximal support vector machine, which performs the final classification.
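A proximal SVM fits its separating plane by solving a single small linear system rather than a quadratic program. The one-vs-rest sketch below follows that idea in NumPy; the regularization value `nu`, the function names, and the toy data in the usage are assumptions for illustration, not the authors' code.

```python
import numpy as np

def psvm_train(A, d, nu=1.0):
    """Linear proximal SVM: fit (w, gamma) so that sign(A @ w - gamma) ~ d.

    Solves the normal equations (I/nu + E'E) z = E'd with E = [A, -e],
    which minimize (nu/2)||D(Aw - e*gamma) - e||^2 + (1/2)(w'w + gamma^2).
    """
    m, n = A.shape
    E = np.hstack([A, -np.ones((m, 1))])
    z = np.linalg.solve(np.eye(n + 1) / nu + E.T @ E, E.T @ d)
    return z[:-1], z[-1]  # w, gamma

def psvm_multiclass_fit(A, labels, nu=1.0):
    """One-vs-rest multiclass PSVM: one (w, gamma) pair per class."""
    classes = np.unique(labels)
    models = []
    for c in classes:
        d = np.where(labels == c, 1.0, -1.0)
        models.append(psvm_train(A, d, nu))
    return classes, models

def psvm_predict(A, classes, models):
    """Assign each row to the class with the largest decision value."""
    scores = np.column_stack([A @ w - g for w, g in models])
    return classes[np.argmax(scores, axis=1)]
```

In the setting described above, each row of `A` would hold the four selected statistics (standard error, kurtosis, sample variance, minimum value) of the eighth-scale Morlet coefficients for one signal, with one one-vs-rest model per fault class.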
