Several works have been carried out in this direction. Nilsback and Zisserman [2] designed a flower classification
system by extracting visual vocabularies which represent the color, shape, and texture features of flower images. In order
to segment a flower from the background, the RGB color distribution is determined by labeling pixels as foreground and
background on a set of training samples, and subsequently the flower is automatically segmented using the concept of
interactive graph cuts [3]. In order to extract the color vocabulary, each flower image is mapped onto HSV (hue, saturation,
and value) color space, and the HSV values of each pixel of the training images are clustered and treated as the color vocabulary.
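The color-vocabulary construction described above can be sketched as k-means clustering over the HSV values of training pixels, with each image then represented as a histogram over the learned vocabulary. This is an illustrative reconstruction, not the authors' code; the vocabulary size `k` and the function names are assumptions for the sketch.

```python
import numpy as np

def build_color_vocabulary(hsv_pixels, k=20, iters=10, seed=0):
    """Cluster HSV pixel values into k 'color words' (plain k-means sketch).

    hsv_pixels: (N, 3) array of HSV values sampled from training images.
    Returns a (k, 3) array of cluster centres, i.e. the color vocabulary.
    """
    rng = np.random.default_rng(seed)
    centres = hsv_pixels[rng.choice(len(hsv_pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest centre.
        d = np.linalg.norm(hsv_pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centre; keep the old centre if its cluster empties.
        for j in range(k):
            pts = hsv_pixels[labels == j]
            if len(pts):
                centres[j] = pts.mean(axis=0)
    return centres

def color_histogram(hsv_pixels, vocabulary):
    """Describe one image as a normalised histogram over the vocabulary."""
    d = np.linalg.norm(hsv_pixels[:, None, :] - vocabulary[None, :, :], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(vocabulary))
    return counts / counts.sum()
```

At test time, a flower image is mapped to HSV, its (segmented) pixels are quantised against the vocabulary, and the resulting histogram serves as the color feature.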
Scale-invariant feature transform (SIFT) descriptors are used to represent the shape features, and the responses of the
MR8 filter bank across different orientations are used as texture features. The authors also combine all three
visual vocabularies with different weights in order to study the contribution of each feature. Nilsback and Zisserman [2] considered
a dataset of 17 species, each containing 80 images, and achieved an accuracy of 71.76% for a combination of all three
features. In order to study the effect of classification accuracy on a large data set, Nilsback and Zisserman in their work [4]
considered a dataset of 103 classes, each containing 40 to 250 samples. The low-level features such as color, histogram of
gradient orientations, and SIFT features are used. They have achieved an accuracy of 72.8% with an SVM classifier using multiple
kernels. Nilsback and Zisserman [5] proposed a two-step model to segment the flowers in color images, one to separate
the foreground from background and the other to extract the petal structure of the flower. This segmentation algorithm is
tolerant to changes in viewpoint and petal deformation, and the method is applicable in general for any flower class.
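The weighted combination of vocabularies in [2], and the multiple-kernel classifier in [4], both amount to mixing per-feature similarities with tunable weights. A minimal sketch of such a combination, using a chi-squared distance over feature histograms (a common choice for histogram features; the weighting scheme and names here are illustrative, not the authors' exact formulation):

```python
import numpy as np

def combined_distance(feats_a, feats_b, weights):
    """Weighted chi-squared distance over several feature histograms.

    feats_a, feats_b: dicts mapping feature name -> 1-D histogram array.
    weights: dict mapping feature name -> non-negative weight.
    """
    total = 0.0
    for name, w in weights.items():
        a, b = feats_a[name], feats_b[name]
        denom = a + b
        mask = denom > 0  # skip empty bins to avoid division by zero
        total += w * 0.5 * np.sum((a[mask] - b[mask]) ** 2 / denom[mask])
    return total
```

A nearest-neighbour classifier over such combined distances lets one vary the weights per feature (color, shape, texture) and observe how each contributes to accuracy, which is the kind of study reported in [2].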