Conclusions
This paper describes an empirical study of texture-based detection of green fruits on plants in the field, with experiments on two green fruit types: pineapple and bitter melon. Image data were captured from web camera video. The method comprises five main steps: feature and descriptor extraction, feature classification, fruit point mapping, morphological closing, and region extraction. Twenty-four combinations of feature and descriptor methods were tested. The classification step used support vector machines (SVMs).
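As an illustration of the first two steps, the following is a minimal Python/OpenCV sketch, assuming the ORB detector paired with the 128-dimensional SURF descriptor; it is not the implementation evaluated in this paper, and the SVM kernel and the commented training calls are assumptions for illustration only.

    import cv2
    import numpy as np

    # Step 1: interest point detection and descriptor extraction.
    # ORB supplies the interest points; SURF with extended=True supplies
    # 128-dimensional descriptors. SURF lives in the opencv-contrib
    # xfeatures2d module, so availability depends on the OpenCV build.
    orb = cv2.ORB_create(nfeatures=500)
    surf = cv2.xfeatures2d.SURF_create(extended=True)

    def extract_descriptors(image_bgr):
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        keypoints = orb.detect(gray, None)
        keypoints, descriptors = surf.compute(gray, keypoints)
        return keypoints, descriptors

    # Step 2: classify each interest point as fruit or non-fruit with an
    # SVM. train_data holds descriptors of labeled points as float32 rows;
    # labels are 1 (fruit) and 0 (background). The RBF kernel is an
    # assumption, not the setting reported in the paper.
    svm = cv2.ml.SVM_create()
    svm.setKernel(cv2.ml.SVM_RBF)
    # svm.trainAuto(train_data, cv2.ml.ROW_SAMPLE, labels)
    # _, point_labels = svm.predict(descriptors)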
The feature type employed was found to be more important than the descriptor type. The method is highly accurate on the data sampled, with the best combinations being ORB+SURF128 for pineapple and Harris+SURF128 for bitter melon. With the best parameter settings (a disc-shaped
structuring element with a radius of 10 pixels and a minimum region size threshold of 1600
pixels for pineapple, and an ellipse-shaped structuring element with a vertical major axis
length of 20 pixels, a horizontal minor axis of 8 pixels, and a minimum region size
threshold of 10500 pixels for bitter melon), the method obtains single-image detection rates of 85 and 100 %, respectively. The robustness of these parameter settings must be further validated on other data sets in future work.
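To make these post-processing settings concrete, the following is a minimal Python/OpenCV sketch of the fruit point mapping, morphological closing, and region extraction steps using the pineapple settings reported above (disc-shaped structuring element of radius 10 pixels, minimum region size of 1600 pixels); the function name and interface are illustrative assumptions, not the authors' code.

    import cv2
    import numpy as np

    def extract_fruit_regions(image_shape, fruit_points,
                              se_radius=10, min_region_size=1600):
        # Step 3: fruit point mapping -- mark each positively classified
        # interest point in a binary mask.
        mask = np.zeros(image_shape[:2], dtype=np.uint8)
        for (x, y) in fruit_points:
            mask[int(y), int(x)] = 255

        # Step 4: morphological closing with a disc of radius 10 pixels.
        # (For bitter melon, the reported setting is instead a 20 x 8
        # pixel ellipse-shaped structuring element.)
        se = cv2.getStructuringElement(
            cv2.MORPH_ELLIPSE, (2 * se_radius + 1, 2 * se_radius + 1))
        closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, se)

        # Step 5: region extraction -- keep only connected components at
        # least as large as the minimum region size threshold.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
        return [stats[i] for i in range(1, n)
                if stats[i, cv2.CC_STAT_AREA] >= min_region_size]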
Future work will extend the method to operate in a real-time system. The method needs to be improved to better handle disadvantageous conditions such as strong sunlight and occlusion. Temporary occlusions and fragmentation due to leaves could be handled by tracking fruit regions from frame to frame and then performing 3D modeling; a simple region association sketch is given below. The run time may also need to be reduced in order to increase processing speed and/or decrease manufacturing costs. Finally, the detection system will be integrated into a prototype
automated fruit crop monitoring system, and an in-field real-time evaluation will be
performed.
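As a sketch of the frame-to-frame region tracking mentioned above, one simple option (an illustrative assumption, not a design evaluated in this paper) is to link each region in the current frame to the best-overlapping region in the previous frame by bounding-box intersection-over-union:

    def iou(a, b):
        # Intersection-over-union of two (x, y, w, h) bounding boxes.
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2 = min(a[0] + a[2], b[0] + b[2])
        y2 = min(a[1] + a[3], b[1] + b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union > 0 else 0.0

    def match_regions(prev_regions, curr_regions, iou_threshold=0.3):
        # Greedily associate current regions with previous ones, so that
        # a fruit briefly hidden by a leaf keeps its track identity.
        matches, used = [], set()
        for j, curr in enumerate(curr_regions):
            best_i, best_iou = -1, iou_threshold
            for i, prev in enumerate(prev_regions):
                if i not in used and iou(prev, curr) > best_iou:
                    best_i, best_iou = i, iou(prev, curr)
            if best_i >= 0:
                used.add(best_i)
                matches.append((best_i, j))
        return matches  # (previous index, current index) pairs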
Acknowledgments SC was supported by the Thailand National Science and Technology Development
Agency (NSTDA). The authors thank the members of the AIT Vision and Graphics Lab (VGL) for suggestions and help with data collection.