Fig. 5. Details of the grading area showing the air ejectors and the outlets.
The control computer received the category and the position of each object in image coordinates via TCP/IP. The category was used to determine which air ejector had to act on each object, while the position of the object in the image, together with the belt position, was used to calculate the moment at which that ejector had to be activated. The control computer stored the number of encoder pulses needed for the object to reach the ejector, and decreased this value every time it received a pulse from the encoder. When the count reached zero, the object was assumed to be passing just in front of the ejector and the computer opened the corresponding electro-valve to remove it from the belt. As mentioned earlier, since the synchronism was based on counting encoder pulses, it was independent of the speed of the belt. This design made it possible to achieve a position tracking accuracy of 0.3 mm.
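A minimal sketch of this pulse-countdown synchronisation is given below. The encoder resolution, the category-to-ejector mapping, the camera-to-ejector distances and the open_valve hook are illustrative assumptions, not values or identifiers from the prototype.

```python
from dataclasses import dataclass
from typing import Callable, List

PULSES_PER_MM = 10  # assumed encoder resolution (pulses per mm of belt travel)
EJECTOR_POSITION_MM = {0: 900.0, 1: 950.0, 2: 1000.0}  # assumed camera-to-ejector distances


@dataclass
class TrackedObject:
    ejector: int       # air ejector assigned to the object's category
    pulses_left: int   # encoder pulses remaining until the object faces that ejector


class EjectorController:
    def __init__(self, open_valve: Callable[[int], None]):
        self.open_valve = open_valve            # hardware hook that fires ejector i
        self.tracked: List[TrackedObject] = []

    def on_detection(self, category: int, y_image_mm: float) -> None:
        """Handle a (category, position) message received from the vision computer."""
        ejector = category                      # assumed one ejector per category
        distance_mm = EJECTOR_POSITION_MM[ejector] - y_image_mm
        pulses = round(distance_mm * PULSES_PER_MM)
        self.tracked.append(TrackedObject(ejector, pulses))

    def on_encoder_pulse(self) -> None:
        """Called once per belt-encoder pulse, so timing is independent of belt speed."""
        for obj in self.tracked:
            obj.pulses_left -= 1
        for obj in [o for o in self.tracked if o.pulses_left <= 0]:
            self.open_valve(obj.ejector)        # object is now just in front of its ejector
            self.tracked.remove(obj)
```

Because the countdown is driven by encoder pulses rather than a timer, the same logic holds if the belt accelerates or slows down.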
3.4. Testing the prototype under commercial conditions. Global evaluation of its performance

Once the prototype was ready, it was installed in commercial facilities for industrial testing and configured to separate arils and unwanted material as described above. The prototype was tested under commercial conditions over a period of six months, from September to February of the 2005/2006 pomegranate season in Spain. During the tests, the prototype inspected more than nine tons of material. It was not feasible to evaluate the classification of each individual object, since working under commercial conditions made it impossible to stop the line to compare the machine's classification of single objects with that of human experts. For this reason, the evaluation was performed on the batches produced by the prototype. A panel of three experts analysed random samples taken from the different outlets of the machine, each expert giving an overall subjective opinion of the performance of the inspection system for each category. Each expert rated the automatic classification as good, regular or poor, and the final decision of the panel was taken to be the rating on which at least two of them agreed (Fig. 6).
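As a small illustration of the panel's decision rule, the final label for a sample is simply the rating shared by at least two of the three experts; the sketch below uses hypothetical ratings.

```python
from collections import Counter

def panel_decision(votes):
    """Return the rating at least two of the three experts agreed on, or None."""
    rating, count = Counter(votes).most_common(1)[0]
    return rating if count >= 2 else None

# Hypothetical example: two experts rate a sample 'good', one rates it 'regular'.
print(panel_decision(['good', 'regular', 'good']))  # -> good
```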