3.2.1. On-line image acquisition and analysis
All the image analysis software was programmed in the C language. All the source code was written specifically for this application, without using any commercial libraries, in order to retain full control of the operations and to ensure real-time response.
One of the major achievements of the software developed is that it works with two cameras at the same time, since image acquisition is a time-consuming process (40 ms per image). The software was designed to process the image obtained with one camera while simultaneously acquiring the next image with the other camera. As a result, the processing of one image and the acquisition of the next overlap in time, thus saving time and optimising the operation.
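This overlap can be sketched with two alternating image slots, one per camera, and a worker thread that acquires the next image while the current one is analysed. The following C fragment is only a minimal illustration of that scheme under POSIX threads; the camera and analysis functions (cam_acquire, process_image) are hypothetical placeholders, not the drivers actually used in the prototype.

```c
#include <pthread.h>
#include <stddef.h>

typedef struct { unsigned char *pixels; size_t width, height; } image_t;

/* Hypothetical placeholders for the camera driver and the analysis code. */
extern void cam_acquire(int camera_id, image_t *out);   /* blocks ~40 ms per image */
extern void process_image(const image_t *img);          /* segmentation, labelling, ... */

struct job { int camera_id; image_t img; };

static void *acquire_thread(void *arg)
{
    struct job *j = (struct job *)arg;
    cam_acquire(j->camera_id, &j->img);   /* runs while the previous image is processed */
    return NULL;
}

void inspection_loop(void)
{
    struct job cur = { .camera_id = 0 }, next = { .camera_id = 1 };
    pthread_t t;

    cam_acquire(cur.camera_id, &cur.img);                /* prime with an image from camera 0 */
    for (;;) {
        pthread_create(&t, NULL, acquire_thread, &next); /* start acquiring from the other camera */
        process_image(&cur.img);                         /* ... while this image is processed */
        pthread_join(t, NULL);                           /* wait for the new image */
        struct job tmp = cur; cur = next; next = tmp;    /* swap: process the new, reuse the old slot */
    }
}
```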
Image acquisition is triggered by pulses received from an optical encoder attached to the shaft of the carrier roller and connected to the serial port of the computer. The cameras are triggered each time the belts move forward 350 mm. This design makes the acquisition of the images independent of the speed of the belts, and thus ensures that there are never any overlaps or gaps between consecutive images.
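A minimal sketch of this distance-based triggering is given below. The encoder resolution and the I/O helpers (read_encoder_pulses, trigger_cameras) are assumptions introduced only for illustration; they are not the prototype's actual values or driver calls.

```c
#define MM_PER_PULSE   0.5    /* assumed encoder resolution (mm of belt travel per pulse) */
#define FIELD_OF_VIEW  350.0  /* mm of belt travel between consecutive images */

extern int  read_encoder_pulses(void);  /* pulses received since the last call (serial port) */
extern void trigger_cameras(void);      /* fire the cameras */

void trigger_loop(void)
{
    double travelled = 0.0;

    for (;;) {
        travelled += read_encoder_pulses() * MM_PER_PULSE;
        if (travelled >= FIELD_OF_VIEW) {
            trigger_cameras();          /* acquisition tied to belt position, not to time */
            travelled -= FIELD_OF_VIEW; /* keep the remainder to avoid drift */
        }
    }
}
```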
Segmentation consists in determining which regions of the image correspond to the background and which represent the objects of interest. We opted for a pixel-oriented segmentation algorithm, because these algorithms are normally faster than other approaches (region-oriented algorithms, textural analysis, etc.). The conveyor belts were blue and consequently had high B values and low R values in the RGB coordinates. The colour of pomegranate arils varies between white and red, which corresponds to high R values. Internal membranes are mostly white, and hence have high R, G and B values. Consequently, the segmentation algorithm used a pre-defined threshold in the R band: pixels with R coordinates below this threshold were considered to belong to the background. Fig. 3 illustrates this principle by showing the histogram of the R band of a typical aril surrounded by the blue background. The peak on the left corresponds to background pixels (low R values), while the peak on the right represents pixels from the aril. The value of this threshold was selected manually by an expert at a point between the two peaks.
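The thresholding step itself can be expressed in a few lines of C. In this sketch the threshold value and the interleaved 8-bit RGB image layout are assumptions for illustration; the actual value was chosen by the expert as described above.

```c
#include <stddef.h>

#define R_THRESHOLD 90   /* assumed value lying between the two histogram peaks */

/* Writes a binary mask: 1 = object pixel, 0 = background (blue belt). */
void segment_r_band(const unsigned char *rgb, unsigned char *mask, size_t n_pixels)
{
    for (size_t i = 0; i < n_pixels; i++)
        mask[i] = (rgb[3 * i] >= R_THRESHOLD) ? 1 : 0;   /* rgb[3*i] is the R channel */
}
```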
Once the background had been removed, each connected region was labelled as a possible object of interest (under normal circumstances, it should be an aril or some other material). In the same operation, the program estimated the size and centroid of each of these objects and the average RGB coordinates of their pixels. Extremely small or large objects were classified as unwanted material.
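As an illustration of this labelling step, the following C sketch performs a 4-connected flood fill that accumulates the pixel count, centroid and mean RGB coordinates of one object, and flags it as unwanted material when its size falls outside an accepted range. The size limits and the data layout are assumptions, not the prototype's calibrated values.

```c
#include <stdlib.h>

#define MIN_AREA  200     /* assumed lower size limit (pixels) */
#define MAX_AREA  20000   /* assumed upper size limit (pixels) */

typedef struct {
    long area;            /* number of pixels in the object */
    double cx, cy;        /* centroid (pixel coordinates) */
    double r, g, b;       /* average RGB coordinates of the object */
    int unwanted;         /* 1 if the object is outside the accepted size range */
} object_stats;

/*
 * Labels the 4-connected component containing (x0, y0) and accumulates its
 * statistics. labels[] must be pre-filled from the segmentation mask:
 * -1 for unlabelled object pixels, 0 for background.
 */
static void label_object(const unsigned char *rgb, int *labels, int w, int h,
                         int x0, int y0, int label, object_stats *s)
{
    static const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
    int *stack = malloc((size_t)2 * w * h * sizeof *stack);
    int top = 0;

    if (!stack) return;
    *s = (object_stats){ 0 };
    labels[(size_t)y0 * w + x0] = label;     /* mark on push so each pixel enters the stack once */
    stack[top++] = x0; stack[top++] = y0;

    while (top > 0) {
        int y = stack[--top], x = stack[--top];
        size_t i = (size_t)y * w + x;

        s->area++;
        s->cx += x;            s->cy += y;
        s->r  += rgb[3 * i];   s->g  += rgb[3 * i + 1];   s->b += rgb[3 * i + 2];

        for (int k = 0; k < 4; k++) {        /* visit unlabelled 4-connected neighbours */
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            size_t ni = (size_t)ny * w + nx;
            if (labels[ni] != -1) continue;
            labels[ni] = label;
            stack[top++] = nx; stack[top++] = ny;
        }
    }
    free(stack);

    s->cx /= s->area;   s->cy /= s->area;
    s->r  /= s->area;   s->g  /= s->area;   s->b /= s->area;
    s->unwanted = (s->area < MIN_AREA || s->area > MAX_AREA);
}
```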
Finally, the average colour coordinates were used to classify the object into one of four pre-defined categories. The procedure used to determine the class is described below. After processing each image, the machine vision computer sent the category and position of the object to a second computer (called the control computer), which tracked the object until it was sorted. This communication was implemented via TCP/IP.
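As an illustration, the following C sketch sends one object report over a TCP socket using the standard POSIX API. The message format, the port number and the send_object interface are assumptions, since the text does not describe the actual protocol used between the two computers.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

#define CONTROL_PORT 5000              /* assumed port of the control computer */

/* Sends the category and centroid position of one object; returns 0 on success. */
int send_object(const char *control_ip, int category, double x_mm, double y_mm)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(CONTROL_PORT);
    inet_pton(AF_INET, control_ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }

    /* plain-text message: category and position of the object on the belt */
    char msg[64];
    int n = snprintf(msg, sizeof msg, "%d %.1f %.1f\n", category, x_mm, y_mm);
    ssize_t sent = write(fd, msg, (size_t)n);

    close(fd);
    return (sent == n) ? 0 : -1;
}
```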
Fig. 1. Scheme of the sorting machine.
Fig. 2. Prototype developed for the inspection of pomegranate arils.
Fig. 3. Histogram of the R component of a small window containing a typical aril (right peak) and the background (left peak).