In visual surveillance systems, a common first step is
background subtraction. Background subtraction, also referred to
as background segmentation, is the classification of every pixel
in a frame as either foreground or background. Many algorithms
have been developed for this task, ranging from simple to
complex [3, 4, 5, 6]. Our system uses the algorithm described
in [3], which models each pixel as a mixture of several Gaussian
distributions over pixel intensity values.
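As an illustration of this kind of per-pixel model, the sketch below maintains a small mixture of Gaussians for every pixel of a grayscale frame and returns a binary foreground mask. The number of Gaussians, learning rate, and thresholds are assumptions chosen for the example, not values taken from [3] or from our system, and the background test is a simplification of the component-ranking rule in [3].

```python
import numpy as np

K = 3             # Gaussians maintained per pixel (assumed)
ALPHA = 0.01      # learning rate (assumed)
INIT_VAR = 225.0  # variance assigned to newly created Gaussians
MATCH_SIGMAS = 2.5

class PixelMixtureModel:
    """Simplified per-pixel mixture-of-Gaussians background model."""

    def __init__(self, height, width):
        self.weight = np.full((height, width, K), 1.0 / K)
        self.mean = np.random.uniform(0.0, 255.0, (height, width, K))
        self.var = np.full((height, width, K), INIT_VAR)

    def apply(self, gray):
        """Update the model with one grayscale frame, return a 0/255 mask."""
        x = gray.astype(np.float64)[..., None]                   # H x W x 1
        matched = (x - self.mean) ** 2 < (MATCH_SIGMAS ** 2) * self.var
        hit = matched.argmax(axis=2)                             # first matching Gaussian
        any_hit = matched.any(axis=2)
        rows, cols = np.indices(any_hit.shape)

        # Matched pixels: decay all weights, reinforce and adapt the hit Gaussian.
        self.weight *= (1.0 - ALPHA)
        mi = (rows[any_hit], cols[any_hit], hit[any_hit])
        xm = x[any_hit, 0]
        self.weight[mi] += ALPHA
        self.mean[mi] += ALPHA * (xm - self.mean[mi])
        self.var[mi] += ALPHA * ((xm - self.mean[mi]) ** 2 - self.var[mi])

        # Unmatched pixels: replace the least probable Gaussian with a new one.
        um = ~any_hit
        ui = (rows[um], cols[um], self.weight.argmin(axis=2)[um])
        self.mean[ui] = x[um, 0]
        self.var[ui] = INIT_VAR
        self.weight[ui] = ALPHA
        self.weight /= self.weight.sum(axis=2, keepdims=True)

        # Foreground: no Gaussian matched, or the matched one carries little weight.
        return np.where(um | (self.weight[rows, cols, hit] < 0.25), 255, 0).astype(np.uint8)
```

Calling apply on each incoming frame both updates the mixture and yields the foreground mask for that frame.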
The background model is initialized by integrating the pixel
values over the first 100 frames. This reduces the erroneous
pixel classifications that occur early in the sequence, when the
background is still unknown. Figure 1 shows the result of the
segmentation using background subtraction. Black pixels indicate
the background, and white regions indicate the foreground, i.e.,
the locations of the bees in the frame.
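As a rough illustration of how this initialization could be carried out in practice, the sketch below feeds the first 100 frames of a video to OpenCV's Gaussian-mixture subtractor (MOG2, a later variant of the model family in [3]) before its masks are used. The file name, parameter values, and morphological clean-up step are illustrative assumptions, not details of our system.

```python
import cv2

cap = cv2.VideoCapture("hive.avi")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # 0 = background, 255 = foreground
    frame_idx += 1
    if frame_idx <= 100:
        continue                     # still integrating the background model
    # Remove speckle noise so the white regions correspond to bees (cf. Figure 1).
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # 'mask' is now the binary foreground/background segmentation for this frame.
cap.release()
```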
Figure 2 shows the same frame with contour and bounding