Our current bee detector performs adaptive background
subtraction using a background model derived
from a running average of the most recent 300 video
frames. We then match an elliptical, graduated template
at 16 orientations across each background-subtracted
video frame. We presently consider only a single template
size; adding further sizes would be a straightforward extension.
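The running-average background model and subtraction step can be sketched as follows. This is a minimal NumPy illustration, not our implementation; the class and parameter names are ours, and the 300-frame window matches the detector described above.

```python
import numpy as np
from collections import deque

class RunningBackground:
    """Background model: the mean of the most recent `window` frames
    (300 in the detector described here). Illustrative sketch only."""

    def __init__(self, window=300):
        self.frames = deque(maxlen=window)  # ring buffer of recent frames

    def update(self, frame):
        # Append a frame; the deque discards the oldest once full.
        self.frames.append(np.asarray(frame, dtype=np.float64))

    def subtract(self, frame):
        # Absolute difference between the frame and the running average;
        # large values mark moving foreground such as bees.
        background = np.mean(self.frames, axis=0)
        return np.abs(np.asarray(frame, dtype=np.float64) - background)
```

A static scene yields a near-zero difference image, while a bee passing through produces a localized high-difference region that the template stage then scores.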
The graduated template encourages the detection region to
center on each bee and penalizes oval objects that lack bees'
characteristic round appearance in depth as well as outline.
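One way to construct such graduated elliptical templates at 16 orientations is sketched below. The semi-axis lengths, template size, and negative-surround clip value are illustrative assumptions, not values from our detector; the key property is that weights peak at the ellipse center, taper toward the boundary, and turn slightly negative just outside it, so centered, rounded blobs score highest.

```python
import numpy as np

def graduated_templates(a=15.0, b=9.0, n_orient=16, size=41):
    """Elliptical graduated templates at n_orient orientations (sketch).
    `a` and `b` are assumed semi-axes in pixels; `size` is the template
    side length. Weights: +1 at center, 0 on the ellipse boundary,
    clipped to a small negative surround outside it."""
    c = size // 2
    yy, xx = np.mgrid[-c:c + 1, -c:c + 1].astype(np.float64)
    templates = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient       # an ellipse repeats every 180 deg
        u = xx * np.cos(theta) + yy * np.sin(theta)   # rotated coordinates
        v = -xx * np.sin(theta) + yy * np.cos(theta)
        r = np.sqrt((u / a) ** 2 + (v / b) ** 2)      # r == 1 on the boundary
        templates.append(np.clip(1.0 - r, -0.3, 1.0))
    return templates

def score(patch, template):
    """Correlation of one candidate patch with one template; a detector
    would evaluate this at every pixel of the background-subtracted
    frame and keep the maximum over orientations."""
    return float(np.sum(patch * template))
```

Because the weights fall off smoothly from the center, a blob that is centered on the template scores higher than the same blob shifted off-center, and the negative surround lowers the score of elongated objects that spill outside the ellipse.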