We have presented a method for extending a fundamental tracking technique (particle filtering using colour histograms) to perform continuously adaptive fusion of data from colour and deep-IR thermal cameras.
This new algorithm exhibits several useful properties:
• Local background models are rapidly re-learned at every
frame, for every particle, and for both imaging modalities.
• These background models are used to continually re-assess how much confidence should be placed in each modality when computing particle weights.
• If the target becomes camouflaged in either modality, this is automatically detected, and the data fusion process is weighted in favour of data from the other modality.
• Where the target is partially camouflaged in both modalities, a best-compromise blending of the data from both modalities is found by adaptively re-computing the confidence in each modality (sketched in code after this list).
• The adaptation is extremely fast, because background models are completely re-learned from scratch for every image. This enables the technique to track difficult sequences in which the background scene changes rapidly.
• Rapidly changing backgrounds often occur in situations where the camera itself is moving. Hence this rapid adaptation technique is especially useful for visual servoing of a pan-tilt camera rig, or for tracking from cameras mounted on a moving robot.
• Although the technique has been demonstrated here for fusing simple thermal and colour pixel intensities, it can readily be extended to fuse an arbitrary number of different pixel features (e.g. edginess or texture in addition to colour) from one or arbitrarily many imaging modalities; see the toy example after this list. Future work will experiment with incorporating and fusing such additional features to improve tracking robustness.
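To make the fusion step concrete, the following is a minimal sketch of how a single particle's weight could be computed under this scheme. It is illustrative only: the choice of one minus the Bhattacharyya coefficient between foreground and local-background histograms as the confidence measure, the linear confidence-weighted blend of per-modality likelihoods, the 16-bin intensity histograms, and all function and variable names are plausible stand-ins rather than the exact expressions of the algorithm.

```python
import numpy as np

def histogram(pixels, n_bins=16):
    """Normalised intensity histogram of a pixel sample from one modality."""
    hist, _ = np.histogram(pixels, bins=n_bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def bhattacharyya(p, q):
    """Bhattacharyya coefficient: 1 for identical histograms, 0 for disjoint ones."""
    return float(np.sum(np.sqrt(p * q)))

def particle_weight(fg_patches, bg_patches, ref_hists):
    """Fuse all modalities into a single weight for one particle.

    fg_patches -- per-modality pixel samples inside the particle's candidate region
    bg_patches -- per-modality pixel samples from a surrounding background annulus
    ref_hists  -- per-modality reference (target model) histograms
    """
    weight, total_conf = 0.0, 0.0
    for fg, bg, ref in zip(fg_patches, bg_patches, ref_hists):
        fg_hist = histogram(fg)
        bg_hist = histogram(bg)  # local background model, re-learned from scratch
        # Confidence: how separable target and local background look in this
        # modality; it collapses towards zero when the target is camouflaged.
        conf = 1.0 - bhattacharyya(fg_hist, bg_hist)
        likelihood = bhattacharyya(fg_hist, ref)
        weight += conf * likelihood      # camouflaged modalities contribute little
        total_conf += conf
    return weight / max(total_conf, 1e-9)  # best-compromise blend across modalities
```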
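Because the fusion loop iterates over an arbitrary list of per-feature patches, adding a further pixel feature amounts to appending one more channel. A toy invocation with synthetic data (the third, edge-strength channel is hypothetical) might look like:

```python
# Fuse colour, thermal, and a hypothetical edge-strength channel.
rng = np.random.default_rng(0)
fg_patches = [rng.random(200) for _ in range(3)]   # pixel samples per channel
bg_patches = [rng.random(400) for _ in range(3)]   # local-background samples
ref_hists = [histogram(rng.random(200)) for _ in range(3)]
print(particle_weight(fg_patches, bg_patches, ref_hists))
```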