In some sense, computer graphics and computer vision are inverses
of one another. Special-purpose computer vision hardware is rarely found in typical mass-produced personal computers, but the graphics processing units (GPUs) found in most personal computers often exceed the capabilities of the central processing unit (CPU), both in transistor count and in compute power. This
paper shows speedups attained by using computer graphics hardware
for implementation of computer vision algorithms by efficiently mapping mathematical operations of computer vision onto
modern computer graphics architecture. As an example computer
vision algorithm, we implement a real-time projective camera motion tracking routine on modern GeForce FX-class GPUs. Algorithms
are implemented using OpenGL and nVIDIA Cg fragment shaders. Trade-offs between computer vision requirements
and GPU resources are discussed. The algorithm implementation is examined closely, and hardware bottlenecks are addressed in order to evaluate the performance of the GPU architecture for computer vision.
It is shown that significant speedups can be achieved, while leaving
the CPU free for other signal processing tasks. Applications of
our work include wearable, computer-mediated reality systems that use both computer vision and computer graphics and require real-time processing with the low latency and high throughput provided by modern GPUs.
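
As a minimal, illustrative sketch (not taken from the paper itself), the Cg fragment program below shows the kind of per-pixel mapping referred to above: a simple luminance computation applied to a video frame bound as a texture, executed in parallel across the GPU's fragment units. The texture parameter name and the luminance weights are assumptions made for this example.

// Minimal sketch, not the paper's tracking routine: a Cg fragment
// program that computes per-pixel luminance from a video frame
// bound as a rectangle texture; one fragment is processed per pixel,
// in parallel, which is the mapping exploited for vision algorithms.
float4 main(float2 texCoord : TEXCOORD0,
            uniform samplerRECT frame) : COLOR
{
    // Fetch the RGB value of this pixel from the input frame texture.
    float3 rgb = texRECT(frame, texCoord).rgb;
    // Standard ITU-R BT.601 luminance weights (an illustrative choice).
    float lum = dot(rgb, float3(0.299, 0.587, 0.114));
    return float4(lum, lum, lum, 1.0);
}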