In some sense, computer graphics and computer vision are inverses
of one another. Special-purpose computer vision hardware is rarely
found in typical mass-produced personal computers, but the graphics
processing units (GPUs) found on most personal computers often
exceed the capabilities of the central processing unit (CPU), in
transistor count as well as in compute power. This paper shows the
speedups attained by using computer graphics hardware to implement
computer vision algorithms, achieved by efficiently mapping the
mathematical operations of computer vision onto modern computer
graphics architecture. As an example computer vision algorithm, we
implement a real-time projective camera motion tracking routine on
modern, GeForce FX class GPUs. The algorithms are implemented using
OpenGL and nVIDIA Cg fragment shaders. Trade-offs between computer
vision requirements and GPU resources are discussed. The algorithm
implementation is examined closely, and hardware bottlenecks are
identified to assess the performance of the GPU architecture for
computer vision. We show that significant speedups can be achieved
while leaving the CPU free for other signal processing tasks.
Applications of our work include wearable, computer-mediated
reality systems that use both computer vision and computer graphics
and require the real-time processing, low latency, and high
throughput provided by modern GPUs.