Current vision-based trackers rely on the tracking of markers. Markers increase robustness and reduce computational requirements. However, their use can be complicated, as markers require installation and ongoing maintenance. Direct use of scene features for tracking is therefore desirable. To this end, we describe a general system
that tracks the position and orientation of a
camera observing a scene without any visual markers.
Our method is based on a two-stage process. In the first stage, a set of scene features is learned with the help of an external tracking system while the system is in operation. The second stage uses these learned features for camera tracking, taking over whenever the first stage determines that this is feasible. The system is general enough to employ any available feature-tracking and pose-estimation method for both learning and tracking. We experimentally
demonstrate the viability of the method in real-life
examples.
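
The following is a minimal sketch of the two-stage idea, not the authors' implementation. It assumes OpenCV ORB features with PnP-plus-RANSAC pose estimation as the stand-in feature-tracking and pose-estimation components; the external pose (R_ext, t_ext), the depth_at() lookup used to place features in 3-D, and the handover thresholds are all hypothetical placeholders.

```python
"""Illustrative two-stage markerless tracking sketch (assumptions noted above).

Stage 1: while an external tracker supplies the camera pose, detect image
features and back-project them to 3-D landmarks. Stage 2: once enough
landmarks are learned, estimate the pose from 2-D/3-D matches alone.
"""
import numpy as np
import cv2

K = np.array([[700.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

landmarks_3d = []    # learned 3-D points in the world frame
landmark_desc = []   # their ORB descriptors

def learn_features(gray, R_ext, t_ext, depth_at):
    """Stage 1: label detected features with 3-D positions using the
    externally tracked pose (R_ext, t_ext) and a depth source."""
    kps, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return
    for kp, d in zip(kps, desc):
        z = depth_at(kp.pt)          # hypothetical depth lookup
        if z is None:
            continue
        # Back-project the pixel to the camera frame, then to the world
        # frame, assuming the convention p_cam = R_ext @ p_world + t_ext.
        p_cam = z * np.linalg.inv(K) @ np.array([kp.pt[0], kp.pt[1], 1.0])
        landmarks_3d.append(R_ext.T @ (p_cam - t_ext))
        landmark_desc.append(d)

def track_camera(gray):
    """Stage 2: estimate the pose from learned landmarks only (no markers,
    no external tracker). Returns (R, t) or None if tracking fails."""
    if len(landmarks_3d) < 10:       # assumed handover criterion
        return None
    kps, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return None
    matches = matcher.match(np.array(landmark_desc), desc)
    if len(matches) < 6:
        return None
    obj = np.float32([landmarks_3d[m.queryIdx] for m in matches])
    img = np.float32([kps[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj, img, K, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```

In this sketch the decision to hand over from stage 1 to stage 2 is reduced to a simple landmark count; the paper's system makes that decision with its own criterion, which the abstract does not specify.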