We focus on real-time, real-life AR interfaces that should
still leave a wearable computer sufficient computing power
for the execution of application logic. A variety of fingertip detection algorithms have been proposed [20][32][2], each
one with different benefits and limitations, especially regarding wearable computing environments with widely varying
backgrounds and variable fingertip sizes and shapes. Careful
evaluation of this work led us to implement a novel hybrid algorithm for robust real-time tracking of a specific hand pose.
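To give a rough sense of the color-segmentation stage that many such fingertip detectors begin with, the following is a minimal sketch, not the hybrid algorithm described in Section 2. The RGB thresholds and the centroid stand-in for fingertip localization are placeholder assumptions; practical detectors use adaptive, learned skin models and curvature analysis of the hand contour.

```python
# Minimal skin-color segmentation sketch (placeholder thresholds;
# NOT the hybrid fingertip tracker described in Section 2).

def skin_mask(pixels, lo=(90, 40, 30), hi=(255, 180, 160)):
    """Mark pixels whose RGB values fall inside a crude skin-color box.
    `pixels` is a list of rows of (r, g, b) tuples."""
    return [[all(lo[c] <= px[c] <= hi[c] for c in range(3))
             for px in row] for row in pixels]

def centroid(mask):
    """Centroid of the foreground pixels -- a stand-in for the more
    robust fingertip localization used in real systems."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, on in enumerate(row) if on]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))
```

On a tiny synthetic frame with two skin-colored pixels on the diagonal, `skin_mask` flags exactly those pixels and `centroid` returns the point midway between them; the fixed color box is precisely what breaks down under the widely varying backgrounds discussed above.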
Wearable computers are important enabling technology
for “Anywhere Augmentation” applications [12], in which
the entry cost to experiencing AR is drastically reduced by
eliminating the need to instrument the environment or to create complex environment models off-line. Hand interfaces are another important piece of the Anywhere Augmentation puzzle: they help users establish a local coordinate system within arm’s length and let them easily jump-start augmentations and inspect AR objects of interest.
An important problem in AR is determining the camera pose so that virtual objects can be rendered in correct 3D perspective. Whether the user sees the world through a head-worn display [11] or a magic-lens tablet display [24], the augmentations should register seamlessly with the real scene. When a user inspects a virtual object by “attaching” it to a reference pattern in
the real world, we need to establish the camera pose relative
to this pattern in order to render the object correctly. Camera calibration with initialization patterns [33] can recover both intrinsic and extrinsic parameters. To compute the extrinsic camera pose on the fly, however, metric information is required for the matching 2D–3D correspondences.
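The role of that metric information can be illustrated with the pinhole camera model: given 3D points in metric coordinates and known intrinsics K, the extrinsic pose [R|t] maps world points into the image, and pose solvers (e.g. the iterative or DLT-style PnP methods underlying the approaches cited here) invert this relation from point correspondences. The sketch below only shows the forward projection; the focal length, principal point, and fingertip coordinate are assumed placeholder values, not values from this system.

```python
# Pinhole projection sketch: x = K (R X + t), then perspective divide.
# All numbers below are illustrative assumptions, not system parameters.

K = [[800.0,   0.0, 320.0],   # assumed focal length and principal point
     [  0.0, 800.0, 240.0],
     [  0.0,   0.0,   1.0]]

def project(X, R, t):
    """Project a metric 3D point X into pixel coordinates."""
    # world -> camera coordinates: Xc = R X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # camera -> homogeneous image coordinates: x = K Xc
    x = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return (x[0] / x[2], x[1] / x[2])   # perspective divide

R_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 50.0]                    # camera 50 cm from the hand
u, v = project([5.0, 0.0, 0.0], R_id, t)  # -> (400.0, 240.0)
```

Because the projection discards depth, image measurements alone fix the pose only up to scale; the known metric geometry of the reference pattern (or, in our case, the hand) is what makes the recovered translation metric.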
In AR research, marker-based camera pose estimation approaches [15][8] have shown successful registration of virtual
objects with the help of robust detection of fiducial markers.
We replace such markers with the user’s outstretched hand.
The rest of this paper is structured as follows: In Section
2, fingertip tracking and camera pose estimation are described
in detail. In Section 3, we show experimental results regarding the speed and robustness of the system and present examples of AR applications employing the hand user interface.
In Section 4, we discuss benefits and limitations of our implemented method. We present our conclusions and ideas for
future work in Section 5.