In this work, we investigate the potential that arises when automated robotic inspection is combined with augmented reality technologies: the path planning capabilities and cognitive understanding of the robot are shared with the human, while human intelligence in turn empowers the robot's reactive path planning through intuitive, head motion-based teleoperation. The envisioned scenario is that of industrial inspection and the utilization of aerial robots within it. In such applications, a model
of the structure to be inspected may be available a priori, but it is typically only approximate, potentially outdated, or otherwise different from the real-life structure to be inspected by the aerial robot. At the same time, autonomous exploration by aerial robots is not always able to identify the information gain of areas with critical visual information (e.g., written signs or local color changes). Motivated by this fact,
we combine methods for automated, optimized coverage path planning given the prior model of the structure with an AR interface that provides the human inspector with feedback of real-time stereo views, as well as of how the online 3D reconstruction is evolving. Essentially, the
AR interface enables the operator to reactively command the robot such that it collects the required additional views of the environment. Through such an approach, the robot can ensure general coverage of the structure, while, when a change or an area of special interest is detected, the human operator can intuitively command new viewpoints through the AR interface and verify their successful 3D mapping in real time. The proposed approach and implementation
are evaluated experimentally using an autonomous aerial robot capable of navigation and 3D mapping in GPS-denied environments. Figure 1 presents an instance of this experimental study.