In the 3D docking task, raising the ball without having it fall
out of the remote hands was a challenge for many participants.
The claw tool was more popular here, as it constrained
the ball well during 3D movement.
Real-world collaborative workspaces tend to have many objects
in use. We were interested in how users would handle
additional obstacles without haptic feedback. As expected,
participants found the obstacle task the most challenging,
while also the most realistic. “Two hands were surprisingly
frustrating in comparison to the prior tasks because they take
up more space”. But participants were able to use a variety
of different representations to more easily complete the tasks.
The claw’s small footprint simplified navigation around objects.
“It enabled me to move the ball without having my arm
get in the way. Rotating the view while using one hand also
worked fairly well because it enabled me to adjust my trajectory
to avoid the obstacle.”
Other participants preferred the invisible hand for obstacle
avoidance. “...most useful to go around the obstacles.” But in
general this technique was harder for people to discover. “It
took me a while to get a hang of the transparent feature, but
it was good because it made my interaction faster, and give it
a mouse like feel where I could hover and not click as when
you use a mouse.” Participants also commented on picking
up objects: “I also found myself wanting a way to come from
below an object – and didnt realize until much later that this
is what the ghost/invisible mode enables.”
Observed patterns of use
A number of techniques were used to manipulate the remote
object; we outline the most prevalent. To avoid biasing
participants, we did not give any instructions on gestures.
• Push: Hand (sideways like paddle), back of the hand, index
finger, back of the arm, V-shape with fingers (thumb +
index).
• Hook: Hook shape with hand (bring object closer).
• Flicking: Flick object with index finger.
• Ghost: Switch to the virtual representation, move inside the
object, then switch back to the physical shape to pick it up.
• Scoop: Cup hands, scoop and move around. “Two hands
were easier to use when I had my palms close together.”
• Capture: Approach with two opposing V-shaped hands.
Our overall observation of the gestures that emerged during the
study was that participants adapted quickly to the degrees of
freedom supported by the system and did not try to grasp the
object. Instead, everyone interacted as if the remote sphere
were a slippery object: pushing it sideways to translate it on the
surface, and scooping it with cupped hands to move it in 3D.
LIMITATIONS AND FUTURE WORK
Many of the limitations of Physical Telepresence are related
to the hardware used to implement it.
2.5D rendering using physical pins limits our system’s output
DOFs to reliefs and prevents overhangs. This is especially
important for telemanipulation, since it only allows vertical
forces to be applied to surface objects. The system thus cannot
grasp objects [3]; it can only lift them, or translate them by
tilting and sliding. Robotic grippers could, however, be combined
with shape displays to provide more dexterous manipulation.
Shape displays with more output DOFs could also be explored
to apply lateral forces with finer control.
Latency, as discussed, is another factor that limits remote
manipulation; further improvements in hardware and
interaction techniques could address it.
The limited resolution of current shape displays affects what
content can be represented. Following the Nyquist sampling
criterion, the spatial resolution must be at least twice that of
the smallest feature to display it clearly. We observed this issue when rendering
fingers (2 cm wide) on our shape display with 2.54 cm
spacing. Increasing the resolution and pin travel will allow
for more complex content and more realistic representation
of remote users.
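The sampling argument above can be expressed as a quick numeric check. This is an illustrative sketch, not code from our system; the function names and the strict two-pins-per-feature threshold are assumptions for exposition.

```python
# Sketch of the sampling criterion: a feature can only be rendered
# cleanly if it spans at least two pins of the display.

def min_resolvable_feature(pin_pitch_cm: float) -> float:
    """Smallest feature width (cm) that spans at least two pins."""
    return 2.0 * pin_pitch_cm

def can_render(feature_width_cm: float, pin_pitch_cm: float) -> bool:
    """True if the feature meets the two-pins-per-feature criterion."""
    return feature_width_cm >= min_resolvable_feature(pin_pitch_cm)

# A 2 cm finger on our display with 2.54 cm pin spacing falls below
# the ~5.1 cm minimum and therefore aliases.
print(can_render(2.0, 2.54))  # → False (finger on current display)
print(can_render(2.0, 0.5))   # → True (hypothetical 0.5 cm pitch)
```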
Collisions between remote and local objects constrain what can
be physically rendered. We implemented techniques to address
this, such as not rendering geometry where users’ hands rest
on the local surface. More compliant shape
displays could be built using soft actuators [6, 8]. In addition,
users cannot reach inside a rendered object, as is possible
with virtual graphics [10]. Our previously introduced
Sublimate system, however, provides an approach that combines
AR and shape rendering [14].
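The collision-avoidance technique above can be sketched as masking pin heights wherever the local user’s hands are detected. The grid representation and function name here are assumptions for illustration, not our actual implementation.

```python
# Sketch: suppress physical rendering (force pins flat) in cells
# covered by the local user's hands.

def mask_pin_heights(heights, hand_mask):
    """Return pin heights with hand-covered cells forced to height 0."""
    return [
        [0 if covered else h for h, covered in zip(row, mask_row)]
        for row, mask_row in zip(heights, hand_mask)
    ]

heights = [
    [3, 3, 0],
    [3, 3, 0],
]
hand_mask = [
    [False, True, False],   # a hand covers the top-middle cell
    [False, False, False],
]
print(mask_pin_heights(heights, hand_mask))
# → [[3, 0, 0], [3, 3, 0]]
```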
Network latency was not investigated with our current systems,
as they were not deployed as distributed setups. However,
latency is a critical factor for effective remote manipulation,
and further studies are required to investigate how it affects
operator performance.
In addition to addressing the limitations of shape display
hardware, further work and evaluation are needed to explore
new interaction techniques for controlling these different
renderings of remote participants.
Beyond these improvements to the system, we would also
like to further explore other application domains, such as
education, medicine, and industry. We envision that teachers
could show remote students how to play instruments, and
slow down the playback of their hands so pupils could see
more clearly, while doctors could use Physical Telepresence
for telepalpation of remote patients.
CONCLUSIONS
We have introduced the space of Physical Telepresence
enabled by shape capture and shape display in a shared
workspace context. Physical Telepresence allows for the
physical embodiment and manipulation of remote users, environments,
objects, and shared digital models. By loosening
the 1:1 link in shape capture and remote shape output, we
have begun to explore how remote users can go beyond physically
being there. In addition, we highlighted a number of
application domains and example applications that show how
Physical Telepresence can be put to use for shared work on
3D models, remote assistance, and communication.