be aligned with the shape rendering to provide the illusion
that the physical hands are coming out of the screen.
Switching between Representations. It may be advantageous
to switch between these representations. A tangible token,
for example, could be represented by a remote tangible object, but
if the remote user does not have enough tokens, a physical
rendering can be used instead (see Figure 3). Hands can be
represented as a physical rendering when manipulating an object,
or as a virtual rendering when pointing or annotating (see
Figure 7). These state transitions are in part inspired by our
Sublimate concept [14].
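As a rough illustration, this switching logic can be framed as a small state machine. The Python sketch below is a hypothetical rendition of the rules above; the names (Representation, represent_token, represent_hand) are invented for this example and are not part of our system.

```python
from enum import Enum, auto

class Representation(Enum):
    REMOTE_TANGIBLE = auto()     # an actual token on the remote table
    PHYSICAL_RENDERING = auto()  # shape-display pins render the object
    VIRTUAL_RENDERING = auto()   # graphics only, e.g., projected

def represent_token(remote_tokens_available):
    # A token maps to a remote tangible if one is free; otherwise it
    # falls back to a physical rendering (see Figure 3).
    if remote_tokens_available:
        return Representation.REMOTE_TANGIBLE
    return Representation.PHYSICAL_RENDERING

def represent_hand(is_manipulating):
    # Hands are rendered physically while manipulating an object, and
    # virtually while pointing or annotating (see Figure 7).
    if is_manipulating:
        return Representation.PHYSICAL_RENDERING
    return Representation.VIRTUAL_RENDERING
```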
PHYSICALLY MANIPULATING REMOTE OBJECTS
Prior work [5, 7] has shown how shape displays can be used
to manipulate physical objects, e.g., using translation and rotation.
Here, we explore different control schemes to allow a
user to manipulate remote objects. Using shape capture and
display, users can reach through the network and pick up a remote
physical object. These interaction techniques were developed
through iterative prototyping with our inFORM system.
Hundreds of visitors to our lab tried different techniques
to manipulate objects remotely, and their comments and our
observations led us to develop and improve the following
techniques.
Gesture Control
Direct Gesture Control allows a user to interact directly with
a remote tangible object through transmitted physical embodiment,
which is rendered on the remote shape display. The
rendered shape of the user directly applies a force to objects
placed on the surface. For example, if the user forms a cup
with their hands and raises it, the rendered shape rises and the
pins can move a ball upwards. By observing the reaction of
the physical object to their transmitted gestures, users can improvise
to expressively manipulate a variety of objects. While
our shape displays currently apply only vertical forces, objects can
still be translated laterally by tilting and sliding [7]. In addition
to users’ hands and arms, any object that is placed in
the shape capture area can be transmitted and used to manipulate
remote objects (see Figure 4). Because our shape
display, based on inFORM [7], can only render shapes 0-100
mm in height, there is a question of how to render the remote
environment, which most likely extends beyond 100 mm in
height. We explored two mappings: Scaled and 1:1 with gesture
zones. The scaled mapping takes all depth data and maps
its height values to the shape display’s maximum extent. The
second mapping, 1:1 with gesture zones, takes a portion of
the input space equal in height to the maximum pin travel
and renders it on the shape display. This zone can be directly
above the surface, or in mid-air, allowing users to touch the pin
array without manipulating remote objects. In our teleoperation
scenario we use a 1:1 mapping, with the gesture zone for
physical rendering starting directly above the horizontal screen.

Figure 5. Pushing against the side of a shape-rendered object with the
brush tool moves it.
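As a concrete sketch of these two mappings, the following Python snippet converts a captured depth frame into pin heights. The 100 mm travel and the gesture-zone placement come from the text; the function names and the zone_bottom_mm parameter are our own, for illustration only.

```python
import numpy as np

PIN_TRAVEL_MM = 100.0  # our display renders shapes 0-100 mm in height

def scaled_mapping(depth_mm):
    """Compress the full captured height range into the pin travel."""
    lo, hi = depth_mm.min(), depth_mm.max()
    if hi == lo:  # flat scene: nothing to render
        return np.zeros_like(depth_mm)
    return (depth_mm - lo) / (hi - lo) * PIN_TRAVEL_MM

def gesture_zone_mapping(depth_mm, zone_bottom_mm=0.0):
    """1:1 mapping of a 100 mm slice of the input space.

    zone_bottom_mm = 0 places the gesture zone directly above the
    horizontal screen, as in our teleoperation scenario; larger values
    move it into mid-air, so users can touch the pin array without
    manipulating remote objects.
    """
    return np.clip(depth_mm - zone_bottom_mm, 0.0, PIN_TRAVEL_MM)
```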
Mediated Gesture Control exploits the fact that the remote object's
pose is tracked, so the object can be updated and moved to keep it
synchronized with its underlying digital model [7]. Physics-based
Gestures use a physics library to detect the user's collisions with
the model and to update and move it accordingly. The updated
model then causes the remote object to be physically moved.
This is similar to the proxy-based approach in HoloDesk [10],
but with actuated physical output. Iconic Gestures provide
users with more abstract control. The user can pinch over the
representation of the remote object to grab it, move their arm
to another location, and open the pinch gesture to release it.
The remote object is then actuated by the system to move to
that location.
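A minimal sketch of the iconic pinch gesture, assuming a hand tracker that reports a 2D hand position and a pinch flag each frame; GrabState, near, and move_remote_object are hypothetical names standing in for whatever tracking and actuation interfaces a system provides.

```python
from dataclasses import dataclass

@dataclass
class GrabState:
    holding: bool = False
    offset: tuple = (0.0, 0.0)  # object position relative to the hand

def near(a, b, radius_mm=30.0):
    """True if two 2D points (in mm) are within grabbing distance."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 < radius_mm

def update(state, hand_xy, pinched, object_xy, move_remote_object):
    """Process one tracking frame of the pinch-grab-release gesture."""
    if pinched and not state.holding and near(hand_xy, object_xy):
        # Pinching over the object's representation grabs it.
        state.holding = True
        state.offset = (object_xy[0] - hand_xy[0],
                        object_xy[1] - hand_xy[1])
    elif pinched and state.holding:
        # Moving the arm while pinched carries the digital model along;
        # the system then actuates the remote object to follow it.
        move_remote_object((hand_xy[0] + state.offset[0],
                            hand_xy[1] + state.offset[1]))
    elif not pinched and state.holding:
        # Opening the pinch releases the object at its current location.
        state.holding = False
```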
Interface Elements
Interface elements, such as virtual menus, can be projected
around the rendering of a remote object to provide access to
different operations. Dynamically rendered physical affordances
[7], such as buttons or handles, that appear around the
remote objects can be used for control. The user could press,
push or pull such affordances to move the object.
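One way such a rendered button affordance could be sensed is sketched below in Python; it assumes a shape display that can both set and read back pin heights, and the display API shown (set_height, read_height) is hypothetical.

```python
BUTTON_REST_MM = 20.0       # height at which the button is rendered
PRESS_THRESHOLD_MM = 10.0   # pin depression that counts as a press

def render_button(display, pin_region):
    """Raise a cluster of pins near the remote object to form a button."""
    for pin in pin_region:
        display.set_height(pin, BUTTON_REST_MM)

def button_pressed(display, pin_region):
    """Detect a press when the user pushes the button's pins down."""
    mean = sum(display.read_height(p) for p in pin_region) / len(pin_region)
    return mean < BUTTON_REST_MM - PRESS_THRESHOLD_MM
```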
Tangibles and Physical Tools
Tangible Tokens can be used to control a remote object.
As the user moves the token, the model of the remote object
is updated, and the remote object is moved to reflect the
changed state. Two tangible tokens can be linked such that
moving one causes the other to move, and vice versa, allowing
for bi-directional control [19].
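Bi-directional linking must avoid feedback loops, where each side's actuation re-triggers the other's. The sketch below shows one way to break such a loop with a small dead band; all names are invented for illustration.

```python
EPSILON_MM = 2.0  # dead band: ignore sub-resolution moves

class LinkedToken:
    def __init__(self, actuator):
        self.actuator = actuator  # moves the local physical token
        self.position = (0.0, 0.0)
        self.peer = None          # the linked token on the other table

    def on_user_moved(self, pos):
        """Called by the local tracker when the user moves this token."""
        self.position = pos
        if self.peer is not None:
            self.peer.sync_to(pos)

    def sync_to(self, pos):
        """Actuate only if the peer's position meaningfully differs,
        so the two tokens do not chase each other indefinitely."""
        dx, dy = pos[0] - self.position[0], pos[1] - self.position[1]
        if (dx * dx + dy * dy) ** 0.5 > EPSILON_MM:
            self.position = pos
            self.actuator.move_to(pos)  # must not re-trigger on_user_moved
```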
Tools allow users to manipulate remote objects by interacting
with the local physical rendering, and can provide additional
degrees of freedom (DOFs) of input. Our brush tool, for example, allows users
to push remote objects (see Figure 5). The bristles of the
brush serve two purposes. First, they decrease the friction between
the tool and the pins, which may get stuck when a lateral
force is applied. Second, they smooth the haptic feedback
resulting from the discrete, jerky motion when a physically
rendered object is translated on a limited-resolution shape display.
To determine the direction in which to move the object,
the brush tool’s pose is tracked by an overhead camera, while
a mechanical switch senses pressure.
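Putting the brush tool's sensing together, a hedged sketch of the control step might look as follows; the names and the fixed step size are assumptions, not our exact pipeline, but the inputs (camera-tracked pose, switch state) match the description above.

```python
import math

STEP_MM = 5.0  # how far the object's model advances per control step

def brush_step(brush_yaw_rad, switch_pressed, object_model):
    """Advance the remote object's model in the brush's pushing direction.

    brush_yaw_rad comes from the overhead camera tracking the brush's
    pose; switch_pressed is the mechanical switch sensing pressure.
    """
    if not switch_pressed:
        return
    dx = math.cos(brush_yaw_rad) * STEP_MM
    dy = math.sin(brush_yaw_rad) * STEP_MM
    # Updating the model causes the remote object to be physically
    # moved, as with mediated gestures above.
    object_model.translate(dx, dy)
```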