Background
The theory behind pen and touch interaction is based
on Guiard’s seminal work, which introduced the kinematic chain model for asymmetric bimanual interaction
[5, 8]. According to this model, the roles of the two
hands are such that the non-dominant hand (NDH) sets
the frame of reference in which the dominant hand
(DH) operates. In our pen and touch context, this generally translates into a more or less strict division of labour,
as studied by several authors [2, 7-9]. In this role distribution, the pen executes fine-precision actions such as
drawing and handwriting, touch performs coarser manipulations such as panning, zooming, and tapping, and combined pen and touch provides "new tools" [7]. Those new
tools can materialise in several ways. For instance, the
NDH can be used to constrain or set the parameters of
pen tracing [2], or to activate a particular function
whose expression is articulated by the stylus [8].
A common characteristic of those bimanual actions is that
they do not require simultaneous motions of the two
hands. In most cases, and as per the kinematic chain
model, the NDH fixes the operational context and the DH
moves within that context, producing a particular response,
e.g. a finger of the NDH pins an object and the pen is
used to drag off a copy of it [4, 7]. This suggests that
NDH postures can be an effective technique to enhance
the vocabulary of pen interactions [8].
On the document creation front, as mentioned
above, tabletop applications developed so far have mostly
limited themselves to supporting annotation and basic
editing functions [9]. Furthermore, we observe that the latest
consumer office products (with Microsoft Office at the forefront), while showing some adaptation to
touchscreen devices, are still plainly rooted in
their WIMP legacy [3].