Introduction
Unlike most other manual operations, driving has a constrained operation space [1]. In contrast to the limited hand control space in the vehicle, the functions available in modern vehicles have become almost unlimited due to today's computer technology. Layered controls in center stacks are not unusual for most cars; numerous buttons are dedicated to recently added advanced functions. The increased visual clutter of the dashboard and center console seems unavoidable as car manufacturers continuously make cars smarter. Given that controls and displays of high-priority functions have to remain within the full hand grasp area [2] and within a 30-degree viewing angle from the straight-ahead direction to ensure driving safety, some advanced functions are cumbersome or impossible to access while the vehicle is in motion.
Even though many controls are located within the driver's reach envelope, that does not mean they have good usability. For example, the control for adjusting the outside rear-view mirrors may be located on the driver's left-hand side in order to be placed within the full grasp envelope [2], yet when it is used to adjust the right-side mirror, the positional changes of that mirror occur on the side of the driver opposite from where the control is placed. This is to say that drivers sometimes cannot easily manipulate controls that are co-located with the function they are controlling, but may instead need to exert mental effort to match, translate, or transform their actions spatially so that they achieve the desired effect [3].
Gestures may provide design options that address ergonomics problems such as those mentioned above. As described by Rehm [4], "gestures can provide additional or redundant information to accompany verbal utterance, they can have a meaning in themselves, or they can provide the addressees with subtle clues about personality or cultural background." Natural gestures also do not require physical contact between the operator and the controlled entities, and they can potentially act as an extension of the human body. The rich source of user intention surrounding natural gestures has already shown its power in providing natural, efficient, and flexible interaction in virtual environments [5][6].
The gaming industry has already achieved great success through the use of free-form gesture interaction (e.g., Microsoft Kinect and Nintendo Wii). As contact-based or near-contact-based gestures are widely adopted in smartphones and tablets, car manufacturers have also applied similar gesture interaction to both touchscreen-based and remote-controller-based infotainment systems. For example, the Cadillac User Experience (CUE) system displays additional buttons when a proximity sensor detects a driver's hand near a region of the touchscreen. Both the Audi Multi Media Interface (MMI) system and the Mercedes C-class center controller incorporate common surface gestures, such as character recognition. Beyond these, almost all major automobile manufacturers already use gaze behaviors to detect drowsiness, which could also be conceptualized as belonging to the broad concept of gesture recognition. In addition to improving in-vehicle interaction for healthy drivers who have no physical or sensory limitations, gesture interaction may open up another interaction channel for people with speech and hearing difficulties [5][7]. Although not every attempt to use gestures is perfect, the presence of gesture interaction clearly has the potential to contribute toward making today's cars user-friendly. In addition, many successful speech-based interfaces, such as