User control modalities that assist people with disabilities in operating a robotic arm have been studied previously. A physical
joystick is a widely accepted modality for the control of robotic
arms. Joysticks are standard components on most commercially available robotic arms and allow the user to operate the end effector through directed selection [5]. The
physical joystick is inexpensive, simple in design, and provides accurate control. However, many robotic arm joysticks [6] offer only two-dimensional control of the x and y directions, with the z direction controlled through a twisting knob or a separate controller. This is sufficient for able-bodied
individuals, but is of limited use for people with little or no finger or hand mobility, such as those with upper-level SCIs [7].
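To make this control mapping concrete, the sketch below illustrates one plausible joystick-to-velocity scheme. It is an illustrative assumption only; the cited systems [5]-[7] do not specify this mapping, and the axis names, dead-zone threshold, and speed scaling are hypothetical.

```python
# A minimal sketch (assumed, not from [5]-[7]): mapping a 2-axis joystick
# with a twist knob to a Cartesian end-effector velocity command.
def joystick_to_velocity(x_axis, y_axis, twist,
                         max_speed=0.1, dead_zone=0.05):
    """Map normalized readings in [-1, 1] to (vx, vy, vz) in m/s."""
    def scale(raw):
        # Ignore small accidental deflections, then scale linearly.
        return 0.0 if abs(raw) < dead_zone else raw * max_speed
    # x and y come from stick deflection; z requires twisting the knob,
    # the motion that users with limited hand mobility often cannot perform.
    return scale(x_axis), scale(y_axis), scale(twist)

# Example: forward push plus a twist for downward motion.
print(joystick_to_velocity(0.0, 0.8, -0.3))  # approx. (0.0, 0.08, -0.03)
```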
Another popular interface for robot control is automatic speech recognition, which has been considered a solution to the limitations of the traditional joystick. For example, a system called FRIEND operates a robotic arm attached to an
electric wheelchair using a speech interface with simple
commands [8]. Unfortunately, speech control is limited to
discrete commands and is not robust in noisy environments. This makes user control extremely difficult outdoors or wherever there is significant background noise. There are also significant safety issues with speech recognition systems due to
unintentional activation. Other interfaces that have been
proposed for operating robotic arm systems to assist individuals with disabilities, including gesture-based interfaces [9], BCIs [10], and eye-gaze control [11], are in their infancy and still have significant technical challenges to overcome.