A wealth of sensing opportunities. A vast body of literature has focused on how to make the most of smartphone sensors [76].
A great deal of work on mobile phone sensing has come from Andrew Campbell's group at Dartmouth. The group developed CenceMe [77], a human-centric system that infers context information about users of sensor-enabled
mobile phones by applying machine learning techniques to the sensory input from the microphone, the accelerometer,
the camera, and the Bluetooth radio (the latter used to gauge how social the user is from the number of Bluetooth contacts).
The NeuroPhone system [78] employs neural signals, captured with cheap off-the-shelf wireless electroencephalography (EEG) headsets, to control mobile phones and achieve hands-free human–mobile interaction. The EyePhone system [79] tracks
the user's eye movement across the phone's display using the phone's front-facing camera; machine
learning algorithms infer the eye's position on the display and detect eye blinks, which emulate mouse clicks.
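A pattern common to systems such as CenceMe and EyePhone, extracting compact features from a raw sensor stream and feeding them to a lightweight classifier, can be sketched as follows. The windowing scheme, feature set, and shallow decision tree below are illustrative assumptions, not the pipelines actually used in those systems.

```python
# Minimal sketch of phone-sensor context inference: compact features from
# accelerometer windows feed a small classifier. Window size, feature set,
# and model are illustrative assumptions, not CenceMe's actual pipeline.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

WINDOW = 100  # samples per window, e.g. 2 s at 50 Hz

def features(window: np.ndarray) -> np.ndarray:
    """Per-axis statistics of one (WINDOW x 3) accelerometer window."""
    magnitude = np.linalg.norm(window, axis=1)
    return np.concatenate([
        window.mean(axis=0),                  # mean per axis
        window.var(axis=0),                   # variance per axis
        [magnitude.mean(), magnitude.var()],  # overall motion energy
    ])

def train(windows, labels):
    """Fit a shallow tree on labelled windows (e.g. 'sitting', 'walking')."""
    X = np.array([features(w) for w in windows])
    clf = DecisionTreeClassifier(max_depth=5)  # small model, cheap at inference time
    clf.fit(X, labels)
    return clf

def infer(clf, window: np.ndarray) -> str:
    """Classify a single new window of accelerometer samples."""
    return clf.predict(features(window).reshape(1, -1))[0]
```

A shallow model of this kind keeps inference cheap enough to run continuously on the handset, which is precisely the resource constraint that the systems discussed next are designed around.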
SoundSense [80] focuses on the microphone as a sensor and offers a sound classification system designed to cope with
the computational limitations of smartphones. Other notable contributions include Darwin [81], a collaborative sensing
system that pools the sensing capabilities of multiple smartphones; VibN [82], which uses accelerometer, audio, and
localization sensor data to determine what is happening around the user; and Jigsaw [83], a continuous sensing engine that
addresses the challenges of long-term sensing, such as energy efficiency.
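The energy-efficiency concern that Jigsaw targets can be made concrete with a common technique, adaptive duty cycling, sketched below. The thresholds, sampling rates, and the read_accelerometer() stub are assumptions chosen for illustration, not Jigsaw's actual policy.

```python
# Sketch of adaptive duty cycling for continuous sensing: sample sparsely while
# readings are quiet, and switch to dense sampling only when recent motion
# crosses a threshold. Thresholds, rates, and the sensor stub are illustrative
# assumptions, not Jigsaw's actual pipeline.
import random
import time

LOW_RATE_S = 2.0       # seconds between samples when idle
HIGH_RATE_S = 0.1      # seconds between samples when active
ACTIVITY_THRESHOLD = 0.3

def read_accelerometer() -> float:
    """Stub for a sensor read; returns a recent motion magnitude."""
    return random.random()

def sensing_loop(duration_s: float = 10.0) -> None:
    interval = LOW_RATE_S
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        motion = read_accelerometer()
        if motion > ACTIVITY_THRESHOLD:
            interval = HIGH_RATE_S   # activity detected: sample densely
        else:
            interval = LOW_RATE_S    # quiet: back off to save energy
        time.sleep(interval)

if __name__ == "__main__":
    sensing_loop()
```

Backing off the sampling rate whenever the signal is quiet trades a small amount of responsiveness for far fewer sensor and CPU wake-ups, which is the central trade-off in long-term continuous sensing.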