Instead, the MIT Media Lab prototype combines a pocket projector, a mirror, and a video camera into a pendant-like wearable device. The project evokes Chris Schmandt’s “Put That There” MIT effort from the late 1970s. Like Imaginary Interfaces, the camera recognizes and tracks the user’s hand gestures and physical objects using computer vision. Unlike Imaginary Interfaces, SixthSense relies on a larger display, projecting visual information onto surfaces, walls, or other physical objects. To decode gestures, the camera requires visual-tracking fiducials placed on the tips of the user’s fingers. Applications are plentiful. For example, SixthSense can project live video news onto a newspaper, and users can employ gestures to manipulate a virtual page. As in all gesture systems, defining a broad yet simple-to-learn gesture vocabulary is a challenge.
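The fiducial-based tracking described above can be illustrated with a minimal sketch. This is not SixthSense’s actual implementation; it assumes a hypothetical `track_marker` function that locates a colored fingertip marker in an RGB frame by per-channel color thresholding and reports its centroid as the tracked fingertip position:

```python
import numpy as np

def track_marker(frame, lo, hi):
    """Return the (row, col) centroid of pixels whose RGB values fall
    within [lo, hi] on every channel, or None if no marker is found.
    (Illustrative only; a real tracker would also filter noise and
    track the marker across frames.)"""
    mask = np.all((frame >= lo) & (frame <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (ys.mean(), xs.mean())

# Synthetic 8x8 frame: black background with a red 2x2 "marker patch"
# standing in for a colored fingertip fiducial.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[2:4, 5:7] = (255, 0, 0)

pos = track_marker(frame, lo=(200, 0, 0), hi=(255, 60, 60))
print(pos)  # centroid of the red patch: (2.5, 5.5)
```

A gesture recognizer would then interpret the stream of such fingertip positions over time, which is where the vocabulary-design challenge noted above arises.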