This research also foresaw that augmented reality offers the possibility not only of enhancing our existing senses, but of compensating for missing ones. In this thesis, the author designed and implemented an augmented reality application for hearing augmentation that lets hearing-impaired users see, in a natural and intuitive way, visual cues for what is being said to them. The application, dubbed iHeAR, targets the iOS platform with an iPad 2 as the supporting device. It combines current speech recognition and face detection algorithms to display the “heard” speech in real time next to the speaker’s face in a “text bubble”. Speech recognition is provided by the open-source OpenEars framework, an iOS wrapper around the PocketSphinx system for on-device speech recognition; a detailed explanation of OpenEars was provided in section 4.3. Face detection is achieved using OpenCV’s implementation of the Viola-Jones method, whose explanation was provided in section