Smartphone app for visually impaired users
SCIENTISTS from IBM Research and Carnegie Mellon University have developed an open platform designed to support the creation of smartphone apps to enable visually impaired users to better navigate their surroundings.
The IBM and CMU researchers used the platform to create a pilot app, called NavCog, which draws on existing sensors and cognitive technologies to inform the visually impaired on the Carnegie Mellon campus about their surroundings by "whispering" into their ears through earbuds or by creating vibrations on smartphones.
The app analyses signals from Bluetooth beacons located along walkways and from smartphone sensors to help users move without human assistance, whether inside campus buildings or outdoors. Researchers are exploring additional capabilities for future versions of the app that would detect who is approaching and what their mood is. The NavCog app is now available at no cost on the App Store.
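For readers curious about the mechanics, the sketch below illustrates one common way a beacon-based system can turn signal-strength readings into a rough position: a log-distance path-loss model converts each beacon's received signal strength into an approximate distance, and a weighted centroid over nearby beacons gives an estimate. This is not drawn from NavCog's source code; the beacon coordinates, transmit power and path-loss exponent are placeholder assumptions.

```python
import math

# Illustrative only: NavCog's actual localisation algorithm is not described
# in the article. This sketch shows a generic approach: convert Bluetooth
# beacon RSSI to approximate distance, then take a weighted centroid.

def rssi_to_distance(rssi, tx_power=-59, path_loss_exponent=2.0):
    """Convert received signal strength (dBm) into an approximate distance (metres)."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

def estimate_position(beacon_readings):
    """Weighted centroid of beacon positions; nearer beacons get larger weight.

    beacon_readings: list of ((x, y), rssi) tuples for beacons along a walkway.
    """
    total_weight, wx, wy = 0.0, 0.0, 0.0
    for (x, y), rssi in beacon_readings:
        distance = rssi_to_distance(rssi)
        weight = 1.0 / max(distance, 0.1)   # closer beacon -> larger weight
        total_weight += weight
        wx += weight * x
        wy += weight * y
    return (wx / total_weight, wy / total_weight)

# Example: three hypothetical beacons spaced along a corridor
readings = [((0.0, 0.0), -55), ((5.0, 0.0), -70), ((10.0, 0.0), -80)]
print(estimate_position(readings))  # position skewed toward the strongest beacon
```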
The first set of cognitive assistance tools for developers is now available via the cloud through IBM Bluemix at http://hulop.mybluemix.net. The open toolkit consists of a navigation app, a map editing tool and localisation algorithms that can help blind users determine in near real time where they are, which direction they are facing and other information about their surroundings. The computer-vision navigation tool turns smartphone images of the surrounding environment into a 3-D space model to help improve localisation and navigation for the visually impaired.
The combination of these technologies is known as "cognitive assistance," an accessibility research field dedicated to helping the visually impaired regain information by augmenting missing or weakened abilities. Researchers plan to add further localisation technologies, including sensor fusion, which integrates data from multiple environmental sensors to support more sophisticated cognitive functions such as facial recognition in public places. Researchers are also exploring the use of computer vision to characterise the activities of people in the vicinity, and ultrasonic technology to help identify locations more accurately.
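The article mentions sensor fusion without detailing how the readings would be combined. As a minimal, hypothetical illustration, the snippet below fuses two independent position estimates (say, one from Bluetooth beacons and one from the phone's inertial sensors) by weighting each by the inverse of its variance; the numbers are invented for the example.

```python
# Illustrative only: a minimal inverse-variance fusion of two position estimates,
# not the project's actual sensor-fusion method.

def fuse(estimate_a, var_a, estimate_b, var_b):
    """Combine two noisy 1-D position estimates, trusting the more precise one more."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical inputs:
#  - beacon fix: 12.0 m along a corridor, noisy (variance 4.0 m^2)
#  - inertial dead reckoning: 10.5 m, more precise here (variance 1.0 m^2)
position, variance = fuse(12.0, 4.0, 10.5, 1.0)
print(position, variance)  # 10.8 m with variance 0.8, better than either input alone
```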
Martial Hebert, director of the Robotics Institute at Carnegie Mellon, said that, from localisation information to the understanding of objects, the team has been creating technologies to make the real-world environment more accessible for everyone.
"With our long history of developing technologies for humans and robots that will complement humans’ missing abilities to sense the surrounding world, this open platform will help expand the horizon for global collaboration to open up the new real-world accessibility era for the visually impaired in the near future," said Hebert.
