JULY 13, 2009--Researchers from the University of Bristol (Bristol, UK; www.bristol.ac.uk) have developed real-time image-processing algorithms that identify objects and obstacles, such as trees, street furniture, vehicles, and people, to help blind people better navigate the world. The system uses images from a pair of stereo cameras to create a "depth map" for calculating distances. The system can also analyze moving objects and predict where they are going.
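The article does not describe the Bristol group's algorithms in detail, but a stereo depth map rests on a standard triangulation relation: distance is proportional to focal length times camera baseline, divided by the pixel disparity between the two views. A minimal sketch, with illustrative numbers that are not CASBLiP's actual parameters:

```python
# Hypothetical sketch of stereo triangulation: Z = f * B / d.
# Focal length, baseline, and disparity values below are illustrative only.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Distance to a point, from its disparity between left/right images."""
    if disparity_px <= 0:
        return float("inf")  # no stereo match, or object effectively at infinity
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 30 px disparity -> 2.8 m
distance = depth_from_disparity(30, 700, 0.12)
```

Applying this per pixel over a matched image pair yields the depth map the system uses for distance calculations.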
So how do you present this visual information to a blind person? Technology developed at the University of La Laguna (La Laguna, Spain; www.ull.es) makes it possible to transform spatial information into 3-D acoustic maps.
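The article does not specify how the La Laguna technology renders its acoustic maps, but spatialized audio of this kind typically relies on cues such as interaural time difference for direction and level falloff for distance. A rough, assumed sketch of two such cues (the constants and formulas are textbook approximations, not the project's method):

```python
import math

# Hypothetical sonification cues; not CASBLiP's actual acoustic-map algorithm.
SPEED_OF_SOUND = 343.0  # m/s in air
HEAD_RADIUS = 0.0875    # m, a common average-adult assumption

def interaural_time_difference(azimuth_deg):
    """Woodworth approximation: ITD = (r / c) * (theta + sin(theta)).

    Positive azimuth (source to the right) gives a positive delay,
    meaning the sound reaches the left ear later than the right.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

def loudness_db(distance_m, ref_db=70.0):
    """Inverse-square level falloff relative to a 70 dB source at 1 m."""
    return ref_db - 20.0 * math.log10(max(distance_m, 0.1))
```

Playing each detected object as a sound with direction- and distance-dependent cues like these is what turns the depth map into an audible scene.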
The EU-funded CASBLiP project was conceived to integrate the image-processing and acoustic-mapping technologies into a single, portable device that blind people could wear to help them navigate outdoors. The device also incorporates a gyroscopic head-positioning sensor developed by the Polytechnic University of Marche (Ancona, Italy; www.univpm.it/English/Engine/RAServePG.php). The sensor detects how the wearer moves his head, feeding back the head's position and the direction it is facing so that the relative positions of the sounds played to the wearer also shift as expected. For example, if the wearer turns his head toward a sound on the right, the sound must move left toward the center of the sound picture.
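The head-turn example above amounts to subtracting the head's yaw from the obstacle's world-frame bearing before rendering the sound. A minimal sketch of that compensation, assuming a yaw angle from the gyroscopic sensor (function names are illustrative, not from the project):

```python
# Hypothetical head-tracking compensation: convert an obstacle's bearing in
# the world frame to a bearing relative to where the head currently points.

def head_relative_azimuth(world_azimuth_deg, head_yaw_deg):
    """Bearing of a sound source relative to the head's facing direction."""
    rel = world_azimuth_deg - head_yaw_deg
    return (rel + 180.0) % 360.0 - 180.0  # wrap into [-180, 180)

# A sound at 40 degrees to the wearer's right: once the wearer turns the
# head 40 degrees toward it, it is rendered from straight ahead (0 degrees).
```

Updating this per audio frame is what keeps the sound picture stable in the world as the head moves.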
For more information, visit ICT Results.
-- Posted by Conard Holton and Carrie Meadows, Vision Systems Design, www.vision-systems.com