Cardiac-eu.org

A System for Wearable Audio Navigation Integrating Advanced Localization and Auditory Display

Principal researcher

Name: Dr. Bruce Walker

Contact details: School of Psychology, Georgia Institute of Technology, MS-0170, 654 Cherry Street, Atlanta, GA 30332-0170, United States of America.
Tel: +1 404 894 8265
Email: bruce.walker@psych.gatech.edu

Website:

Project details

Start date: 15/10/2005
End date: 30/09/2008

Description: For the millions of visually impaired people in the United States, over a million of whom have no usable vision, spatial orientation and navigation are a major problem, leading to loss of mobility and reduced participation in community activities, not to mention serious safety concerns for those who do attempt to be fully mobile. There is thus a critical need for an aid that would provide the orientation and navigation information and spatial cues the rest of us take for granted.

The goal of this project is to meet this need by bringing to bear technologies and development expertise in the fields of geographic information system (GIS) database development and maintenance, real-time tracking, and psychoacoustics and audio presentation. The PI and his team will develop a seamless, spatialized audio presentation system with which a person can obtain the additional orientation cues and navigation information needed to move about successfully and safely in familiar and unfamiliar outdoor and indoor environments. To these ends, a GIS database for visually impaired pedestrians will be developed to define accessible walking paths and provide a means of route planning. This database will complement conventional, surveyed map data with paths frequented by users of the system, including annotations of information relevant to these paths, such as easily recognized landmarks along the path and hazardous obstacles to avoid.

Achieving an effective presentation will require real-time tracking of the person's "head pose" (the location and orientation of the head in space). This will be accomplished by integrating data from GPS and inertial sensing with high-frequency head pose data. Novel probabilistic localization algorithms will be developed to improve the positional accuracy of outdoor GPS using map-based priors and historical modeling of user intent.
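The abstract leaves the fusion method unspecified. As an illustration only, a minimal complementary-filter sketch (the function name and blend weight are hypothetical, not from the project) shows how a fast-but-drifting inertial heading might be combined with a slower absolute reference such as a compass or GPS track:

```python
import math

def fuse_heading(gyro_heading, reference_heading, alpha=0.98):
    """Complementary filter for heading, in radians.

    Trusts the fast-but-drifting gyro-integrated heading in the short
    term and the slow-but-stable absolute reference in the long term.
    The 0.98 weight and all names are illustrative assumptions, not
    parameters from the SWAN project itself.
    """
    # Blend on the unit circle so the 359-degree -> 0-degree
    # wraparound does not corrupt the weighted average.
    gx, gy = math.cos(gyro_heading), math.sin(gyro_heading)
    rx, ry = math.cos(reference_heading), math.sin(reference_heading)
    x = alpha * gx + (1 - alpha) * rx
    y = alpha * gy + (1 - alpha) * ry
    return math.atan2(y, x) % (2 * math.pi)
```

In a real tracker this blend would run at the inertial sensor's update rate, with the reference term arriving whenever a new GPS or compass fix is available.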
Visual 3D reconstruction will be used to create indoor and outdoor databases to support this purpose and obviate the need for an indoor sensing infrastructure. A novel vision-based localization sensor will be developed to provide head pose based on this 3D database.

In developing the audio presentation system, the use of non-speech sounds will be investigated as an alternative to speech for situations where immediate environmental spatial awareness and orientation are more of a concern than particular semantic information about that environment. Clearly, when the task at hand is to move through the environment effectively and safely to a particular destination, being told the name of every doorway and object being passed is more of a distraction than an aid. However, an awareness of the presence and relative location of doorways and objects along the way is useful as a means of monitoring progress and maintaining spatial orientation. The PI's solution is to develop generic characteristic sounds, analogous to international signage, that are easily recognized and associated with particular classes of objects (e.g., the sound of a closing door for an entrance).

To avoid covering the ears, bone conduction headsets will be employed, and a bone-related transfer function developed to present effective stereo imaging of sounds. Experiments will be conducted to determine which sounds are most intuitive and easily localized, what variety and number of different sounds can be easily learned, and how to produce such sounds so that they do not distract attention from important natural sound cues.

System integration for seamless operation will be accomplished by developing an overarching operating system that treats each of the first two systems as modules whose data must be fused for presentation by the third system.
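A bone-related transfer function is beyond a short sketch, but the underlying idea of steering a beacon sound toward a target's bearing can be illustrated with simple constant-power stereo panning. Everything below is a hypothetical illustration under assumed conventions (positive angles to the right), not the project's actual spatialization:

```python
import math

def stereo_gains(target_bearing, head_yaw):
    """Constant-power pan: return (left, right) gains that place a
    beacon at the target's bearing relative to the listener's current
    head yaw. Angles in radians; positive = to the listener's right.

    Illustrative only: a simple level-difference panner, not the
    bone-related transfer function the project describes.
    """
    # Relative bearing wrapped into [-pi, pi]; 0 = straight ahead.
    rel = (target_bearing - head_yaw + math.pi) % (2 * math.pi) - math.pi
    # Map [-pi/2, pi/2] onto the full pan range; clamp sounds that
    # fall behind the head to hard left or hard right.
    pan = max(-1.0, min(1.0, rel / (math.pi / 2)))
    theta = (pan + 1.0) * math.pi / 4   # 0 = full left, pi/2 = full right
    return math.cos(theta), math.sin(theta)
```

The cosine/sine pair keeps total power constant across the pan range, so the beacon's perceived loudness does not change as the listener turns their head.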
This operating system will also respond to control commands made through user input, whereby selections may be made regarding the type, amount, detail, and radius of the information being presented.

Broader Impacts: Fundamental questions relating to perception and the cognitive interpretation of auditory presentations will be answered in the course of this research; these results will be applicable to a much broader field of audio display technologies. In addition, methods for effectively and efficiently presenting a wide range of spatial information in real time will be developed; this knowledge will be applicable to any field where large amounts of such information must be assimilated quickly through non-visual channels. Overall, this research will lead to more effective auditory presentation and information delivery systems that can be used not only by people who are visually impaired, but wherever and whenever timely situational awareness is needed, from better systems for air traffic controllers to faster assimilation of information in complex situations. Such systems could prove invaluable to military personnel, firefighters, jet pilots, first responders, etc., and in general to anyone who needs to acquire more information about his or her surroundings in a manner that augments rather than impedes other sensory processes.
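As a hypothetical illustration of the kind of user-controlled filtering described above (the data model and all names are invented for this sketch, not taken from the project), landmarks within a user-selected radius might be gathered and ordered for presentation like this:

```python
def nearby_landmarks(landmarks, user_pos, radius, categories=None):
    """Return annotated landmarks within the user-selected radius,
    optionally restricted to certain categories (e.g. 'entrance',
    'hazard'), sorted nearest-first for presentation order.

    The landmark dict layout is an assumption made for this sketch.
    """
    ux, uy = user_pos
    hits = []
    for lm in landmarks:
        if categories and lm["category"] not in categories:
            continue
        dx, dy = lm["pos"][0] - ux, lm["pos"][1] - uy
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= radius:
            hits.append((dist, lm))
    hits.sort(key=lambda t: t[0])   # announce the closest items first
    return [lm for _, lm in hits]
```

Widening the radius or adding categories trades richer awareness against auditory clutter, which is exactly the type/amount/detail trade-off the user controls describe.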

Other organisations involved in this project

Funded by the National Science Foundation.

Last updated: 20/03/2010