
SeeStar: A Mobile PDA Phone-based Assistance System for Visually Impaired Users

Principal researcher

Name: Tim Dorcey

Contact details: IVISIT, LLC, 2043 Colorado Avenue, Suite 3-4, Santa Monica, CA 90404, United States of America.
Email: orang@ivisit.com

Website:

Project details

Start date: 30/09/2005
End date: 31/07/2010

Description: The purpose of the proposed research is to continue development of the "SeeStar" system. Participant experiences with SeeStar under a variety of user conditions demonstrated the value of the system and indicated where further development effort is required.

SeeStar offers persons who are blind a "pair of eyes": a cell phone camera and data link connect the user to a remote assistant, who views live images from the camera, describes the person's surroundings, and indicates the location of salient landmarks and pathways to specific destinations. Using GPS, the assistant can also employ Google Maps to orient the person within their wider surroundings.
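As a rough illustration of this orientation step, the sketch below (Python; the coordinates and the distance_and_bearing helper are hypothetical, not part of SeeStar) computes the distance and compass bearing from a GPS fix to a known destination - the kind of calculation that underlies orienting a user from a map:

    import math

    def distance_and_bearing(lat1, lon1, lat2, lon2):
        """Great-circle distance (metres) and initial compass bearing
        (degrees) from a GPS fix (lat1, lon1) to a destination (lat2, lon2)."""
        r = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        # Haversine formula for distance along the Earth's surface
        a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
        dist = 2 * r * math.asin(math.sqrt(a))
        # Initial bearing, normalized to 0-360 degrees (0 = north)
        y = math.sin(dlon) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
        return dist, bearing

    # Hypothetical fix and destination in Santa Monica
    d, b = distance_and_bearing(34.0259, -118.4740, 34.0270, -118.4700)
    print("%.0f m away, bearing %.0f degrees" % (d, b))

In practice the bearing still has to be related to the direction the user is facing, which a GPS fix alone does not provide.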

The experiences of Phase 1 participants, as well as remote operators, made it clear that continued SeeStar development should focus on static rather than dynamic use scenarios - that is, it should not attempt to provide "blow-by-blow" directions while the person is walking. Participants also made it clear that, in the absence of an available remote assistant, an automated means of detecting salient visual landmarks and specific destination points would be most useful; they were most enthusiastic when landmark and text recognition algorithms were demonstrated for this purpose. The experience of the remote operators of SeeStar also highlighted specific issues to be addressed in Phase 2:

(1) communication difficulties arising from video and two-way voice latencies greater than 1 second, caused by fluctuations in data channel capacity,
(2) difficulty in orienting to a setting given the very restricted field of view of the camera lens,
(3) low video image resolution, which made it difficult to read signs and recognize objects easily, and
(4) gaps in GPS coverage and GPS positioning errors.

Further, participants were split over whether it was easier to hold the camera in their hands, wear it around the neck, or even mount it on the head.

Participants also wanted a means of verifying their intended routes (indoor and outdoor) and destinations, and of locating and recognizing pedestrian crossing buttons, bus stop signs, bus or train numbers, elevator buttons, exits, bathrooms, office numbers, and street signs.

Development of the automated capabilities will draw on a significant amount of existing state-of-the-art, off-the-shelf software for image-based landmark recognition, text sign detection and recognition, and automatic panorama creation. When such landmarks and text signs are referenced to a GIS database, they provide a positioning method redundant to GPS, which may suffer from coverage and availability problems. The combination of object and text recognition will also enable location and identification of critical salient features in the absence of an assistant, such as locating and recognizing bus numbers, street signs, elevator buttons, or pedestrian crossing buttons.
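A minimal sketch of the GIS-referenced positioning idea follows (Python; the GIS_DB table, the estimate_position function, and all labels and coordinates are hypothetical): recognized landmark or sign text is looked up in a table of surveyed positions, and the matches yield a position estimate that can stand in for, or cross-check, a GPS fix:

    # Hypothetical GIS table mapping recognized landmark/sign text
    # to surveyed (latitude, longitude) positions.
    GIS_DB = {
        "MAIN ST": (34.02611, -118.47402),
        "BUS STOP 41": (34.02633, -118.47311),
        "ELEVATOR LOBBY A": (34.02598, -118.47380),
    }

    def estimate_position(recognized_labels):
        """Average the surveyed coordinates of every recognized landmark
        or sign; return None when nothing in view matches the database."""
        hits = [GIS_DB[label] for label in recognized_labels if label in GIS_DB]
        if not hits:
            return None
        lat = sum(p[0] for p in hits) / len(hits)
        lon = sum(p[1] for p in hits) / len(hits)
        return lat, lon

    # Output of a hypothetical text/landmark recognizer on one video frame
    labels = ["MAIN ST", "BUS STOP 41", "UNKNOWN SIGN"]
    print(estimate_position(labels))  # position estimate independent of GPS

Averaging matched coordinates is only a crude stand-in; a fielded system would weight each match by the estimated viewing distance and bearing of the landmark.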

Finally, more extensive and statistically meaningful focus group and end user tests will be conducted to ensure that issues of concern for this very heterogeneous population have been adequately addressed.

Other organisations involved in this project

Last updated: 20/03/2010