There are now many control interfaces that require the user to follow a sequence of operations in order to make something work. This often involves the use of remote control devices and the need to look at both the control device and some form of display. If the person is not familiar with the system, or is under time pressure, a command-based sequence is appropriate; if the person has a good understanding of the system, they perform well with status displays. So a discriminable presentation of both status and command information may be the most effective format.

Some devices have buttons with more than one function; some involve time delays or require the user to hear an audible response. If no consideration is given to the needs of people with disabilities, even a simple operation sequence can be unworkable and leave people excluded. Ideally, systems would learn automatically from the way the user controls them and modify the user interface to meet their needs optimally.
Visually impaired users must be able to understand the operation of the user interface, since they are likely to receive less feedback from visual displays than fully sighted users. A consistent user interface is therefore particularly important.
It is helpful if users can pre-set the sequences or settings they commonly use, and select them with a single key press or from a favourites list. On a washing machine, most users want only three or four settings most of the time, yet machines seem to be sold on the basis of the maximum number of options available on the front panel.
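The favourites idea above can be sketched as a small data structure. This is a minimal illustration, not a real appliance API: the class name, key labels, and setting names are all invented for the example.

```python
# Hypothetical sketch: a favourites store for an appliance front panel.
# Each preset is simply a dict of setting names to values, recalled by
# a single physical key.

class FavouritesPanel:
    """Maps single keys to commonly used groups of settings."""

    def __init__(self):
        self._presets = {}

    def save_preset(self, key, settings):
        """Associate one key with a complete group of settings."""
        self._presets[key] = dict(settings)

    def recall(self, key):
        """One key press restores every setting in the preset (or None)."""
        return self._presets.get(key)

panel = FavouritesPanel()
panel.save_preset("F1", {"programme": "cotton", "temperature": 40, "spin": 1200})
print(panel.recall("F1"))
```

The point of the design is that the user performs one action, not a sequence: the whole group of settings is restored together.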
Between the control and display elements, invisible links form in people's minds. Designs that pay attention to these intuitive processes encourage the association the designer intended, and so facilitate use.
This compatibility can be achieved through static means such as location: where controls are placed in relation to their displays. Compatibility is highest with colocation, when each control sits next to its relevant display; this reduces the possibility of confusion and errors. Unfortunately colocation is not always possible, for example when displays are placed at a distance from the operator. When this occurs, 'congruence' should be applied, so that the spatial array of the controls corresponds to that of the displays. When information is presented on one side and controlled with the other hand, performance has been found to decrease: people tend to respond towards the location where the information is presented. If congruence is not possible, there are still 'rules' that can guide the designer. If a horizontal display is mapped to a vertical control panel, the top control should correspond to the far-right display. Angling the controls may help to resolve any remaining mapping ambiguity.
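The horizontal-to-vertical rule above can be made concrete with a short sketch. The function name and labels are illustrative; the only substance is the pairing rule stated in the text, i.e. the top control maps to the far-right display.

```python
# Illustrative sketch of the congruence rule in the text: a vertical
# column of controls (listed top to bottom) paired with a horizontal
# row of displays (listed left to right), top control -> far-right display.

def map_controls_to_displays(controls_top_to_bottom, displays_left_to_right):
    """Return (control, display) pairs following the stated rule."""
    if len(controls_top_to_bottom) != len(displays_left_to_right):
        raise ValueError("control and display counts must match")
    # Reversing the display row makes the top control meet the rightmost display.
    return list(zip(controls_top_to_bottom, reversed(displays_left_to_right)))

pairs = map_controls_to_displays(["C1", "C2", "C3"], ["D1", "D2", "D3"])
print(pairs)  # C1 (top control) is paired with D3 (far-right display)
```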
Another form of compatibility comes through dynamic aspects such as movement: how people expect displays to move in relation to their controls. The congruence principle also applies here, so linear controls should move in an axis and direction parallel to the display's movement, which should follow the operator's expectations. Dial displays are compatible with rotary controls, and linear displays with linear controls. If congruence cannot be applied, an increase on the display should be indicated by an up, right, forward, or clockwise movement of the control. The same control and display elements in different locations can imply different motions; under the principle of movement proximity, the closest moving parts of the control and display should move in the same direction.
Different cultures may be used to different design stereotypes, so their expectations may differ. People's experience, education and characteristics will also affect their expectations. Menu systems should likewise use a consistent structure and language.
Redundancy of information benefits human performance because it captures the strengths of different people, modalities, or other variables. Coding information on two dimensions, e.g. shape and colour, makes it easier to retrieve.
Should displays simply show the status of certain factors, or should instructions be given on how to proceed? Research is not consistent on this point, so redundancy is recommended: present both status and command information, but to different senses, e.g. a visual status display and auditory instructions.
The trend is towards selecting items from a menu shown on a visual display. These can be arranged so that each push of a button steps down one item; if speech output is provided, blind people can operate this type of system. Another arrangement is to use 'soft' keys, where the function currently controlled by a key is indicated on a screen. A blind user is more likely to become confused by this type of system. Sub-menus can create further problems, since an inexperienced user can easily get lost. Scrolling menus are particularly difficult to use for people with low vision. Major problems occur for many visually impaired people when they are required to operate a pointer to select or drag items on a screen.
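The single-button stepping menu with speech output can be sketched as a small state machine. This is an assumed design, not a specific product's interface; the `speak` callback stands in for a real speech synthesiser.

```python
# Minimal sketch of a stepping menu: each press of the one step button
# moves down one item, wraps at the end, and announces the item through
# a caller-supplied speak() callback (here, just collecting strings).

class SteppingMenu:
    def __init__(self, items, speak):
        self.items = list(items)
        self.index = -1          # nothing selected until the first press
        self.speak = speak

    def step(self):
        """Advance to the next item, wrapping around, and speak it."""
        self.index = (self.index + 1) % len(self.items)
        self.speak(self.items[self.index])
        return self.items[self.index]

spoken = []
menu = SteppingMenu(["Volume", "Channel", "Brightness"], spoken.append)
menu.step()   # announces "Volume"
menu.step()   # announces "Channel"
print(spoken)
```

Because every press produces exactly one spoken announcement, a blind user always knows where they are in the list, which is the property the text relies on.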
If a large number of functions need to be selected, there can be:
(a) a large number of buttons
(b) fewer buttons but each button handling more than one function
(c) selection from menus
A large number of buttons is not inherently difficult to use if the buttons are laid out clearly and grouped logically. It is best to avoid multi-mode operation; however, there is not always sufficient space, so multi-function keys are often used. It should be clear which mode the device is operating in, and with multi-function keys it is essential that the user can easily reset to the default setting (e.g. by a quick double press of a key).
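The double-press reset can be sketched as follows. The 0.4-second double-press window is an assumed value chosen for the example, not a figure from the text, and the mode names are invented.

```python
# Hedged sketch of a multi-function key: a normal press cycles through
# the modes; two presses within a short window reset to the default
# (modes[0]). The 0.4 s window is an assumption for illustration.

class ModeKey:
    def __init__(self, modes, double_press_window=0.4):
        self.modes = list(modes)       # modes[0] is the default mode
        self.current = 0
        self.window = double_press_window
        self._last_press = None

    def press(self, now):
        """now is a timestamp in seconds; returns the mode after the press."""
        if self._last_press is not None and now - self._last_press <= self.window:
            self.current = 0           # quick double press: back to default
        else:
            self.current = (self.current + 1) % len(self.modes)
        self._last_press = now
        return self.modes[self.current]

key = ModeKey(["clock", "timer", "alarm"])
print(key.press(0.0))   # steps from the default to "timer"
print(key.press(5.0))   # steps on to "alarm"
print(key.press(5.2))   # quick double press: resets to "clock"
```

The design point is that the reset gesture is always available and always lands on the same known state, so a user who loses track of the mode can recover without sight of the display.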
As people get older, handling more than one task at a time becomes more difficult. For younger people, displaying redundant information in more than one modality (e.g. both visually and aurally) is often beneficial, but this is not true for many of the ageing population.
Controls whose function changes with time pose problems for many visually impaired and older people. Ideally the user should be able to re-configure the user interface so that the controls operate in a different manner, or so that the time between changes can be extended.
Feedback should be provided after the user has entered information. If there is a delay between operating a control and obtaining feedback that the command has been accepted, users can become confused; so if the response time is longer than two seconds, a waiting message should be displayed.
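The two-second rule above can be sketched with a timer. The function and callback names are illustrative; the only substance taken from the text is the threshold and the behaviour: if the response has not arrived in time, show a waiting message so the user knows the input was accepted.

```python
# Sketch of the waiting-message rule: run a command, and if it has not
# completed within the threshold, invoke show_message() so the user is
# not left wondering whether the input registered.

import threading

WAIT_THRESHOLD = 2.0   # seconds, as suggested in the text

def run_with_waiting_message(command, show_message, threshold=WAIT_THRESHOLD):
    """Run command(); call show_message() if it exceeds the threshold."""
    timer = threading.Timer(threshold, show_message)
    timer.start()
    try:
        return command()
    finally:
        timer.cancel()   # response arrived; no (further) waiting message
```

A fast command never triggers the message; a slow one does, once, while the user waits. The same pattern extends naturally to the timed-response point below: the alert fires on a timer and the user can be given a way to request more time.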
- When a timed response is required, the user should be alerted and given sufficient time to indicate that more time is required.
- No single design for a control-display configuration is best. Each is compatible only when mapped appropriately.
- Designing User Interfaces for People with Visual Impairments