With thoughtful design, user interfaces for embedded systems can avoid some user errors altogether and helpfully guide the user through the rest.
Nothing aggravates the problem of a painful user interface more than error messages that don't get the message across. Even with a well-designed user interface, you have to assume that the users will occasionally make errors. When designing software, a programmer is inclined to think that error states are the exception and can be given limited thought, because this path through the code will not be traversed often. That may be true for software errors but user errors should be treated as normal conditions, not the exception. This article illustrates different types of errors, and how to guide the user when they occur. We'll also look at some strategies for avoiding user error paths in the interface.
Anticipate User Errors
In the 2000 presidential election, touch-screen voting machines in Texas gave voters trouble. Many users complained that they attempted to vote for one candidate, but the device recorded a vote for another. A representative from the e-voting machine vendor tried to explain this away: "It's not a machine issue. It's voters not properly following the instructions."1 This is a perfect example of a designer deciding that user error is beyond the scope and definition of the product. A voting result that's wrong because the interface is difficult to use is just as serious as one that's wrong because of a software bug. But the company did not see it this way.
Designing a system without considering the user-error cases is like designing a weather station that only works within a limited temperature range. Imagine such a designer claiming "weather should not really have done that—the weather should stay within the parameters that we specify." Weather variations, just like user errors, are inevitable, and a good designer will consider these scenarios as a fundamental part of the design.
When analyzing error possibilities, it's important to distinguish a slip from a bad decision. A slip is an action that was not intended. Often caused by a lack of attention, slips are more common when we perform familiar activities during which we might not pay the same attention as we would for a new and more challenging activity.
The simplest form of this error is reaching out to press a button and instead pressing the button next to it. Sometimes this error is made more likely by having buttons with a similar look and position although their functions are very different. During the industrial design of a product, it's tempting to arrange the buttons in a symmetrical grid with all the buttons the same size. This makes sense for buttons of similar meaning, for example a numeric keypad. However, if the functions and frequency of use of buttons vary dramatically, so should their size and position.
Symmetry and repetition of sizes add to the aesthetics of the design but can detract from the usability. A regular appearance can still be maintained in some cases. For example, if a frequently-used key is to be made larger than the other keys, making it exactly twice the size of the neighboring keys will help maintain a balanced appearance. See my piece on "Looking Good—A Lesson in Layout" for further discussion of layout issues.2
The more diverse the controls, the less likely the slip. Many embedded devices restrict the control surface to a set of buttons. The physical action of pressing any button is the same as the action of pressing any other. However, if you mix buttons with dials and sliders, the physical activity required to control one is different from another. Consider the controls in the car you drive. It's unlikely that you'll press the brake when you intended to turn on the windscreen wipers, mainly because the physical actions involved are so different.
Some keys are made deliberately inaccessible to prevent slips. The U.S. Food and Drug Administration (FDA) mandates that design must prevent the accidental turning off of life-support devices. One common technique is to place a cover over the on/off switch. The cover has to be removed before the switch can be used. An alternative approach, if the on/off button is monitored by software, is to prompt the user to confirm the action once the on/off button has been pressed. Either measure helps prevent the user from turning off the machine by accidentally pressing or bumping against the on/off button.
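The software-monitored on/off button described above can be sketched as a small state machine: the key press only requests shutdown, and a separate confirmation event actually powers down. This is a minimal illustration, not taken from any real device; the event and state names are invented for the example.

```c
/* Hypothetical sketch: the on/off key only *requests* shutdown; the
 * software then asks for confirmation before acting.  A single bump
 * against the key can therefore never turn a life-support device off. */
typedef enum { EVT_ONOFF_KEY, EVT_CONFIRM, EVT_CANCEL } power_event_t;
typedef enum { STATE_RUNNING, STATE_CONFIRM_OFF, STATE_OFF } power_state_t;

power_state_t power_handle_event(power_state_t s, power_event_t e)
{
    switch (s) {
    case STATE_RUNNING:
        /* One press never powers down; it only opens the confirmation. */
        return (e == EVT_ONOFF_KEY) ? STATE_CONFIRM_OFF : STATE_RUNNING;
    case STATE_CONFIRM_OFF:
        if (e == EVT_CONFIRM) return STATE_OFF;     /* deliberate choice */
        if (e == EVT_CANCEL)  return STATE_RUNNING; /* slip, no harm done */
        return STATE_CONFIRM_OFF;
    default:
        return s;
    }
}
```

The structure makes the safety property easy to verify: no single event ever moves the machine from running to off.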
Another example of a physically inaccessible control is used for ejection seats on fighter jets. The lever to activate the ejection is placed over the pilot's shoulder, far away from all of the other controls, so it's less likely to be pressed in error.
It's important to note that the measures taken in the last two examples only prevent the user from unintentionally turning off the machine or ejecting from the aircraft. They don't prevent the user from making a bad decision.
Capture errors are mistakes in which users have repeated the same action so often that they'll follow that familiar path even when their initial intentions are to take a different route—like the commuter who takes the turn for work on a Saturday morning when he really intended to drive to the beach. Habits, once formed, can be hard to break.
In GUIs, a wizard often leads to the same problem. PCs use wizards to provide a step-by-step sequence of questions for the user. Embedded devices sometimes use a similar pattern, though the presentation may be quite different. If I add a contact on my mobile phone, the device will ask me for the number, name, type of contact (personal or business), and maybe some other contact information. Similar sessions allow devices to be configured for communication or to be calibrated. Each step permits the user to proceed once the information has been entered or to reverse if he changes his mind.
Sometimes these wizard interfaces present defaults to the user, allowing the user to select "Next" repeatedly. Often, the last step is the one that permits the user to commit the changes. If the user presses the "Next" button repeatedly to accept each default, he may press the same button to commit the entire configuration. It's better to make the user pause at the final step. In a GUI it's a good idea to put the "Finish" button in a different location on the screen than the "Next" button. That way the user will be forced to relocate the pointer, which may be a mouse or a finger, before selecting the "Finish" button. If an off-screen key is being used then it may be possible to arrange it so that the key used to end the sequence is different from the one that performed the "Next" function.
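On a device with off-screen keys, the rule that the commit key must differ from the "Next" key can be expressed in a few lines. The step count and key names below are invented for illustration; the point is only that a habitual run of "Next" presses cannot fall through into a commit.

```c
/* Hypothetical sketch: the key that commits a wizard differs from the
 * key that advances it, so repeated "Next" presses cannot accidentally
 * commit the configuration.  Step numbers and key codes are illustrative. */
typedef enum { WKEY_NEXT, WKEY_FINISH } wizard_key_t;

#define LAST_STEP 3

/* Returns the new step, or -1 once the configuration is committed. */
int wizard_advance(int step, wizard_key_t key)
{
    if (step < LAST_STEP)
        return (key == WKEY_NEXT) ? step + 1 : step; /* Finish ignored early   */
    else
        return (key == WKEY_FINISH) ? -1 : step;     /* Next ignored at the end */
}
```

Pressing "Next" on the final step does nothing, which forces the deliberate pause the text recommends.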
Figure 1: A popup where the user will automatically answer 'Yes'.
Capture errors can undermine the effectiveness of confirmation messages and warnings. If a request for confirmation appears often, the user will automate his response to it, eliminating the value of the question. Consider the popup shown in Figure 1. Microsoft Office asks users whether they wish to save a file when exiting an application. The question is asked so often that most users automate their "Yes" response. In the rare case where the user wants to exit without saving, he will probably click on "Yes" before he stops to consider his answer.
The popup would work slightly better if the actions were described inside of the buttons. A user choosing buttons with the captions "Save and Exit" and "Exit without Saving" would have a better chance of making the right choice. Parsing the question and then deciding on a "Yes" or "No" response makes the action and the consequence one cognitive step further apart.
The existence of the popup could have been completely avoided by having a separate "Exit without saving" feature available under a menu. There are other alternative solutions discussed in the book About Face by Alan Cooper.3
Information displayed in ambiguous or non-intuitive ways can lead users to misinterpret it. The number 10.0 might be misread as 100 if the user didn't notice the single dot that represents the decimal point. In some cases, a number that's wrong by a factor of 10 would be absurd and, therefore, this mistake would be unlikely. However, if both values are plausible, the size of the discrepancy could lead the user to take an inappropriate action. Displaying the value without a decimal place would avoid this problem, but at the cost of accuracy. Showing the information graphically as a bar chart would make large-scale mistakes unlikely, but again we would lose accuracy. The ideal solution combines a graphical and a numeric output.
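The combined graphical-plus-numeric display can be sketched for a character display in a few lines. This is a minimal illustration assuming a text-mode bar; the function name and format are invented. The bar makes a factor-of-10 misreading obvious while the appended number keeps the accuracy.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: render a value as a bar plus the exact number,
 * e.g. "[#         ] 10.0", so a gross misreading (10.0 vs. 100) is
 * unlikely but precision is not lost. */
void format_reading(char *buf, size_t len, double value, double max, int bar_width)
{
    int filled = (int)(value / max * bar_width + 0.5);
    if (filled < 0)         filled = 0;
    if (filled > bar_width) filled = bar_width;

    int n = snprintf(buf, len, "[");
    for (int i = 0; i < bar_width && n < (int)len - 1; i++)
        n += snprintf(buf + n, len - n, i < filled ? "#" : " ");
    snprintf(buf + n, len - n, "] %.1f", value);
}
```

A value of 10.0 on a 0–100 scale fills one tenth of the bar; a value of 100 fills it completely, so the two can no longer be confused at a glance.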
In some cases, you can remove the need for the user to read the output accurately. I used to own a callerID unit that displayed the phone number of the last caller. To ring the person back, I dialed the number displayed. Occasionally, I would misdial the number. After apologizing to the person I reached, I would return to my callerID unit only to discover that my new call had reset the display—so the original number was lost. More modern landline phones and almost all cell phones allow the user to ring back a displayed number with a single key press, eliminating the possibility that the user will misread and, therefore, misdial the number.
Another cause of recognition errors is when data for two different purposes looks very similar. A machine I worked on recently displayed oxygen mixture as a percentage and temperature in degrees Celsius. Values in the range 21 to 40 were reasonable for either field. A user looking at the values 25 and 35 might assume that the first was the temperature and the second was the oxygen mixture. If the user had looked more closely at the labels next to the values, he might have realized that he had transposed them. This is another case where a graphical display can make the nature of a value obvious—by showing temperature as a thermometer, for example.
In some cases, data that doesn't need to be similar is made similar by accident. To log on to my bank's web site I have to enter a user ID, which is six digits. I then have to enter a secret identification code, effectively a numeric password. This number is also six digits, making it very easy to confuse with the user ID. The bank could easily have chosen fields of different lengths, or inserted a letter into one of them, for the sole purpose of making the difference obvious.
A similar problem arises on some candy vending machines. Some of these machines use a two-digit code to allow the user to choose which treat he wants to buy. The code is written next to the bar, which the user can see through the glass front. The price is also quoted. However, the price is also often two digits. If the price is 75 cents and the selection code is 65, then it's annoyingly easy to type in 75 to select the candy bar, only to realize that the price has been entered instead of the selection code. The user either gets an error message, because the selection is not valid, or receives the wrong candy. The simple fix is to use a pair of letters as the selection code.
Other recognition errors can be caused by ambiguous presentation unintentionally leading the user in the wrong direction. In the message shown in Figure 2, the word "Accept" appears before and, therefore, to the left of the word "Clear." However, the buttons are arranged in the opposite order, with "Accept" to the right. This increases the chances that the user will pick the left-hand button when he really meant to select "Accept."
Figure 2: Why would the user be likely to select the wrong button?
In many cases, embedded systems describe or display information that relates to the physical world where similar issues arise. Consider a patient monitor mounted on the side of the hospital bed. If the monitor shows a diagram of the patient, the picture on screen should show the patient in the same orientation as the real patient. So, if the monitor is on the right-hand side of the bed, the patient's head should be to the left on the display. If the same monitor were moved to the other side of the bed the diagram should be mirrored; otherwise the diagram will show the patient's head to the left, although the real patient's head is to the right. Ideally, the device could detect the left or right bed rail and automatically alter the diagram to match.
As well as distinguishing left from right, it's important to use matching orientation. Some automated medical syringes display the syringe on their GUI to show how full the syringe is. If the real syringe is mounted vertically then the image on the GUI should not be shown horizontally.
Another common cause of user errors occurs when users are in one mode but think they're in a different mode. For example, if the user of a stereo thinks that he is in CD-player mode and presses Button 1, he might expect to play the first track of the CD. However, if the actual mode was radio tuner, the first button will most likely tune to the first of the preset channels. If that channel is playing music, it might take a couple of seconds for the user to realize what has happened.
There are two key elements to removing mode errors. The first is to make the current mode obvious to the user. The second is to eliminate modes where possible. This topic is explored in greater detail in my article "Modes and the User Interface."4
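Making the current mode obvious can be helped by driving both the screen label and the key bindings from the same mode value, so they can never disagree. The stereo example above might be sketched like this; the mode names and action strings are invented for illustration.

```c
#include <string.h>

/* Hypothetical sketch: one mode value drives both the on-screen label
 * and the meaning of each key, so the display can never disagree with
 * what the keys actually do.  Names are illustrative. */
typedef enum { MODE_CD, MODE_TUNER, NUM_MODES } stereo_mode_t;

const char *mode_label(stereo_mode_t m)
{
    static const char *labels[NUM_MODES] = { "CD", "TUNER" };
    return labels[m];
}

const char *button1_action(stereo_mode_t m)
{
    /* Same physical key, different meaning per mode -- which is exactly
     * why the label above must always be visible. */
    static const char *actions[NUM_MODES] = { "play track 1", "tune preset 1" };
    return actions[m];
}
```

Deriving both from a single variable removes one whole class of mode error: the display showing one mode while the keys behave as another.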
Design out User Errors
A good way to avoid having users follow an erroneous path is to eliminate the path completely. The simplest example of designing out an error is to remove a button from a GUI when that button is not valid. This avoids the user pressing the button, only to be greeted by the message "Not valid at this time." If the action is not valid, don't make the option available in the first place. On the PC, the convention is to make the unavailable option gray. An even better solution on the PC would be to have the bubble help, which appears when the mouse hovers over a button, inform the user why the option is grayed out at this time. Unfortunately, the bubble-help mechanism that works so well with a mouse doesn't work with a touch screen, which is the input mechanism of choice on the majority of embedded systems' GUIs.
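One way to keep a button's availability in step with its validity is to attach the validity check to the button itself, so the GUI refreshes it rather than rejecting presses after the fact. This is a sketch under assumed names, not a real toolkit's API.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch: each button carries its own validity predicate,
 * so the GUI can grey it out (or hide it) instead of letting the user
 * press it and then complaining.  The struct and names are illustrative. */
typedef struct {
    const char *label;
    bool (*is_valid)(void);  /* NULL means always valid */
    bool enabled;            /* what the GUI actually draws */
} button_t;

void refresh_button(button_t *b)
{
    b->enabled = (b->is_valid == NULL) || b->is_valid();
}

/* Example predicate: an action that is currently never allowed. */
static bool never_valid(void) { return false; }
```

Calling `refresh_button()` whenever relevant state changes means the "Not valid at this time" message never needs to exist.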
More complex cases of designing out errors involve reordering the interaction with the user. Consider the calibration procedure of a lung ventilator that requires two reference pressures. The first reference is atmospheric pressure. A reading is taken for atmosphere when the system has been allowed to equalize with room air. The second reference reading is taken using pressure from a gas supply, which is available in the intensive-care unit as a piped high-pressure air supply. Each reading takes a few seconds since the readings have to be averaged to eliminate any pneumatic or electrical noise in the system.
In the initial design the first reading is taken. Then a solenoid is opened to allow the pressure to rise. The second reading is taken at this higher pressure. Now consider the case where the user failed to connect the piped gas supply to the device. Since the higher pressure is not reached the device displays a message informing the user that the calibration has failed, due to lack of a gas supply. The user must press the "Accept" key to acknowledge the failure and then restart the procedure by selecting it from the menu again.
In this example, the user is allowed to start the calibration even though the gas supply is not available. The error message adds insult to injury by making the user feel that he was to blame for the mistake. You may wish to blame the user, since the specification and the user manual say that the gas supply must be connected before the calibration is performed. However, being able to point out which paragraph of which subsection of the user manual he violated will not endear your product to the user.
The improved interface has a step at the start of the calibration that instructs the user to connect the gas supply. If the device detects that the supply is already present, this step is skipped. Now the case of a missing gas supply is treated as a normal part of the interaction and not as a user failure.
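The improved flow can be captured in a small step-selection function. The step names below are invented for illustration; the essential point is that "connect the supply" is an ordinary first step, skipped when unnecessary, rather than a failure reported afterwards.

```c
#include <stdbool.h>

/* Hypothetical sketch of the improved calibration flow: connecting the
 * gas supply is a normal first step, skipped when the supply is already
 * detected, so a missing supply never becomes an error message. */
typedef enum {
    CAL_CONNECT_SUPPLY,   /* prompt the user to connect the piped gas   */
    CAL_READ_ATMOSPHERE,  /* averaged reading at room pressure          */
    CAL_READ_PRESSURE,    /* averaged reading at the supply pressure    */
    CAL_DONE
} cal_step_t;

cal_step_t cal_first_step(bool supply_detected)
{
    /* Only prompt for the supply when it is actually absent. */
    return supply_detected ? CAL_READ_ATMOSPHERE : CAL_CONNECT_SUPPLY;
}
```

The failure path in the original design simply has no equivalent here: there is no state in which the calibration can begin without the supply present.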
The philosophy being applied here is that presenting the user with an error message is like a safety belt—it only limits the damage after the accident has happened. The engineer who wants to avoid the accident in the first place implements preventative measures, like antilock brakes that keep the user on the road and out of trouble.
Designing out errors means that the product will be perceived as more reliable. A device that gives lots of error messages will be perceived as one that fails often, even if those failures are not technical breakdowns or bugs in the software. Although the programmer can easily argue that these are user errors and not failures of the device, there are two flaws in this argument. One is that the root cause of the user error is a user interface that was not clear enough to let the user do the right thing the first time. The other is that blaming the user doesn't help the user to like (and therefore buy) the product.
Simplify User Interaction
Numeric entry by the user usually has to be range checked to ensure that the number lies between some high and low limits. An out-of-range number will cause the entry to be rejected. This type of error can be eliminated if the number is entered with a dial rather than with a numeric keypad. When the dial reaches its minimum or maximum value, no further changes are allowed. This avoids the awkward interaction where the GUI informs the user that the number he has entered is not acceptable and sends him back, like a scolded schoolchild, to try again.
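The dial's behavior amounts to clamping: the value saturates at the limits instead of ever going out of range. A minimal sketch, with invented limits:

```c
/* Hypothetical sketch: a dial adjusts a value but saturates at the
 * limits, so an out-of-range entry is impossible rather than rejected
 * after the fact.  The limits passed in are whatever the field allows. */
int dial_turn(int value, int delta, int min, int max)
{
    long v = (long)value + delta;  /* wider type avoids overflow */
    if (v < min) v = min;
    if (v > max) v = max;
    return (int)v;
}
```

Because the invalid values are unreachable, there is no range-check error message, no retry loop, and one less path through the code to test.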
There are other cases where numeric entry is necessary. I recently worked on a calibration sequence where the user had to type in the pressure read from an external pressure meter. When the value was entered on the GUI, the user selected the "Continue" button to proceed with the calibration. One of the testers found a bug where the device got into an odd state if an illegal value was entered. When the "Continue" button was pressed, a warning appeared to tell the user that the value was out of range, and the user interface reverted to the numeric entry screen. However, a timer was also started to control the next part of the calibration procedure. This timer should never have been started in the case where the value was out of range. The result was that whenever the timer expired, one part of the software thought it was at a later step in the calibration, while the GUI was still prompting the user to enter the external pressure.
The first fix I considered was to avoid starting the timer until after the pressure entered by the user had been validated. This seemingly simple change turned out to be tricky because I was trying to start all of my timers from a similar point in the code. It took a while for me to realize that the whole problem only existed because I had allowed a button to be available even when its function was not valid. It only took a couple of lines of code to disable the "Continue" button whenever the numeric value was out of range and enable it whenever the value was in range. A little extra CPU time is used because the value must now be range checked every time a numeric key is pressed, but this is trivial for any processor capable of supporting a GUI.
After putting in this fix, the warning for the out-of-range value was no longer necessary, because any time the value is invalid, the user cannot activate the disabled "Continue" button. The "Continue" button now leads to only one path in the code. Previously there was one path if the value was valid and a separate one if the value was out of range. More paths lead to more testing and, in this case, led to more bugs. Avoiding the warning to the user is also a good example of designing out errors. The moral of this story is that simplifying the interaction for the user often simplifies the code for the programmer, so everyone wins.
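The per-keypress check that enables "Continue" might look like the sketch below. The pressure limits and function name are invented, not the actual values from the device described above; the structure is what matters: a partial or out-of-range entry simply leaves the button disabled.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Illustrative limits only -- not the real device's range. */
#define PRESSURE_MIN 0.0
#define PRESSURE_MAX 120.0

/* Hypothetical sketch of the fix: called on every keypress, this decides
 * whether "Continue" is enabled.  An incomplete or out-of-range entry
 * leaves the button disabled, so the invalid path never exists. */
bool continue_enabled(const char *entry)
{
    char *end;
    double v = strtod(entry, &end);
    if (end == entry || *end != '\0')
        return false;  /* empty or not yet a complete number */
    return v >= PRESSURE_MIN && v <= PRESSURE_MAX;
}
```

With this in place, pressing "Continue" can only ever start the next calibration step with a valid pressure in hand, so the stray timer can never be started against an error prompt.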
Friendly Error Messages
Consider a new colleague at work who glances over your shoulder and notices that you forgot to increment a counter in a while-loop. He points out the consequences of the infinite loop that would have resulted and gives a derogatory giggle. Then he asks, "Isn't that right?" forcing you to say "Yes" to acknowledge your stupidity, when he could just as easily have passed it off with a less pointed "I'm sure you would have spotted it yourself fairly soon."
There is no doubt that this colleague spotted a genuine mistake, and he probably saved you a bunch of time that would have been spent debugging, but somehow you don't like him much. He was too smug about what you got wrong, when the same message could have been delivered with more humility and more understanding that all humans fail, in ways small and big, every day.
A good working relationship is going to depend not just on what information is exchanged, but how it is presented. The same is true for your embedded device. If the error messages accuse and insult, the user will dislike the product. In many cases, some simple rewording can completely change the style of the error message and the tone of the whole interface. For example, the message "Illegal Key" suggests that you have broken the law. Replacing it with "Please accept or cancel the value entered" does a far better job. Instead of focusing attention on the user's mistake, it transfers the emphasis onto the solution, which is far more interesting to the user. The device's role should be to guide the user onto the right path, rather than scolding him for taking the wrong one. In some cases, this means making the error messages context-sensitive, but the extra effort on the part of the programmer has a big payoff for the user.
Error Message Design
When you design error messages, you have several things to consider. The amount of information required will vary widely, but you should check whether any or all of the following three questions are answered: What? Why? and How?
Inform the users what happened. Tell them why it happened. And then tell them how to solve the problem. In some cases, one or more of the answers is obvious. If the printer is "Out of Paper," it's hardly necessary to explain why that might happen. The solution of adding more paper may seem obvious, but it may be necessary to specify which tray is out of paper so the user adds paper of the correct size.
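A small helper can keep the what/why/how discipline visible in the code: each part is a separate argument, and the obvious ones are simply passed as NULL. This is a sketch with invented names, not a prescription.

```c
#include <stdio.h>

/* Hypothetical sketch: compose an error message from the three answers --
 * what happened, why it happened, and how to fix it -- so that none of
 * them is forgotten.  Pass NULL for a part that would state the obvious. */
void format_error(char *buf, size_t len,
                  const char *what, const char *why, const char *how)
{
    snprintf(buf, len, "%s%s%s%s%s",
             what,
             why ? " " : "", why ? why : "",
             how ? " " : "", how ? how : "");
}
```

For the printer example, `format_error(buf, len, "Tray 2 is out of paper.", NULL, "Load A4 paper to continue printing.")` skips the unnecessary "why" but still tells the user which tray and what to do.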
Always consider whether the error message is specific enough. Generic messages such as "File not found" may be easy to reuse, but in some contexts the user is left wondering which file is in question—the last operation may have referenced more than one file.
Ideally, your interface provides enough display space to print an informative error message, but many embedded devices have a very limited interface. Sometimes it takes a little imagination to let the user know that something has gone wrong. On my toaster, the bread pops up immediately if it's lowered when the toaster is not plugged into the wall. Since the bread pops up again immediately and obviously untoasted, I know that something has gone wrong. While the details are not included in this "message," at least it's better than leaving the bread in the toaster, to be discovered, cold and untoasted, several minutes later.
Lastly, it's often tempting to draw a user's attention to an error by sounding a beep. Rising tones are optimistic and convey the message that something has gone well; falling tones indicate that something has gone badly. These two basic types of tone can be used to distinguish valid from invalid key presses. Be careful not to make the error tone too intrusive. Bear in mind that the device may be used in an environment where many other people are present. A sound that indicates an error or a wrong key press will announce the user's slip to the rest of the room. A novice user, making a large number of mistakes, may not appreciate having his struggle advertised so widely.
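The rising/falling and loud/quiet distinctions can be sketched as a tone-selection function. The frequencies, volume scale, and struct fields below are invented; a real tone driver would differ, but the policy is the point: rising for success, falling and quieter for errors.

```c
#include <stdbool.h>

/* Hypothetical sketch: pick a short two-note sweep -- rising for a valid
 * key press, falling for an invalid one -- and keep the error tone quiet
 * so slips are not broadcast to the room.  All values are illustrative. */
typedef struct {
    int start_hz;
    int end_hz;
    int volume;   /* 1 = discreet .. 3 = loud */
} tone_t;

tone_t keypress_tone(bool valid)
{
    tone_t t;
    if (valid) { t.start_hz = 600; t.end_hz = 900; }  /* rising: success  */
    else       { t.start_hz = 900; t.end_hz = 600; }  /* falling: error   */
    t.volume = valid ? 3 : 1;  /* errors stay between the device and user */
    return t;
}
```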
When designing and avoiding error cases, remember the proverb "To err is human, to forgive divine." While it may be a tall order to make our user interfaces divine, we can at least make them a little more forgiving.
3. Cooper, Alan. About Face: The Essentials of User Interface Design. Foster City, CA: IDG Books Worldwide, 1995.