The key innovation of the graphical user interface is that input is performed directly on top of output. Today, we’re going to explore what makes an interface easy, hard, or “natural,” and we’re going to do it by starting with something really simple: a measuring cup. It has a user interface: it’s got a labelled readout here. There’s a task: fill it with liquid to a desired amount. So, how might you improve this standard measuring cup design? What kinds of things can you think of?

OXO is a company that sells kitchen utensils. At the Gel Conference in 2008, OXO’s Alex Lee talked about their measuring cup: The first prototype came unsolicited in the mail; it inspired them to go out into the field and ask people about problems with their measuring cups and suggestions for improvements. They got things like: “It’s sometimes slippery” or “The handle gets hot”. Nobody complained about the measuring part; but, when they watched people measure things, they would do this: grab some liquid; pour it in; hold it up; pour some more in; hold it up; pour a little more; there we go! This stuttered-step measurement seems kind of inelegant, so, to address it, OXO released a design with a clever insight: the numbers are visible from above. This provides continuous, interactive feedback on your actions. Pour once; and when you get to the right level, you can see it directly and stop.

It’s a great design in lots of ways, but there’s still some room for further improvement. For starters, the numbers are written sideways from the reader’s perspective. Not a huge deal, but still, it’s a bit like having to tip your head to read something printed sideways. And also — some of you may like this, but I’ve always found this ridge here to be a little inelegant: it feels like a medical device more than something that belongs in the kitchen. And, when I went to the grocery store today, I actually found this measuring cup by Pyrex: it takes the insight from the OXO measuring cup — that you can measure from above — and improves upon it in two ways. The first is that the numbers right here are readable from the reader’s perspective: hold it in your right hand if you want ounces, or in your left hand if you want millilitres. The second is that, for me at least, this design is a whole lot cleaner and nicer-looking.

So, in this humble story of a better measuring cup, Alex Lee at OXO has provided us with four important lessons about human-centred design. There’s a common story that the carmaker Henry Ford once said, “If I asked people what they wanted, they would have said ‘a faster horse.’” It turns out that he probably didn’t say that — it’s probably made up — but it makes a good point. Our first lesson is that, in both transit and cookware, simply asking people what they want can often miss important opportunities. The second is that you can find these problems and opportunities — like poor ergonomics — by going out into the field and discovering what people actually do. Third, you can get extra power out of this observation by bringing prototypes along with you. And fourth, the world is full of people tinkering in their garages, and everyone wins when the stars align to bring those things into the world.

With measuring cups, as with user interfaces, you can think of interaction as having two steps: the first is taking some action, and the second is evaluating the outcome. At each of these steps, you, as a designer, have an opportunity for success or for failure.
In the first step, what you have to cross is the gulf of execution: How does the user know what to do? And in the second, the gulf that you have to help the user cross is the gulf of evaluation: How does the user know what happened? My sketch here is based on a diagram that my colleague Bill Verplank often uses.

As a designer, here are six powerful questions that you can ask to ascertain what challenges may arise: How easily can someone determine the function of the device, or what actions are possible? How easily can they determine the mapping from their goals to their physical movements — what they actually do with the user interface? Once they have a plan, how easily can they execute that action? After they’ve executed that action, how easily can they tell what state the system is in, or whether it’s in the desired state? And, more generally, can they figure out the mapping between the system state and what they should interpret it to mean?

And, as a designer, here are several things that you can do to actually make these easier. The first and simplest is: if there’s functionality that a system offers, the best way to communicate that to the user is to put it on the screen — make it visible in one way or another. And, if it’s in the physical world, give it a handle, or knob, or button. These cues to action are called “affordances”. Second, provide users with clear, continuous feedback, so they know exactly what’s happening. Third, all things being equal, be consistent with existing standards, platform conventions, and other idioms. Fourth, whenever you can, make it possible for people to back up, through Undo or some other mechanism: very few things should be permanent and irreversible. A benefit of making all operations non-destructive is that it also lets users explore and try things out; especially in creative applications, you see that people will go down branches and use Undo to come back (there’s a small sketch of this idea below). Fifth, provide a systematic way for people to discover all of the functionality available in an application. If you watch somebody use a new website, for example, one thing you might see them do when they’re trying to figure out how to navigate it is roll over all of the menus to see every option that’s out there. And, finally, reliable software is a usability issue: the thing that’s supposed to happen should happen, and random stuff that isn’t, shouldn’t. I know that reliability is easier said than done, but it’s still really important; and a lot of the interfaces that people really like have this property of reliability.
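For those of you who build software, here is one minimal sketch of how that Undo principle is often implemented. It is a hypothetical illustration in Python, not any particular toolkit’s API: each action is an object that knows how to execute and reverse itself, and the application keeps a stack of everything that has been done.

```python
# Hypothetical sketch of undoable actions: each action knows how to do and
# undo itself, and the application keeps a stack of what has been performed.

class InsertText:
    """Insert `text` into `doc` (a list of characters) at position `pos`."""

    def __init__(self, doc, pos, text):
        self.doc, self.pos, self.text = doc, pos, text

    def execute(self):
        self.doc[self.pos:self.pos] = list(self.text)

    def undo(self):
        del self.doc[self.pos:self.pos + len(self.text)]


class UndoStack:
    def __init__(self):
        self._done = []

    def perform(self, action):
        action.execute()
        self._done.append(action)   # remember it so the user can back up later

    def undo(self):
        if self._done:              # nothing is permanent or irreversible
            self._done.pop().undo()


doc = list("hello world")
history = UndoStack()
history.perform(InsertText(doc, 5, ", brave"))   # try something out...
print("".join(doc))                              # hello, brave world
history.undo()                                   # ...and come back
print("".join(doc))                              # hello world
```

The detail that matters is the design property, not the class names: because every operation remembers how to reverse itself, nothing is permanent, and users are free to explore and back out of anything.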
So, now that we have these new conceptual tools of the gulfs of execution and evaluation — of knowing what you can do, and whether what happened is what you wanted — let’s take a look at a video that Bill Moggridge and colleagues at IDEO put together, about someone trying to buy a soda from a vending machine in Japan using their mobile phone. While you’re watching this video, think about what’s causing the breakdowns, and how you might fix them using the bullet points that we’ve outlined in this lecture today.

We started out today looking at physical products, and we moved on to explore smart products and mobile phones. Now let’s think about what direct manipulation and the gulfs of execution and evaluation mean for building software. What’s better: a command line or a graphical user interface? Well, I think the answer is going to have to be “it depends.” Nearly every design is good for something and bad for something else. But let’s try to figure out what makes them different. Think about moving a file between two folders. On a graphical user interface, you simply pick it up in one location and drop it in another: you have continuous feedback about what’s happening. All of the objects that are of interest to you are available onscreen, and you can simply point at what you want — “input on output”. With a command line, you have to know, a priori, the name of the command you’d like to use. You also have to remember the syntax for using that move command. There’s minimal feedback about whether the operands that you’re dealing with — like files or folders — are actually available, and there’s not much confirmation that you’ve ultimately moved the file to the right place — it can be easy to make errors.

What we see with the graphical interface is the power of direct manipulation: immediate feedback on all actions, continuous representations of the objects, and metaphors that we have from the physical world. These metaphors are especially important when you’re learning a new interface, and so when the graphical interface was first released, being able to rely on what people knew about desks, folders, and files was really helpful; those metaphors become less valuable once you’ve become accustomed to the computer world. So, give some thought to which of these principles hold for both the command-line interface and the graphical interface. From my perspective, the graphical interface does a much better job in terms of visibility, feedback, and consistency. In either interface style, Undo is possible, although the GUI generally does a better job of exposing when Undo is available and what undoing will actually do. One place that a graphical interface really shines is discoverability: you can leaf through every single menu in a system and see all of the operations that are there; with a command-line interface, there’s really no way to know the full set of operations that you could possibly type into a terminal. There’s no master list. In terms of reliability, either interface style can be made reliable.

“But how can this be?” some of you may be saying. “The command-line interface can be so much better sometimes!” And it is! When is that? The command-line interface works better when the indirection that it offers is a benefit rather than a drawback. In fact, that indirection is a kind of abstraction, and abstraction is what gives programming its power; so the command-line interface, like programming — because in some ways it kind of is programming — gets its power when you can express things more abstractly and thereby do work more efficiently. For example, moving all of the files that match a certain set of criteria from one place to another can sometimes be accomplished more efficiently with a command line (there’s a small sketch of this below). And even that wonderful discoverability of the graphical user interface has a dark side. This is a picture from my colleague Takeo Igarashi. Takeo made this picture by turning on all of the menu bars in a certain version of Microsoft PowerPoint. And you can see that, on his laptop screen, all of those menu bars crowded things out so much that there was almost no screen left over for the slides. So, is this interface discoverable? Absolutely! All of the functionality is right there. But you pay a cost in terms of screen real estate.
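To make that abstraction point concrete, here is a small hypothetical sketch, written in Python rather than shell syntax, of the batch move described above: instead of pointing at each file, you describe the set you mean with a pattern and act on all of it at once. The folder names and the "*.jpg" pattern are invented for the example.

```python
# A minimal sketch of the "express it abstractly" style of the command line:
# describe the set of files with a pattern and move them all in one step,
# rather than dragging each one by hand. Names and pattern are hypothetical.
import shutil
from pathlib import Path

source = Path("vacation")          # hypothetical source folder
destination = Path("photos/2008")  # hypothetical destination folder
destination.mkdir(parents=True, exist_ok=True)

for picture in source.glob("*.jpg"):   # every file matching the pattern
    shutil.move(str(picture), str(destination / picture.name))
    print(f"moved {picture.name}")
```

With a graphical interface you would select the matching files by eye and drag them; here a single pattern does the selecting, which is exactly the efficiency the command line offers, at the price of knowing the command and its syntax in advance.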
In this introductory course, we won’t have a chance to talk much about gestural interfaces. But, given their recent popularity on tablets, in game controllers, and in movies like “Minority Report”, I want to conclude today’s lecture with an eye to the future. Gestural interfaces offer a wonderful solution to menu creep, in that they don’t take up any permanent screen real estate. The downside, of course, is that they are not self-disclosing: you don’t know what functionality might be available. And, at their best, gestural interfaces can be even more direct than their graphical-user-interface counterparts. In recent years, I’ve watched in amazement as people of all ages and from all backgrounds have figured out how to use tablet computers really effectively. And the directness, I think, is one big reason for that. Learning to use a mouse is actually kind of a big deal: once you’ve got the hang of it you forget that, but there is an indirectness you have to learn — that what you’re doing over here matters over there. With a tablet, the thing that you touch is the thing that you’re interacting with. And the best gestures in these interfaces, like pinching to zoom, just feel so natural — the metaphor is very clear, the feedback is immediate, the outcome is predictable. In other cases, when the mapping is more arbitrary or harder to guess, I think these interfaces are less effective. For example, if I swipe with four fingers, that will bring up a menu bar on an iPad. How on earth would I ever figure that out? So, in gestural interfaces, as in all others, things work best when the feedback is clear, and the interface provides cues — through metaphors or visual signals — that help you know what you can do within the user interface. To learn more about direct manipulation interfaces and the psychology of design, I strongly recommend Don Norman’s book “The Design of Everyday Things”. We’ll see you next time.