
Lecture 1.3: Evaluating Designs (12:15)

  • 0:01 - 0:06
    In many ways, evaluating designs with other people
  • 0:06 - 0:09
    is one of the most creative, most challenging, and yet often undervalued parts of interaction design.
  • 0:09 - 0:12
    Watching people interact with your designs gives you insights
  • 0:12 - 0:16
    that lead to new ideas, design changes, informed decisions, and fixed mistakes.
  • 0:16 - 0:20
    One of the reasons I think design is such an interesting field is its relationship to truth and objectivity.
  • 0:20 - 0:26
    I find design fascinating because, when answering questions, we can ask things like
  • 0:26 - 0:29
    "How do we measure success?"
  • 0:29 - 0:33
    rather than just saying "it's a matter of personal taste" or "just go with your gut."
  • 0:33 - 0:37
    At the same time, the answers are complex, open-ended, and subjective,
  • 0:37 - 0:41
    requiring wisdom beyond just knowing whether to use a number like 7 or 3.
  • 0:41 - 0:44
    One of the things we're going to learn in this class
  • 0:44 - 0:48
    is how to use different methods to gain different kinds of knowledge.
  • 0:48 - 0:53
    Why evaluate designs with other people? Why try to understand how people use interactive systems?
  • 0:53 - 0:58
    I think one of the main reasons is that it's hard to tell how good a user interface is
  • 0:58 - 1:01
    until you actually test it with real users,
  • 1:01 - 1:06
    because clients, designers, and engineers usually know too much about the user interface,
  • 1:06 - 1:11
    or have developed blind spots from being involved in designing and building it.
  • 1:11 - 1:15
    At the same time, they may not know enough about what users actually want to do,
  • 1:15 - 1:22
    and while experience and theory can help, it's still hard to predict what users will actually do.
  • 1:22 - 1:25
    You might want to know: "Can people figure out how to use this thing?"
  • 1:25 - 1:31
    "Do users swear or giggle when they use this interface?" "How does this design compare to another one?"
  • 1:31 - 1:35
    "If I change the interface, how does that change affect people's behaviour?"
  • 1:35 - 1:39
    "What new designs might emerge?" "How do designs evolve over time?"
  • 1:39 - 1:45
    These are all great questions to ask about interfaces, and answering them calls for different methods.
  • 1:44 - 1:49
    Having a broadly useful toolbox of different methods is valuable,
  • 1:49 - 1:56
    and it's especially valuable in domains like mobile and social software,
  • 1:56 - 2:00
    where people's usage habits, and things like how people use software to go online,
  • 2:00 - 2:03
    evolve noticeably over time.
  • 2:03 - 2:08
    To give you a sense of this, I'd like to quickly walk through several actual genres of HCI research.
  • 2:08 - 2:11
    The examples I'll offer are mostly well-known ones,
  • 2:11 - 2:14
    because those are the easiest to share.
  • 2:14 - 2:18
    If you have relevant examples of your own, feel free to post them in the forum.
  • 2:18 - 2:21
    I keep a collection of user interface examples,
  • 2:21 - 2:24
    and the other students and I look forward to seeing what you bring.
  • 2:24 - 2:27
    One way to learn about the user experience of a design
  • 2:27 - 2:30
    is to bring people into your lab or office and have them try it out.
  • 2:30 - 2:32
    We usually call these "usability studies."
  • 2:32 - 2:37
    Watching someone use an interface I designed is a common method in the HCI field.
  • 2:37 - 2:43
    The basic strategy of this traditional user-centered design approach is
  • 2:43 - 2:48
    to bring people into the lab or office again and again, releasing them only when time runs out.
  • 2:48 - 2:52
    And if you have a generous budget, you can set up a room with a one-way mirror
  • 2:52 - 2:54
    and have the development team watch from the other side.
  • 2:54 - 2:59
    In a leaner setting, this might just mean bringing someone into your dorm room or office.
  • 2:59 - 3:01
    You'll learn an enormous amount through this process.
  • 3:01 - 3:04
    Every time that I, or a student, friend, or colleague,
  • 3:04 - 3:07
    watch a participant use a new interactive system,
  • 3:07 - 3:14
    we discover designers' blind spots that lead to odd system behaviour, bugs, and mistaken assumptions.
  • 3:15 - 3:19
    However, this approach has some significant drawbacks.
  • 3:19 - 3:24
    In particular, the lab setting isn't all that close to the real world.
  • 3:24 - 3:29
    In the real world, people may have different tasks, goals, motivations, and physical contexts
  • 3:29 - 3:32
    than in your tidy office or lab.
  • 3:32 - 3:35
    Imagine how much messier things are when someone uses such an interface
  • 3:35 - 3:38
    while, say, standing in line at a bus stop.
  • 3:38 - 3:40
    Second, there's a "please the experimenter" bias:
  • 3:40 - 3:44
    when you bring people in to try out your interface,
  • 3:44 - 3:47
    they know they're there to test the technology you built,
  • 3:47 - 3:50
    and so they may try harder or pay closer attention
  • 3:50 - 3:54
    than they would if they weren't constrained by lab conditions,
  • 3:54 - 3:58
    with the people observing them close at hand.
  • 3:58 - 4:03
    Third, in its most basic form, you're looking at only one user interface, with nothing to compare it against.
  • 4:03 - 4:09
    So when you track people's delight, frustration, or smiles,
  • 4:09 - 4:12
    you don't know whether they could have been happier, less frustrated, or smiled more
  • 4:12 - 4:14
    had they used a different interface.
  • 4:14 - 4:18
    Finally, you need to bring people to an actual location,
  • 4:18 - 4:20
    which is usually much more of a hassle than people expect.
  • 4:20 - 4:23
    It can be a psychological burden, even if nothing else.
  • 4:24 - 4:28
    A very different way of getting feedback from people is to run a survey —
  • 4:28 - 4:31
    for example, a survey
    asking about different street light designs.
  • 4:34 - 4:38
    Surveys are great because you can quickly get feedback from a large number of respondents.
  • 4:38 - 4:41
    And it’s relatively easy to compare multiple alternatives.
  • 4:41 - 4:44
    You can also automatically tally the results.
  • 4:44 - 4:48
    You don’t even need to build anything; you can just show screen shots or mock-ups.
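To make "automatically tally the results" concrete, here is a minimal Python sketch of aggregating survey responses per design alternative. The file name streetlight_survey.csv and its design/rating columns are hypothetical, purely for illustration.

```python
# A minimal sketch of automatically tallying survey responses, assuming a
# hypothetical CSV with a header and one row per respondent: design, rating (1-5).
import csv
from collections import defaultdict

def tally(path):
    """Aggregate mean rating and response count per design alternative."""
    totals = defaultdict(lambda: [0, 0])  # design -> [sum of ratings, count]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = totals[row["design"]]
            t[0] += int(row["rating"])
            t[1] += 1
    return {design: (s / n, n) for design, (s, n) in totals.items()}

# Print each street-light design's mean rating and sample size.
for design, (mean_rating, n) in sorted(tally("streetlight_survey.csv").items()):
    print(f"{design}: mean={mean_rating:.2f} (n={n})")
```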
  • 4:48 - 4:50
    One of the things that I’ve learned the hard way, though,
  • 4:50 - 4:55
    is the difference between what people say they’re going to do and what they actually do.
  • 4:55 - 4:59
    Ask people how often they exercise and you’ll probably get a much more optimistic answer
  • 4:59 - 5:02
    than how often they really do exercise.
  • 5:02 - 5:05
    The same holds for the street light example here.
  • 5:05 - 5:08
    Trying to imagine what a number of different street light designs might be like
  • 5:08 - 5:12
    is really different from actually observing them on the street
  • 5:12 - 5:15
    and having them become part of normal everyday life.
  • 5:15 - 5:18
    Still, it can be valuable to get feedback.
  • 5:18 - 5:20
    Another type of self-report strategy is focus groups.
  • 5:20 - 5:26
    In a focus group, you’ll gather together a small group of people to discuss a design or idea.
  • 5:26 - 5:31
    The fact that focus groups involve a group of people is a double-edged sword.
  • 5:31 - 5:37
    On one hand, you can get people to tease out of their colleagues things that they might not have thought
  • 5:37 - 5:44
    to say on their own; on the other hand, for a variety of psychological reasons, people may be inclined
  • 5:44 - 5:48
    to say polite things or generate answers completely on the spot
  • 5:48 - 5:53
    that are totally uncorrelated with what they believe or what they would actually do.
  • 5:54 - 5:59
    Focus groups can be a particularly problematic method when you are looking at trying to gather data
  • 5:59 - 6:04
    about taboo topics or about cultural biases.
  • 6:04 - 6:06
    With those caveats — right now we’re just making a laundry list —
  • 6:06 - 6:12
    I think that focus groups, like almost any other method, can play an important role in your toolbelt.
  • 6:13 - 6:16
    Our third category of techniques is to get feedback from experts.
  • 6:16 - 6:22
    For example, in this class we’re going to do a bunch of peer critique for your weekly project assignments.
  • 6:22 - 6:25
    In addition to having users try your interface,
  • 6:25 - 6:29
    it can be important to eat your own dog food and use the tools that you built yourself.
  • 6:29 - 6:35
    When you are getting feedback from experts, it can often be helpful to have some kind of structured format,
  • 6:35 - 6:38
    much like the rubrics you’ll see in your project assignments.
  • 6:38 - 6:44
    And, for getting feedback on user interfaces, one common approach to this structured feedback
  • 6:44 - 6:48
    is called heuristic evaluation, and you’ll learn how to do that in this class;
  • 6:48 - 6:51
    it was pioneered by Jakob Nielsen.
  • 6:51 - 6:53
    Our next genre is comparative experiments:
  • 6:53 - 6:57
    taking two or more distinct options and comparing their performance to each other.
  • 6:57 - 7:00
    These comparisons can take place in lots of different ways:
  • 7:00 - 7:04
    They can be in the lab; they can be in the field; they can be online.
  • 7:04 - 7:06
    These experiments can be more-or-less controlled,
  • 7:06 - 7:10
    and they can take place over shorter or longer durations.
  • 7:10 - 7:14
    What you’re trying to learn here is which option is more effective,
  • 7:14 - 7:16
    and, more often, what are the active ingredients,
  • 7:16 - 7:21
    what are the variables that matter in creating the user experience that you seek.
  • 7:22 - 7:26
    Here’s an example: My former PhD student Joel Brandt, and his colleague at Adobe,
  • 7:26 - 7:30
    ran a number of studies comparing help interfaces for programmers.
  • 7:32 - 7:38
    In particular they compared a more traditional search-style user interface for finding programming help
  • 7:38 - 7:43
    with a search interface that integrated programming help directly into your environment.
  • 7:43 - 7:46
    By running these comparisons they were able to see how programmers’ behaviour differed
  • 7:46 - 7:50
    based on the changing help user interface.
  • 7:50 - 7:53
    Comparative experiments have an advantage over surveys
  • 7:53 - 7:57
    in that you get to see the actual behaviour as opposed to self report,
  • 7:57 - 8:02
    and they can be better than usability studies because you’re comparing multiple alternatives.
  • 8:02 - 8:06
    This enables you to see what works better or worse, or at least what works different.
  • 8:06 - 8:10
    I find that comparative feedback is also often much more actionable.
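To make the comparison idea concrete, here is a hedged sketch of how one might analyze a simple two-condition experiment with a permutation test. The data are invented, and this is one analysis choice among many — not the method Brandt and colleagues used.

```python
# A sketch of analyzing a two-condition comparative experiment with a
# permutation test. Each list holds one invented task-completion time
# (in seconds) per participant per condition.
import random

def mean(xs):
    return sum(xs) / len(xs)

def permutation_test(a, b, iters=10_000, seed=0):
    """Estimate how often shuffling the condition labels produces a difference
    in means at least as large as the observed one (two-sided p-value)."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(iters):
        rng.shuffle(pooled)
        if abs(mean(pooled[:len(a)]) - mean(pooled[len(a):])) >= observed:
            hits += 1
    return hits / iters

interface_a = [41.2, 38.5, 45.1, 39.9, 43.0, 40.4]  # e.g., search-style help
interface_b = [33.7, 36.1, 31.9, 35.0, 34.4, 32.8]  # e.g., in-editor help
print("difference in means:", mean(interface_a) - mean(interface_b))
print("p ≈", permutation_test(interface_a, interface_b))
```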
  • 8:11 - 8:13
    However, if you are running controlled experiments online,
  • 8:13 - 8:18
    you don’t get to see much about the person on the other side of the screen.
  • 8:18 - 8:20
    And if you are inviting people into your office or lab,
  • 8:20 - 8:24
    the behaviour you’re measuring might not be very realistic.
  • 8:24 - 8:30
    If realistic longitudinal behaviour is what you’re after, participant observation may be the approach for you.
  • 8:30 - 8:36
    This approach is just what it sounds like: observing what people actually do in their actual work environment.
  • 8:36 - 8:40
    And this more long-term evaluation can be important for uncovering things
  • 8:40 - 8:44
    that you might not see in shorter term, more controlled scenarios.
  • 8:44 - 8:48
    For example, my colleagues Bob Sutton and Andrew Hargadon studied brainstorming.
  • 8:48 - 8:51
    The prior literature on brainstorming had focused mostly on questions like
  • 8:51 - 8:54
    “Do people come up with more ideas?”
  • 8:54 - 8:56
    What Bob and Andrew realized by going into the field
  • 8:56 - 9:00
    was that brainstorming served a number of other functions also,
  • 9:00 - 9:05
    like, for example, brainstorming provides a way for members of the design team
  • 9:05 - 9:08
    to demonstrate their creativity to their peers;
  • 9:08 - 9:13
    it allows them to pass along knowledge that then can be reused in other projects;
  • 9:13 - 9:19
    and it creates a fun, exciting environment that people like to work in and that clients like to participate in.
  • 9:19 - 9:22
    In a real ecosystem, all of these things are important,
  • 9:22 - 9:25
    in addition to just having the ideas that people come up with.
  • 9:26 - 9:32
    Nearly all experiments seek to build a theory on some level — I don’t mean anything fancy by this,
  • 9:32 - 9:37
    just that we take some things to be more relevant, and other things less relevant.
  • 9:37 - 9:39
    We might, for example, assume
  • 9:39 - 9:43
    that the ordering of search results may play an important role in what people click on,
  • 9:43 - 9:46
    but that the batting average of the Detroit Tigers doesn’t,
  • 9:46 - 9:49
    unless, of course, somebody’s searching for baseball.
  • 9:49 - 9:55
    If you have a theory that’s sufficiently formal, mathematically, that you can make predictions,
  • 9:55 - 10:00
    then you can compare alternative interfaces using that model, without having to bring people in.
  • 10:00 - 10:05
    And we’ll go over that in this class a little bit, with respect to input models.
  • 10:05 - 10:10
    This makes it possible to try out a number of alternatives really fast.
  • 10:10 - 10:12
    Consequently, when people use simulations,
  • 10:12 - 10:16
    it’s often in conjunction with something like Monte Carlo optimization.
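As a sketch of what model-based comparison plus Monte Carlo sampling might look like, here is a toy example that uses Fitts’s law (MT = a + b·log2(D/W + 1)), the classic predictive model of pointing, to compare two invented key layouts. The constants, key positions, and layouts are all hypothetical; a real study would fit a and b to measured pointing data.

```python
# A minimal sketch of comparing alternatives with a formal input model,
# assuming Fitts's law as the predictor. All numbers are invented.
import math
import random

A, B = 0.1, 0.15      # hypothetical Fitts's law constants (seconds)
KEY_WIDTH = 1.0       # all keys the same size, in arbitrary units

def fitts_time(p, q, w=KEY_WIDTH):
    """Predicted movement time between two key centers."""
    d = math.dist(p, q)
    return A + B * math.log2(d / w + 1)

def simulate(layout, trials=100_000, seed=0):
    """Monte Carlo estimate of mean per-keystroke time for random key pairs."""
    rng = random.Random(seed)
    keys = list(layout.values())
    total = 0.0
    for _ in range(trials):
        total += fitts_time(rng.choice(keys), rng.choice(keys))
    return total / trials

# Two toy three-key layouts: a horizontal strip vs. a tight cluster.
strip   = {"a": (0, 0), "b": (2, 0), "c": (4, 0)}
cluster = {"a": (0, 0), "b": (1, 0), "c": (0, 1)}
print("strip  :", simulate(strip))
print("cluster:", simulate(cluster))
```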
  • 10:16 - 10:19
    One example of this can be found in the ShapeWriter system,
  • 10:19 - 10:22
    where Shumin Zhai and colleagues figured out how to build a keyboard
  • 10:22 - 10:26
    where people could enter an entire word in a single stroke.
  • 10:26 - 10:31
    They were able to do this with the benefit of formal models and optimization-based approaches.
  • 10:31 - 10:34
    Simulation has mostly been used for input techniques
  • 10:34 - 10:39
    because people’s motor performance is probably the most well-quantified area of HCI.
  • 10:39 - 10:42
    And, while we won’t get into it much in this intro course,
  • 10:42 - 10:46
    simulation can also be used for higher-level cognitive tasks;
  • 10:46 - 10:48
    for example, Pete Pirolli and colleagues at PARC
  • 10:48 - 10:51
    have built impressive models of people’s web-searching behaviour.
  • 10:52 - 10:57
    These models enable them to estimate, for example, which links somebody is most likely to click on
  • 10:57 - 11:00
    by looking at the relevant link texts.
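As a flavor of what such a prediction might look like, here is a toy sketch that scores each link by word overlap with the user’s goal and converts the scores into click probabilities. Pirolli’s actual information-scent models are far more sophisticated (e.g., spreading activation over word co-occurrence statistics); every name and number below is invented for illustration.

```python
# A toy, information-scent-flavored sketch: predict which link a user clicks
# from the match between the goal description and each link's text.
import math

def scent(goal, link_text):
    """Crude relevance: fraction of goal words that appear in the link text."""
    goal_words = set(goal.lower().split())
    link_words = set(link_text.lower().split())
    return len(goal_words & link_words) / len(goal_words)

def click_probabilities(goal, links, temperature=0.2):
    """Softmax over scent scores -> estimated probability of each click."""
    weights = [math.exp(scent(goal, text) / temperature) for text in links]
    total = sum(weights)
    return {text: w / total for text, w in zip(links, weights)}

links = ["Contact us", "Season schedule and tickets", "Buy baseball tickets online"]
for text, p in click_probabilities("buy baseball tickets", links).items():
    print(f"{p:.2f}  {text}")
```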
  • 11:00 - 11:05
    That’s our whirlwind tour of a number of empirical methods that this class will introduce.
  • 11:05 - 11:09
    You’ll want to pick the right method for the right task, and here are some issues to consider:
  • 11:09 - 11:13
    One is reliability: if you did it again, would you get the same result?
  • 11:13 - 11:18
    Another is generalizability and realism — Does this hold for people other than 18-year-old
  • 11:18 - 11:23
    upper-middle-class students who are doing this for course credit or a gift certificate?
  • 11:23 - 11:28
    Is this behaviour also what you’d see in the real world, or only in a more stilted lab environment?
  • 11:28 - 11:30
    Comparisons are important, because they can tell you
  • 11:30 - 11:34
    how the user experience would change with different interface choices,
  • 11:34 - 11:38
    as opposed to just a “people liked it” study.
  • 11:38 - 11:42
    It’s also important to think about how to gain these insights efficiently,
  • 11:42 - 11:48
    and not chew up a lot of resources, especially when your goal is practical.
  • 11:48 - 11:54
    My experience as a designer, researcher, teacher, consultant, advisor and mentor has taught me
  • 11:54 - 12:01
    that evaluating designs with people is both easier and more valuable than many people expect,
  • 12:01 - 12:04
    and there’s an incredible lightbulb moment that happens
  • 12:04 - 12:08
    when you actually get designs in front of people and see how they use them.
  • 12:08 - 12:12
    So, to sum up this video, I’d like to ask what could be the most important question:
  • 12:12 -
    “What do you want to learn?”