
RailsConf 2014 - Cognitive Shortcuts: Models, Visualizations, Metaphors, and Other Lies

  • 0:17 - 0:18
    SAM LIVINGSTON-GRAY: Hello. Welcome to the
    very
  • 0:18 - 0:20
    last session of RailsConf.
  • 0:20 - 0:21
    When I leave this stage,
  • 0:21 - 0:22
    they are gonna burn it down.
  • 0:22 - 0:24
    AUDIENCE: Yeah!
  • 0:25 - 0:26
    S.L.: I have a couple of items of business
  • 0:26 - 0:30
    before I launch into my presentation proper.
    The first
  • 0:30 - 0:33
    of which is turning on my clicker. I work
  • 0:33 - 0:36
    for LivingSocial. We are hiring. If this fact
    is
  • 0:36 - 0:38
    intriguing to you, please feel free to come
    and
  • 0:38 - 0:41
    talk to me afterwards. Also, our recruiters
    brought a
  • 0:41 - 0:44
    ton of little squishy stress balls that are
    shaped
  • 0:44 - 0:47
    like little brains. As far as I know, this
  • 0:47 - 0:49
    was a coincidence, but I love the tie-in so
  • 0:49 - 0:50
    I brought the whole bag. I had them leave
  • 0:50 - 0:52
    it for me. So if you would like an
  • 0:52 - 0:55
    extra brain, please come talk to me after
    the
  • 0:55 - 0:56
    show.
  • 0:56 - 0:59
    A quick note about accessibility. If you have
    any
  • 0:59 - 1:02
    trouble seeing my slides, hearing my voice,
    or following
  • 1:02 - 1:05
    my weird trains of thought, or maybe you just
  • 1:05 - 1:07
    like spoilers, you can get a PDF with both
  • 1:07 - 1:09
    my slides and my script at this URL. It's
  • 1:09 - 1:14
    tinyurl.com/cog-shorts-railsconf. I
  • 1:14 - 1:16
    also have it up here on a thumb drive,
  • 1:16 - 1:18
    so if the conference wi-fi does what it usually
  • 1:18 - 1:20
    does, please go see Evan Light up in the
  • 1:20 - 1:21
    second row.
  • 1:21 - 1:24
    I'm gonna leave this up for a couple more
  • 1:24 - 1:26
    minutes. And I also want to give a shoutout
  • 1:26 - 1:30
    to the Opportunity Scholarship program here.
    To quote the
  • 1:30 - 1:33
    RailsConf site, this program is for people
    who wouldn't
  • 1:33 - 1:35
    usually take part in our community or who
    might
  • 1:35 - 1:37
    just want a friendly face during their first
    time
  • 1:37 - 1:41
    at RailsConf. I'm a huge fan of this program.
  • 1:41 - 1:43
    I think it's a great way to welcome new
  • 1:43 - 1:46
    people and new voices into our community.
    This is
  • 1:46 - 1:48
    the second year that I've volunteered as a
    guide
  • 1:48 - 1:49
    and this is the second year that I've met
  • 1:49 - 1:52
    somebody with a fascinating story to tell.
    If you're
  • 1:52 - 1:55
    a seasoned conference veteran, I strongly
    encourage you to
  • 1:55 - 1:57
    apply next year.
  • 1:57 - 2:04
    OK. Programming is hard. It's not quantum
    physics. But
  • 2:04 - 2:07
    neither is it falling off a log. And if
  • 2:07 - 2:09
    I had to pick just one word to explain
  • 2:09 - 2:13
    why programming is hard, that word would be
    abstract.
  • 2:13 - 2:16
    I asked Google to define abstract, and here's
    what
  • 2:16 - 2:17
    it said.
  • 2:17 - 2:19
    Existing in thought or as an idea, but not
  • 2:19 - 2:22
    having a physical or concrete existence.
  • 2:22 - 2:24
    I usually prefer defining things in terms
    of what
  • 2:24 - 2:26
    they are, but in this case I find the
  • 2:26 - 2:31
    negative definition extremely telling. Abstract
    things are hard for
  • 2:31 - 2:34
    us to think about precisely because they don't
    have
  • 2:34 - 2:36
    a physical or a concrete existence, and that's
    what
  • 2:36 - 2:38
    our brains are wired for.
  • 2:38 - 2:41
    Now, I normally prefer the kind of talk where
  • 2:41 - 2:43
    the speaker just launches right in and forces
    me
  • 2:43 - 2:46
    to keep up, but this is a complex idea,
  • 2:46 - 2:48
    and it's the last talk of the last day,
  • 2:48 - 2:49
    and I'm sure you're all as fried as I
  • 2:49 - 2:54
    am. So, here's a little background. I got
    the
  • 2:54 - 2:57
    idea for this talk when I was listening to
  • 2:57 - 3:01
    the Ruby Rogues podcast episode with Glenn
    Vanderburg. This
  • 3:01 - 3:03
    is lightly edited for length, but in that
    episode,
  • 3:03 - 3:06
    Glenn said, The best programmers I know all
    have
  • 3:06 - 3:09
    some good techniques for conceptualizing or
    modeling the programs
  • 3:09 - 3:11
    that they work with. And it tends to be
  • 3:11 - 3:14
    sort of a spatial/visual model, but not always.
    And
  • 3:14 - 3:16
    he says, What's going on is our brains are
  • 3:16 - 3:18
    geared towards the physical world and dealing
    with our
  • 3:18 - 3:22
    senses and integrating that sensory input.
  • 3:22 - 3:23
    But the work we do as programmers is all
  • 3:23 - 3:25
    abstract. And it makes perfect sense that
    you would
  • 3:25 - 3:27
    want to find techniques to rope the physical
    sensory
  • 3:27 - 3:30
    parts of your brain into this task of dealing
  • 3:30 - 3:32
    with abstractions. And this is the part that
    really
  • 3:32 - 3:34
    got my attention. He says, But we don't ever
  • 3:34 - 3:36
    teach anybody how to do that or even that
  • 3:36 - 3:38
    they should do that.
  • 3:38 - 3:40
    When I heard this, I started thinking about
    the
  • 3:40 - 3:44
    times that I've stumbled across some technique
    for doing
  • 3:44 - 3:46
    something like this, and I've been really
    excited to
  • 3:46 - 3:49
    find a way of translating a programming problem
    into
  • 3:49 - 3:51
    some form that my brain could really get a
  • 3:51 - 3:53
    handle on. And I was like yeah, yeah, brains
  • 3:53 - 3:55
    are awesome. And we should be teaching people
    that
  • 3:55 - 3:58
    this is a thing they can do.
  • 3:58 - 4:00
    And I thought about it, and some time later,
  • 4:00 - 4:03
    I was like, wait a minute. No. Brains are
  • 4:03 - 4:07
    horrible. And teaching people these tricks
    would be totally
  • 4:07 - 4:09
    irresponsible, if we didn't also warn them
    about cognitive
  • 4:09 - 4:13
    bias. I'll get to that in a little bit.
  • 4:13 - 4:17
    This is a talk in three parts. Part one,
  • 4:17 - 4:19
    brains are awesome. And as Glenn said, you
    can
  • 4:19 - 4:21
    rope the physical and sensory parts of your
    brain
  • 4:21 - 4:23
    as well as a few others I'll talk about
  • 4:23 - 4:26
    into helping you deal with abstractions. Part
    two, brains
  • 4:26 - 4:28
    are horrible and they lie to us all the
  • 4:28 - 4:31
    time. But if you're on the lookout for
  • 4:31 - 4:33
    the kinds of lies that your brain will tell
  • 4:33 - 4:36
    you, in part three I have an example of
  • 4:36 - 4:37
    the kind of amazing hack that you just might
  • 4:37 - 4:42
    be able to come up with.
  • 4:42 - 4:45
    Our brains are extremely well-adapted for
    dealing with the
  • 4:45 - 4:50
    physical world. Our hindbrains, which regulate
    respiration, temperature, and
  • 4:50 - 4:52
    balance, have been around for half a billion
    years
  • 4:52 - 4:57
    or so. But when I write software, I am
  • 4:57 - 4:59
    leaning hard on parts of the brain that are
  • 4:59 - 5:02
    relatively new in evolutionary terms, and
    I'm using some
  • 5:02 - 5:04
    relatively expensive resources.
  • 5:04 - 5:06
    So over the years I have built up a
  • 5:06 - 5:09
    small collection of techniques and shortcuts
    that engage specialized
  • 5:09 - 5:11
    structures of my brain to help me reason about
  • 5:11 - 5:14
    programming problems. Here's the list.
  • 5:14 - 5:18
    I'm gonna start with a category of visual
    tools
  • 5:18 - 5:20
    that let us leverage our spatial understanding
    of the
  • 5:20 - 5:24
    world and our spatial reasoning skills to
    discover relationships
  • 5:24 - 5:26
    between different parts of a model. Or just
    to
  • 5:26 - 5:28
    stay oriented when we're trying to reason
    through a
  • 5:28 - 5:29
    complex problem.
  • 5:29 - 5:32
    I'm just gonna list out a few examples of
  • 5:32 - 5:34
    this category quickly, because I think most
    developers are
  • 5:34 - 5:37
    likely to encounter these, either in school
    or on
  • 5:37 - 5:39
    the job. And they all have the same basic
  • 5:39 - 5:42
    shape. They're boxes and arrows.
  • 5:42 - 5:46
    There's Entity-Relationship Diagrams, which
    help us understand how our
  • 5:46 - 5:49
    data is modeled. We use diagrams to describe
    data
  • 5:49 - 5:53
    structures like binary trees, linked lists,
    and so on.
  • 5:53 - 5:56
    And for state machines of any complexity,
    diagrams are
  • 5:56 - 5:58
    often the only way to make any sense of
  • 5:58 - 6:00
    them. I could go on, but like I said,
  • 6:00 - 6:02
    most of us are probably used to using these,
  • 6:02 - 6:03
    at least occasionally.
  • 6:03 - 6:05
    There are three things that I like about these
  • 6:05 - 6:10
    tools. First, they lend themselves really
    well to standing
  • 6:10 - 6:12
    up in front of a whiteboard, possibly with
  • 6:12 - 6:15
    a co-worker, and just standing up and moving
    around
  • 6:15 - 6:17
    a little bit will help get the blood flowing
  • 6:17 - 6:20
    and get your brain perked up.
  • 6:20 - 6:23
    Second, diagrams help us offload some of the
    work
  • 6:23 - 6:26
    of keeping track of things, of different concepts,
    by
  • 6:26 - 6:29
    attaching those concepts to objects in a two-dimensional
    space.
  • 6:29 - 6:31
    And our brains have a lot of hardware support
  • 6:31 - 6:34
    for keeping track of where things are in space.
  • 6:34 - 6:37
    And third, our brains are really good at pattern
  • 6:37 - 6:40
    recognition, so visualizing our designs can
    give us a
  • 6:40 - 6:43
    chance to spot certain kinds of problems just
    by
  • 6:43 - 6:46
    looking at their shapes before we ever start
    typing
  • 6:46 - 6:49
    code in an editor, and I think that's pretty
  • 6:49 - 6:50
    cool.
  • 6:50 - 6:53
    Here's another technique that makes use of
    our spatial
  • 6:53 - 6:55
    perception skills, and if you saw Sandi's
    talk yesterday,
  • 6:55 - 6:58
    you'll know this one. It's the squint test.
    It's
  • 6:58 - 7:00
    very straightforward. You open up some code
    and
  • 7:00 - 7:02
    you either squint your eyes at it or you
  • 7:02 - 7:05
    decrease the font size. The point is to look
  • 7:05 - 7:07
    past the words and check out the shape of
  • 7:07 - 7:08
    the code.
  • 7:08 - 7:11
    This is a pathological example that I used
    in
  • 7:11 - 7:15
    a refactoring talk last year. You can use
    this
  • 7:15 - 7:18
    technique as an aid to navigation, as a way
  • 7:18 - 7:20
    of zeroing in on high-risk areas of code,
    or
  • 7:20 - 7:22
    just plain to get oriented in a new code
  • 7:22 - 7:25
    base. There are a few specific patterns that
    you
  • 7:25 - 7:27
    can look for, and you'll find others as
  • 7:27 - 7:29
    you do more of it.
  • 7:29 - 7:32
    Is the left margin ragged, as it is here?
  • 7:32 - 7:34
    Are there any ridiculously long lines? There's
    one towards
  • 7:34 - 7:38
    the bottom. What does your syntax highlighting
    tell you?
  • 7:38 - 7:39
    Are there groups of colors or are colors sort
  • 7:39 - 7:44
    of spread out? And there's a lot of information
  • 7:44 - 7:48
    you can glean from this. Incidentally, I have
    only
  • 7:48 - 7:50
    ever met one blind programmer, and we didn't
    really
  • 7:50 - 7:52
    talk about this stuff. If any of you have
  • 7:52 - 7:56
    found that a physical or a cognitive disability
    gives
  • 7:56 - 7:59
    you an interesting way of looking at code,
  • 7:59 - 8:02
    or understanding code I suppose, please come
    talk to
  • 8:02 - 8:06
    me, because I'd love to hear your story.
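
    To make the squint test concrete, here is a tiny hypothetical
    before-and-after in Ruby. The names (process, ship, reject) are made
    up, and the example on the slide is far more pathological than this:

        # The staircase of nested conditionals gives this method a ragged
        # shape you can spot without reading a single word.
        def process(order)
          if order
            if order.paid?
              if order.items.any?
                ship(order)
              else
                reject(order, :empty)
              end
            else
              reject(order, :unpaid)
            end
          end
        end

        # The same logic rewritten with guard clauses squints flat and calm.
        def process(order)
          return unless order
          return reject(order, :unpaid) unless order.paid?
          return reject(order, :empty) unless order.items.any?
          ship(order)
        end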
  • 8:06 - 8:08
    Next up, I have a couple of techniques that
  • 8:08 - 8:11
    involve a clever use of language. The first
    one
  • 8:11 - 8:14
    is deceptively simple, but it does require
    a prop.
  • 8:14 - 8:18
    Doesn't have to be that big. You can totally
  • 8:18 - 8:20
    get away with using the souvenir edition.
    This is
  • 8:20 - 8:25
    my daughter's duck cow bath toy. What you
    do
  • 8:25 - 8:27
    is you keep a rubber duck on your desk.
  • 8:27 - 8:28
    When you get stuck, you put the rubber duck
  • 8:28 - 8:31
    on top of
  • 8:31 - 8:33
    your keyboard, and you explain your problem
    out loud
  • 8:33 - 8:34
    to the duck.
  • 8:34 - 8:35
    [laughter]
  • 8:35 - 8:40
    Really. I mean, it sounds absurd, right? But
    there's
  • 8:40 - 8:41
    a good chance that in the process of putting
  • 8:41 - 8:44
    your problem into words, you'll discover that
    there's an
  • 8:44 - 8:47
    incorrect assumption that you've been making
    or you'll think
  • 8:47 - 8:48
    of some other possible solution.
  • 8:48 - 8:51
    I've also heard of people using teddy bears
    or
  • 8:51 - 8:54
    other stuffed animals. And one of my co-workers
    told
  • 8:54 - 8:57
    me that she learned this as the pet-rock technique.
  • 8:57 - 8:59
    This was a thing in the seventies. And also
  • 8:59 - 9:01
    that she finds it useful to compose an email
  • 9:01 - 9:04
    describing the problem. So for those of you
    who,
  • 9:04 - 9:07
    like me, think better when you're typing or
    writing
  • 9:07 - 9:08
    than when you're speaking, that can be
  • 9:08 - 9:13
    a nice alternative.
  • 9:13 - 9:15
    The other linguistic hack that I got, I got
  • 9:15 - 9:19
    from Sandi Metz, and in this book, Practical
    Object-Oriented
  • 9:19 - 9:22
    Design in Ruby, POODR for short, she describes
    a
  • 9:22 - 9:25
    technique that she uses to figure out which
    object
  • 9:25 - 9:29
    a method should belong to. I tried paraphrasing
    this, but
  • 9:29 - 9:31
    honestly Sandi did a much better job than
    I
  • 9:31 - 9:33
    would, describing it, so I'm just gonna read
    it
  • 9:33 - 9:33
    verbatim.
  • 9:33 - 9:36
    She says, How can you determine if a Gear
  • 9:36 - 9:39
    class contains behavior that belongs somewhere
    else? One way
  • 9:39 - 9:41
    is to pretend that it's sentient and to interrogate
  • 9:41 - 9:44
    it. If you rephrase every one of its methods
  • 9:44 - 9:46
    as a question, asking the question ought to
    make
  • 9:46 - 9:47
    sense.
  • 9:47 - 9:49
    For example, "Please Mr. Gear, what is your
    ratio?"
  • 9:49 - 9:52
    seems perfectly reasonable, while "Please
    Mr. Gear, what are
  • 9:52 - 9:54
    your gear_inches?" is on shaky ground, and
    "Please Mr.
  • 9:54 - 10:00
    Gear, what is your tire(size)?" is just downright
    ridiculous.
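
    Here is a minimal Ruby sketch of that idea, loosely adapted from the
    Gear example in POODR (numbers and details simplified), with each
    question as a comment:

        class Gear
          def initialize(chainring, cog, tire_size)
            @chainring = chainring
            @cog       = cog
            @tire_size = tire_size
          end

          # "Please, Mr. Gear, what is your ratio?" -- perfectly reasonable.
          def ratio
            @chainring / @cog.to_f
          end

          # "Please, Mr. Gear, what are your gear_inches?" -- shaky ground:
          # the answer depends on a wheel, not just on the gear.
          def gear_inches
            ratio * (26 + @tire_size * 2)  # rim diameter hardcoded for the sketch
          end

          # "Please, Mr. Gear, what is your tire_size?" -- downright
          # ridiculous. Tires are a wheel's business; the question hints
          # at a missing Wheel object waiting to be discovered.
          def tire_size
            @tire_size
          end
        end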
  • 10:00 - 10:02
    This is a great way to evaluate objects in
  • 10:02 - 10:04
    light of the single responsibility principle.
    Now I'll come
  • 10:04 - 10:07
    back to that thought in just a minute, but
  • 10:07 - 10:10
    first, I described the rubber duck and Please,
    Mr.
  • 10:10 - 10:14
    Gear? as techniques to engage linguistic reasoning,
    but that
  • 10:14 - 10:17
    doesn't quite feel right. Both of these techniques
    force
  • 10:17 - 10:20
    us to put our questions into words, but words
  • 10:20 - 10:23
    themselves are tools. We use words to communicate
    our
  • 10:23 - 10:26
    ideas to other people.
  • 10:26 - 10:29
    As primates, we've evolved a set of social
    skills
  • 10:29 - 10:31
    and behaviors for getting our needs met as
    part
  • 10:31 - 10:35
    of a community. So, while these techniques
    do involve
  • 10:35 - 10:37
    using language centers of your brain, I think
    they
  • 10:37 - 10:40
    reach beyond those centers to tap into our
    social
  • 10:40 - 10:42
    reasoning.
  • 10:42 - 10:44
    The rubber duck technique works because putting
    your problem
  • 10:44 - 10:47
    into words forces you to organize your understanding
    of
  • 10:47 - 10:49
    a problem in such a way that you can
  • 10:49 - 10:54
    verbally lead another mind through it. And
    Please, Mr.
  • 10:54 - 10:56
    Gear? lets us anthropomorphize an object
    and talk to
  • 10:56 - 10:59
    it to discover whether it conforms to the
    single
  • 10:59 - 11:01
    responsibility principle.
  • 11:01 - 11:04
    To me, the tell-tale phrase in Sandi's description
    of
  • 11:04 - 11:06
    this technique is, asking the question ought
    to make
  • 11:06 - 11:13
    sense. Most of us have an intuitive understanding
    that
  • 11:14 - 11:16
    it might not be appropriate to ask Alice about
  • 11:16 - 11:20
    something that is Bob's responsibility. Interrogating
    an object as
  • 11:20 - 11:22
    though it were a person helps us use that
  • 11:22 - 11:25
    social knowledge, and it gives us an opportunity
    to
  • 11:25 - 11:28
    notice that a particular question doesn't
    make sense to
  • 11:28 - 11:31
    ask any of our existing objects, which might
    prompt
  • 11:31 - 11:32
    us to ask if we should create a new
  • 11:32 - 11:35
    object to fill that role instead.
  • 11:35 - 11:40
    Now, personally, I would have considered POODR
    to have
  • 11:40 - 11:42
    been a worthwhile purchase if Please, Mr.
    Gear was
  • 11:42 - 11:44
    the only thing I got from it. But, in
  • 11:44 - 11:46
    this book, Sandi also made what I thought
    was
  • 11:46 - 11:49
    a very compelling case for UML Sequence Diagrams.
  • 11:49 - 11:53
    Where Please, Mr. Gear is a good tool for
  • 11:53 - 11:56
    discovering which objects should be responsible
    for a particular
  • 11:56 - 11:59
    method, a Sequence Diagram can help you analyze
    the
  • 11:59 - 12:04
    runtime interaction between several different
    objects. At first glance,
  • 12:04 - 12:06
    this looks kind of like something in the boxes
  • 12:06 - 12:08
    and arrows category of visual and spatial
    tools, but
  • 12:08 - 12:10
    again, this feels more like it's tapping into
    that
  • 12:10 - 12:13
    social understanding that we have. This can
    be a
  • 12:13 - 12:14
    good way to get a sense for when an
  • 12:14 - 12:18
    object is bossy or when performing a task
    involves
  • 12:18 - 12:22
    a complex sequence of several interactions.
    Or if
  • 12:22 - 12:25
    there are just plain too many different things
    to
  • 12:25 - 12:27
    keep track of.
  • 12:27 - 12:29
    Rather than turn this into a lecture on UML,
  • 12:29 - 12:31
    I'm just gonna tell you to go buy Sandi's
  • 12:31 - 12:34
    book, and if for whatever reason, you cannot
    afford
  • 12:34 - 12:35
    it, come talk to me later and we'll work
  • 12:35 - 12:36
    something out.
  • 12:36 - 12:41
    Now for the really hand-wavy stuff. Metaphors
    can be
  • 12:41 - 12:45
    a really useful tool in software. The turtle
    graphic
  • 12:45 - 12:47
    system in Logo is a great metaphor. Has anybody
  • 12:47 - 12:51
    used Logo at any point in their life? About
  • 12:51 - 12:53
    half the people. That's really cool.
  • 12:53 - 12:55
    We've probably all played with drawing something
    on the
  • 12:55 - 12:58
    screen at some point, but most of the rendering
  • 12:58 - 13:00
    systems that I've used are based on a Cartesian
  • 13:00 - 13:05
    coordinate system, a grid. And this metaphor
    encourages the
  • 13:05 - 13:08
    programmer to imagine themselves as the turtle,
    and to
  • 13:08 - 13:09
    use that understanding to figure out, when
    they get
  • 13:09 - 13:12
    stuck, what they should be doing next.
  • 13:12 - 13:14
    One of the original creators of Logo called
    this
  • 13:14 - 13:18
    Body Syntonic Reasoning, and specifically
    developed it to help
  • 13:18 - 13:22
    children solve problems. But the turtle
  • 13:22 - 13:24
    metaphor works for everybody, not just for
    kids.
  • 13:24 - 13:31
    Cartesian grids are great for drawing boxes.
    Mostly great.
  • 13:32 - 13:33
    But it can take some very careful thinking
    to
  • 13:33 - 13:37
    figure out how to use x, y
  • 13:37 - 13:41
    coordinate pairs to draw a spiral or a star
  • 13:41 - 13:45
    or a snowflake or a tree. Choosing a different
  • 13:45 - 13:48
    metaphor can make different kinds of solutions
    easy, where
  • 13:48 - 13:49
    before they seemed like too much trouble to
    be
  • 13:49 - 13:51
    worth bothering with.
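
    For instance, a spiral that would take careful trigonometry in x, y
    coordinates falls out of a few turtle commands. A minimal sketch,
    assuming a hypothetical Turtle class with forward and turn:

        # Imagine being the turtle: step, turn a little, step a bit
        # further. The spiral emerges from the repetition; no
        # trigonometry required.
        turtle = Turtle.new
        step = 1.0
        200.times do
          turtle.forward(step)  # move step units along the current heading
          turtle.turn(15)       # rotate 15 degrees
          step += 0.5           # reach a little further each time
        end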
  • 13:51 - 13:58
    James Ladd, in 2008, wrote a couple of interesting
  • 13:59 - 14:04
    blog posts about what he called East-oriented
    code. Imagine
  • 14:04 - 14:07
    a compass overlaid on top of your screen.
    In
  • 14:07 - 14:10
    this model, messages that an object
    sends
  • 14:10 - 14:13
    to itself go South, and any data returned
    from
  • 14:13 - 14:17
    those calls goes North. Communication between
    objects is the
  • 14:17 - 14:20
    same thing, rotated ninety degrees. Messages
    sent to other
  • 14:20 - 14:23
    objects go East, and the return values from
    those
  • 14:23 - 14:27
    messages flow West.
  • 14:27 - 14:30
    What James Ladd suggests is that, in general,
    code
  • 14:30 - 14:33
    that sends messages to other objects, code
    where information
  • 14:33 - 14:36
    mostly flows East, is easier to extend and
    maintain
  • 14:36 - 14:38
    than code that looks at data and then decides
  • 14:38 - 14:40
    what to do with it, which is code where
  • 14:40 - 14:42
    information flows West.
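
    Here is a minimal Ruby sketch of that contrast, with a hypothetical
    Order and PaymentReminder; neither version comes from James Ladd's
    posts:

        class Order
          attr_accessor :status
          attr_reader :customer, :total

          def initialize(customer, total)
            @customer, @total, @status = customer, total, :unpaid
          end

          # Eastward: the caller tells the order what it wants and the
          # order decides. The message, and the responsibility, flow East.
          def request_payment
            return unless status == :unpaid && total > 0
            self.status = :pending_payment
            PaymentReminder.deliver(customer)
          end
        end

        # Given some order = Order.new(customer, 25) ...

        # Westward: ask for the order's data, then decide on its behalf.
        # Information flows back to the caller, which accretes the logic.
        if order.status == :unpaid && order.total > 0
          order.status = :pending_payment
          PaymentReminder.deliver(order.customer)
        end

        # Eastward: the same intent, as a single message flowing East.
        order.request_payment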
  • 14:42 - 14:45
    Really, this is just the design principle,
    tell, don't
  • 14:45 - 14:49
    ask. But, the metaphor of the compass recasts
  • 14:49 - 14:52
    this in a way that helps us use our
  • 14:52 - 14:55
    background spatial awareness to keep this
    principle in mind
  • 14:55 - 14:59
    at all times. In fact, there are plenty of
  • 14:59 - 15:02
    ways we can use our background-level awareness
    to
  • 15:02 - 15:03
    analyze our code.
  • 15:03 - 15:09
    Isn't this adorable? I love this picture.
  • 15:09 - 15:11
    Code smells are an entire category of metaphors
    that
  • 15:11 - 15:14
    we use to talk about our work. In fact,
  • 15:14 - 15:17
    the name code smell itself is a metaphor for
  • 15:17 - 15:20
    anything about your code that hints at a design
  • 15:20 - 15:25
    problem, which I suppose makes it a meta-metaphor.
  • 15:25 - 15:29
    Some code smells have names that are extremely
    literal. Duplicated
  • 15:29 - 15:31
    code, long method and so on. But some of
  • 15:31 - 15:36
    these are delightfully suggestive. Feature
    envy. Refused bequest. Primitive
  • 15:36 - 15:39
    obsession. To me, the names on the right have
  • 15:39 - 15:42
    a lot in common with Please, Mr. Gear. They're
  • 15:42 - 15:45
    chosen to hook into something in our social
    awareness
  • 15:45 - 15:48
    to give a name to a pattern of dysfunction,
  • 15:48 - 15:50
    and by naming the problem, it suggests a possible
  • 15:50 - 15:52
    solution.
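
    Feature envy, for instance, names a method that is more interested in
    another object's data than in its own. A hypothetical sketch:

        # Feature envy: this Invoice method spends all of its time poking
        # at the customer's data and none at the invoice's.
        class Invoice
          def customer_label
            "#{customer.name} <#{customer.email}>"
          end
        end

        # The smell's name suggests the fix: move the envious code to the
        # object whose data it envies.
        class Customer
          def label
            "#{name} <#{email}>"
          end
        end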
  • 15:52 - 15:55
    So, these are most of the shortcuts that I've
  • 15:55 - 15:57
    accumulated over the years, and I hope that
    this
  • 15:57 - 15:59
    can be the start of a similar collection for
  • 15:59 - 16:04
    some of you.
  • 16:04 - 16:05
    Now the part where I try to put the
  • 16:05 - 16:10
    fear into you. Evolution has designed our
    brains to
  • 16:10 - 16:14
    lie to us. Brains are expensive. The human
    brain
  • 16:14 - 16:16
    accounts for just two percent of body mass,
    but
  • 16:16 - 16:19
    twenty percent of our caloric intake. That's
    a huge
  • 16:19 - 16:23
    energy requirement that has to be justified.
  • 16:23 - 16:26
    Evolution, as a designer, does one thing and
    one
  • 16:26 - 16:30
    thing only. It selects for traits that allow
    an
  • 16:30 - 16:32
    organism to stay alive long enough to reproduce.
    It
  • 16:32 - 16:35
    doesn't care about getting the best solution.
    Only one
  • 16:35 - 16:38
    that's good enough to compete in the current
    landscape.
  • 16:38 - 16:40
    Evolution will tolerate any hack as long as
    it
  • 16:40 - 16:43
    meets that one goal.
  • 16:43 - 16:44
    As an example, I want to take a minute
  • 16:44 - 16:46
    to talk about how we see the world around
  • 16:46 - 16:50
    us. The human eye has two different kinds
    of
  • 16:50 - 16:52
    photo receptors. There are about a hundred
    and twenty
  • 16:52 - 16:55
    million rod cells in each eye. These play
    little
  • 16:55 - 16:58
    or no role in color vision, and they're mostly
  • 16:58 - 17:00
    used for night time and peripheral vision.
  • 17:00 - 17:02
    There are also about six or seven million
    cone
  • 17:02 - 17:04
    cells in each eye, and these give us color
  • 17:04 - 17:06
    vision, but they require a lot more light
    to
  • 17:06 - 17:10
    work. And the vast majority of cone cells
    are
  • 17:10 - 17:12
    packed together in a tight little cluster
    near the
  • 17:12 - 17:14
    center of the retina. This area is what we
  • 17:14 - 17:16
    use to focus on individual details, and it's
    smaller
  • 17:16 - 17:21
    than you might think. It's only fifteen degrees
    wide.
  • 17:21 - 17:24
    As a result, our vision is extremely directional.
    We
  • 17:24 - 17:26
    have a very small area of high detail and
  • 17:26 - 17:27
    high color, and the rest of our field of
  • 17:27 - 17:31
    vision is more or less monochrome. So when
    we
  • 17:31 - 17:38
    look at this, our eyes see something like
    this.
  • 17:40 - 17:43
    In order to turn the image on the left
  • 17:43 - 17:45
    into the image on the right, our brains are
  • 17:45 - 17:47
    doing a lot of work that we're mostly unaware
  • 17:47 - 17:49
    of.
  • 17:49 - 17:51
    We compensate for having such highly directional
    vision by
  • 17:51 - 17:54
    moving our eyes around a lot. Our brains combine
  • 17:54 - 17:57
    the details from these individual points of
    interest to
  • 17:57 - 18:00
    construct a persistent mental model of whatever
    we're looking
  • 18:00 - 18:03
    at. These fast point-to-point movements are
    called
  • 18:03 - 18:06
    saccades. And they're actually the fastest
    movements that the
  • 18:06 - 18:09
    human body can make. The shorter saccades
    that you
  • 18:09 - 18:11
    might make when you're reading, last
    for twenty
  • 18:11 - 18:14
    to forty milliseconds. Longer ones that travel
    through a
  • 18:14 - 18:17
    wider arc might take two hundred milliseconds,
    or about
  • 18:17 - 18:19
    a fifth of a second.
  • 18:19 - 18:21
    What I find so fascinating about this is that
  • 18:21 - 18:25
    we don't perceive saccades. During a saccade,
    the eye
  • 18:25 - 18:27
    is still sending data to the brain, but what
  • 18:27 - 18:29
    it's sending is a smeary blur. So the brain
  • 18:29 - 18:33
    just edits that part out. This process is
    called
  • 18:33 - 18:36
    saccadic masking. You can see this effect
    for yourself.
  • 18:36 - 18:38
    Next time you're in front of a mirror, lean
  • 18:38 - 18:40
    in close and look back and forth from the
  • 18:40 - 18:43
    reflection of one eye to the other. You won't
  • 18:43 - 18:46
    see your eyes move. As far as we can
  • 18:46 - 18:48
    tell, our gaze just jumps instantaneously
    from one reference
  • 18:48 - 18:50
    point to the next. And here's where I have
  • 18:50 - 18:52
    to wait for a moment while everybody stops
    doing
  • 18:52 - 18:57
    this.
  • 18:57 - 18:59
    When I was preparing for this talk, I found
  • 18:59 - 19:02
    an absolutely wonderful sentence in the Wikipedia
    entry on
  • 19:02 - 19:06
    saccades. It said, Due to saccadic masking,
    the eye/brain
  • 19:06 - 19:08
    system not only hides the eye movements from
    the
  • 19:08 - 19:10
    individual, but also hides the evidence that
    anything has
  • 19:10 - 19:12
    been hidden.
  • 19:12 - 19:18
    Hides. The evidence. That anything has been
    hidden. Our
  • 19:18 - 19:20
    brains lie to us. And they lie to us
  • 19:20 - 19:23
    about having lied to us. And this happens
    to
  • 19:23 - 19:26
    you multiple times a second, every waking
    hour, every
  • 19:26 - 19:29
    day of your life. Of course, there's a reason
  • 19:29 - 19:30
    for this.
  • 19:30 - 19:32
    Imagine if, every time you shifted your gaze
    around,
  • 19:32 - 19:34
    you got distracted by all the pretty colors.
    You
  • 19:34 - 19:36
    would be eaten by lions.
  • 19:36 - 19:40
    But, in selecting for this design, evolution
    made a
  • 19:40 - 19:43
    trade off. The trade off is that we are
  • 19:43 - 19:46
    effectively blind every time we move our eyes
    around.
  • 19:46 - 19:48
    Sometimes for up to a fifth of a second.
  • 19:48 - 19:51
    And I wanted to talk about this, partly because
  • 19:51 - 19:53
    it's a really fun subject, but also to show
  • 19:53 - 19:56
    just one of the ways that our brains
  • 19:56 - 19:58
    do a massive amount of work to process
  • 19:58 - 20:01
    information from our environment and present
    us with an
  • 20:01 - 20:03
    abstraction.
  • 20:03 - 20:06
    And as programmers, if we know anything about
    abstractions,
  • 20:06 - 20:09
    it's that they're hard to get right. Which
    leads
  • 20:09 - 20:12
    me to an interesting question. Does it make
    sense
  • 20:12 - 20:13
    to use any of the techniques that I talked
  • 20:13 - 20:16
    about in part one, to try to corral different
  • 20:16 - 20:18
    parts of our brains into doing our work for
  • 20:18 - 20:20
    us, if we don't know what kinds of shortcuts
  • 20:20 - 20:27
    they're gonna take?
  • 20:27 - 20:31
    According to the Oxford English Dictionary,
    the word bias
  • 20:31 - 20:33
    seems to have entered the English language
    around the
  • 20:33 - 20:36
    1520s. It was used as a technical term
  • 20:36 - 20:39
    in the game of lawn bowling, and it referred
  • 20:39 - 20:40
    to a ball that was constructed in such a
  • 20:40 - 20:43
    way that it would roll in
  • 20:43 - 20:45
    a curved path instead of in a straight line.
  • 20:45 - 20:47
    And since then, it's picked up a few additional
  • 20:47 - 20:50
    meanings, but they all have that same
    connotation
  • 20:50 - 20:54
    of something that's skewed or off a little
    bit.
  • 20:54 - 20:57
    Cognitive bias is a term for systematic errors
    in
  • 20:57 - 20:59
    thinking. These are patterns of thought that
    diverge in
  • 20:59 - 21:03
    measurable and predictable ways from
    the answers that
  • 21:03 - 21:07
    pure rationality might give. When you have some
    free
  • 21:07 - 21:09
    time, I suggest that you go have a look
  • 21:09 - 21:12
    at the Wikipedia page called List of cognitive
    biases.
  • 21:12 - 21:14
    There are over a hundred and fifty of them
  • 21:14 - 21:17
    and they are fascinating reading.
  • 21:17 - 21:19
    And this list of cognitive biases has a lot
  • 21:19 - 21:21
    in common with the list of code smells that
  • 21:21 - 21:23
    I showed earlier. A lot of these names are
  • 21:23 - 21:25
    very literal. But there are a few that stand
  • 21:25 - 21:29
    out, like the curse of knowledge, or the Google
    effect or,
  • 21:29 - 21:33
    and I kid you not, the IKEA effect. But
  • 21:33 - 21:34
    the parallel goes deeper than that.
  • 21:34 - 21:38
    This list gives
  • 21:38 - 21:41
    names to patterns of dysfunction, and once
    you have
  • 21:41 - 21:42
    a name for a thing, it's a lot easier
  • 21:42 - 21:44
    to recognize it and figure out what to do
  • 21:44 - 21:46
    about it. I do want to call your attention
  • 21:46 - 21:48
    to one particular item on this list. It's
    called
  • 21:48 - 21:52
    the bias blind spot. This is the tendency
    to
  • 21:52 - 21:55
    see oneself as less biased than other people,
    or
  • 21:55 - 21:57
    to be able to identify more cognitive biases
    in
  • 21:57 - 22:04
    others than in oneself. Sound like anybody
    you know?
  • 22:11 - 22:18
    Just let that sink in for a minute. Seriously,
  • 22:24 - 22:26
    though.
  • 22:26 - 22:28
    In our field, we like to think of ourselves
  • 22:28 - 22:30
    as more rational than the average person,
    and it
  • 22:30 - 22:34
    just isn't true. Yes, as programmers, we have
    a
  • 22:34 - 22:37
    valuable, marketable skill that depends on
    our ability to
  • 22:37 - 22:40
    reason mathematically. But we do ourselves
    and others a
  • 22:40 - 22:42
    disservice if we allow ourselves to believe
    that being
  • 22:42 - 22:45
    good at programming means anything other than,
    we're good
  • 22:45 - 22:49
    at programming. Because as humans we are all
    biased.
  • 22:49 - 22:51
    It's built into us, in our DNA. And pretending
  • 22:51 - 22:54
    that we aren't biased only allows our biases
    to
  • 22:54 - 22:55
    run free.
  • 22:55 - 22:58
    I don't have a lot of general advice for
  • 22:58 - 23:00
    how to look for bias, but I think an
  • 23:00 - 23:03
    obvious and necessary first step is just to
    ask
  • 23:03 - 23:08
    the question, how is this biased? Beyond that,
    I
  • 23:08 - 23:10
    suggest that you learn about as many specific
    cognitive
  • 23:10 - 23:11
    biases as you can so that your brain can
  • 23:11 - 23:13
    do what it does, which is to look for
  • 23:13 - 23:17
    patterns and make associations and classify
    things.
  • 23:17 - 23:19
    I think everybody should understand their
    own biases, because
  • 23:19 - 23:22
    only by knowing how you're biased can you
    then
  • 23:22 - 23:24
    decide how to correct for that
  • 23:24 - 23:27
    bias in the decisions that you make. If you're
  • 23:27 - 23:29
    not checking your work for bias, you can look
  • 23:29 - 23:31
    right past a great solution and you'll never
    know
  • 23:31 - 23:32
    it was there.
  • 23:32 - 23:35
    So for part three of my talk, I have
  • 23:35 - 23:38
    an example of a solution that is simple, elegant, and
  • 23:38 - 23:41
    just about the last thing I ever would have
  • 23:41 - 23:45
    thought of.
  • 23:45 - 23:46
    For the benefit of those of you who have
  • 23:46 - 23:49
    yet to find your first gray hair, Pac-Man
    was
  • 23:49 - 23:52
    a video game released in 1980 that let people
  • 23:52 - 23:55
    maneuver around a maze eating dots while trying
    to
  • 23:55 - 23:58
    avoid four ghosts. Now, playing games is fun,
    but
  • 23:58 - 24:00
    we're programmers. We want to know how things
    work.
  • 24:00 - 24:03
    So let's talk about programming Pac-Man.
  • 24:03 - 24:04
    For the purposes of this discussion, we'll
    focus on
  • 24:04 - 24:11
    just three things. The Pac-Man, the ghosts,
    and the
  • 24:11 - 24:12
    maze. The Pac-Man is controlled by the player.
    So
  • 24:12 - 24:15
    that code is basically just responding to
    hardware events.
  • 24:15 - 24:18
    Boring. The maze is there so that the player
  • 24:18 - 24:20
    has some chance at avoiding the ghosts. But
    the
  • 24:20 - 24:23
    ghost AI, that's what's gonna make the game
    interesting
  • 24:23 - 24:26
    enough that people keep dropping quarters
    into a slot,
  • 24:26 - 24:27
    and by the way, video games used to cost
  • 24:27 - 24:28
    a quarter.
  • 24:28 - 24:32
    When I was your age.
  • 24:32 - 24:34
    So to keep things simple, we'll start with
    one
  • 24:34 - 24:38
    ghost. How do we program its movement? We
    could
  • 24:38 - 24:40
    choose a random direction and move that way
    until
  • 24:40 - 24:42
    we hit a wall and then choose another random
  • 24:42 - 24:45
    direction. This is very easy to implement,
    but not
  • 24:45 - 24:46
    much of a challenge for the player.
  • 24:46 - 24:50
    OK, so, we could compute the distance to the
  • 24:50 - 24:52
    Pac-Man in x and y and pick a direction
  • 24:52 - 24:55
    that makes one of those smaller. But then
    the
  • 24:55 - 24:57
    ghost is gonna get stuck in corners or behind
  • 24:57 - 24:59
    walls cause it won't go around to catch the
  • 24:59 - 25:00
    Pac-Man. And, again, it's gonna be too easy
    for
  • 25:00 - 25:01
    the player.
  • 25:01 - 25:04
    So how about instead of minimizing linear
    distance, we
  • 25:04 - 25:09
    focus on topological distance? We can compute
    all possible
  • 25:09 - 25:13
    paths through the maze, pick the shortest
    one that
  • 25:13 - 25:15
    gets us to the Pac-Man and then step down
  • 25:15 - 25:17
    it. And when we get to the next place,
  • 25:17 - 25:19
    we'll do it all again.
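
    A minimal sketch of that shortest-path step, assuming tiles that know
    their walkable neighbors; this is just textbook breadth-first search,
    not the actual arcade code:

        # Returns the first step of a shortest path from the ghost's tile
        # to the Pac-Man's tile, or nil if there is no route.
        def next_step_toward(start_tile, goal_tile)
          queue   = [[start_tile, nil]]      # [tile, first step taken]
          visited = { start_tile => true }
          until queue.empty?
            tile, first_step = queue.shift
            return first_step if tile == goal_tile
            tile.neighbors.each do |neighbor|
              next if visited[neighbor]
              visited[neighbor] = true
              queue << [neighbor, first_step || neighbor]
            end
          end
          nil
        end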
  • 25:19 - 25:21
    This works fine for one ghost. But if all
  • 25:21 - 25:24
    four ghosts use this algorithm, then they're
    gonna wind
  • 25:24 - 25:25
    up chasing after the player in a tight little
  • 25:25 - 25:30
    bunch instead of fanning out. OK. So each
    ghost
  • 25:30 - 25:32
    computes all possible paths to the Pac-Man
    and rejects
  • 25:32 - 25:35
    any path that goes through another ghost.
    That shouldn't
  • 25:35 - 25:38
    be too hard, right?
  • 25:38 - 25:42
    I don't have a statistically valid sample,
    but my
  • 25:42 - 25:44
    guess is that when asked to design an AI
  • 25:44 - 25:46
    for the ghosts, most programmers would go
    through a
  • 25:46 - 25:47
    thought process more or less like what I just
  • 25:47 - 25:52
    walked through. So, how is this solution biased?
  • 25:52 - 25:54
    I don't have a good name for
  • 25:54 - 25:56
    how this is biased, so the best way I
  • 25:56 - 25:59
    have to communicate this idea is to walk you
  • 25:59 - 26:01
    through a very different solution.
  • 26:01 - 26:05
    In 2006, I attended OOPSLA, which is a conference
  • 26:05 - 26:08
    put on by the ACM, as a student volunteer,
  • 26:08 - 26:10
    and I happened to sit in on a presentation
  • 26:10 - 26:14
    by Alexander Repenning from the University
    of Colorado. And
  • 26:14 - 26:18
    in his presentation, he walked through the
    Pac-Man problem,
  • 26:18 - 26:20
    more or less the way I just did, and
  • 26:20 - 26:21
    then he presented this idea.
  • 26:21 - 26:24
    What you do is you give the Pac-Man a
  • 26:24 - 26:26
    smell, and then you model the diffusion of
    that
  • 26:26 - 26:31
    smell throughout the environment. In the real
    world, smells
  • 26:31 - 26:34
    travel through the air. We certainly don't
    need to
  • 26:34 - 26:36
    model each individual air molecule. What we
    can do,
  • 26:36 - 26:39
    instead, is just divide the environment up
    into reasonably
  • 26:39 - 26:42
    sized logical chunks, and we model those.
  • 26:42 - 26:45
    Coincidentally, we already do have an object
    that does
  • 26:45 - 26:47
    exactly that for us. It's the tiles of the
  • 26:47 - 26:49
    maze itself. They're not really doing anything
    else, so
  • 26:49 - 26:52
    we can borrow those as a convenient container
    for
  • 26:52 - 26:56
    this computation. We program the game as follows.
  • 26:56 - 26:59
    We say that the Pac-Man gives whatever floor
    tile
  • 26:59 - 27:02
    it's standing on a Pac-Man smell value, say
    a
  • 27:02 - 27:06
    thousand. The number doesn't really matter.
    And that
  • 27:06 - 27:08
    tile then passes a smaller value off to each
  • 27:08 - 27:10
    of its neighbors, and they pass a smaller
    value
  • 27:10 - 27:13
    off to each of their neighbors and so on.
  • 27:13 - 27:14
    Iterate this a few times and you get a
  • 27:14 - 27:17
    diffusion contour that we can visualize as
    a hill
  • 27:17 - 27:19
    with its peak centered on the Pac-Man.
  • 27:19 - 27:21
    It's a little hard to see here. The Pac-Man
  • 27:21 - 27:23
    is at the bottom of that big yellow bar
  • 27:23 - 27:30
    on the left. So we've got the Pac-Man. We've
  • 27:34 - 27:37
    got the floor tiles. But in order to make
  • 27:37 - 27:39
    it a maze, we also have to have some
  • 27:39 - 27:41
    walls. What we do is we give the walls
  • 27:41 - 27:44
    a Pac-Man smell value of zero. That chops
    the
  • 27:44 - 27:51
    hill up a bit.
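
    A minimal sketch of one diffusion pass, assuming tiles with a smell
    accessor, a wall? flag, and a list of neighbors; the decay factor and
    the repeated passes are one simple way to realize the idea, not
    Professor Repenning's exact equations:

        DECAY = 0.5  # any fraction between 0 and 1 gives a downhill slope

        # Walls stay at zero, which is what chops the hill up; every other
        # tile takes a fraction of its smelliest neighbor. Run a few passes
        # per frame and the values settle into a hill whose peak sits on
        # the Pac-Man.
        def diffuse(tiles, pacman_tile)
          pacman_tile.smell = 1000.0  # the peak; the exact number doesn't matter
          tiles.each do |tile|
            next if tile.wall? || tile == pacman_tile
            tile.smell = DECAY * tile.neighbors.map(&:smell).max
          end
        end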
  • 27:56 - 27:58
    And now all our ghost has to do is
  • 27:58 - 28:04
    climb the hill. We program the first ghost
    to
  • 28:04 - 28:07
    sample each of the floor tiles next to it,
  • 28:07 - 28:09
    pick the one with the biggest number, go that
  • 28:09 - 28:13
    way. It barely seems worthy of being called
    an
  • 28:13 - 28:17
    AI, does it? But check this out. When we
  • 28:17 - 28:18
    add more ghosts to the maze, we only have
  • 28:18 - 28:21
    to make one change to get them to cooperate.
  • 28:21 - 28:23
    And interestingly, we don't change the ghosts'
    movement behaviors
  • 28:23 - 28:26
    at all. Instead, we have the ghosts tell the
  • 28:26 - 28:29
    floor tile that they're, I guess, floating
    above, that
  • 28:29 - 28:33
    its Pac-Man smell value is zero. This changes
    the
  • 28:33 - 28:36
    shape of that diffusion contour. Instead of
    a smooth
  • 28:36 - 28:38
    hill that always slopes down away from the
    Pac-Man,
  • 28:38 - 28:40
    there are now cliffs where the hill drops
    immediately
  • 28:40 - 28:41
    to zero.
  • 28:41 - 28:45
    In effect, we turn the ghosts into movable
    walls,
  • 28:45 - 28:47
    so that when one ghost cuts off another one,
  • 28:47 - 28:50
    the second one will automatically choose a
    different route.
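
    On the same assumptions, the entire ghost "AI", including the one
    change that turns ghosts into movable walls, might look like this
    (current_tile and step_onto are hypothetical helpers):

        class Ghost
          def move!
            # Zeroing the tile underneath us is the whole cooperation
            # mechanism: it makes this ghost a movable wall to the others.
            current_tile.smell = 0.0

            # Climb the hill: step onto the smelliest neighboring floor tile.
            target = current_tile.neighbors.reject(&:wall?).max_by(&:smell)
            step_onto(target) if target && target.smell > 0
          end
        end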
  • 28:50 - 28:53
    This lets the ghosts cooperate without needing
    to be
  • 28:53 - 28:57
    aware of each other. And halfway through this
    conference
  • 28:57 - 28:58
    session that I was sitting in where I saw
  • 28:58 - 29:01
    this, I was like.
  • 29:01 - 29:05
    What just happened?
  • 29:05 - 29:07
    At first, like, my first level of surprise
    was
  • 29:07 - 29:10
    just, what an interesting approach. But then
    I got
  • 29:10 - 29:13
    really, completely stunned when I thought
    about how surprising
  • 29:13 - 29:17
    that solution was. And I hope that looking
    at
  • 29:17 - 29:20
    the second solution helps you understand the
    bias in
  • 29:20 - 29:22
    the first solution.
  • 29:22 - 29:25
    In his paper, Professor Repenning wrote, The
    challenge to
  • 29:25 - 29:28
    find this solution is a psychological, not
    a technical
  • 29:28 - 29:31
    one. Our first instinct, when we're presented
    with this
  • 29:31 - 29:34
    problem, is to imagine ourselves as the ghost.
    This
  • 29:34 - 29:37
    is the body syntonic reasoning that's built
    into Logo,
  • 29:37 - 29:39
    and in this case it's a trap.
  • 29:39 - 29:41
    Because it leads us to solve the pursuit problem
  • 29:41 - 29:45
    by making the pursuer smarter. Once we start
    down
  • 29:45 - 29:48
    that road, it's very unlikely that we're going
    to
  • 29:48 - 29:52
    consider a radically different approach, even,
    or, perhaps, especially
  • 29:52 - 29:56
    if it's a very much simpler one. In other
  • 29:56 - 29:59
    words, body syntonicity biases us towards
    modeling objects in
  • 29:59 - 30:04
    the foreground, rather than objects in the
    background.
  • 30:04 - 30:07
    Oops, sorry.
  • 30:07 - 30:11
    OK. Does this mean that you shouldn't use
    body
  • 30:11 - 30:13
    syntonic reasoning? Of course not. It's a
    tool. It's
  • 30:13 - 30:15
    right for some jobs. It's not right for others.
  • 30:15 - 30:17
    I want to take a look at one more
  • 30:17 - 30:21
    technique from part one. What's the bias in
    Please
  • 30:21 - 30:24
    Mr. Gear, what is your ratio? Aside from the
  • 30:24 - 30:28
    gendered language, which is trivially easy
    to address, this
  • 30:28 - 30:31
    technique is explicitly designed to give you
    an opportunity
  • 30:31 - 30:34
    to discover new objects in your model. But
    it
  • 30:34 - 30:36
    only works after you've given at least one
    of
  • 30:36 - 30:38
    those objects a name.
  • 30:38 - 30:42
    Names have gravity. Metaphors can be tar pits.
    It's
  • 30:42 - 30:45
    very likely that the new objects that you
    discover
  • 30:45 - 30:47
    are going to be fairly closely related to
    the
  • 30:47 - 30:51
    ones that you already have. Another way to
    help
  • 30:51 - 30:53
    see this is to think about how many steps
  • 30:53 - 30:57
    it takes to get from Please, Ms. Pac-Man,
    what
  • 30:57 - 31:00
    is your current position in the maze? to
    Please,
  • 31:00 - 31:02
    Ms. Floor Tile, how much do you smell like
  • 31:02 - 31:05
    Pac-Man?
  • 31:05 - 31:06
    For a lot of people, the answer to that
  • 31:06 - 31:10
    question is probably infinity. It certainly
    was for me.
  • 31:10 - 31:11
    My guess is that you don't come up with
  • 31:11 - 31:14
    this technique unless you've already done
    some work modeling
  • 31:14 - 31:19
    diffusion in another context. Which, incidentally,
    is why I
  • 31:19 - 31:22
    like to work on diverse teams. The more different
  • 31:22 - 31:25
    backgrounds and perspectives we have access
    to, the more
  • 31:25 - 31:28
    chances we have to find a novel application
    of
  • 31:28 - 31:32
    some seemingly unrelated technique, because
    somebody's worked with it
  • 31:32 - 31:33
    before.
  • 31:33 - 31:38
    It can be exhilarating and very empowering
    to find
  • 31:38 - 31:40
    these techniques that let us take shortcuts
    in our
  • 31:40 - 31:44
    work by leveraging these specialized structures
    in our brains.
  • 31:44 - 31:47
    But those structures themselves take shortcuts,
    and if you're
  • 31:47 - 31:49
    not careful, they can lead you down a primrose
  • 31:49 - 31:49
    path.
  • 31:49 - 31:52
    I want to go back to that quote that
  • 31:52 - 31:53
    got me thinking about all this in the first
  • 31:53 - 31:56
    place. About how we don't ever teach anybody
    how
  • 31:56 - 31:59
    to do that or even that they should.
  • 31:59 - 32:01
    Ultimately, I think we should use techniques
    like this,
  • 32:01 - 32:03
    despite the biases in them. I think we
  • 32:03 - 32:07
    should share them. And I think, to paraphrase
    Glenn,
  • 32:07 - 32:08
    we should teach people that this is a thing
  • 32:08 - 32:12
    that you can and should do. And, I think
  • 32:12 - 32:14
    that we should teach people that looking critically
    at
  • 32:14 - 32:17
    the answers that these techniques give you
    is also
  • 32:17 - 32:19
    a thing that you can and should do.
  • 32:19 - 32:21
    We might not always be able to come up
  • 32:21 - 32:24
    with a radically simpler or different approach,
    but the
  • 32:24 - 32:27
    least we can do is give ourselves the opportunity
  • 32:27 - 32:33
    to do so, by asking how is this biased?
  • 32:33 - 32:35
    I want to say thank you, real quickly, to
  • 32:35 - 32:36
    everybody who helped me with this talk, or
    the
  • 32:36 - 32:39
    ideas in it. And also thank you to LivingSocial
  • 32:39 - 32:42
    for paying for my trip. And also for bringing
  • 32:42 - 32:46
    these wonderful brains. So, they're gonna
    start tearing this
  • 32:46 - 32:49
    stage down in a few minutes. Rather than take
  • 32:49 - 32:50
    Q and A up here, I'm gonna pack up
  • 32:50 - 32:52
    all my stuff and then I'm gonna migrate over
  • 32:52 - 32:55
    there, and you can come and bug me, pick
  • 32:55 - 32:56
    up a brain, whatever.
  • 32:56 - 32:57
    Thank you.