
Reading minds | Marvin Chun | TEDxYale

  • 0:16 - 0:17
    Good afternoon.
  • 0:18 - 0:19
    My name is Marvin Chun,
  • 0:19 - 0:21
    and I am a researcher,
  • 0:21 - 0:24
    and I teach psychology at Yale.
  • 0:25 - 0:26
    I'm very proud of what I do,
  • 0:26 - 0:30
    and I really believe in the value
    of psychological science.
  • 0:31 - 0:34
    However, to start with a footnote,
  • 0:34 - 0:37
    I will admit that I feel
    self-conscious sometimes
  • 0:37 - 0:39
    telling people that I'm a psychologist,
  • 0:39 - 0:41
    especially when you're meeting
    a stranger on the airplane
  • 0:41 - 0:44
    or at a cocktail party.
  • 0:44 - 0:45
    And the reason for this
  • 0:45 - 0:48
    is because there's
    a very typical question that you get
  • 0:48 - 0:51
    when you tell people
    that you're a psychologist.
  • 0:51 - 0:53
    What do you think that question might be?
  • 0:53 - 0:55
    (Audience) What am I thinking?
  • 0:55 - 0:56
    "What am I thinking?" exactly.
  • 0:56 - 0:57
    "Can you read my mind?"
  • 0:57 - 1:01
    is a question that a lot
    of psychologists cringe at
  • 1:01 - 1:02
    whenever they hear it
  • 1:02 - 1:05
    because it really reflects
    a misconception about -
  • 1:05 - 1:09
    a kind of a Freudian misconception
    about what we do for a living.
  • 1:09 - 1:12
    So you know, I've developed
    a lot of answers over the years:
  • 1:12 - 1:13
    "If I could read your mind,
  • 1:13 - 1:15
    I'd be a whole lot richer
    than I am right now."
  • 1:17 - 1:19
    The other answer I came up with is,
  • 1:19 - 1:21
    "Oh, I'm actually a neuroscientist,"
  • 1:21 - 1:23
    and then that really makes people quiet.
  • 1:23 - 1:24
    (Laughter)
  • 1:25 - 1:26
    But the best answer of course is,
  • 1:26 - 1:28
    "Yeah, of course I can read your mind,
  • 1:28 - 1:30
    and you really should be
    ashamed of yourself."
  • 1:30 - 1:32
    (Laughter)
  • 1:36 - 1:40
    But all that is motivation
    for what I'll share with you today,
  • 1:40 - 1:44
    which is I've been a psychologist
    for about 20 years now,
  • 1:44 - 1:49
    and we've actually reached this point
    where I can actually answer that question
  • 1:49 - 1:53
    by saying, "Yes,
    if I can put you in a scanner,
  • 1:53 - 1:55
    then we can read your mind."
  • 1:55 - 1:59
    And what I'll share with you today
    is to what extent, how far we are
  • 1:59 - 2:03
    in being able to read
    other people's minds.
  • 2:04 - 2:06
    Of course this is the stuff
    of science fiction.
  • 2:06 - 2:09
    Even a movie as recent as "Divergent"
  • 2:09 - 2:10
    has these many sequences
  • 2:10 - 2:13
    in which they're reading out
    the brain activity
  • 2:13 - 2:14
    to see what she's dreaming
  • 2:14 - 2:17
    under the influence of drugs.
  • 2:18 - 2:21
    And you know, again,
    these technologies do exist.
  • 2:21 - 2:24
    In the modern day, in the real world,
    they look more like this.
  • 2:24 - 2:27
    These are just standard
    MRI machines and scanners
  • 2:27 - 2:29
    that are slightly modified
  • 2:29 - 2:34
    so that they can allow you
    to infer and read out brain activity.
  • 2:36 - 2:39
    These methods have been around
    for about 25 years,
  • 2:39 - 2:42
    and in the first decade -
    I'd say the 1990s -
  • 2:42 - 2:44
    a lot of effort was devoted
  • 2:44 - 2:48
    to mapping out what the different
    parts of the brain do.
  • 2:48 - 2:53
    One of the most successful forms
    of this is exemplified here.
  • 2:53 - 2:56
    Imagine you're a subject
    lying down in that scanner;
  • 2:56 - 2:57
    there's a computer projector
  • 2:57 - 2:59
    that allows you
    to look at a series of images
  • 2:59 - 3:02
    while your brain is being scanned.
  • 3:02 - 3:04
    For some periods of time,
  • 3:04 - 3:07
    you are going to be seeing
    sequences of scene images
  • 3:08 - 3:11
    alternating with other
    sequences of face images,
  • 3:11 - 3:15
    and they'll just go back and forth
    while your brain is being scanned.
  • 3:15 - 3:17
    And the key motivation here
    for the researchers
  • 3:17 - 3:20
    is to compare what's
    happening in the brain
  • 3:20 - 3:23
    when you're looking at scenes
    versus when you're looking at faces
  • 3:23 - 3:27
    to see if anything different happens
    for these two categories of stimuli.
  • 3:27 - 3:30
    And I'd say even within 10 minutes
    of scanning your brain,
  • 3:30 - 3:34
    you can get a very reliable pattern
    of results that looks like this,
  • 3:34 - 3:40
    where there is a region of the brain
    that's specialized for processing scenes,
  • 3:40 - 3:42
    shown here in the warm colors,
  • 3:42 - 3:45
    and there is also
    a separate region of the brain,
  • 3:45 - 3:47
    shown here in blue colors,
  • 3:47 - 3:51
    that is more selectively active
    when people are viewing faces.
  • 3:51 - 3:54
    There's this dissociation here.
  • 3:54 - 3:57
    And this allows you to see
  • 3:57 - 3:59
    where different functions
    reside in the brain.
  • 3:59 - 4:02
    And in the case
    of scene and face perception,
  • 4:02 - 4:06
    these two forms of vision
    are so important in our everyday lives -
  • 4:06 - 4:08
    of course scene processing
    important for navigation,
  • 4:08 - 4:12
    face processing important
    for social interactions -
  • 4:12 - 4:15
    that our brain has
    specialized areas devoted to them.
  • 4:16 - 4:18
    And this alone, I think,
    is pretty amazing.
  • 4:19 - 4:22
    A lot of this work
    started coming out in the mid-1990s,
  • 4:22 - 4:25
    and it really complemented and reinforced
  • 4:25 - 4:28
    a lot of the patient work
    that's been around for dozens of years.
  • 4:29 - 4:33
    But really, the methodologies of fMRI
  • 4:33 - 4:37
    have really gone far beyond
    these very basic mapping studies.
  • 4:37 - 4:40
    You know, in fact,
    when you look at a study like this,
  • 4:40 - 4:41
    you may say, "Oh, that's really cool,
  • 4:41 - 4:44
    but what does this have to do
    with everyday life?
  • 4:44 - 4:49
    How is this going to help me understand
    why I feel this way or don't feel this way
  • 4:49 - 4:52
    or why I like a certain person
    or don't like certain other people?
  • 4:52 - 4:55
    Will this tell me what kind
    of career I should take?"
  • 4:55 - 4:57
    You know, you can't really get too far
  • 4:57 - 5:00
    with this scene and face
    area mapping alone.
  • 5:01 - 5:05
    So it wasn't until this study
    came out around 2000 -
  • 5:05 - 5:07
    And most of these studies,
    I should emphasize,
  • 5:07 - 5:09
    are from other labs around the field.
  • 5:09 - 5:10
    Any studies from Yale,
  • 5:10 - 5:12
    I'll have the mark of "Yale"
    on the corner.
  • 5:12 - 5:18
    But this particular study was done at MIT,
    by Nancy Kanwisher and colleagues.
  • 5:18 - 5:21
    They actually had subjects
    look at nothing.
  • 5:21 - 5:25
    They had subjects
    close their eyes in the scanner
  • 5:26 - 5:28
    while they gave them instructions,
  • 5:28 - 5:32
    and the instructions
    were one of two types of tasks.
  • 5:32 - 5:36
    One was to imagine
    a bunch of faces that they knew.
  • 5:36 - 5:39
    So you might be able to imagine
    your friends and family,
  • 5:39 - 5:41
    cycle through them one at a time
  • 5:41 - 5:43
    while your brain is being scanned.
  • 5:43 - 5:45
    That would be the face
    imagery instruction.
  • 5:45 - 5:48
    And then the other instruction
    that they gave the subjects
  • 5:48 - 5:50
    was of course a scene imagery instruction,
  • 5:50 - 5:54
    so imagine after this talk,
    you walk home or take your car home,
  • 5:54 - 5:56
    get into your home,
    navigate throughout your house,
  • 5:56 - 5:58
    change into something more comfortable,
  • 5:58 - 6:00
    go to the kitchen, get something to eat.
  • 6:00 - 6:04
    You have this ability to visualize
    places and scenes in your mind,
  • 6:04 - 6:08
    and the question is, "What happens
    when you're visualizing scenes,
  • 6:08 - 6:11
    and what happens
    when you're visualizing faces?
  • 6:11 - 6:13
    What are the neural correlates of that?"
  • 6:13 - 6:18
    And the hypothesis was, well, maybe
    imagining faces activates the face area
  • 6:18 - 6:21
    and maybe imagining scenes
    would activate the scene area,
  • 6:21 - 6:24
    and that's indeed what they found.
  • 6:25 - 6:27
    First, you bring in a bunch
    of subjects into the scanner,
  • 6:27 - 6:29
    show them faces, show them scenes,
  • 6:29 - 6:31
    you map out the face area on top,
  • 6:31 - 6:34
    you map out the place area
    in the second row.
  • 6:34 - 6:36
    And then the same subjects,
  • 6:37 - 6:39
    at a different time in the scanner,
  • 6:39 - 6:42
    have them close their eyes,
    do the instructions I just had you do,
  • 6:42 - 6:43
    and you compare.
  • 6:43 - 6:45
    When you have face imagery,
  • 6:45 - 6:48
    you can see in the second row
    that the face area is active,
  • 6:48 - 6:50
    even though nothing's on the screen.
  • 6:50 - 6:54
    And in the fourth row here,
    the second row of the bottom pair,
  • 6:54 - 6:57
    you see that the place area is active
    when you're imagining scenes;
  • 6:57 - 6:59
    nothing's on the screen,
  • 6:59 - 7:03
    but simply when you're imagining scenes,
    it will activate the place area.
  • 7:03 - 7:07
    In fact, even just
    by looking at fMRI data alone
  • 7:07 - 7:10
    and looking at the relative activity
    for the face area and place area,
  • 7:10 - 7:12
    you can guess over 80% of the time,
  • 7:12 - 7:14
    even in this first study,
  • 7:14 - 7:16
    what subjects were imagining.
  • 7:16 - 7:17
    Okay?
  • 7:17 - 7:20
    And so, in this sense, in my mind,
  • 7:20 - 7:23
    I believe that this is the first study
    that started using brain imaging
  • 7:23 - 7:26
    to actually read out
    what people were thinking.
  • 7:26 - 7:29
    You can take a naïve person,
    look at this graph, you know,
  • 7:29 - 7:32
    and ask which bar is higher,
    which line is higher,
  • 7:32 - 7:36
    and they can guess better than chance,
    way better than chance,
  • 7:36 - 7:38
    what people were thinking.
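
To make the readout just described concrete: the decision rule amounts to comparing the average signal in the face-selective region with the average signal in the place-selective region and guessing whichever category "wins." Here is a minimal sketch in Python, with made-up placeholder numbers rather than data from the study:

import numpy as np

def guess_imagery(face_roi, place_roi):
    # Guess "face" when the face-selective region responds more strongly
    # than the place-selective region during an imagery block; else "scene".
    return "face" if np.mean(face_roi) > np.mean(place_roi) else "scene"

# Hypothetical percent-signal-change values for one imagery block.
face_roi_activity = np.array([0.9, 1.1, 1.0, 0.8])
place_roi_activity = np.array([0.2, 0.3, 0.1, 0.4])
print(guess_imagery(face_roi_activity, place_roi_activity))  # -> "face"
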
  • 7:39 - 7:41
    Now, you may say, "Okay,
    that's really cool,
  • 7:41 - 7:45
    but there're a whole lot more
    interesting things to do with a scanner
  • 7:45 - 7:47
    or with what we want to know
  • 7:47 - 7:50
    than whether people
    are thinking of faces or scenes."
  • 7:50 - 7:51
    And so the rest of my talk
  • 7:51 - 7:55
    will be devoted to the next 15 years
    of work since this study
  • 7:55 - 7:59
    that have really advanced
    our ability to read out minds.
  • 8:00 - 8:02
    This is one study
    that was published in 2010
  • 8:02 - 8:05
    in the New England Journal of Medicine
  • 8:05 - 8:06
    by another group.
  • 8:06 - 8:11
    They were actually studying patients
    who were in a persistent vegetative state,
  • 8:11 - 8:14
    who were locked in,
    had no ability to express themselves;
  • 8:14 - 8:17
    it was not even clear if they were
    capable of voluntary thought,
  • 8:17 - 8:22
    and yet because you can instruct people
    to imagine one thing versus another -
  • 8:22 - 8:26
    in this case, imagine walking
    through your house, navigation,
  • 8:26 - 8:28
    in the other case, imagine playing tennis
  • 8:28 - 8:30
    because that activates
    a whole motor circuitry
  • 8:30 - 8:34
    that's separate from the spatial
    navigation circuitry -
  • 8:34 - 8:37
    you attach each of those imagery
    instructions to "yes" or "no,"
  • 8:37 - 8:40
    and then you can ask them
    factual questions,
  • 8:40 - 8:44
    such as "Is your father's name Alexander,
    or is your father's name Thomas?"
  • 8:44 - 8:47
    And they were able to demonstrate
    that in this patient,
  • 8:47 - 8:50
    who otherwise had no ability
    to communicate,
  • 8:50 - 8:55
    he was able to respond
    to these factual questions accurately,
  • 8:55 - 8:59
    and even in control subjects
    who do not have these problems.
  • 9:00 - 9:01
    This method is so reliable
  • 9:01 - 9:06
    that over 95%
    of their responses were decodable
  • 9:06 - 9:09
    just through brain imaging alone -
    "yes" versus "no."
  • 9:09 - 9:11
    So you can imagine,
    with a game of 20 questions,
  • 9:11 - 9:15
    you can really get insight
    into what people are thinking
  • 9:15 - 9:17
    using these methodologies.
  • 9:18 - 9:19
    But again, it's limited;
  • 9:19 - 9:21
    we only have two states here -
  • 9:21 - 9:22
    "yes," "no,"
  • 9:22 - 9:25
    think tennis or think
    walking around your house.
  • 9:26 - 9:30
    But fortunately, there're tremendously
    brilliant scientists all around the world
  • 9:30 - 9:36
    who are constantly refining the methods
    to decode fMRI activity
  • 9:36 - 9:39
    so that we can understand the mind.
  • 9:40 - 9:44
    And I'd like to use this slide
    from Nature magazine
  • 9:44 - 9:47
    to motivate the next few studies
    I'll share with you.
  • 9:48 - 9:53
    One huge limitation of brain imaging,
    at least in the first 15 years,
  • 9:53 - 9:57
    is that there are not
    many specialized areas in the brain.
  • 9:57 - 9:59
    The face area is one.
    Place area is another.
  • 9:59 - 10:04
    These are very, very important visual
    functions that we carry out every day,
  • 10:04 - 10:08
    but there is no "shoe" area in the brain,
    there is no "cat" area in the brain.
  • 10:08 - 10:12
    You don't see separate blobs
    or patches of activity
  • 10:12 - 10:14
    for these two different categories,
  • 10:14 - 10:17
    and indeed, for most categories
    that we encounter
  • 10:17 - 10:19
    and are able to discriminate,
  • 10:19 - 10:21
    they don't have separate brain regions.
  • 10:21 - 10:23
    And because there are
    no separate brain regions,
  • 10:23 - 10:24
    the question then is,
  • 10:24 - 10:27
    "How do we study them,
    and how do we discriminate them,
  • 10:27 - 10:31
    how do we get at these more fine
    detailed differences that matter so much
  • 10:31 - 10:33
    if we really want to understand
  • 10:33 - 10:36
    how the mind works
    and how we can read it out
  • 10:36 - 10:38
    using fMRI?
  • 10:38 - 10:44
    Fortunately, some neuroscientists
    collaborated with computer vision people
  • 10:44 - 10:49
    and with electrical engineers
    and computer scientists and statisticians
  • 10:49 - 10:52
    to use very refined mathematical methods
  • 10:52 - 10:56
    for pulling out and decoding
    more subtle patterns of activity
  • 10:56 - 10:59
    that are unique to shoe processing
  • 10:59 - 11:03
    and that are unique
    to the processing of cat stimuli.
  • 11:03 - 11:06
    So, for instance,
    the shoes and the cats -
  • 11:06 - 11:08
    you show a whole bunch
    of them to subjects -
  • 11:08 - 11:11
    will activate the same part of cortex,
    the same part of the brain,
  • 11:11 - 11:13
    so that if you average
    the activity in that part,
  • 11:13 - 11:17
    you won't see any differences
    between the shoes and the cats.
  • 11:17 - 11:19
    But as you can see here on the right,
  • 11:19 - 11:23
    the shoes are still associated
    with different fine patterns,
  • 11:23 - 11:25
    or micropatterns of activity
  • 11:25 - 11:29
    that are different from the patterns
    of activity elicited by cats.
  • 11:29 - 11:34
    These little boxes over here,
    these cells are referred to as voxels;
  • 11:34 - 11:36
    the way fMRI collects data from the brain
  • 11:36 - 11:39
    is it kind of chops up your brain
    into tiny little cubes,
  • 11:39 - 11:41
    and each cube is a unit of measurement -
  • 11:41 - 11:45
    it gives you numerical data
    that you can then analyze.
  • 11:45 - 11:48
    And if you imagine
    that the brightness of these squares
  • 11:48 - 11:50
    corresponds to a different
    level of activity,
  • 11:50 - 11:53
    quantitatively measurable
    different level of activity,
  • 11:53 - 11:55
    and you can imagine over a spatial region
  • 11:55 - 12:00
    that the pattern of these activities
    may be different for shoes versus cats,
  • 12:00 - 12:04
    then all you need to do
    is to work with a computer algorithm
  • 12:04 - 12:06
    to learn these subtle
    differences in patterns,
  • 12:06 - 12:10
    so we have what we might call a "decoder."
  • 12:10 - 12:12
    And you train up a computer decoder
  • 12:12 - 12:16
    to learn the differences
    between shoe patterns and cat patterns,
  • 12:16 - 12:18
    and then what you have here
    is the testing phase:
  • 12:18 - 12:21
    after you train up
    your computer algorithm,
  • 12:21 - 12:23
    you can show a new shoe,
  • 12:23 - 12:26
    it will elicit a certain pattern
    of fMRI activity,
  • 12:26 - 12:28
    as you can see over here,
  • 12:28 - 12:30
    and the computer
    will tell you its best guess
  • 12:30 - 12:33
    as to whether this pattern over here
  • 12:33 - 12:37
    is closer to a shoe
    or closer to that for a cat.
  • 12:37 - 12:38
    And it will allow you
  • 12:38 - 12:40
    to guess something
    that's a lot more refined
  • 12:40 - 12:42
    and that is not possible
  • 12:42 - 12:46
    when you don't have areas
    like the face area or the place area.
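
For readers who want the train-then-test scheme spelled out, here is a minimal sketch assuming simulated voxel patterns and an off-the-shelf linear classifier (scikit-learn's LogisticRegression stands in for the decoder); real studies differ in preprocessing, classifier choice, and cross-validation.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

# Each category gets its own subtle "micropattern" across voxels; individual
# trials are that signature plus noise, so the average activity is similar
# but the fine-grained pattern differs.
shoe_signature = rng.normal(0.0, 0.5, n_voxels)
cat_signature = rng.normal(0.0, 0.5, n_voxels)
shoe_trials = shoe_signature + rng.normal(0.0, 1.0, (n_trials, n_voxels))
cat_trials = cat_signature + rng.normal(0.0, 1.0, (n_trials, n_voxels))

# Training phase: learn the difference between shoe and cat patterns.
X_train = np.vstack([shoe_trials, cat_trials])
y_train = np.array(["shoe"] * n_trials + ["cat"] * n_trials)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Testing phase: a new, unseen pattern elicited by a novel shoe image;
# the decoder returns its best guess for the category.
new_pattern = (shoe_signature + rng.normal(0.0, 1.0, n_voxels)).reshape(1, -1)
print(decoder.predict(new_pattern))  # most likely ['shoe']
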
  • 12:48 - 12:52
    And now people are really taking off
    with these types of methodologies.
  • 12:52 - 12:57
    Those started coming out in 2005,
    so a little less than a decade ago.
  • 12:57 - 13:00
    This is a study published
    in the New England Journal of Medicine,
  • 13:00 - 13:03
    year 2013, literally last year,
  • 13:04 - 13:07
    a very important clinical
    application of studying
  • 13:07 - 13:10
    or trying to find a way
    to objectively measure pain,
  • 13:10 - 13:13
    which is a very subjective state, we know.
  • 13:14 - 13:15
    You can use fMRI,
  • 13:15 - 13:17
    and these researchers have developed ways
  • 13:17 - 13:21
    of mapping and predicting
    the amount of heat
  • 13:21 - 13:23
    and the amount of pain people experience
  • 13:23 - 13:28
    as you increase the temperature
    of a noxious stimulus on their skin,
  • 13:28 - 13:32
    and the fMRI decoder
    will kind of give you an estimate
  • 13:32 - 13:35
    of how much heat
    and maybe even how much pain
  • 13:35 - 13:38
    someone is experiencing,
  • 13:38 - 13:41
    to the extent that they can predict,
    just based on fMRI activity alone,
  • 13:41 - 13:45
    how much pain a person is experiencing
    over 93% of the time,
  • 13:45 - 13:48
    or with 93% accuracy.
  • 13:50 - 13:53
    This is an astonishing version
    that I'm sure many of you have seen,
  • 13:53 - 13:54
    published in 2011,
  • 13:54 - 13:58
    from Jack Gallant's group
    over at UC Berkeley.
  • 13:58 - 14:03
    They showed people a bunch
    of these videos from YouTube,
  • 14:03 - 14:04
    shown here on the left,
  • 14:04 - 14:08
    and then they trained up
    a classifier, a decoder,
  • 14:08 - 14:13
    to learn the mapping
    between the videos and fMRI activity;
  • 14:13 - 14:17
    and then when they showed
    people novel videos
  • 14:17 - 14:20
    that elicited certain patterns
    of fMRI activity,
  • 14:20 - 14:23
    they were able to use
    that fMRI activity alone
  • 14:23 - 14:26
    to guess what videos people were seeing.
  • 14:26 - 14:28
    And they pulled the guesses
    off of YouTube as well
  • 14:28 - 14:29
    and averaged them,
  • 14:29 - 14:31
    and that's why it's a little grainy,
  • 14:31 - 14:34
    but what you'll see here -
    and this is all over YouTube -
  • 14:34 - 14:36
    is what people saw
  • 14:36 - 14:39
    and, on the right, what the computer
    is guessing people saw
  • 14:39 - 14:41
    based on fMRI activity alone.
  • 15:13 - 15:16
    So it's pretty astonishing;
    this literally is reading out the mind.
  • 15:16 - 15:20
    I mean, you have complicated models,
    you have to train it and such,
  • 15:20 - 15:22
    but still, this is based
    on fMRI activity alone
  • 15:22 - 15:25
    that they're able to make
    these guesses over here.
  • 15:25 - 15:30
    I had a student at Yale University.
    His name is Alan Cowen.
  • 15:30 - 15:32
    He had a very strong
    mathematics background.
  • 15:32 - 15:35
    He came to my lab,
    and he said, "I really love this stuff."
  • 15:36 - 15:38
    He loved this stuff so much
    that he wanted to do it himself.
  • 15:38 - 15:43
    I actually did not have the ability
    to do this in the lab,
  • 15:43 - 15:46
    but he developed it
    together with my postdoc Brice Kuhl,
  • 15:46 - 15:50
    who's now an assistant professor
    at New York University.
  • 15:50 - 15:51
    They wanted to ask the question
  • 15:51 - 15:56
    of "Can we limit this type
    of analysis to faces?"
  • 15:57 - 15:58
    As you saw before,
  • 15:58 - 16:01
    you can kind of see what categories
    people were looking at,
  • 16:01 - 16:04
    but certainly the algorithm
    was not able to give you guesses
  • 16:04 - 16:06
    as to which person you were looking at
  • 16:06 - 16:09
    or what kind of animal
    you were looking at.
  • 16:09 - 16:11
    You're really getting
    more categorical guesses
  • 16:11 - 16:13
    in the prior example.
  • 16:13 - 16:15
    And so Alan and Brice,
  • 16:15 - 16:19
    they thought that, well, if we focused
    our algorithms on faces alone -
  • 16:19 - 16:23
    you know, the faces
    being so important for everyday life,
  • 16:23 - 16:24
    and given the fact
  • 16:24 - 16:27
    that there are specialized mechanisms
    in the brain for face processing -
  • 16:27 - 16:30
    maybe if we do something
    that's that specialized,
  • 16:30 - 16:34
    we might be able
    to guess individual faces.
  • 16:35 - 16:37
    As a summary, that's what they did.
  • 16:37 - 16:39
    They showed people a bunch of faces,
  • 16:39 - 16:41
    they trained up these algorithms,
  • 16:41 - 16:46
    and then using fMRI activity alone,
    they were able to generate good guesses,
  • 16:47 - 16:49
    above-chance guesses,
  • 16:49 - 16:52
    as to which faces
    an individual is looking at,
  • 16:52 - 16:54
    again, based on fMRI activity alone,
  • 16:54 - 16:56
    and that's what's shown here on the right.
  • 16:57 - 16:59
    Just to give you a little bit more detail
  • 16:59 - 17:01
    for those who kind of
    like that kind of stuff.
  • 17:01 - 17:03
    It's really relying
    on a lot of computer vision.
  • 17:03 - 17:05
    This is not voodoo magic.
  • 17:05 - 17:07
    There's a lot of computer vision
    that allows you
  • 17:07 - 17:13
    to mathematically summarize
    features of a whole array of faces.
  • 17:14 - 17:16
    People call them "eigenfaces."
  • 17:16 - 17:19
    It's based on principal
    component analysis.
  • 17:19 - 17:23
    So you can decompose these faces
    and describe a whole array of faces
  • 17:23 - 17:25
    with these mathematical algorithms.
  • 17:25 - 17:29
    And basically what
    Alan and Brice did in my lab
  • 17:29 - 17:35
    was to map those mathematical
    components to brain activity,
  • 17:35 - 17:39
    and then once you train up a database,
    a statistical library, for doing so,
  • 17:39 - 17:42
    then you can take a novel face
    that's shown up here on the right,
  • 17:43 - 17:46
    record the fMRI activity
    to those novel faces,
  • 17:46 - 17:50
    and then make your best guess
    as to which face people were looking at.
  • 17:50 - 17:53
    Right now, we are about 65, 70% correct,
  • 17:54 - 17:59
    and we and others are working very hard
    to improve that performance.
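
As a rough sketch of that pipeline, with simulated data throughout: principal components summarize the training faces, a regression maps brain patterns to component scores, and inverting the components turns predicted scores for a novel face back into an image. This illustrates the general approach only; the lab's actual preprocessing, regularization, and component counts differ.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_faces, n_pixels, n_voxels, n_components = 300, 32 * 32, 500, 40

# Training faces (flattened images) and their principal-component scores.
train_faces = rng.normal(0.0, 1.0, (n_faces, n_pixels))
pca = PCA(n_components=n_components).fit(train_faces)
face_scores = pca.transform(train_faces)

# Simulated fMRI patterns assumed to carry a noisy linear trace of the scores.
weights = rng.normal(0.0, 1.0, (n_components, n_voxels))
brain_patterns = face_scores @ weights + rng.normal(0.0, 5.0, (n_faces, n_voxels))

# Learn the mapping from brain activity back to face components.
decoder = Ridge(alpha=10.0).fit(brain_patterns, face_scores)

# A novel face: record its (simulated) brain response, predict its component
# scores, and invert the PCA to reconstruct a guessed image.
novel_scores = rng.normal(0.0, 1.0, (1, n_components))
novel_brain = novel_scores @ weights + rng.normal(0.0, 5.0, (1, n_voxels))
reconstruction = pca.inverse_transform(decoder.predict(novel_brain))
print(reconstruction.shape)  # (1, 1024): one reconstructed face image
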
  • 18:00 - 18:02
    And this is just another example:
  • 18:02 - 18:04
    a whole bunch of originals
    on the left here,
  • 18:04 - 18:07
    and then reconstructions
    and guesses on the right.
  • 18:08 - 18:11
    This study actually just came out,
    just got published this past week,
  • 18:11 - 18:13
    so I'm pretty excited
  • 18:13 - 18:16
    that you might even see
    some media coverage of the work,
  • 18:16 - 18:19
    and again, I really want to credit
    Alan Cowen and Brice Kuhl
  • 18:19 - 18:21
    for making this possible.
  • 18:22 - 18:23
    Now, you may wonder,
  • 18:23 - 18:27
    going back, kind of, to the more
    categorical reading out of brains -
  • 18:27 - 18:28
    That work came out in 2011.
  • 18:28 - 18:31
    I've been talking about it a lot
    in classes and stuff,
  • 18:31 - 18:33
    and a really typical
    question that you get -
  • 18:34 - 18:35
    and this question I actually like -
  • 18:35 - 18:38
    is, "Well, if you can read out
    what people are seeing,
  • 18:38 - 18:40
    can you read out what they're dreaming?"
  • 18:40 - 18:43
    because that's the natural next step.
  • 18:43 - 18:46
    And this is work done
    by Horikawa et al. in Japan,
  • 18:46 - 18:48
    published in Science last year.
  • 18:48 - 18:51
    They actually took a lot of the work
    that Jack Gallant did,
  • 18:51 - 18:54
    modified it so they can decode
    what people were dreaming
  • 18:54 - 18:57
    while they were sleeping
    inside the scanner.
  • 18:57 - 18:58
    And what we have here on the -
  • 18:58 - 18:59
    Ignore the words,
  • 18:59 - 19:03
    these are just ways to try to categorize
    what people were dreaming about.
  • 19:03 - 19:05
    Just focus on the top left.
  • 19:05 - 19:10
    That's the computer
    algorithm's guess, or decoder's guess
  • 19:10 - 19:12
    as to what people were dreaming about.
  • 19:12 - 19:14
    You can imagine
    it's going to be a lot messier
  • 19:14 - 19:16
    than what you saw before for perception,
  • 19:16 - 19:18
    but still, it's pretty amazing
  • 19:18 - 19:21
    that you can decode dreams now
    while people are sleeping,
  • 19:21 - 19:23
    you know, while they are
    in rapid eye movement sleep.
  • 19:23 - 19:26
    So what we have here -
    I'll turn it on, it's a video.
  • 19:26 - 19:29
    I will mention that,
    a little stereotypically,
  • 19:30 - 19:32
    this is a male subject,
  • 19:32 - 19:35
    and male subjects tend to think
    about one thing when they're dreaming,
  • 19:35 - 19:36
    (Laughter)
  • 19:36 - 19:40
    but it is rated PG,
    if you'll excuse me for that.
  • 19:43 - 19:45
    And again, you know, it's not easy.
  • 19:45 - 19:47
    It's, again, remarkable
    that they can do this,
  • 19:47 - 19:50
    and it gets better
    as it goes towards the end,
  • 19:50 - 19:54
    right before they awakened the subject
    to ask them what they were dreaming,
  • 19:54 - 19:56
    but you can start seeing
    buildings and places,
  • 19:56 - 20:00
    and here comes the dream,
    the real concrete stuff,
  • 20:00 - 20:01
    right there.
  • 20:01 - 20:02
    Okay.
  • 20:02 - 20:03
    (Laughter)
  • 20:03 - 20:07
    Then they wake up the subject and say,
    "What were you dreaming about?"
  • 20:07 - 20:10
    and they will report something
    that's consistent with the decoder,
  • 20:10 - 20:12
    as a way to assess its validity.
  • 20:15 - 20:17
    So, we can decode and read out the mind;
  • 20:17 - 20:20
    the field of neuroscience
    can do that.
  • 20:20 - 20:23
    And the question is, "What are we
    going to use this capability for?"
  • 20:24 - 20:26
    Some of you may have seen this movie,
    "Minority Report."
  • 20:26 - 20:30
    It's a fascinating, amazing movie.
    I encourage you to see it if you can.
  • 20:30 - 20:32
    This movie is all
    about predicting behavior,
  • 20:32 - 20:34
    you know, reading out brain activity
  • 20:34 - 20:37
    to try to predict what's going
    to happen in the future.
  • 20:37 - 20:42
    And neuroscientists are working
    on this aspect of cognition as well.
  • 20:42 - 20:46
    As one example, colleagues
    over at the University of New Mexico,
  • 20:46 - 20:48
    Kent Kiehl's group,
  • 20:49 - 20:53
    scanned a whole bunch of prisoners
    who committed felony crimes.
  • 20:53 - 20:55
    They were all released back into society,
  • 20:55 - 20:57
    and the dependent measure was:
  • 20:57 - 21:00
    "How likely was it
    for someone to be rearrested?
  • 21:00 - 21:02
    What was the likelihood of recidivism?"
  • 21:02 - 21:05
    And they found that based on brain scans
  • 21:05 - 21:08
    while people were
    doing tasks in the prison -
  • 21:08 - 21:11
    in the scanner that was
    located in the prison,
  • 21:11 - 21:13
    they could predict reliably
  • 21:13 - 21:16
    who was more likely
    to come back to prison,
  • 21:16 - 21:17
    be rearrested, re-commit a crime
  • 21:17 - 21:20
    versus who was less likely to do so.
  • 21:20 - 21:25
    In my own laboratory here at Yale,
    I worked with an undergraduate student.
  • 21:25 - 21:28
    This was, again,
    his own idea - Harrison Korn,
  • 21:28 - 21:30
    together with Michael Johnson.
  • 21:30 - 21:36
    They wanted to see if they could measure
    implicit or unconscious prejudice
  • 21:36 - 21:39
    while people were looking
    at these vignettes
  • 21:39 - 21:42
    which involve employment
    discrimination cases.
  • 21:42 - 21:44
    So you can read through that:
  • 21:44 - 21:46
    "Rodney, a 19 year-old African American,
  • 21:46 - 21:49
    applied for a job as a clerk
    at a teen-apparel store.
  • 21:49 - 21:51
    He had experience as a cashier,
  • 21:51 - 21:53
    a glowing recommendation
    from a previous employer,
  • 21:53 - 21:57
    but was denied the job because he did not
    match the company's look."
  • 21:57 - 21:58
    Okay?
  • 21:58 - 22:00
    And et cetera, et cetera, et cetera.
  • 22:00 - 22:01
    And the question is,
  • 22:01 - 22:05
    "In this hypothetical damage awards case,
  • 22:05 - 22:09
    how much would you
    as a subject award Rodney?"
  • 22:09 - 22:13
    And you can imagine a range of responses,
  • 22:13 - 22:16
    where you would award Rodney a lot
    or award Rodney a little,
  • 22:16 - 22:19
    and we had many other cases like this.
  • 22:19 - 22:20
    And the question is,
  • 22:20 - 22:23
    "Can we predict which subjects
    are going to award a lot of damages
  • 22:23 - 22:26
    and which subjects
    are going to award few damages?"
  • 22:27 - 22:32
    And Harrison found that if you scanned people
  • 22:32 - 22:37
    while they were looking
    at white faces versus black faces,
  • 22:37 - 22:40
    based on the brain response
    to those faces,
  • 22:40 - 22:41
    he could predict
  • 22:41 - 22:47
    the amount of awards that were given
    in this hypothetical case later on.
  • 22:47 - 22:51
    And that's shown here
    by these correlations on the left.
  • 22:51 - 22:54
    As a final thing I want to share
    from my lab right now,
  • 22:54 - 22:58
    we're really interested in this issue
    of "what does it mean to be in the zone?
  • 22:58 - 23:00
    what does it mean
    to be able to stay focused
  • 23:00 - 23:02
    for a sustained amount of time?"
  • 23:02 - 23:03
    which, you might imagine,
  • 23:03 - 23:06
    is critical for almost anything
    that's important to us,
  • 23:06 - 23:09
    whether that's athletic performance,
    musical performance,
  • 23:09 - 23:13
    theatrical performance,
    giving a lecture, taking an exam.
  • 23:13 - 23:17
    Almost all these domains
    require sustained focus and attention.
  • 23:17 - 23:18
    You all know that there are times
  • 23:18 - 23:21
    when you're really in the zone
    and can do well
  • 23:21 - 23:23
    and there are other times
    where even if you're fully prepared,
  • 23:23 - 23:24
    you don't do as well,
  • 23:24 - 23:27
    because you're not in the zone
    or you're kind of distracted.
  • 23:27 - 23:30
    So my lab is interested
    in how we can characterize this
  • 23:30 - 23:32
    and how we can predict that.
  • 23:32 - 23:35
    And as work that's not even published yet,
  • 23:35 - 23:38
    Emily Finn, a graduate student
    in the neuroscience program,
  • 23:38 - 23:41
    and Monica Rosenberg,
    a graduate student in psychology,
  • 23:41 - 23:43
    they're both working with me
  • 23:43 - 23:48
    and Todd Constable over at the Magnetic
    Resonance Research Center at Yale.
  • 23:49 - 23:53
    They find that it's not easy to predict
    what it means to be in the zone
  • 23:53 - 23:55
    and what it means to be out of the zone,
  • 23:55 - 23:57
    but if you start using measures
  • 23:57 - 24:02
    that also look at how different brain
    areas are connected with each other -
  • 24:02 - 24:04
    something that we call
    "functional connectivity" -
  • 24:04 - 24:07
    you can start getting at
    what it means to be in the zone
  • 24:07 - 24:09
    versus what it means
    to be out of the zone.
  • 24:09 - 24:10
    And what we have here
  • 24:10 - 24:15
    are two types of networks
    that Emily and Monica have identified.
  • 24:15 - 24:19
    There is a network of brain regions
    and the connectivity between them
  • 24:19 - 24:21
    that predicts good performance,
  • 24:21 - 24:23
    and there is another complementary network
  • 24:23 - 24:28
    that has different brain areas
    and different types of connectivity
  • 24:28 - 24:31
    that seems to be correlated
    with bad performance.
  • 24:31 - 24:34
    So if you have activity
    in the blue network over here,
  • 24:34 - 24:35
    people tend to do worse.
  • 24:35 - 24:38
    If you have activity in the red network,
    people tend to do better.
  • 24:39 - 24:41
    These networks are so robust
  • 24:41 - 24:45
    that you can measure these networks
    even when subjects are doing nothing.
  • 24:45 - 24:48
    We put them in the scanner
    before anything starts,
  • 24:48 - 24:51
    have them close their eyes
    and rest for 10 minutes -
  • 24:51 - 24:53
    you call that a "resting state scan" -
  • 24:53 - 24:57
    and based on their resting
    state activity alone,
  • 24:57 - 25:02
    we were able to predict how well
    they were going to do over the next hour.
  • 25:03 - 25:06
    Again, red network corresponds
    with better performance;
  • 25:06 - 25:09
    blue network corresponds
    with worse performance.
  • 25:09 - 25:11
    And just on a variety of tasks,
  • 25:11 - 25:15
    you have very high
    predictive power in these studies.
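
As a rough sketch of the idea, with everything simulated: correlate regional time courses from a resting scan to get a functional-connectivity matrix, then summarize the strength of a hypothesized "good" (red) network and "bad" (blue) network as a simple predictor of later performance. An actual analysis would select those networks from data and validate predictions on held-out subjects; the edge masks below are hypothetical.

import numpy as np

rng = np.random.default_rng(2)
n_regions, n_timepoints = 100, 600

# Simulated resting-state time courses, one column per brain region.
timecourses = rng.normal(0.0, 1.0, (n_timepoints, n_regions))
connectivity = np.corrcoef(timecourses, rowvar=False)  # region-by-region matrix

# Hypothetical masks marking which edges belong to the "good" (red) network
# and the "bad" (blue) network.
good_edges = rng.random((n_regions, n_regions)) < 0.02
bad_edges = rng.random((n_regions, n_regions)) < 0.02

# A simple summary score: stronger "good" connectivity relative to "bad"
# connectivity would predict better sustained attention in this toy setup.
attention_score = connectivity[good_edges].sum() - connectivity[bad_edges].sum()
print(attention_score)
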
  • 25:15 - 25:21
    The implications for studying ADHD
    and many other types of domains,
  • 25:21 - 25:22
    I think, are very large,
  • 25:22 - 25:24
    so hopefully, this will get
    published soon.
  • 25:24 - 25:27
    So in closing, I hope I've convinced you
  • 25:27 - 25:30
    that I am no longer embarrassed to say
    that we can read your mind,
  • 25:30 - 25:32
    if anyone asks me.
  • 25:32 - 25:35
    And I thank you for your attention.
  • 25:35 - 25:38
    (Applause)
Title:
Reading minds | Marvin Chun | TEDxYale
Description:

Can psychologists read dreams? Watch Marvin Chun's fascinating talk to find out more.

Marvin Chun is a cognitive neuroscientist with research interests in visual attention, memory, and perception. His lab employs neuroimaging (fMRI) and behavioral techniques to study how people perceive and remember visual information. His work in visual attention explores why people can consciously perceive only a small portion of all of the sensory information coming through the eyes. The lab’s research on memory investigates the neuronal correlates of memory encoding and retrieval. What are the fMRI signatures of memory traces in the brain? Much of his work on the interactions between memory and attention has centered on the role of context and associative learning. Finally, his work in perception examines the fundamental question of how the brain discriminates objects to make quick, efficient perceptual decisions.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx

Video Language:
English
Team:
closed TED
Project:
TEDxTalks
Duration:
25:42
