Reading minds | Marvin Chun | TEDxYale
-
0:16 - 0:17Good afternoon.
-
0:18 - 0:19My name is Marvin Chun,
-
0:19 - 0:21and I am a researcher,
-
0:21 - 0:24and I teach psychology at Yale.
-
0:25 - 0:26I'm very proud of what I do,
-
0:26 - 0:30and I really believe in the value
of psychological science. -
0:31 - 0:34However, to start with a footnote,
-
0:34 - 0:37I will admit that I feel
self-conscious sometimes -
0:37 - 0:39telling people that I'm a psychologist,
-
0:39 - 0:41especially when you're meeting
a stranger on the airplane -
0:41 - 0:44or at a cocktail party.
-
0:44 - 0:45And the reason for this
-
0:45 - 0:48is because there's
a very typical question that you get -
0:48 - 0:51when you tell people
that you're a psychologist. -
0:51 - 0:53What do you think that question might be?
-
0:53 - 0:55(Audience) What am I thinking?
-
0:55 - 0:56"What am I thinking?" exactly.
-
0:56 - 0:57"Can you read my mind?"
-
0:57 - 1:01is a question that a lot
of psychologists cringe at -
1:01 - 1:02whenever they hear it
-
1:02 - 1:05because it really reflects
a misconception about - -
1:05 - 1:09a kind of a Freudian misconception
about what we do for a living. -
1:09 - 1:12So you know, I've developed
a lot of answers over the years: -
1:12 - 1:13"If I could read your mind,
-
1:13 - 1:15I'd be a whole lot richer
than I am right now." -
1:17 - 1:19The other answer I came up with is,
-
1:19 - 1:21"Oh, I'm actually a neuroscientist,"
-
1:21 - 1:23and then that really makes people quiet.
-
1:23 - 1:24(Laughter)
-
1:25 - 1:26But the best answer of course is,
-
1:26 - 1:28"Yeah, of course I can read your mind,
-
1:28 - 1:30and you really should be
ashamed of yourself." -
1:30 - 1:32(Laughter)
-
1:36 - 1:40But all that is motivation
for what I'll share with you today, -
1:40 - 1:44which is I've been a psychologist
for about 20 years now, -
1:44 - 1:49and we've actually reached this point
where I can actually answer that question -
1:49 - 1:53by saying, "Yes,
if I can put you in a scanner, -
1:53 - 1:55then we can read your mind."
-
1:55 - 1:59And what I'll share with you today
is to what extent, how far we are -
1:59 - 2:03in being able to read
other people's minds. -
2:04 - 2:06Of course this is the stuff
of science fiction. -
2:06 - 2:09Even a movie as recent as "Divergent"
-
2:09 - 2:10has many sequences
-
2:10 - 2:13in which they're reading out
the brain activity -
2:13 - 2:14to see what she's dreaming
-
2:14 - 2:17under the influence of drugs.
-
2:18 - 2:21And you know, again,
these technologies do exist. -
2:21 - 2:24In the modern day, in the real world,
they look more like this. -
2:24 - 2:27These are just standard
MRI machines and scanners -
2:27 - 2:29that are slightly modified
-
2:29 - 2:34so that they can allow you
to infer and read out brain activity. -
2:36 - 2:39These methods have been around
for about 25 years, -
2:39 - 2:42and in the first decade -
I'd say the 1990s - -
2:42 - 2:44a lot of effort was devoted
-
2:44 - 2:48to mapping out what the different
parts of the brain do. -
2:48 - 2:53One of the most successful forms
of this is exemplified here. -
2:53 - 2:56Imagine you're a subject
lying down in that scanner; -
2:56 - 2:57there's a computer projector
-
2:57 - 2:59that allows you
to look at a series of images -
2:59 - 3:02while your brain is being scanned.
-
3:02 - 3:04For some periods of time,
-
3:04 - 3:07you are going to be seeing
sequences of scene images -
3:08 - 3:11alternating with other
sequences of face images, -
3:11 - 3:15and they'll just go back and forth
while your brain is being scanned. -
3:15 - 3:17And the key motivation here
for the researchers -
3:17 - 3:20is to compare what's
happening in the brain -
3:20 - 3:23when you're looking at scenes
versus when you're looking at faces -
3:23 - 3:27to see if anything different happens
for these two categories of stimuli. -
3:27 - 3:30And I'd say even within 10 minutes
of scanning your brain, -
3:30 - 3:34you can get a very reliable pattern
of results that looks like this, -
3:34 - 3:40where there is a region of the brain
that's specialized for processing scenes, -
3:40 - 3:42shown here in the warm colors,
-
3:42 - 3:45and there is also
a separate region of the brain, -
3:45 - 3:47shown here in blue colors,
-
3:47 - 3:51that is more selectively active
when people are viewing faces. -
3:51 - 3:54There's a dissociation here.
-
3:54 - 3:57And this allows you to see
-
3:57 - 3:59where different functions
reside in the brain. -
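The scene-versus-face mapping described above boils down to a per-voxel contrast between two block types. As a minimal sketch with synthetic data (a toy illustration, not a real fMRI pipeline; the voxel counts and effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels, n_blocks = 100, 20                 # 10 scene blocks + 10 face blocks
labels = np.array(["scene", "face"] * 10)    # alternating block order

# Simulated per-block responses: voxel 0 "prefers" scenes, voxel 1 "prefers" faces.
data = rng.normal(0.0, 1.0, size=(n_blocks, n_voxels))
data[labels == "scene", 0] += 3.0            # scene-selective voxel
data[labels == "face", 1] += 3.0             # face-selective voxel

def contrast(data, labels):
    """Per-voxel two-sample t statistic: scene blocks minus face blocks."""
    a, b = data[labels == "scene"], data[labels == "face"]
    se = np.sqrt(a.var(ddof=1, axis=0) / len(a) + b.var(ddof=1, axis=0) / len(b))
    return (a.mean(axis=0) - b.mean(axis=0)) / se

t = contrast(data, labels)
# Large positive t ("warm colors") marks scene-preferring voxels;
# large negative t ("blue colors") marks face-preferring voxels.
print(t[0], t[1])
```

Thresholding this statistic across all voxels is what produces the colored maps in the figure: the contrast isolates voxels that respond differently to the two categories.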
3:59 - 4:02And in the case
of scene and face perception, -
4:02 - 4:06these two forms of vision
are so important in our everyday lives - -
4:06 - 4:08of course scene processing
is important for navigation, -
4:08 - 4:12face processing important
for social interactions - -
4:12 - 4:15that our brain has
specialized areas devoted to them. -
4:16 - 4:18And this alone, I think,
is pretty amazing. -
4:19 - 4:22A lot of this work
started coming out in the mid-1990s, -
4:22 - 4:25and it really complemented and reinforced
-
4:25 - 4:28a lot of the patient work
that's been around for dozens of years. -
4:29 - 4:33But really, the methodologies of fMRI
-
4:33 - 4:37have really gone far beyond
these very basic mapping studies. -
4:37 - 4:40You know, in fact,
when you look at a study like this, -
4:40 - 4:41you may say, "Oh, that's really cool,
-
4:41 - 4:44but what does this have to do
with everyday life? -
4:44 - 4:49How is this going to help me understand
why I feel this way or don't feel this way -
4:49 - 4:52or why I like a certain person
or don't like certain other people? -
4:52 - 4:55Will this tell me what kind
of career I should take?" -
4:55 - 4:57You know, you can't really get too far
-
4:57 - 5:00with this scene and face
area mapping alone. -
5:01 - 5:05So it wasn't until this study
came up around 2000 - -
5:05 - 5:07And most of these studies,
I should emphasize, -
5:07 - 5:09are from other labs around the field.
-
5:09 - 5:10Any studies from Yale,
-
5:10 - 5:12I'll have the mark of "Yale"
on the corner. -
5:12 - 5:18But this particular study was done at MIT,
by Nancy Kanwisher and colleagues. -
5:18 - 5:21They actually had subjects
look at nothing. -
5:21 - 5:25They had subjects
close their eyes in the scanner -
5:26 - 5:28while they gave them instructions,
-
5:28 - 5:32and the instructions
were one of two types of tasks. -
5:32 - 5:36One was to imagine
a bunch of faces that they knew. -
5:36 - 5:39So you might be able to imagine
your friends and family, -
5:39 - 5:41cycle through them one at a time
-
5:41 - 5:43while your brain is being scanned.
-
5:43 - 5:45That would be the face
imagery instruction. -
5:45 - 5:48And then the other instruction
that they gave the subjects -
5:48 - 5:50was of course a scene imagery instruction,
-
5:50 - 5:54so imagine after this talk,
you walk home or take your car home, -
5:54 - 5:56get into your home,
navigate throughout your house, -
5:56 - 5:58change into something more comfortable,
-
5:58 - 6:00go to the kitchen, get something to eat.
-
6:00 - 6:04You have this ability to visualize
places and scenes in your mind, -
6:04 - 6:08and the question is, "What happens
when you're visualizing scenes, -
6:08 - 6:11and what happens
when you're visualizing faces? -
6:11 - 6:13What are the neural correlates of that?"
-
6:13 - 6:18And the hypothesis was, well, maybe
imagining faces activates the face area -
6:18 - 6:21and maybe imagining scenes
would activate the scene area, -
6:21 - 6:24and that's in fact indeed what they found.
-
6:25 - 6:27First, you bring in a bunch
of subjects into the scanner, -
6:27 - 6:29show them faces, show them scenes,
-
6:29 - 6:31you map out the face area on top,
-
6:31 - 6:34you map out the place area
in the second row. -
6:34 - 6:36And then the same subjects,
-
6:37 - 6:39at a different time in the scanner,
-
6:39 - 6:42have them close their eyes,
do the instructions I just had you do, -
6:42 - 6:43and you compare.
-
6:43 - 6:45When you have face imagery,
-
6:45 - 6:48you can see in the second row
that the face area is active, -
6:48 - 6:50even though nothing's on the screen.
-
6:50 - 6:54And in the fourth row here,
the second row of the bottom pair, -
6:54 - 6:57you see that the place area is active
when you're imagining scenes; -
6:57 - 6:59nothing's on the screen,
-
6:59 - 7:03but simply when you're imagining scenes,
it will activate the place area. -
7:03 - 7:07In fact, even just
by looking at fMRI data alone -
7:07 - 7:10and looking at the relative activity
for the face area and place area, -
7:10 - 7:12you can guess over 80% of the time,
-
7:12 - 7:14even in this first study,
-
7:14 - 7:16what subjects were imagining.
-
7:16 - 7:17Okay?
-
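The imagery read-out described here is essentially a one-line rule: with nothing on the screen, guess "face" if the face area is more active than the place area, and "scene" otherwise. A sketch with simulated trials (the ROI values and noise levels are illustrative assumptions, not real data):

```python
import numpy as np

rng = np.random.default_rng(1)

def decode_imagery(ffa, ppa):
    """Guess the imagery condition from relative ROI activity alone."""
    return "face" if ffa > ppa else "scene"

# Simulate 100 imagery trials: on face trials the face area (FFA) is nudged
# up, on scene trials the place area (PPA) is nudged up, both with noise.
truth = rng.choice(["face", "scene"], size=100)
ffa = rng.normal(0, 1, 100) + (truth == "face") * 1.0
ppa = rng.normal(0, 1, 100) + (truth == "scene") * 1.0

guesses = [decode_imagery(f, p) for f, p in zip(ffa, ppa)]
accuracy = np.mean([g == t for g, t in zip(guesses, truth)])
print(accuracy)   # comfortably above the 50% chance level
```

Even this crude comparison beats chance by a wide margin, which is the point of the "which bar is higher" observation in the talk.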
7:17 - 7:20And so, in this sense, in my mind,
-
7:20 - 7:23I believe that this is the first study
that started using brain imaging -
7:23 - 7:26to actually read out
what people were thinking. -
7:26 - 7:29You can take a naïve person,
look at this graph, you know, -
7:29 - 7:32and ask which bar is higher,
which line is higher, -
7:32 - 7:36and they can guess better than chance,
way better than chance, -
7:36 - 7:38what people were thinking.
-
7:39 - 7:41Now, you may say, "Okay,
that's really cool, -
7:41 - 7:45but there're a whole lot more
interesting things to do with a scanner -
7:45 - 7:47or with what we want to know
-
7:47 - 7:50than whether people
are thinking of faces or scenes." -
7:50 - 7:51And so the rest of my talk
-
7:51 - 7:55will be devoted to the next 15 years
of work since this study -
7:55 - 7:59that have really advanced
our ability to read out minds. -
8:00 - 8:02This is one study
that was published in 2010 -
8:02 - 8:05in the New England Journal of Medicine
-
8:05 - 8:06by another group.
-
8:06 - 8:11They were actually studying patients
who were in a persistent vegetative state, -
8:11 - 8:14who were locked in,
had no ability to express themselves; -
8:14 - 8:17it was not even clear if they were
capable of voluntary thought, -
8:17 - 8:22and yet because you can instruct people
to imagine one thing versus another - -
8:22 - 8:26in this case, imagine walking
through your house, navigation, -
8:26 - 8:28in the other case, imagine playing tennis
-
8:28 - 8:30because that activates
a whole motor circuitry -
8:30 - 8:34that's separate from the spatial
navigation circuitry - -
8:34 - 8:37you attach each of those imagery
instructions to "yes" or "no," -
8:37 - 8:40and then you can ask them
factual questions, -
8:40 - 8:44such as "Is your father's name Alexander,
or is your father's name Thomas?" -
8:44 - 8:47And they were able to demonstrate
that in this patient, -
8:47 - 8:50who otherwise had no ability
to communicate, -
8:50 - 8:55he was able to respond
to these factual questions accurately, -
8:55 - 8:59and even in control subjects
who do not have problems. -
9:00 - 9:01This method is so reliable
-
9:01 - 9:06that over 95%
of their responses were decodable -
9:06 - 9:09just through brain imaging alone -
"yes" versus "no." -
9:09 - 9:11So you can imagine,
with a game of 20 questions, -
9:11 - 9:15you can really get insight
into what people are thinking -
9:15 - 9:17using these methodologies.
-
9:18 - 9:19But again, it's limited;
-
9:19 - 9:21we only have two states here -
-
9:21 - 9:22"yes," "no,"
-
9:22 - 9:25think tennis or think
walking around your house. -
9:26 - 9:30But fortunately, there're tremendously
brilliant scientists all around the world -
9:30 - 9:36who are constantly refining the methods
to decode fMRI activity -
9:36 - 9:39so that we can understand the mind.
-
9:40 - 9:44And I'd like to use this slide
from Nature magazine -
9:44 - 9:47to motivate the next few studies
I'll share with you. -
9:48 - 9:53One huge limitation of brain imaging,
at least in the first 15 years, -
9:53 - 9:57is that there are not
many specialized areas in the brain. -
9:57 - 9:59The face area is one.
Place area is another. -
9:59 - 10:04These are very, very important visual
functions that we carry out every day, -
10:04 - 10:08but there is no "shoe" area in the brain,
there is no "cat" area in the brain. -
10:08 - 10:12You don't see separate blobs
or patches of activity -
10:12 - 10:14for these two different categories,
-
10:14 - 10:17and indeed, for most categories
that we encounter -
10:17 - 10:19and are able to discriminate,
-
10:19 - 10:21they don't have separate brain regions.
-
10:21 - 10:23And because there are
no separate brain regions, -
10:23 - 10:24the question then is,
-
10:24 - 10:27"How do we study them,
and how do we discriminate them, -
10:27 - 10:31how do we get at these finer,
more detailed differences that matter so much -
10:31 - 10:33if we really want to understand
-
10:33 - 10:36how the mind works
and how we can read it out -
10:36 - 10:38using fMRI?"
-
10:38 - 10:44Fortunately, some neuroscientists
collaborated with computer vision people -
10:44 - 10:49and with electrical engineers
and computer scientists and statisticians -
10:49 - 10:52to use very refined mathematical methods
-
10:52 - 10:56for pulling out and decoding
more subtle patterns of activity -
10:56 - 10:59that are unique to shoe processing
-
10:59 - 11:03and that are unique
to the processing of cat stimuli. -
11:03 - 11:06So, for instance,
even though the shoes and cats, -
11:06 - 11:08you show a whole bunch
of them to subjects, -
11:08 - 11:11it'll activate the same part of cortex,
the same part of the brain, -
11:11 - 11:13and so that if you average
activity in that part, -
11:13 - 11:17you won't see any differences
between the shoes and the cats. -
11:17 - 11:19As you can see here on the right,
-
11:19 - 11:23the shoes are still associated
with different fine patterns, -
11:23 - 11:25or micropatterns of activity
-
11:25 - 11:29that are different from the patterns
of activity elicited by cats. -
11:29 - 11:34These little boxes over here,
these cells are referred to as voxels; -
11:34 - 11:36the way fMRI collects data from the brain
-
11:36 - 11:39is it kind of chops up your brain
into tiny little cubes, -
11:39 - 11:41and each cube is a unit of measurement -
-
11:41 - 11:45it gives you numerical data
that you can use to analyze. -
11:45 - 11:48And if you imagine
that the brightness of these squares -
11:48 - 11:50corresponds to a different
level of activity, -
11:50 - 11:53quantitatively measurable
different level of activity, -
11:53 - 11:55and you can imagine over a spatial region
-
11:55 - 12:00that the pattern of these activities
may be different for shoes versus cats, -
12:00 - 12:04then all you need to do
is to work with a computer algorithm -
12:04 - 12:06to learn these subtle
differences in patterns, -
12:06 - 12:10so we have what we might call a "decoder."
-
12:10 - 12:12And you train up a computer decoder
-
12:12 - 12:16to learn the differences
between shoe patterns and cat patterns, -
12:16 - 12:18and then what you have
is here in the testing phase: -
12:18 - 12:21after you train up
your computer algorithm, -
12:21 - 12:23you can show a new shoe,
-
12:23 - 12:26it will elicit a certain pattern
of fMRI activity, -
12:26 - 12:28as you can see over here,
-
12:28 - 12:30and the computer
will tell you its best guess -
12:30 - 12:33as to whether this pattern over here
-
12:33 - 12:37is closer to a shoe
or closer to that for a cat. -
12:37 - 12:38And it will allow you
-
12:38 - 12:40to guess something
that's a lot more refined -
12:40 - 12:42and that is not possible
-
12:42 - 12:46when you don't have areas
like the face area or the place area. -
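The train/test decoder described above can be sketched as a nearest-template classifier over voxel patterns: average the patterns evoked by many training examples of each category, then assign a new pattern to whichever template it correlates with more. This is a toy version with synthetic data (one of several possible decoders; the published studies used more sophisticated classifiers):

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 50

# Each category has its own hidden micropattern across voxels; mean activity
# is matched, so only the *pattern* distinguishes "shoes" from "cats".
shoe_pattern = rng.normal(0, 1, n_voxels)
cat_pattern = rng.normal(0, 1, n_voxels)
shoe_pattern -= shoe_pattern.mean()
cat_pattern -= cat_pattern.mean()

def simulate_trial(pattern):
    """One trial's voxel pattern: category pattern plus measurement noise."""
    return pattern + rng.normal(0, 1.0, n_voxels)

# Training phase: average the voxel patterns evoked by many examples.
shoe_centroid = np.mean([simulate_trial(shoe_pattern) for _ in range(30)], axis=0)
cat_centroid = np.mean([simulate_trial(cat_pattern) for _ in range(30)], axis=0)

def decode(trial):
    """Testing phase: is the new pattern closer to the shoe or cat template?"""
    r_shoe = np.corrcoef(trial, shoe_centroid)[0, 1]
    r_cat = np.corrcoef(trial, cat_centroid)[0, 1]
    return "shoe" if r_shoe > r_cat else "cat"

print(decode(simulate_trial(shoe_pattern)))
print(decode(simulate_trial(cat_pattern)))
```

Note that averaging either category's trials into a single mean activity level would show no difference at all; the information lives entirely in the spatial pattern across voxels, which is exactly what the talk emphasizes.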
12:48 - 12:52And now people are really taking off
with these types of methodologies. -
12:52 - 12:57Those started coming out in 2005,
so a little less than a decade ago. -
12:57 - 13:00This is a study published
in the New England Journal of Medicine, -
13:00 - 13:03year 2013, literally last year,
-
13:04 - 13:07a very important clinical
application of studying -
13:07 - 13:10or trying to find a way
to objectively measure pain, -
13:10 - 13:13which is a very subjective state, we know.
-
13:14 - 13:15You can use fMRI,
-
13:15 - 13:17and these researchers have developed ways
-
13:17 - 13:21of mapping and predicting
the amount of heat -
13:21 - 13:23and the amount of pain people experience
-
13:23 - 13:28as you increase the temperature
of a noxious stimulus on their skin, -
13:28 - 13:32and the fMRI decoder
will kind of give you an estimate -
13:32 - 13:35of how much heat
and maybe even how much pain -
13:35 - 13:38someone is experiencing,
-
13:38 - 13:41to the extent that they can predict,
just based on fMRI activity alone, -
13:41 - 13:45how much pain a person is experiencing
over 93% of the time, -
13:45 - 13:48or with 93% accuracy.
-
13:50 - 13:53This is an astonishing version
that I'm sure many of you have seen, -
13:53 - 13:54published in 2011,
-
13:54 - 13:58from Jack Gallant's group
over at UC Berkeley. -
13:58 - 14:03They showed people a bunch
of these videos from YouTube, -
14:03 - 14:04here shown here on the left,
-
14:04 - 14:08and then they trained up
a classifier, a decoder, -
14:08 - 14:13to learn the mapping
between the videos and fMRI activity; -
14:13 - 14:17and then when they showed
people novel videos -
14:17 - 14:20that elicited certain patterns
of fMRI activity, -
14:20 - 14:23they were able to use
that fMRI activity alone -
14:23 - 14:26to guess what videos people were seeing.
-
14:26 - 14:28And they pulled the guesses
off of YouTube as well -
14:28 - 14:29and averaged them,
-
14:29 - 14:31and that's why it's a little grainy,
-
14:31 - 14:34but what you'll see here -
and this is all over YouTube - -
14:34 - 14:36is what people saw
-
14:36 - 14:39and, on the right, what the computer
is guessing people saw -
14:39 - 14:41based on fMRI activity alone.
-
15:13 - 15:16So it's pretty astonishing;
this literally is reading out the mind. -
15:16 - 15:20I mean, you have complicated models,
you have to train it and such, -
15:20 - 15:22but still, this is based
on fMRI activity alone -
15:22 - 15:25that they're able to make
these guesses over here. -
15:25 - 15:30I had a student at Yale University.
His name is Alan Cowen. -
15:30 - 15:32He had a very strong
mathematics background. -
15:32 - 15:35He came to my lab,
and he said, "I really love this stuff." -
15:36 - 15:38He loved this stuff so much
that he wanted to do it himself. -
15:38 - 15:43I actually did not have the ability
to do this in the lab, -
15:43 - 15:46but he developed it
together with my post doc Brice Kuhl, -
15:46 - 15:50who's now an assistant professor
at New York University. -
15:50 - 15:51They wanted to ask the question
-
15:51 - 15:56of "Can we limit this type
of analysis to faces?" -
15:57 - 15:58As you saw before,
-
15:58 - 16:01you can kind of see what categories
people were looking at, -
16:01 - 16:04but certainly the algorithm
was not able to give you guesses -
16:04 - 16:06as to which person you were looking at
-
16:06 - 16:09or what kind of animal
you were looking at. -
16:09 - 16:11You're really getting
more categorical guesses -
16:11 - 16:13in the prior example.
-
16:13 - 16:15And so Alan and Brice,
-
16:15 - 16:19they thought that, well, if we focused
our algorithms on faces alone - -
16:19 - 16:23you know, the faces
being so important for everyday life, -
16:23 - 16:24and given the fact
-
16:24 - 16:27that there are specialized mechanisms
in the brain for face processing - -
16:27 - 16:30maybe if we do something
that's that specialized, -
16:30 - 16:34we might be able to guess
individual faces. -
16:35 - 16:37As a summary, that's what they did.
-
16:37 - 16:39They showed people a bunch of faces,
-
16:39 - 16:41they trained up these algorithms,
-
16:41 - 16:46and then using fMRI activity alone,
they were able to generate good guesses, -
16:47 - 16:49above-chance guesses,
-
16:49 - 16:52as to which faces
an individual is looking at, -
16:52 - 16:54again, based on fMRI activity alone,
-
16:54 - 16:56and that's what's shown here on the right.
-
16:57 - 16:59Just to give you a little bit more detail
-
16:59 - 17:01for those who kind of
like that kind of stuff. -
17:01 - 17:03It's really relying
on a lot of computer vision. -
17:03 - 17:05This is not voodoo magic.
-
17:05 - 17:07There's a lot of computer vision
that allows you -
17:07 - 17:13to mathematically summarize
features of a whole array of faces. -
17:14 - 17:16People call them "eigenfaces."
-
17:16 - 17:19It's based on principal
component analysis. -
17:19 - 17:23So you can decompose these faces
and describe a whole array of faces -
17:23 - 17:25with these mathematical algorithms.
-
17:25 - 17:29And basically what
Alan and Brice did in my lab -
17:29 - 17:35was to map those mathematical
components to brain activity, -
17:35 - 17:39and then once you train up a database,
a statistical library, for doing so, -
17:39 - 17:42then you can take a novel face
that's shown up here on the right, -
17:43 - 17:46record the fMRI activity
to those novel faces, -
17:46 - 17:50and then make your best guess
as to which face people were looking at. -
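The eigenface pipeline sketched in the talk has three steps: decompose faces into principal components, learn a linear map from voxel activity to component coefficients, then reconstruct a held-out face from its brain response alone. Here is a minimal synthetic-data sketch of that pipeline (all dimensions and the linear brain model are illustrative assumptions, not the published method's details):

```python
import numpy as np

rng = np.random.default_rng(3)
n_faces, n_pixels, n_components, n_voxels = 200, 64, 5, 120

# Synthetic "faces": each is a mixture of a few underlying components.
true_components = rng.normal(0, 1, (n_components, n_pixels))
coeffs = rng.normal(0, 1, (n_faces, n_components))
faces = coeffs @ true_components

# Step 1: PCA via SVD gives the "eigenfaces" and each face's coefficients.
mean_face = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
eigenfaces = Vt[:n_components]
face_coeffs = (faces - mean_face) @ eigenfaces.T

# Synthetic fMRI: voxel patterns are a noisy linear function of the coefficients.
mixing = rng.normal(0, 1, (n_components, n_voxels))
brain = face_coeffs @ mixing + rng.normal(0, 0.5, (n_faces, n_voxels))

# Step 2: learn the voxel -> coefficient mapping (least squares) on 150 training faces.
W, *_ = np.linalg.lstsq(brain[:150], face_coeffs[:150], rcond=None)

# Step 3: for held-out faces, predict coefficients from brain activity alone
# and reconstruct the images from the eigenfaces.
pred_coeffs = brain[150:] @ W
reconstruction = mean_face + pred_coeffs @ eigenfaces

# Reconstructions should correlate strongly with the true held-out faces.
r = np.corrcoef(reconstruction.ravel(), faces[150:].ravel())[0, 1]
print(r)
```

The key design choice is that the decoder never predicts pixels directly; it predicts a handful of PCA coefficients, and the eigenfaces turn those few numbers back into a full image.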
17:50 - 17:53Right now, we are about 65, 70% correct,
-
17:54 - 17:59and we and others are working very hard
to improve that performance. -
18:00 - 18:02And this is just another example:
-
18:02 - 18:04a whole bunch of originals
on the left here, -
18:04 - 18:07and then reconstructions
and guesses on the right. -
18:08 - 18:11This study actually just came out,
just got published this past week, -
18:11 - 18:13so I'm pretty excited
-
18:13 - 18:16that you might even see
some media coverage of the work, -
18:16 - 18:19and again, I really want to credit
Alan Cowen and Brice Kuhl -
18:19 - 18:21for making this possible.
-
18:22 - 18:23Now, you may wonder,
-
18:23 - 18:27going back, kind of, to the more
categorical reading out of brains - -
18:27 - 18:28That work came out in 2011.
-
18:28 - 18:31I've been talking about it a lot
in classes and stuff, -
18:31 - 18:33and a really typical
question that you get - -
18:34 - 18:35and this question I actually like -
-
18:35 - 18:38is, "Well, if you can read out
what people are seeing, -
18:38 - 18:40can you read out what they're dreaming?"
-
18:40 - 18:43because that's the natural next step.
-
18:43 - 18:46And this is work done
by Horikawa et al. in Japan, -
18:46 - 18:48published in Science last year.
-
18:48 - 18:51They actually took a lot of the work
that Jack Gallant did, -
18:51 - 18:54modified it so they can decode
what people were dreaming -
18:54 - 18:57while they were sleeping
inside the scanner. -
18:57 - 18:58And what we have here on the -
-
18:58 - 18:59Ignore the words,
-
18:59 - 19:03these are just ways to try to categorize
what people were dreaming about. -
19:03 - 19:05Just focus on the top left.
-
19:05 - 19:10That's the computer
algorithm's guess, or decoder's guess -
19:10 - 19:12as to what people were dreaming about.
-
19:12 - 19:14You can imagine
it's going to be a lot messier -
19:14 - 19:16than what you saw before for perception,
-
19:16 - 19:18but still, it's pretty amazing
-
19:18 - 19:21that you can decode dreams now
while people are sleeping, -
19:21 - 19:23you know, while they are
in rapid eye movement sleep. -
19:23 - 19:26So what we have here -
I'll turn it on, it's a video. -
19:26 - 19:29I will mention that,
a little stereotypically, -
19:30 - 19:32this is a male subject,
-
19:32 - 19:35and male subjects tend to think
about one thing when they're dreaming, -
19:35 - 19:36(Laughter)
-
19:36 - 19:40but it is rated PG,
if you'll excuse me for that. -
19:43 - 19:45And again, you know, it's not easy.
-
19:45 - 19:47It's, again, remarkable
that they can do this, -
19:47 - 19:50and it gets better
as it goes towards the end, -
19:50 - 19:54right before they awakened the subject
to ask them what they were dreaming, -
19:54 - 19:56but you can start seeing
buildings and places, -
19:56 - 20:00and here comes the dream,
the real concrete stuff, -
20:00 - 20:01right there.
-
20:01 - 20:02Okay.
-
20:02 - 20:03(Laughter)
-
20:03 - 20:07Then they wake up the subject and say,
"What were you dreaming about?" -
20:07 - 20:10and they will report something
that's consistent with the decoder, -
20:10 - 20:12as a way to assess its validity.
-
20:15 - 20:17So, we can decode and read out the mind;
-
20:17 - 20:20the field of neuroscience
can do that. -
20:20 - 20:23And the question is, "What are we
going to use this capability for?" -
20:24 - 20:26Some of you may have seen this movie,
"Minority Report." -
20:26 - 20:30It's a fascinating, amazing movie.
I encourage you to see it if you can. -
20:30 - 20:32This movie is all
about predicting behavior, -
20:32 - 20:34you know, reading out brain activity
-
20:34 - 20:37to try to predict what's going
to happen in the future. -
20:37 - 20:42And neuroscientists are working
on this aspect of cognition as well. -
20:42 - 20:46As one example, colleagues
over at the University of New Mexico, -
20:46 - 20:48Kent Kiehl's group,
-
20:49 - 20:53scanned a whole bunch of prisoners
who committed felony crimes. -
20:53 - 20:55They were all released back into society,
-
20:55 - 20:57and the dependent measure was:
-
20:57 - 21:00"How likely was it
for someone to be rearrested? -
21:00 - 21:02What was the likelihood of recidivism?"
-
21:02 - 21:05And they found that based on brain scans
-
21:05 - 21:08while people were
doing tasks in the prison - -
21:08 - 21:11in the scanner that was
located in the prison, -
21:11 - 21:13they could predict reliably
-
21:13 - 21:16who was more likely
to come back to prison, -
21:16 - 21:17be rearrested, re-commit a crime
-
21:17 - 21:20versus who was less likely to do so.
-
21:20 - 21:25In my own laboratory here at Yale,
I worked with an undergraduate student. -
21:25 - 21:28This is, again,
his own idea, Harrison Korn, -
21:28 - 21:30together with Michael Johnson.
-
21:30 - 21:36They wanted to see if they could measure
implicit or unconscious prejudice -
21:36 - 21:39while people were looking
at these vignettes -
21:39 - 21:42which involve employment
discrimination cases. -
21:42 - 21:44So you can read through that:
-
21:44 - 21:46"Rodney, a 19 year-old African American,
-
21:46 - 21:49applied for a job as a clerk
at a teen-apparel store. -
21:49 - 21:51He had experience as a cashier,
-
21:51 - 21:53glowing recommendation
from a previous employer -
21:53 - 21:57but was denied the job because he did not
match the company's look." -
21:57 - 21:58Okay?
-
21:58 - 22:00And et cetera, et cetera, et cetera.
-
22:00 - 22:01And the question is,
-
22:01 - 22:05"In this hypothetical damage awards case,
-
22:05 - 22:09how much would you
as a subject award Rodney?" -
22:09 - 22:13And you can imagine a range of responses,
-
22:13 - 22:16where you would award Rodney a lot
or award Rodney a little, -
22:16 - 22:19and we had many other cases like this.
-
22:19 - 22:20And the question is,
-
22:20 - 22:23"Can we predict which subjects
are going to award a lot of damages -
22:23 - 22:26and which subjects
are going to award few damages?" -
22:27 - 22:32And Harrison found that if you scan people
-
22:32 - 22:37while they were looking
at white faces versus black faces, -
22:37 - 22:40based on the brain response
to those faces, -
22:40 - 22:41he could predict
-
22:41 - 22:47the amount of awards that were given
in this hypothetical case later on. -
22:47 - 22:51And that's shown here
by these correlations on the left. -
22:51 - 22:54As a final thing I want to share
from my lab right now, -
22:54 - 22:58we're really interested in this issue
of "what does it mean to be in the zone? -
22:58 - 23:00what does it mean
to be able to stay focused -
23:00 - 23:02for a sustained amount of time?"
-
23:02 - 23:03which, you might imagine,
-
23:03 - 23:06is critical for almost anything
that's important to us, -
23:06 - 23:09whether that's athletic performance,
musical performance, -
23:09 - 23:13theatrical performance,
giving a lecture, taking an exam. -
23:13 - 23:17Almost all these domains
require sustained focus and attention. -
23:17 - 23:18You all know that there are times
-
23:18 - 23:21when you're really in the zone
and can do well -
23:21 - 23:23and there are other times
where even if you're fully prepared, -
23:23 - 23:24you don't do as well,
-
23:24 - 23:27because you're not in the zone
or you're kind of distracted. -
23:27 - 23:30So my lab is interested
in how we can characterize this -
23:30 - 23:32and how we can predict that.
-
23:32 - 23:35And as work that's not even published yet,
-
23:35 - 23:38Emily Finn, a graduate student
in the neuroscience program, -
23:38 - 23:41and Monica Rosenberg,
a graduate student in psychology, -
23:41 - 23:43they're both working with me
-
23:43 - 23:48and Todd Constable over at the Magnetic
Resonance Research Center at Yale. -
23:49 - 23:53They find that it's not easy to predict
what it means to be in the zone -
23:53 - 23:55and what it means to be out of the zone,
-
23:55 - 23:57but if you start using measures
-
23:57 - 24:02that also look at how different brain
areas are connected with each other - -
24:02 - 24:04something that we call
"functional connectivity" - -
24:04 - 24:07you can start getting at
what it means to be in the zone -
24:07 - 24:09versus what it means
to be out of the zone. -
24:09 - 24:10And what we have here
-
24:10 - 24:15are two types of networks
that Emily and Monica have identified. -
24:15 - 24:19There is a network of brain regions
and the connectivity between them -
24:19 - 24:21that predicts good performance,
-
24:21 - 24:23and there is another complementary network
-
24:23 - 24:28that has different brain areas
and different types of connectivity -
24:28 - 24:31that seems to be correlated
with bad performance. -
24:31 - 24:34So if you have activity
in the blue network over here, -
24:34 - 24:35people tend to do worse.
-
24:35 - 24:38If you have activity in the red network,
people tend to do better. -
24:39 - 24:41These networks are so robust
-
24:41 - 24:45that you can measure these networks
even when subjects are doing nothing. -
24:45 - 24:48We put them in the scanner
before anything starts, -
24:48 - 24:51have them close their eyes
and rest for 10 minutes - -
24:51 - 24:53you call that a "resting state scan" -
-
24:53 - 24:57and based on their resting
state activity alone, -
24:57 - 25:02we were able to predict how well
they were going to do over the next hour. -
25:03 - 25:06Again, red network corresponds
with better performance; -
25:06 - 25:09blue network corresponds
with worse performance. -
25:09 - 25:11And just on a variety of tasks,
-
25:11 - 25:15you have very high
predictive power in these studies. -
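The "functional connectivity" idea above can be sketched as: correlate every pair of ROI time series from a resting scan, then sum connection strength within a "red" (good-performance) edge set and a "blue" (bad-performance) edge set to score a subject. This toy version uses synthetic time series, and the edge sets are hypothetical placeholders, not the networks Finn and Rosenberg actually identified:

```python
import numpy as np

rng = np.random.default_rng(4)
n_rois, n_timepoints = 10, 200

# Synthetic resting-state ROI time series for one subject.
timeseries = rng.normal(0, 1, (n_timepoints, n_rois))
timeseries[:, 1] += 0.8 * timeseries[:, 0]   # make ROIs 0 and 1 correlated

# Functional connectivity: pairwise correlation between ROI time series.
connectivity = np.corrcoef(timeseries.T)

# Hypothetical edge sets: connections whose strength tracks good
# performance ("red network") vs. poor performance ("blue network").
red_edges = [(0, 1), (2, 3)]
blue_edges = [(4, 5), (6, 7)]

def network_strength(conn, edges):
    """Summed connection strength over a set of ROI pairs."""
    return sum(conn[i, j] for i, j in edges)

# A simple summary score: red-network strength minus blue-network strength,
# which could then be related to behavioral performance across subjects.
score = network_strength(connectivity, red_edges) - network_strength(connectivity, blue_edges)
print(score)
```

The appeal of this kind of measure, as the talk notes, is that it needs no task at all: the connectivity matrix comes from ten minutes of eyes-closed rest, yet it carries enough signal to predict later performance.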
25:15 - 25:21The implications for studying ADHD
and many other types of domains, -
25:21 - 25:22I think, are very large,
-
25:22 - 25:24so hopefully, this will get
published soon. -
25:24 - 25:27So in closing, I hope I've convinced you
-
25:27 - 25:30that I am no longer embarrassed to say
that we can read your mind, -
25:30 - 25:32if anyone asks me.
-
25:32 - 25:35And I thank you for your attention.
-
25:35 - 25:38(Applause)
- Title:
- Reading minds | Marvin Chun | TEDxYale
- Description:
-
Can psychologists read dreams? Watch Marvin Chun's fascinating talk to find out more.
Marvin Chun is a cognitive neuroscientist with research interests in visual attention, memory, and perception. His lab employs neuroimaging (fMRI) and behavioral techniques to study how people perceive and remember visual information. His work in visual attention explores why people can consciously perceive only a small portion of all of the sensory information coming through the eyes. The lab’s research on memory investigates the neuronal correlates of memory encoding and retrieval. What are the fMRI signatures of memory traces in the brain? Much of his work on the interactions between memory and attention has centered on the role of context and associative learning. Finally, his work in perception examines the fundamental question of how the brain discriminates objects to make quick, efficient perceptual decisions.
This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx
- Video Language:
- English
- Team:
- closed TED
- Project:
- TEDxTalks
- Duration:
- 25:42