Good afternoon. My name is Marvin Chun, and I am a researcher, and I teach psychology at Yale. I'm very proud of what I do, and I really believe in the value of psychological science.

However, to start with a footnote, I will admit that I feel self-conscious sometimes telling people that I'm a psychologist, especially when you're meeting a stranger on the airplane or at a cocktail party. And the reason for this is because there's a very typical question that you get when you tell people that you're a psychologist. What do you think that question might be?

(Audience) What am I thinking?

"What am I thinking?" exactly. "Can you read my mind?" is a question that a lot of psychologists cringe at whenever they hear it, because it really reflects a misconception, a kind of Freudian misconception, about what we do for a living. So you know, I've developed a lot of answers over the years: "If I could read your mind, I'd be a whole lot richer than I am right now."

The other answer I came up with is, "Oh, I'm actually a neuroscientist," and then that really makes people quiet.

(Laughter)

But the best answer of course is, "Yeah, of course I can read your mind, and you really should be ashamed of yourself."

(Laughter)

But all that is motivation for what I'll share with you today, which is this: I've been a psychologist for about 20 years now, and we've actually reached the point where I can answer that question by saying, "Yes, if I can put you in a scanner, then we can read your mind." And what I'll share with you today is to what extent, how far along we are at being able to read other people's minds.

Of course, this is the stuff of science fiction. Even a movie as recent as "Divergent" has many sequences in which they're reading out the brain activity to see what she's dreaming under the influence of drugs. And you know, again, these technologies do exist. In the modern day, in the real world, they look more like this. These are just standard MRI machines and scanners that are slightly modified so that they allow you to infer and read out brain activity.
These methods have been around for about 25 years, and in the first decade, I'd say the 1990s, a lot of effort was devoted to mapping out what the different parts of the brain do. One of the most successful forms of this is exemplified here. Imagine you're a subject lying down in that scanner; there's a computer projector that allows you to look at a series of images while your brain is being scanned. For some periods of time, you are going to be seeing sequences of scene images alternating with other sequences of face images, and they'll just go back and forth while your brain is being scanned.

And the key motivation here for the researchers is to compare what's happening in the brain when you're looking at scenes versus when you're looking at faces, to see if anything different happens for these two categories of stimuli. And I'd say even within 10 minutes of scanning your brain, you can get a very reliable pattern of results that looks like this, where there is a region of the brain that's specialized for processing scenes, shown here in the warm colors, and there is also a separate region of the brain, shown here in blue colors, that is more selectively active when people are viewing faces. There's a dissociation here.

And this allows you to see where different functions reside in the brain. And in the case of scene and face perception, these two forms of vision are so important in our everyday lives, scene processing of course being important for navigation, face processing important for social interactions, that our brain has specialized areas devoted to them.

And this alone, I think, is pretty amazing. A lot of this work started coming out in the mid-1990s, and it really complemented and reinforced a lot of the patient work that's been around for dozens of years.
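For readers who want to see the logic, here is a minimal sketch of the faces-versus-scenes contrast just described, with simulated stand-in data. Real analyses (in packages like SPM, FSL, or nilearn) also model the hemodynamic response and correct for motion and multiple comparisons; every shape, signal, and threshold below is a toy assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 20000
bold = rng.normal(size=(n_timepoints, n_voxels))  # stand-in BOLD signal

# Alternating 20-volume blocks of face and scene viewing, as described.
labels = np.tile(np.repeat(["face", "scene"], 20), 5)

# Plant a toy "face area" and "place area" so the contrast finds something.
bold[labels == "face", :100] += 1.0      # voxels 0-99 prefer faces
bold[labels == "scene", 100:200] += 1.0  # voxels 100-199 prefer scenes

# Voxelwise two-sample t-test: positive t means more active for faces.
t_map, p_map = stats.ttest_ind(bold[labels == "face"],
                               bold[labels == "scene"], axis=0)

# Surviving voxels are the ones colored warm vs. cool in maps like these.
face_voxels = np.where((t_map > 0) & (p_map < 1e-6))[0]
scene_voxels = np.where((t_map < 0) & (p_map < 1e-6))[0]
print(len(face_voxels), "face-preferring voxels,",
      len(scene_voxels), "scene-preferring voxels")
```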
But really, the methodologies of fMRI have gone far beyond these very basic mapping studies. You know, in fact, when you look at a study like this, you may say, "Oh, that's really cool, but what does this have to do with everyday life? How is this going to help me understand why I feel this way or don't feel this way, or why I like a certain person or don't like certain other people? Will this tell me what kind of career I should take?" You know, you can't really get too far with this scene and face area mapping alone.

So it wasn't until this study came out around 2000 - and most of these studies, I should emphasize, are from other labs around the field; any studies from Yale will have the mark of "Yale" on the corner - but this particular study was done at MIT, by Nancy Kanwisher and colleagues. They actually had subjects look at nothing. They had subjects close their eyes in the scanner while they gave them instructions, and the instructions were one of two types of tasks.

One was to imagine a bunch of faces that they knew. So you might imagine your friends and family and cycle through them one at a time while your brain is being scanned. That would be the face imagery instruction. And the other instruction that they gave the subjects was of course a scene imagery instruction: so imagine, after this talk, you walk home or take your car home, get into your home, navigate throughout your house, change into something more comfortable, go to the kitchen, get something to eat. You have this ability to visualize places and scenes in your mind, and the question is, "What happens when you're visualizing scenes, and what happens when you're visualizing faces? What are the neural correlates of that?"

And the hypothesis was, well, maybe imagining faces activates the face area, and maybe imagining scenes would activate the scene area, and that's in fact indeed what they found. First, you bring a bunch of subjects into the scanner, show them faces, show them scenes; you map out the face area on top, and you map out the place area in the second row. And then the same subjects, at a different time in the scanner, have them close their eyes, do the instructions I just had you do, and you compare. When you have face imagery, you can see in the second row that the face area is active, even though nothing's on the screen. And in the fourth row here, the second row of the bottom pair, you see that the place area is active when you're imagining scenes; nothing's on the screen, but simply by imagining scenes, you will activate the place area.

In fact, even just by looking at fMRI data alone and comparing the relative activity of the face area and place area, you can guess over 80% of the time, even in this first study, what subjects were imagining. Okay?
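As a minimal sketch of that read-out, assuming the face and place areas have already been localized, the decoder can be as simple as comparing the two regions' mean activity on each imagery trial. The numbers below are simulated stand-ins chosen so the effect resembles the one described:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 40
true_category = rng.choice(["face", "scene"], n_trials)

# Hypothetical mean ROI activity per trial (% signal change). We simulate
# the reported effect: each area responds more to its preferred imagery.
ffa = rng.normal(0.2, 0.3, n_trials) + 0.4 * (true_category == "face")
ppa = rng.normal(0.2, 0.3, n_trials) + 0.4 * (true_category == "scene")

# Guess whichever category's area is more active on that trial.
guess = np.where(ffa > ppa, "face", "scene")
accuracy = np.mean(guess == true_category)
print(f"decoded {accuracy:.0%} of imagery trials correctly")
```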
And so, in this sense, in my mind, I believe that this is the first study that started using brain imaging to actually read out what people were thinking. You can take a naïve person, have them look at this graph, you know, and ask which bar is higher, which line is higher, and they can guess better than chance, way better than chance, what people were thinking.

Now, you may say, "Okay, that's really cool, but there are a whole lot more interesting things to do with a scanner, or with what we want to know, than whether people are thinking of faces or scenes." And so the rest of my talk will be devoted to the next 15 years of work since this study, work that has really advanced our ability to read out minds.

This is one study that was published in 2010 in the New England Journal of Medicine by another group. They were actually studying patients who were in a persistent vegetative state, who were locked in and had no ability to express themselves; it was not even clear if they were capable of voluntary thought. And yet, because you can instruct people to imagine one thing versus another - in this case, imagine walking through your house, navigation; in the other case, imagine playing tennis, because that activates a whole motor circuitry that's separate from the spatial navigation circuitry - you can attach each of those imagery instructions to "yes" or "no," and then you can ask them factual questions, such as "Is your father's name Alexander, or is your father's name Thomas?" And they were able to demonstrate that this patient, who otherwise had no ability to communicate, was able to respond to these factual questions accurately, as could control subjects who did not have problems.

This method is so reliable that over 95% of their responses were decodable through brain imaging alone - "yes" versus "no." So you can imagine, with a game of 20 questions, you can really get insight into what people are thinking using these methodologies.
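In pseudocode terms, the communication scheme is just a relabeling of a two-way imagery decoder. Here is a minimal, hypothetical sketch; the function names, activity values, and threshold are mine for illustration, not the study's:

```python
# Hypothetical sketch: each answer is conveyed by a choice of imagery
# (tennis = "yes", house navigation = "no"), so the scanner only has to
# tell two imagery states apart, as in the study described above.

def decode_imagery(motor_activity: float, spatial_activity: float) -> str:
    """Toy decoder: compare mean activity in motor vs. spatial circuits."""
    return "tennis" if motor_activity > spatial_activity else "navigation"

ANSWER = {"tennis": "yes", "navigation": "no"}

def ask(question: str, motor_activity: float, spatial_activity: float) -> str:
    # The patient hears the question, then imagines playing tennis for
    # "yes" or walking through their house for "no" while being scanned.
    imagery = decode_imagery(motor_activity, spatial_activity)
    return ANSWER[imagery]

# E.g., strong motor-circuit activity after this question reads as "yes":
print(ask("Is your father's name Alexander?",
          motor_activity=0.9, spatial_activity=0.2))
```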
But again, it's limited; we only have two states here - "yes" or "no," think tennis or think walking around your house. Fortunately, there are tremendously brilliant scientists all around the world who are constantly refining the methods to decode fMRI activity so that we can understand the mind.

And I'd like to use this slide from Nature magazine to motivate the next few studies I'll share with you. One huge limitation of brain imaging, at least in the first 15 years, is that there are not many specialized areas in the brain. The face area is one; the place area is another. These are very, very important visual functions that we carry out every day, but there is no "shoe" area in the brain, and there is no "cat" area in the brain. You don't see separate blobs or patches of activity for these two different categories, and indeed, most categories that we encounter and are able to discriminate don't have separate brain regions. And because there are no separate brain regions, the question then is, "How do we study them, how do we discriminate them, how do we get at these finer, more detailed differences that matter so much if we really want to understand how the mind works and how we can read it out using fMRI?"

Fortunately, some neuroscientists collaborated with computer vision people and with electrical engineers and computer scientists and statisticians to use very refined mathematical methods for pulling out and decoding more subtle patterns of activity that are unique to shoe processing and that are unique to the processing of cat stimuli. So, for instance, shoes and cats - you show a whole bunch of them to subjects - will activate the same part of cortex, the same part of the brain, so if you average activity in that part, you won't see any differences between the shoes and the cats. But as you can see here on the right, the shoes are still associated with different fine patterns, or micropatterns, of activity that differ from the patterns of activity elicited by cats.

These little boxes over here, these cells, are referred to as voxels; the way fMRI collects data from the brain is that it kind of chops up your brain into tiny little cubes, and each cube is a unit of measurement - it gives you numerical data that you can analyze.
And if you imagine that the brightness of these squares corresponds to a different level of activity, a quantitatively measurable level of activity, and you can imagine that over a spatial region the pattern of these activities may be different for shoes versus cats, then all you need to do is work with a computer algorithm to learn these subtle differences in patterns, so that we have what we might call a "decoder." You train up a computer decoder to learn the differences between shoe patterns and cat patterns, and then what you have is the testing phase: after you train up your computer algorithm, you can show a new shoe, it will elicit a certain pattern of fMRI activity, as you can see over here, and the computer will tell you its best guess as to whether this pattern over here is closer to that for a shoe or closer to that for a cat. And it will allow you to guess something that's a lot more refined, something that is not possible when you don't have areas like the face area or the place area.
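Here is a minimal sketch of that train-and-test scheme, using a linear classifier over simulated voxel patterns. The shoe and cat "micropatterns" are invented; the point is only that a classifier can pick up reliable pattern differences that a regional average would wash out:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 300

# Each category evokes the same average activity but a slightly
# different micropattern across voxels, as in the shoe/cat example.
shoe_pattern = rng.normal(0, 0.5, n_voxels)
cat_pattern = rng.normal(0, 0.5, n_voxels)
y = rng.integers(0, 2, n_trials)  # 0 = shoe trial, 1 = cat trial
X = np.where(y[:, None] == 0, shoe_pattern, cat_pattern)
X = X + rng.normal(0, 1.0, (n_trials, n_voxels))  # measurement noise

# Training phase: learn the subtle pattern differences...
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
decoder = LinearSVC(dual=False).fit(X_train, y_train)

# ...testing phase: guess the category of patterns from held-out trials.
print("held-out accuracy:", decoder.score(X_test, y_test))
```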
And now people are really taking off with these types of methodologies, which started coming out in 2005, so a little less than a decade ago. This is a study published in the New England Journal of Medicine in 2013, literally last year: a very important clinical application of trying to find a way to objectively measure pain, which is a very subjective state, we know. You can use fMRI, and these researchers have developed ways of mapping and predicting the amount of heat and the amount of pain people experience as you increase the temperature of a noxious stimulus on their skin. And the fMRI decoder will kind of give you an estimate of how much heat, and maybe even how much pain, someone is experiencing, to the extent that they can predict, just based on fMRI activity alone, how much pain a person is experiencing over 93% of the time, or with 93% accuracy.

This is an astonishing version that I'm sure many of you have seen, published in 2011, from Jack Gallant's group over at UC Berkeley. They showed people a bunch of these videos from YouTube, shown here on the left, and then they trained up a classifier, a decoder, to learn the mapping between the videos and fMRI activity; and then, when they showed people novel videos that elicited certain patterns of fMRI activity, they were able to use that fMRI activity alone to guess what videos people were seeing. And they pulled the guesses off of YouTube as well and averaged them, and that's why it's a little grainy, but what you'll see here - and this is all over YouTube - is what people saw and, on the right, what the computer is guessing people saw based on fMRI activity alone.

So it's pretty astonishing; this literally is reading out the mind. I mean, you have complicated models, you have to train them and such, but still, it is based on fMRI activity alone that they're able to make these guesses over here.

I had a student at Yale University. His name is Alan Cowen. He had a very strong mathematics background. He came to my lab, and he said, "I really love this stuff." He loved this stuff so much that he wanted to do it himself. I actually did not have the ability to do this in the lab, but he developed it together with my postdoc Brice Kuhl, who's now an assistant professor at New York University. They wanted to ask the question of "Can we limit this type of analysis to faces?"

As you saw before, you can kind of see what categories people were looking at, but certainly the algorithm was not able to give you guesses as to which person you were looking at or what kind of animal you were looking at. You're really getting more categorical guesses in the prior example. And so Alan and Brice thought that, well, if we focused our algorithms on faces alone - you know, faces being so important for everyday life, and given the fact that there are specialized mechanisms in the brain for face processing - maybe if we do something that specialized, we might be able to guess individual faces.

As a summary, that's what they did.
They showed people a bunch of faces, they trained up these algorithms, and then, using fMRI activity alone, they were able to generate good guesses, above-chance guesses, as to which face an individual was looking at, again, based on fMRI activity alone, and that's what's shown here on the right.

Just to give you a little bit more detail, for those who kind of like that kind of stuff: it's really relying on a lot of computer vision. This is not voodoo magic. There's a lot of computer vision that allows you to mathematically summarize the features of a whole array of faces. People call them "eigenfaces." It's based on principal component analysis. So you can decompose these faces and describe a whole array of faces with these mathematical algorithms. And basically what Alan and Brice did in my lab was to map those mathematical components to brain activity, and then, once you train up a database, a statistical library, for doing so, you can take a novel face, shown up here on the right, record the fMRI activity to that novel face, and then make your best guess as to which face people were looking at. Right now, we are about 65, 70% correct, and we and others are working very hard to improve that performance.

And this is just another example: a whole bunch of originals on the left here, and then reconstructions and guesses on the right. This study actually just came out, just got published this past week, so I'm pretty excited that you might even see some media coverage of the work, and again, I really want to credit Alan Cowen and Brice Kuhl for making this possible.
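For the mathematically inclined, here is a minimal sketch of that pipeline with simulated data: compress faces into principal components ("eigenfaces"), learn a linear map from brain activity to those components, then invert the PCA to reconstruct a held-out face. The linear encoding model and every shape here are assumptions for illustration, not the published method:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_faces, n_pixels, n_voxels, n_components = 300, 64 * 64, 500, 50

faces = rng.random((n_faces, n_pixels))          # stand-in face images
pca = PCA(n_components=n_components).fit(faces)
components = pca.transform(faces)                # each face as 50 numbers

# Pretend the brain encodes those components linearly, plus noise.
encoding = rng.normal(0, 1, (n_components, n_voxels))
brain = components @ encoding + rng.normal(0, 5, (n_faces, n_voxels))

# Learn voxels -> eigenface components on training faces...
model = Ridge(alpha=1.0).fit(brain[:250], components[:250])

# ...then reconstruct held-out faces from their brain activity alone.
guessed_components = model.predict(brain[250:])
reconstructions = pca.inverse_transform(guessed_components)
print(reconstructions.shape)  # (50, 4096): one image guess per new face
```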
Now, you may wonder, going back, kind of, to the more categorical reading out of brains - that work came out in 2011, and I've been talking about it a lot in classes and such - a really typical question that you get, and this question I actually like, is, "Well, if you can read out what people are seeing, can you read out what they're dreaming?" because that's the natural next step. And this is work done by Horikawa et al. in Japan, published in Science last year. They actually took a lot of the work that Jack Gallant did and modified it so they could decode what people were dreaming while they were sleeping inside the scanner.

And what we have here on the - ignore the words; those are just ways to try to categorize what people were dreaming about. Just focus on the top left. That's the computer algorithm's guess, or the decoder's guess, as to what people were dreaming about. You can imagine it's going to be a lot messier than what you saw before for perception, but still, it's pretty amazing that you can decode dreams now while people are sleeping, you know, while they are in rapid eye movement sleep. So what we have here - I'll turn it on; it's a video. I will mention that, a little stereotypically, this is a male subject, and male subjects tend to think about one thing when they're dreaming,

(Laughter)

but it is rated PG, if you'll excuse me for that.

And again, you know, it's not easy. It's, again, remarkable that they can do this, and it gets better as it goes towards the end, right before they awakened the subject to ask them what they were dreaming, but you can start seeing buildings and places, and here comes the dream, the real concrete stuff, right there. Okay.

(Laughter)

Then they wake up the subject and say, "What were you dreaming about?" and they will report something that's consistent with the decoder, as a way to assess its validity.

So, we can decode and read out the mind; the field of neuroscience can do that. And the question is, "What are we going to use this capability for?" Some of you may have seen this movie, "Minority Report." It's a fascinating, amazing movie; I encourage you to see it if you can. This movie is all about predicting behavior, you know, reading out brain activity to try to predict what's going to happen in the future. And neuroscientists are working on this aspect of cognition as well.

As one example, colleagues over at the University of New Mexico, Kent Kiehl's group, scanned a whole bunch of prisoners who had committed felony crimes. They were all released back into society, and the dependent measure was: "How likely was it for someone to be rearrested? What was the likelihood of recidivism?"
And they found that, based on brain scans while people were doing tasks in the prison - in a scanner that was located in the prison - they could predict reliably who was more likely to come back to prison, be rearrested, or re-commit a crime, versus who was less likely to do so.

In my own laboratory here at Yale, I worked with an undergraduate student. This was, again, his own idea: Harrison Korn, together with Michael Johnson. They wanted to see if they could measure implicit or unconscious prejudice while people were looking at these vignettes, which involve employment discrimination cases. So you can read through that: "Rodney, a 19-year-old African American, applied for a job as a clerk at a teen-apparel store. He had experience as a cashier and a glowing recommendation from a previous employer but was denied the job because he did not match the company's look." Okay? Et cetera, et cetera.

And the question is, "In this hypothetical damage awards case, how much would you, as a subject, award Rodney?" And you can imagine a range of responses, where you would award Rodney a lot or award Rodney a little, and we had many other cases like this. And the question is, "Can we predict which subjects are going to award a lot of damages and which subjects are going to award few damages?"

And Harrison found that if you scan people while they are looking at white faces versus black faces, then based on the brain response to those faces, he could predict the amount of the awards that were given in this hypothetical case later on. And that's shown here by these correlations on the left.

As a final thing I want to share from my lab right now: we're really interested in this issue of "What does it mean to be in the zone? What does it mean to be able to stay focused for a sustained amount of time?" which, you might imagine, is critical for almost anything that's important to us, whether that's athletic performance, musical performance, theatrical performance, giving a lecture, or taking an exam. Almost all these domains require sustained focus and attention.
You all know that there are times when you're really in the zone and can do well, and there are other times where, even if you're fully prepared, you don't do as well, because you're not in the zone or you're kind of distracted. So my lab is interested in how we can characterize this and how we can predict it.

In work that's not even published yet, Emily Finn, a graduate student in the neuroscience program, and Monica Rosenberg, a graduate student in psychology - they're both working with me and Todd Constable over at the Magnetic Resonance Research Center at Yale - find that it's not easy to predict what it means to be in the zone and what it means to be out of the zone. But if you start using measures that also look at how different brain areas are connected with each other, something that we call "functional connectivity," you can start getting at what it means to be in the zone versus what it means to be out of the zone.

And what we have here are two types of networks that Emily and Monica have identified. There is a network of brain regions, and the connectivity between them, that predicts good performance, and there is another, complementary network, with different brain areas and different types of connectivity, that seems to be correlated with bad performance. So if you have activity in the blue network over here, people tend to do worse. If you have activity in the red network, people tend to do better.

These networks are so robust that you can measure them even when subjects are doing nothing. We put them in the scanner before anything starts and have them close their eyes and rest for 10 minutes - you call that a "resting state scan" - and based on their resting state activity alone, we were able to predict how well they were going to do over the next hour. Again, the red network corresponds with better performance; the blue network corresponds with worse performance. And across a variety of tasks, you have very high predictive power in these studies.
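Here is a minimal sketch in the spirit of that approach (often called connectome-based prediction): compute each subject's functional connectivity matrix from a resting scan, take the unique region-pair correlations as features, and fit a cross-validated model predicting later performance. All data, sizes, and the planted edge-behavior relationship are stand-ins, not the lab's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_regions, n_timepoints = 80, 30, 200

edges, performance = [], []
for _ in range(n_subjects):
    rest = rng.normal(size=(n_regions, n_timepoints))  # resting time courses
    fc = np.corrcoef(rest)                 # region-by-region connectivity
    iu = np.triu_indices(n_regions, k=1)   # keep each region pair once
    edge_strengths = fc[iu]
    edges.append(edge_strengths)
    # Toy ground truth: performance tracks one small "network" of edges.
    performance.append(5.0 * edge_strengths[:10].sum() + rng.normal(0, 0.5))

X, y = np.array(edges), np.array(performance)

# Cross-validated prediction of later performance from rest alone.
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", np.round(scores, 2))
```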
The implications for studying ADHD and many other types of domains, I think, are very large, so hopefully, this will get published soon.

So in closing, I hope I've convinced you that I am no longer embarrassed to say that we can read your mind, if anyone asks me. And I thank you for your attention.

(Applause)