Good afternoon. My name is Marvin Chun, and I am a researcher, and I teach psychology at Yale. I'm very proud of what I do, and I really believe in the value of psychological science. However, to start with a footnote, I will admit that I sometimes feel self-conscious telling people that I'm a psychologist, especially when meeting a stranger on an airplane or at a cocktail party. The reason is that there's a very typical question you get when you tell people you're a psychologist. What do you think that question might be? (Audience) What am I thinking? "What am I thinking?" exactly. "Can you read my mind?" is a question that a lot of psychologists cringe at whenever they hear it, because it reflects a kind of Freudian misconception about what we do for a living. So, you know, I've developed a lot of answers over the years: "If I could read your mind, I'd be a whole lot richer than I am right now." The other answer I came up with is, "Oh, I'm actually a neuroscientist," and that really makes people quiet. (Laughter) But the best answer, of course, is, "Yeah, of course I can read your mind, and you really should be ashamed of yourself." (Laughter)

But all of that is motivation for what I'll share with you today. I've been a psychologist for about 20 years now, and we've reached the point where I can answer that question by saying, "Yes, if I can put you in a scanner, then we can read your mind." And what I'll share with you today is how far along we are in being able to read other people's minds. Of course, this is the stuff of science fiction. Even a movie as recent as "Divergent" has many sequences in which brain activity is read out to see what the heroine is dreaming under the influence of drugs. And these technologies do exist. In the modern day, in the real world, they look more like this. These are just standard MRI machines and scanners, slightly modified so that they allow you to infer and read out brain activity.

These methods have been around for about 25 years, and in the first decade - I'd say the 1990s - a lot of effort was devoted to mapping out what the different parts of the brain do. One of the most successful forms of this is exemplified here. Imagine you're a subject lying down in that scanner; a computer projector lets you look at a series of images while your brain is being scanned. For some periods of time, you see sequences of scene images alternating with sequences of face images, and they go back and forth while your brain is being scanned. The key motivation for the researchers is to compare what's happening in the brain when you're looking at scenes versus when you're looking at faces, to see if anything different happens for these two categories of stimuli. And I'd say even within 10 minutes of scanning your brain, you can get a very reliable pattern of results that looks like this: there is a region of the brain specialized for processing scenes, shown here in the warm colors, and there is a separate region of the brain, shown here in blue colors, that is more selectively active when people are viewing faces. There's a dissociation here. And this allows you to see where different functions reside in the brain.
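To make that contrast concrete, here is a minimal sketch of how a scenes-versus-faces comparison could be computed, assuming the data have already been preprocessed into a timepoints-by-voxels array with a condition label for each timepoint. All names and shapes here are hypothetical, and a real analysis would fit a GLM with a hemodynamic response model (for example with nilearn or SPM) rather than a raw t-test.

```python
import numpy as np
from scipy import stats

def scene_vs_face_contrast(bold, labels):
    """bold: (n_timepoints, n_voxels) array; labels: condition per timepoint."""
    labels = np.asarray(labels)
    scene = bold[labels == "scene"]  # timepoints acquired during scene blocks
    face = bold[labels == "face"]    # timepoints acquired during face blocks
    # Per-voxel t statistic: positive values mark scene-preferring voxels
    # (place-area-like), negative values mark face-preferring voxels (face-area-like).
    t, p = stats.ttest_ind(scene, face, axis=0)
    return t, p

# Toy data: 200 timepoints, 5000 voxels, alternating scene and face blocks.
rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 5000))
labels = (["scene"] * 50 + ["face"] * 50) * 2
t_map, p_map = scene_vs_face_contrast(bold, labels)
```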
And in the case of scene and face perception, these two forms of vision are so important in our everyday lives - of course, scene processing is important for navigation, and face processing is important for social interactions - that our brain has specialized areas devoted to them. And this alone, I think, is pretty amazing. A lot of this work started coming out in the mid-1990s, and it really complemented and reinforced a lot of the patient work that had been around for decades. But the methodologies of fMRI have gone far beyond these very basic mapping studies. In fact, when you look at a study like this, you may say, "Oh, that's really cool, but what does this have to do with everyday life? How is this going to help me understand why I feel this way or don't feel this way, or why I like a certain person or don't like certain other people? Will this tell me what kind of career I should take?" You can't really get too far with this scene and face area mapping alone.

So it wasn't until this study came out around 2000 - and most of these studies, I should emphasize, are from other labs around the field; any studies from Yale will have a "Yale" mark in the corner. This particular study was done at MIT, by Nancy Kanwisher and colleagues. They actually had subjects look at nothing. They had subjects close their eyes in the scanner while they gave them instructions, and the instructions were one of two types of tasks. One was to imagine a bunch of faces that they knew. So you might imagine your friends and family, cycling through them one at a time while your brain is being scanned. That would be the face imagery instruction. The other instruction they gave the subjects was, of course, a scene imagery instruction: imagine that after this talk, you walk home or take your car home, get into your home, navigate throughout your house, change into something more comfortable, go to the kitchen, get something to eat. You have this ability to visualize places and scenes in your mind, and the question is, "What happens when you're visualizing scenes, and what happens when you're visualizing faces? What are the neural correlates of that?"

And the hypothesis was, well, maybe imagining faces activates the face area and maybe imagining scenes activates the scene area, and that's indeed what they found. First, you bring a bunch of subjects into the scanner, show them faces, show them scenes; you map out the face area on top, and you map out the place area in the second row. Then the same subjects, at a different time in the scanner, close their eyes and follow the instructions I just walked you through, and you compare. With face imagery, you can see in the second row that the face area is active, even though nothing's on the screen. And in the fourth row here, the second row of the bottom pair, you see that the place area is active when you're imagining scenes; nothing's on the screen, but simply imagining scenes will activate the place area. In fact, just by looking at the fMRI data alone - the relative activity of the face area and the place area - you can guess over 80% of the time, even in this first study, what subjects were imagining. And so, in my mind, this is the first study that started using brain imaging to actually read out what people were thinking.
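As a toy illustration of that read-out, each imagery block can be classified simply by asking which of the two localized regions is relatively more active. The region names and the z-scoring step below are my assumptions about a reasonable implementation, not the authors' exact analysis.

```python
import numpy as np

def classify_imagery(ffa_activity, ppa_activity):
    """Each argument: (n_blocks,) mean activity in that region during imagery blocks."""
    # Standardize each region across blocks so the two are comparable, then guess
    # "face" when the face area dominates and "scene" when the place area dominates.
    ffa_z = (ffa_activity - ffa_activity.mean()) / ffa_activity.std()
    ppa_z = (ppa_activity - ppa_activity.mean()) / ppa_activity.std()
    return np.where(ffa_z > ppa_z, "face", "scene")

# Toy usage: six imagery blocks alternating face and scene imagery.
ffa = np.array([1.2, 0.3, 1.0, 0.2, 0.9, 0.1])
ppa = np.array([0.2, 1.1, 0.1, 1.3, 0.3, 0.8])
print(classify_imagery(ffa, ppa))  # face, scene, face, scene, face, scene
```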
You can take a naïve person, show them this graph, ask which bar is higher, which line is higher, and they can guess, way better than chance, what people were thinking. Now, you may say, "Okay, that's really cool, but there are a whole lot more interesting things we want to know than whether people are thinking of faces or scenes." And so the rest of my talk will be devoted to the next 15 years of work since this study, work that has really advanced our ability to read out minds.

This is one study that was published in 2010 in the New England Journal of Medicine by another group. They were studying patients who were in a persistent vegetative state, who were locked in and had no ability to express themselves; it was not even clear if they were capable of voluntary thought. And yet, because you can instruct people to imagine one thing versus another - in this case, imagine walking through your house, navigation; in the other case, imagine playing tennis, because that activates a whole motor circuitry that's separate from the spatial navigation circuitry - you can attach each of those imagery instructions to "yes" or "no," and then you can ask factual questions, such as "Is your father's name Alexander, or is your father's name Thomas?" And they were able to demonstrate that this patient, who otherwise had no ability to communicate, was able to respond to these factual questions accurately, and so were control subjects who did not have these problems. The method is so reliable that over 95% of their responses were decodable through brain imaging alone - "yes" versus "no." So you can imagine, with a game of 20 questions, you can really get insight into what people are thinking using these methodologies.

But again, it's limited; we only have two states here - "yes," "no," think tennis or think walking around your house. Fortunately, there are tremendously brilliant scientists all around the world who are constantly refining the methods to decode fMRI activity so that we can understand the mind. And I'd like to use this slide from Nature magazine to motivate the next few studies I'll share with you. One huge limitation of brain imaging, at least in the first 15 years, is that there are not many specialized areas in the brain. The face area is one. The place area is another. These are very, very important visual functions that we carry out every day, but there is no "shoe" area in the brain, and there is no "cat" area in the brain. You don't see separate blobs or patches of activity for these two categories, and indeed, most categories that we encounter and are able to discriminate don't have separate brain regions. And because there are no separate brain regions, the question then is, "How do we study them, how do we discriminate them, how do we get at these finer-grained differences that matter so much if we really want to understand how the mind works and how we can read it out using fMRI?" Fortunately, some neuroscientists collaborated with computer vision people, electrical engineers, computer scientists, and statisticians to use very refined mathematical methods for pulling out and decoding the more subtle patterns of activity that are unique to shoe processing and unique to the processing of cat stimuli.
So, for instance, when you show subjects a whole bunch of shoes and cats, both activate the same part of cortex, the same part of the brain, so if you average activity in that part, you won't see any differences between the shoes and the cats. But as you can see here on the right, the shoes are still associated with fine patterns, or micropatterns, of activity that are different from the patterns of activity elicited by cats. These little boxes over here, these cells, are referred to as voxels; the way fMRI collects data from the brain is that it chops up your brain into tiny little cubes, and each cube is a unit of measurement - it gives you numerical data that you can analyze. If you imagine that the brightness of these squares corresponds to a different, quantitatively measurable level of activity, and that over a spatial region the pattern of these activities may be different for shoes versus cats, then all you need to do is have a computer algorithm learn these subtle differences in patterns, so that you have what we might call a "decoder." You train up a computer decoder to learn the differences between shoe patterns and cat patterns, and then, in the testing phase, after you've trained up your algorithm, you can show a new shoe; it will elicit a certain pattern of fMRI activity, as you can see over here, and the computer will tell you its best guess as to whether this pattern is closer to that for a shoe or closer to that for a cat. This allows you to guess something that's a lot more refined, something that is not possible when you don't have areas like the face area or the place area.

And now people are really taking off with these types of methodologies, which started coming out in 2005, a little less than a decade ago. This is a study published in the New England Journal of Medicine in 2013, literally last year: a very important clinical application, trying to find a way to objectively measure pain, which, as we know, is a very subjective state. Using fMRI, these researchers developed ways of mapping and predicting the amount of heat and the amount of pain people experience as you increase the temperature of a noxious stimulus on their skin, and the fMRI decoder will give you an estimate of how much heat, and maybe even how much pain, someone is experiencing - to the extent that they can predict, based on fMRI activity alone, how much pain a person is experiencing over 93% of the time, or with 93% accuracy.

This is an astonishing version that I'm sure many of you have seen, published in 2011, from Jack Gallant's group at UC Berkeley. They showed people a bunch of videos from YouTube, shown here on the left, and then they trained up a classifier, a decoder, to learn the mapping between the videos and fMRI activity; and then, when they showed people novel videos that elicited certain patterns of fMRI activity, they were able to use that fMRI activity alone to guess what videos people were seeing. They pulled the guesses off of YouTube as well and averaged them, and that's why it's a little grainy, but what you'll see here - and this is all over YouTube - is what people saw and, on the right, what the computer guessed people saw based on fMRI activity alone. So it's pretty astonishing; this literally is reading out the mind.
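The train-then-test decoder logic running through these studies fits in a few lines. The data below are synthetic, and the shapes, labels, and choice of a logistic-regression classifier are illustrative assumptions; published multi-voxel pattern analyses differ in the details but share this structure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 300
# One row per trial, one column per voxel in the region of interest.
patterns = rng.standard_normal((n_trials, n_voxels))
labels = np.array(["shoe", "cat"] * (n_trials // 2))
# Give the toy data a weak, distributed "shoe" signature so decoding is possible.
patterns[labels == "shoe", :50] += 0.5

decoder = LogisticRegression(max_iter=1000)
# Train on some trials, test on held-out trials; above-chance accuracy means the
# micropatterns carry category information even when the mean activity does not.
accuracy = cross_val_score(decoder, patterns, labels, cv=5).mean()
print(f"held-out decoding accuracy: {accuracy:.2f}")
```

The cross-validation step is what licenses the word "guess": the classifier is always scored on trials it never saw during training.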
I mean, you need complicated models, and you have to train them, but still, those video reconstructions are based on fMRI activity alone. I had a student at Yale University, Alan Cowen, who had a very strong mathematics background. He came to my lab and said, "I really love this stuff." He loved it so much that he wanted to do it himself. I did not actually have the ability to do this in the lab, but he developed it together with my postdoc Brice Kuhl, who is now an assistant professor at New York University. They wanted to ask the question, "Can we limit this type of analysis to faces?" As you saw before, you can kind of see what categories people were looking at, but the algorithm was certainly not able to guess which person or what kind of animal you were looking at; you're really getting more categorical guesses in the prior example. And so Alan and Brice thought that if we focused our algorithms on faces alone - faces being so important for everyday life, and given the fact that there are specialized mechanisms in the brain for face processing - then maybe, with something that specialized, we might be able to guess individual faces.

In summary, that's what they did. They showed people a bunch of faces, they trained up these algorithms, and then, using fMRI activity alone, they were able to generate good guesses, above-chance guesses, as to which faces an individual was looking at - again, based on fMRI activity alone - and that's what's shown here on the right. Just to give you a little more detail, for those of you who like this kind of stuff: it relies on a lot of computer vision. This is not voodoo magic. There's a lot of computer vision that allows you to mathematically summarize the features of a whole array of faces. People call them "eigenfaces." It's based on principal component analysis. So you can decompose these faces and describe a whole array of faces with these mathematical components. And basically what Alan and Brice did in my lab was to map those mathematical components to brain activity, and then, once you've trained up a database, a statistical library, for doing so, you can take a novel face, like the ones shown up here on the right, record the fMRI activity to those novel faces, and then make your best guess as to which face people were looking at. Right now, we are about 65 to 70% correct, and we and others are working very hard to improve that performance. And this is just another example: a whole bunch of originals on the left here, and then reconstructions and guesses on the right. This study just got published this past week, so I'm pretty excited that you might even see some media coverage of the work, and again, I really want to credit Alan Cowen and Brice Kuhl for making this possible.

Now, you may wonder, going back to the more categorical reading out of brains - that work came out in 2011, and I've been talking about it a lot in classes - there's a really typical question that you get, and this question I actually like: "Well, if you can read out what people are seeing, can you read out what they're dreaming?" Because that's the natural next step. And this is work done by Horikawa et al. in Japan, published in Science last year.
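Before turning to that dream-decoding work, here is a rough sketch of the eigenface-style reconstruction just described: compress face images with principal component analysis, learn a linear map from voxel patterns to the component scores, and invert the PCA for a held-out face. The array shapes, the ridge regression, and the single train/test split are illustrative assumptions, not the published pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_faces, n_pixels, n_voxels, n_components = 100, 64 * 64, 500, 20
faces = rng.random((n_faces, n_pixels))           # flattened face images
brain = rng.standard_normal((n_faces, n_voxels))  # voxel pattern evoked by each face

# 1) "Eigenfaces": PCA compresses each training face into a small score vector.
pca = PCA(n_components=n_components)
scores = pca.fit_transform(faces[:-1])            # hold out the last face for testing

# 2) Learn the mapping from brain activity to PCA scores on the training faces.
mapping = Ridge(alpha=1.0).fit(brain[:-1], scores)

# 3) For the novel face, predict its scores from brain activity alone and invert
#    the PCA to get a reconstructed image - the best guess of what the subject saw.
predicted_scores = mapping.predict(brain[-1:])
reconstruction = pca.inverse_transform(predicted_scores).reshape(64, 64)
```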
Horikawa and colleagues took a lot of the work that Jack Gallant did and modified it so they could decode what people were dreaming while they were sleeping inside the scanner. And what we have here on the - ignore the words; those are just ways to try to categorize what people were dreaming about. Just focus on the top left. That's the computer algorithm's guess, the decoder's guess, as to what people were dreaming about. You can imagine it's going to be a lot messier than what you saw before for perception, but still, it's pretty amazing that you can now decode dreams while people are sleeping, while they are in rapid eye movement sleep. So what we have here - I'll turn it on, it's a video. I will mention that, a little stereotypically, this is a male subject, and male subjects tend to think about one thing when they're dreaming, (Laughter) but it is rated PG, if you'll excuse me for that. And again, it's not easy - it's remarkable that they can do this at all - and it gets better toward the end, right before they awakened the subject to ask what he was dreaming, but you can start seeing buildings and places, and here comes the dream, the real concrete stuff, right there. Okay. (Laughter) Then they wake up the subject and ask, "What were you dreaming about?" and he will report something that's consistent with the decoder, as a way to assess its validity.

So, we can decode and read out the mind; the field, this kind of neuroscience, can do that. And the question is, "What are we going to use this capability for?" Some of you may have seen the movie "Minority Report." It's a fascinating, amazing movie, and I encourage you to see it if you can. This movie is all about predicting behavior - reading out brain activity to try to predict what's going to happen in the future. And neuroscientists are working on this aspect of cognition as well. As one example, colleagues over at the University of New Mexico, Kent Kiehl's group, scanned a whole bunch of prisoners who had committed felony crimes. They were all released back into society, and the dependent measure was: "How likely was it for someone to be rearrested? What was the likelihood of recidivism?" And they found that, based on brain scans collected while people were doing tasks in a scanner located in the prison, they could reliably predict who was more likely to come back to prison - to be rearrested, to re-commit a crime - versus who was less likely to do so.

In my own laboratory here at Yale, I worked with an undergraduate student, Harrison Korn - this was, again, his own idea - together with Michael Johnson. They wanted to see if they could measure implicit or unconscious prejudice while people were looking at vignettes involving employment discrimination cases. So you can read through that: "Rodney, a 19-year-old African American, applied for a job as a clerk at a teen-apparel store. He had experience as a cashier and a glowing recommendation from a previous employer, but was denied the job because he did not match the company's look." Okay? Et cetera, et cetera. And the question is, "In this hypothetical damage-awards case, how much would you, as a subject, award Rodney?" You can imagine a range of responses, where you would award Rodney a lot or award Rodney a little, and we had many other cases like this. And the question is, "Can we predict which subjects are going to award a lot in damages and which subjects are going to award a little?"
And Harrison found that if you scan people while they're looking at white faces versus black faces, then based on the brain response to those faces, he could predict the amount of the awards given in these hypothetical cases later on. That's shown here by these correlations on the left.

As a final thing I want to share from my lab right now: we're really interested in this question of what it means to be in the zone - what it means to stay focused for a sustained amount of time - which, you might imagine, is critical for almost anything that's important to us, whether that's athletic performance, musical performance, theatrical performance, giving a lecture, or taking an exam. Almost all of these domains require sustained focus and attention. You all know that there are times when you're really in the zone and do well, and other times when, even if you're fully prepared, you don't do as well because you're not in the zone or you're kind of distracted. So my lab is interested in how we can characterize this and how we can predict it. In work that's not even published yet, Emily Finn, a graduate student in the neuroscience program, and Monica Rosenberg, a graduate student in psychology - both working with me and Todd Constable at the Magnetic Resonance Research Center at Yale - find that it's not easy to predict what it means to be in the zone and what it means to be out of the zone. But if you start using measures that also look at how different brain areas are connected with each other - something that we call "functional connectivity" - you can start getting at what it means to be in the zone versus out of the zone. What we have here are two types of networks that Emily and Monica have identified. There is a network of brain regions, and the connectivity between them, that predicts good performance, and there is another, complementary network, with different brain areas and different types of connectivity, that seems to be correlated with bad performance. So if you have activity in the blue network over here, people tend to do worse; if you have activity in the red network, people tend to do better. These networks are so robust that you can measure them even when subjects are doing nothing. We put them in the scanner before anything starts and have them close their eyes and rest for 10 minutes - we call that a "resting-state scan" - and based on their resting-state activity alone, we were able to predict how well they were going to do over the next hour. Again, the red network corresponds with better performance; the blue network corresponds with worse performance. And across a variety of tasks, you get very high predictive power in these studies. The implications for studying ADHD and many other domains, I think, are very large, so hopefully this will get published soon.

So in closing, I hope I've convinced you that I am no longer embarrassed to say that we can read your mind, if anyone asks me. And I thank you for your attention. (Applause)
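For readers curious how the "functional connectivity" measure described above might be computed, here is a schematic sketch: correlate each pair of regions' resting-state time courses, then summarize a hypothesized network by the strength of its edges. The specific region pairs and the simple difference score are stand-ins, not the actual analysis by Finn, Rosenberg, and colleagues, which was unpublished at the time of this talk.

```python
import numpy as np

def connectivity_matrix(roi_timeseries):
    """roi_timeseries: (n_timepoints, n_regions) resting-state data for one subject."""
    return np.corrcoef(roi_timeseries, rowvar=False)  # (n_regions, n_regions)

def network_strength(conn, edges):
    """Sum connectivity over the (region_i, region_j) pairs defining a network."""
    return sum(conn[i, j] for i, j in edges)

# Toy example: 10 regions, with two hypothetical three-edge networks.
rng = np.random.default_rng(3)
rest = rng.standard_normal((300, 10))    # 10 minutes of resting-state data
conn = connectivity_matrix(rest)
good_network = [(0, 1), (2, 5), (3, 7)]  # edges linked to better focus
bad_network = [(4, 6), (1, 8), (5, 9)]   # edges linked to worse focus
predicted_focus = network_strength(conn, good_network) - network_strength(conn, bad_network)
```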