WEBVTT 00:00:00.835 --> 00:00:03.040 Mark Twain summed up what I take to be one of 00:00:03.040 --> 00:00:06.013 the fundamental problems of cognitive science 00:00:06.013 --> 00:00:08.033 with a single witticism. 00:00:08.033 --> 00:00:11.492 He said, there's something fascinating about science. 00:00:11.492 --> 00:00:14.720 One gets such wholesale returns of conjecture 00:00:14.720 --> 00:00:17.924 out of such a trifling investment in fact. 00:00:17.924 --> 00:00:20.199 (Laughter) NOTE Paragraph 00:00:20.199 --> 00:00:22.803 Twain meant it as a joke, of course, but he's right: 00:00:22.803 --> 00:00:25.679 there's something fascinating about science. 00:00:25.679 --> 00:00:30.091 From a few bones, we infer the existence of dinosaurs. 00:00:30.091 --> 00:00:34.781 From spectral lines, the composition of nebulae. 00:00:34.781 --> 00:00:37.289 From fruit flies, 00:00:37.289 --> 00:00:41.352 the mechanisms of heredity, 00:00:41.352 --> 00:00:45.601 and from reconstructed images of blood flowing through the brain, 00:00:45.601 --> 00:00:50.189 or in my case, from the behavior of very young children, 00:00:50.189 --> 00:00:53.138 we try to say something about the fundamental mechanisms 00:00:53.138 --> 00:00:55.716 of human cognition. 00:00:55.716 --> 00:01:00.475 In particular, in my lab in the Department of Brain and Cognitive Sciences at MIT, 00:01:00.475 --> 00:01:02.379 I have spent the past decade 00:01:02.379 --> 00:01:04.399 trying to understand the mystery of how children 00:01:04.399 --> 00:01:08.416 learn so much from so little so quickly. 
00:01:08.416 --> 00:01:11.644 Because it turns out that the fascinating thing about science 00:01:11.644 --> 00:01:15.173 is also a fascinating thing about children, 00:01:15.173 --> 00:01:17.634 which, to put a gentler spin on Mark Twain, 00:01:17.634 --> 00:01:19.724 is precisely their ability 00:01:19.724 --> 00:01:22.417 to draw rich, abstract inferences 00:01:22.417 --> 00:01:24.554 rapidly and accurately 00:01:24.554 --> 00:01:28.245 from sparse, noisy data. 00:01:28.245 --> 00:01:30.753 I'm going to give you just two examples today. 00:01:30.753 --> 00:01:33.052 One is about a problem of generalization, 00:01:33.052 --> 00:01:36.047 and the other is about a problem of causal reasoning. 00:01:36.047 --> 00:01:38.415 And although I'm going to talk about work in my lab, 00:01:38.415 --> 00:01:41.875 this work is inspired by and indebted to a field. 00:01:41.875 --> 00:01:47.308 I'm grateful to mentors, colleagues, and collaborators around the world. NOTE Paragraph 00:01:47.308 --> 00:01:50.652 Let me start with the problem of generalization. 00:01:50.652 --> 00:01:54.785 Generalizing from small samples of data is the bread and butter of science. 00:01:54.785 --> 00:01:57.339 We poll a tiny fraction of the electorate 00:01:57.339 --> 00:02:00.055 and we predict the outcome of national elections. 00:02:00.055 --> 00:02:04.165 We see how a handful of patients respond to treatment in a clinical trial, 00:02:04.165 --> 00:02:07.230 and we bring drugs to a national market. 00:02:07.230 --> 00:02:11.595 But this only works if our sample is randomly drawn from the population. 00:02:11.595 --> 00:02:14.033 If our sample is cherry-picked in some way 00:02:14.033 --> 00:02:16.402 -- say, we poll only urban voters, 00:02:16.402 --> 00:02:20.790 or say, in our clinical trials for treatments for heart disease, 00:02:20.790 --> 00:02:22.671 we include only men -- 00:02:22.671 --> 00:02:26.479 the results may not generalize to the broader population. 
NOTE Paragraph 00:02:26.479 --> 00:02:30.031 So scientists care whether evidence is randomly sampled or not, 00:02:30.031 --> 00:02:32.585 but what does that have to do with babies? 00:02:32.585 --> 00:02:37.206 Well, babies have to generalize from small samples of data all the time. 00:02:37.206 --> 00:02:40.364 They see a few rubber ducks and learn that they float, 00:02:40.364 --> 00:02:43.939 or a few balls and learn that they bounce. 00:02:43.939 --> 00:02:47.190 And they develop expectations about ducks and balls 00:02:47.190 --> 00:02:49.396 that they're going to extend to rubber ducks and balls 00:02:49.396 --> 00:02:51.485 for the rest of their lives. 00:02:51.485 --> 00:02:55.224 And the kinds of generalizations babies have to make about ducks and balls 00:02:55.224 --> 00:02:57.313 they have to make about almost everything: 00:02:57.313 --> 00:03:02.097 shoes and ships and sealing wax and cabbages and kings. NOTE Paragraph 00:03:02.097 --> 00:03:05.161 So do babies care whether the tiny bit of evidence they see 00:03:05.161 --> 00:03:08.853 is plausibly representative of a larger population? 00:03:08.853 --> 00:03:11.663 Let's find out. 00:03:11.663 --> 00:03:13.386 I'm going to show you two movies, 00:03:13.386 --> 00:03:15.848 one from each of two conditions of an experiment, 00:03:15.848 --> 00:03:18.286 and because you're going to see just two movies, 00:03:18.286 --> 00:03:20.422 you're going to see just two babies, 00:03:20.422 --> 00:03:24.369 and any two babies differ from each other in innumerable ways. 00:03:24.369 --> 00:03:27.550 But these babies, of course, here stand in for groups of babies, 00:03:27.550 --> 00:03:29.315 and the differences you're going to see 00:03:29.315 --> 00:03:34.051 represent average group differences in babies' behavior across conditions. 
00:03:34.051 --> 00:03:37.743 In each movie, you're going to see a baby doing maybe 00:03:37.743 --> 00:03:41.203 just exactly what you might expect a baby to do, 00:03:41.203 --> 00:03:45.220 and we can hardly make babies more magical than they already are. 00:03:45.220 --> 00:03:48.099 But to my mind the magical thing, 00:03:48.099 --> 00:03:50.189 and what I want you to pay attention to, 00:03:50.189 --> 00:03:53.300 is the contrast between these two conditions, 00:03:53.300 --> 00:03:56.829 because the only thing that differs between these two movies 00:03:56.829 --> 00:04:00.335 is the statistical evidence the babies are going to observe. 00:04:00.335 --> 00:04:04.608 We're going to show babies a box of blue and yellow balls, 00:04:04.608 --> 00:04:09.228 and my then-graduate student, now colleague at Stanford, Hyowon Gweon, 00:04:09.228 --> 00:04:12.595 is going to pull three blue balls in a row out of this box, 00:04:12.595 --> 00:04:15.428 and when she pulls those balls out, she's going to squeeze them, 00:04:15.428 --> 00:04:17.541 and the balls are going to squeak. 00:04:17.541 --> 00:04:20.304 And if you're a baby, that's like a TED Talk. 00:04:20.304 --> 00:04:22.208 It doesn't get better than that. 00:04:22.208 --> 00:04:24.769 (Laughter) 00:04:26.968 --> 00:04:30.627 But the important point is it's really easy to pull three blue balls in a row 00:04:30.627 --> 00:04:33.112 out of a box of mostly blue balls. 00:04:33.112 --> 00:04:34.992 You could do that with your eyes closed. 00:04:34.992 --> 00:04:37.988 It's plausibly a random sample from this population. 00:04:37.988 --> 00:04:40.077 And if you can reach into a box at random 00:04:40.077 --> 00:04:42.005 and pull out things that squeak, 00:04:42.005 --> 00:04:44.559 then maybe everything in the box squeaks. 00:04:44.559 --> 00:04:48.459 So maybe babies should expect those yellow balls to squeak as well. 
00:04:48.459 --> 00:04:50.758 Now, those yellow balls have funny sticks on the end, 00:04:50.758 --> 00:04:53.475 so babies could do other things with them if they wanted to. 00:04:53.475 --> 00:04:55.216 They could pound them or whack them. 00:04:55.216 --> 00:04:58.002 But let's see what the baby does. NOTE Paragraph 00:05:00.548 --> 00:05:03.891 (Video) Hyowon Gweon: See this? (Ball squeaks) 00:05:03.891 --> 00:05:07.946 Did you see that? (Ball squeaks) 00:05:07.946 --> 00:05:11.102 Cool. 00:05:12.706 --> 00:05:14.656 See this one? 00:05:14.656 --> 00:05:16.537 (Ball squeaks) 00:05:16.537 --> 00:05:19.019 Wow. NOTE Paragraph 00:05:21.854 --> 00:05:23.967 Laura Schulz: Told you. (Laughter) NOTE Paragraph 00:05:23.967 --> 00:05:27.998 (Video) HG: See this one? (Ball squeaks) 00:05:27.998 --> 00:05:32.617 Hey Clara, this one's for you. You can go ahead and play. 00:05:39.854 --> 00:05:44.219 (Laughter) NOTE Paragraph 00:05:44.219 --> 00:05:47.214 LS: I don't even have to talk, right? 00:05:47.214 --> 00:05:49.583 All right, it's nice that babies will generalize properties 00:05:49.583 --> 00:05:51.371 of blue balls to yellow balls, 00:05:51.371 --> 00:05:54.737 and it's impressive that babies can learn from imitating us, 00:05:54.737 --> 00:05:58.406 but we've known those things about babies for a very long time. 00:05:58.406 --> 00:06:00.217 The really interesting question 00:06:00.217 --> 00:06:03.259 is what happens when we show babies exactly the same thing, 00:06:03.259 --> 00:06:06.370 and we can ensure it's exactly the same because we have a secret compartment 00:06:06.370 --> 00:06:08.390 and we actually pull the balls from there, 00:06:08.390 --> 00:06:12.268 but this time, all we change is the apparent population 00:06:12.268 --> 00:06:15.077 from which that evidence was drawn. 
00:06:15.077 --> 00:06:18.723 This time, we're going to show babies three blue balls 00:06:18.723 --> 00:06:21.857 pulled out of a box of mostly yellow balls, 00:06:21.857 --> 00:06:23.599 and guess what? 00:06:23.599 --> 00:06:26.269 You cannot randomly draw three blue balls in a row 00:06:26.269 --> 00:06:28.753 out of a box of mostly yellow balls. 00:06:28.753 --> 00:06:32.050 That is not plausibly randomly sampled evidence. 00:06:32.050 --> 00:06:37.623 That evidence suggests that maybe Hyowon was deliberately sampling the blue balls. 00:06:37.623 --> 00:06:40.456 Maybe there's something special about the blue balls. 00:06:40.456 --> 00:06:43.822 Maybe only the blue balls squeak. 00:06:43.822 --> 00:06:45.717 Let's see what the baby does. NOTE Paragraph 00:06:45.717 --> 00:06:49.711 (Video) HG: See this? (Ball squeaks) 00:06:49.711 --> 00:06:53.496 See this toy? (Ball squeaks) 00:06:53.496 --> 00:06:58.976 Oh, that was cool. See? (Ball squeaks) 00:06:58.976 --> 00:07:03.037 Now this one's for you to play. You can go ahead and play. NOTE Paragraph 00:07:11.264 --> 00:07:13.911 (Baby cries) NOTE Paragraph 00:07:13.911 --> 00:07:17.649 LS: So you just saw two 15-month-old babies 00:07:17.649 --> 00:07:19.591 do entirely different things 00:07:19.591 --> 00:07:23.190 based only on the probability of the sample they observed. 00:07:23.190 --> 00:07:25.511 Let me show you the experimental results. 00:07:25.511 --> 00:07:28.275 On the vertical axis, you'll see the percentage of babies 00:07:28.275 --> 00:07:30.805 who squeezed the ball in each condition, 00:07:30.805 --> 00:07:34.520 and as you'll see, babies are much more likely to generalize the evidence 00:07:34.520 --> 00:07:37.655 when it's plausibly representative of the population 00:07:37.655 --> 00:07:41.393 than when the evidence is clearly cherry-picked. 
00:07:41.393 --> 00:07:43.808 And this leads to a fun prediction: 00:07:43.808 --> 00:07:47.546 suppose you pulled just one blue ball 00:07:47.546 --> 00:07:49.334 out of the mostly yellow box. 00:07:49.334 --> 00:07:52.445 Now, you just can't pull three blue balls in a row at random out of a yellow box, 00:07:52.445 --> 00:07:55.092 but you could randomly sample just one blue ball. 00:07:55.092 --> 00:07:57.089 That's not an improbable sample. 00:07:57.089 --> 00:07:59.184 And if you could reach into a box at random 00:07:59.184 --> 00:08:01.181 and pull out something that squeaks, 00:08:01.181 --> 00:08:03.875 maybe everything in the box squeaks. 00:08:03.875 --> 00:08:08.008 So even though babies are going to see much less evidence for squeaking, 00:08:08.008 --> 00:08:10.562 and have many fewer actions to imitate 00:08:10.562 --> 00:08:13.905 in this one-ball condition than in the condition you just saw, 00:08:13.905 --> 00:08:17.797 we predicted that babies themselves would squeeze more, 00:08:17.797 --> 00:08:20.691 and that's exactly what we found. 00:08:20.691 --> 00:08:25.102 So 15-month-old babies, in this respect, like scientists, 00:08:25.102 --> 00:08:28.190 care whether evidence is randomly sampled or not, 00:08:28.190 --> 00:08:31.697 and they use this to develop expectations about the world: 00:08:31.697 --> 00:08:33.879 what squeaks and what doesn't, 00:08:33.879 --> 00:08:38.384 what to explore and what to ignore. NOTE Paragraph 00:08:38.384 --> 00:08:40.450 Let me show you another example now, 00:08:40.450 --> 00:08:43.074 this time about a problem of causal reasoning. 00:08:43.074 --> 00:08:45.619 And it starts with a problem of confounded evidence 00:08:45.619 --> 00:08:47.291 that all of us have, 00:08:47.291 --> 00:08:49.311 which is that we are part of the world. 00:08:49.311 --> 00:08:52.747 And this may not seem like a problem to you, but like most problems, 00:08:52.747 --> 00:08:55.464 it's only a problem when things go wrong. 
00:08:55.464 --> 00:08:57.275 Take this baby, for instance. 00:08:57.275 --> 00:08:59.008 Things are going wrong for him. 00:08:59.008 --> 00:09:01.251 He would like to make this toy go, and he can't. 00:09:01.251 --> 00:09:03.780 I'll show you a few-second clip. 00:09:09.053 --> 00:09:11.026 And there are two possibilities, broadly: 00:09:11.026 --> 00:09:13.394 maybe he's doing something wrong, 00:09:13.394 --> 00:09:18.340 or maybe there's something wrong with the toy. 00:09:18.340 --> 00:09:20.221 So in this next experiment, 00:09:20.221 --> 00:09:23.518 we're going to give babies just a tiny bit of statistical data 00:09:23.518 --> 00:09:26.049 supporting one hypothesis over the other, 00:09:26.049 --> 00:09:29.555 and we're going to see if babies can use that to make different decisions 00:09:29.555 --> 00:09:31.389 about what to do. NOTE Paragraph 00:09:31.389 --> 00:09:34.361 Here's the setup. 00:09:34.361 --> 00:09:37.101 Hyowon is going to try to make the toy go and succeed. 00:09:37.101 --> 00:09:40.421 I am then going to try twice and fail both times, 00:09:40.421 --> 00:09:43.533 and then Hyowon is going to try again and succeed, 00:09:43.533 --> 00:09:46.705 and this roughly sums up my relationship to my graduate students 00:09:46.705 --> 00:09:50.002 in technology across the board. 00:09:50.002 --> 00:09:53.322 But the important point here is it provides a little bit of evidence 00:09:53.322 --> 00:09:57.061 that the problem isn't with the toy, it's with the person. 00:09:57.061 --> 00:09:59.034 Some people can make this toy go, 00:09:59.034 --> 00:10:00.799 and some can't. 00:10:00.799 --> 00:10:04.212 Now, when the baby gets the toy, he's going to have a choice. 
00:10:04.212 --> 00:10:06.093 His mom is right there, 00:10:06.093 --> 00:10:09.715 so he can go ahead and hand off the toy and change the person, 00:10:09.715 --> 00:10:12.873 but there's also going to be another toy at the end of that cloth, 00:10:12.873 --> 00:10:16.425 and he can pull the cloth towards him and change the toy. 00:10:16.425 --> 00:10:18.515 So let's see what the baby does. NOTE Paragraph 00:10:18.515 --> 00:10:21.278 (Video) HG: Two, three. Go! 00:10:21.278 --> 00:10:25.829 (Video) LS: One, two, three, go! 00:10:25.829 --> 00:10:33.211 Arthur, I'm going to try again. One, two, three, go! 00:10:33.677 --> 00:10:36.277 (Video) HG: Arthur, let me try again, okay? 00:10:36.277 --> 00:10:40.827 One, two, three, go! 00:10:40.843 --> 00:10:43.466 Look at that. Remember these toys? 00:10:43.466 --> 00:10:46.810 See these toys? Yeah, I'm going to put this one over here, 00:10:46.810 --> 00:10:48.412 and I'm going to give this one to you. 00:10:48.412 --> 00:10:51.127 You can go ahead and play. 00:11:11.213 --> 00:11:15.950 LS: Okay, Laura, but of course, babies love their mommies. 00:11:15.950 --> 00:11:18.132 Of course babies give toys to their mommies 00:11:18.132 --> 00:11:20.292 when they can't make them work. 00:11:20.292 --> 00:11:22.405 So again, the really important question is what happens when we change 00:11:22.405 --> 00:11:26.909 the statistical data ever so slightly. 00:11:26.909 --> 00:11:30.996 This time, babies are going to see the toy work and fail in exactly the same order, 00:11:30.996 --> 00:11:33.411 but we're changing the distribution of evidence. 00:11:33.411 --> 00:11:37.822 This time, Hyowon is going to succeed once and fail once, and so am I. 00:11:37.822 --> 00:11:41.189 And this suggests it doesn't matter who tries this toy, 00:11:41.189 --> 00:11:43.395 the toy is broken. 00:11:43.395 --> 00:11:45.345 It doesn't work all the time. 00:11:45.345 --> 00:11:47.008 Again, the baby's going to have a choice. 
00:11:47.008 --> 00:11:48.796 Her mom is right next to her, 00:11:48.796 --> 00:11:50.677 so she can change the person, 00:11:50.677 --> 00:11:53.091 and there's going to be another toy at the end of the cloth. 00:11:53.091 --> 00:11:55.288 Let's watch what she does. NOTE Paragraph 00:11:55.288 --> 00:11:59.636 (Video) HG: Two, three, go! 00:11:59.636 --> 00:12:05.046 Let me try one more time. One, two, three, go! 00:12:05.046 --> 00:12:07.157 Hmm. NOTE Paragraph 00:12:08.087 --> 00:12:10.642 (Video) LS: Let me try, Clara. 00:12:10.642 --> 00:12:14.587 One, two, three, go! 00:12:15.935 --> 00:12:17.770 Hmm, let me try again. 00:12:17.770 --> 00:12:22.087 One, two, three, go! 00:12:23.319 --> 00:12:25.362 (Video) HG: I'm going to put this one over here, 00:12:25.362 --> 00:12:27.243 and I'm going to give this one to you. 00:12:27.243 --> 00:12:30.840 You can go ahead and play. 00:12:46.376 --> 00:12:51.273 (Applause) NOTE Paragraph 00:12:52.993 --> 00:12:55.385 LS: Let me show you the experimental results. 00:12:55.385 --> 00:12:57.860 On the vertical axis, you'll see the distribution 00:12:57.860 --> 00:13:00.437 of children's choices in each condition, 00:13:00.437 --> 00:13:04.988 and you'll see that the distribution of the choices children make 00:13:04.988 --> 00:13:07.775 depends on the evidence they observe. 00:13:07.775 --> 00:13:09.632 So in the second year of life, 00:13:09.632 --> 00:13:12.209 babies can use a tiny bit of statistical data 00:13:12.209 --> 00:13:15.576 to decide between two fundamentally different strategies 00:13:15.576 --> 00:13:17.457 for acting on the world: 00:13:17.457 --> 00:13:21.070 asking for help and exploring. 
00:13:21.070 --> 00:13:25.134 I've just shown you two laboratory experiments 00:13:25.134 --> 00:13:28.825 out of literally hundreds in the field that make similar points, 00:13:28.825 --> 00:13:31.217 because the really critical point 00:13:31.217 --> 00:13:36.325 is that children's ability to make rich inferences from sparse data 00:13:36.325 --> 00:13:41.666 underlies all the species-specific cultural learning that we do. 00:13:41.666 --> 00:13:46.263 Children learn about new tools from just a few examples. 00:13:46.263 --> 00:13:48.840 They learn new causal relationships 00:13:48.840 --> 00:13:51.928 from just a few examples. 00:13:51.928 --> 00:13:57.129 They even learn new words, in this case in American Sign Language. NOTE Paragraph 00:13:57.129 --> 00:14:00.078 I want to close with just two points. 00:14:00.078 --> 00:14:03.738 If you've been following my world, the field of brain and cognitive sciences, 00:14:03.738 --> 00:14:05.665 for the past few years, 00:14:05.665 --> 00:14:08.057 three big ideas will have come to your attention. 00:14:08.057 --> 00:14:11.516 The first is that this is the era of the brain. 00:14:11.516 --> 00:14:15.185 And indeed, there have been staggering discoveries in neuroscience: 00:14:15.185 --> 00:14:18.621 localizing functionally specialized regions of cortex, 00:14:18.621 --> 00:14:21.222 turning mouse brains transparent, 00:14:21.222 --> 00:14:24.998 activating neurons with light. 00:14:24.998 --> 00:14:26.994 A second big idea 00:14:26.994 --> 00:14:31.418 is that this is the era of big data and machine learning, 00:14:31.418 --> 00:14:34.239 and machine learning promises to revolutionize our understanding 00:14:34.239 --> 00:14:38.906 of everything from social networks to epidemiology. 00:14:38.906 --> 00:14:41.599 And maybe, as it tackles problems of scene understanding 00:14:41.599 --> 00:14:43.712 and natural language processing, 00:14:43.712 --> 00:14:46.916 to tell us something about human cognition. 
00:14:46.916 --> 00:14:49.703 And the final big idea you'll have heard 00:14:49.703 --> 00:14:52.860 is that maybe it's a good thing we're going to know so much about brains 00:14:52.860 --> 00:14:54.997 and have so much access to big data, 00:14:54.997 --> 00:14:57.504 because left to our own devices, 00:14:57.504 --> 00:15:01.335 humans are fallible, we take shortcuts, 00:15:01.335 --> 00:15:04.772 we err, we make mistakes, 00:15:04.772 --> 00:15:08.696 we're biased, and in innumerable ways, 00:15:08.696 --> 00:15:11.425 we get the world wrong. 00:15:12.843 --> 00:15:15.792 I think these are all important stories, 00:15:15.792 --> 00:15:19.577 and they have a lot to tell us about what it means to be human, 00:15:19.577 --> 00:15:23.106 but I want to note that today I told you a very different story. 00:15:23.106 --> 00:15:27.773 It's a story about minds and not brains, 00:15:27.773 --> 00:15:30.629 and in particular, it's a story about the kinds of computations 00:15:30.629 --> 00:15:33.369 that uniquely human minds can perform, 00:15:33.369 --> 00:15:37.313 which involve rich, structured knowledge and the ability to learn 00:15:37.313 --> 00:15:39.940 from small amounts of data, 00:15:39.940 --> 00:15:43.771 the evidence of just a few examples. 00:15:43.771 --> 00:15:48.600 And fundamentally, it's a story about how starting as very small children 00:15:48.600 --> 00:15:52.780 and continuing out all the way to the greatest accomplishments 00:15:52.780 --> 00:15:56.123 of our culture, 00:15:56.123 --> 00:15:58.620 we get the world right. NOTE Paragraph 00:16:00.433 --> 00:16:04.110 Folks, human minds do not only learn 00:16:04.110 --> 00:16:06.145 from small amounts of data. 00:16:06.145 --> 00:16:08.746 Human minds think of altogether new ideas. 
00:16:08.746 --> 00:16:11.787 Human minds generate research and discovery, 00:16:11.787 --> 00:16:16.710 and human minds generate art and literature and poetry and theater, 00:16:16.710 --> 00:16:21.330 and human minds take care of other humans: 00:16:21.330 --> 00:16:24.517 our old, our young, our sick. 00:16:24.517 --> 00:16:27.884 We even heal them. 00:16:27.884 --> 00:16:30.577 In the years to come, we're going to see technological innovation 00:16:30.577 --> 00:16:34.594 beyond anything I can even envision, 00:16:34.594 --> 00:16:36.614 but we are very unlikely 00:16:36.614 --> 00:16:40.283 to see anything even approximating the computational power 00:16:40.283 --> 00:16:42.256 of a human child 00:16:42.256 --> 00:16:46.621 in my lifetime or in yours. 00:16:46.621 --> 00:16:51.288 If we invest in these most powerful learners and their development, 00:16:51.288 --> 00:16:54.585 in babies and children 00:16:54.585 --> 00:16:56.411 and mothers and fathers 00:16:56.411 --> 00:16:59.011 and caregivers and teachers, 00:16:59.011 --> 00:17:03.028 the way we invest in our other most powerful and elegant forms 00:17:03.028 --> 00:17:06.488 of technology, engineering, and design, 00:17:06.488 --> 00:17:09.437 we will not just be dreaming of a better future, 00:17:09.437 --> 00:17:11.564 we will be planning for one. NOTE Paragraph 00:17:11.564 --> 00:17:13.909 Thank you very much. NOTE Paragraph 00:17:13.909 --> 00:17:17.081 (Applause) NOTE Paragraph 00:17:17.081 --> 00:17:22.236 Chris Anderson: Laura, thank you. I do actually have a question for you. 00:17:22.236 --> 00:17:24.595 First of all, the research is insane. 00:17:24.595 --> 00:17:28.032 I mean, who would design an experiment like that? (Laughter) 00:17:28.032 --> 00:17:31.027 I've seen that a couple of times, 00:17:31.027 --> 00:17:34.162 and I still don't honestly believe that that can truly be happening, 00:17:34.162 --> 00:17:37.087 but other people have done similar experiments. It checks out. 
00:17:37.087 --> 00:17:39.363 They really are that genius. NOTE Paragraph 00:17:39.363 --> 00:17:42.010 LS: You know, they look really impressive in our experiments, 00:17:42.010 --> 00:17:44.192 but think about what they look like in real life, right? 00:17:44.192 --> 00:17:45.762 It starts out as a baby. 00:17:45.762 --> 00:17:47.829 Eighteen months later, it's talking to you, 00:17:47.829 --> 00:17:50.058 and babies' first words aren't just things like balls and ducks, 00:17:50.058 --> 00:17:52.681 they're things like "all gone," which refers to disappearance, 00:17:52.681 --> 00:17:55.444 or "uh oh," which refers to unintentional actions. 00:17:55.444 --> 00:17:57.116 It has to be that powerful. 00:17:57.116 --> 00:17:59.531 It has to be much more powerful than anything I showed you. 00:17:59.531 --> 00:18:01.435 They're figuring out the entire world. 00:18:01.435 --> 00:18:05.429 A four-year-old can talk to you about almost anything. 00:18:05.429 --> 00:18:07.008 (Applause) NOTE Paragraph 00:18:07.008 --> 00:18:10.444 CA: And if I understand you right, the other key point you're making is, 00:18:10.444 --> 00:18:12.928 we've been through these years where there's all this talk 00:18:12.928 --> 00:18:15.088 of how quirky and buggy our minds are, 00:18:15.088 --> 00:18:17.897 that behavioral economics and the whole theories behind that, 00:18:17.897 --> 00:18:19.360 that we're not rational agents. 00:18:19.360 --> 00:18:21.496 You're really saying that the biggest story 00:18:21.496 --> 00:18:23.772 is how extraordinary, 00:18:23.772 --> 00:18:29.019 and there really is genius there that is underappreciated. NOTE Paragraph 00:18:29.019 --> 00:18:30.830 LS: One of my favorite quotes in psychology 00:18:30.830 --> 00:18:33.004 comes from the social psychologist Solomon Asch, 00:18:33.004 --> 00:18:35.767 and he said the fundamental task of psychology is to remove 00:18:35.767 --> 00:18:38.553 the veil of self-evidence from things. 
00:18:38.553 --> 00:18:43.104 There are a million orders of magnitude more decisions you make every day 00:18:43.104 --> 00:18:44.451 that get the world right. 00:18:44.451 --> 00:18:46.773 You know about objects and their properties. 00:18:46.773 --> 00:18:49.132 You know them when they're occluded. You know them in the dark. 00:18:49.132 --> 00:18:50.990 You can walk through rooms. 00:18:50.990 --> 00:18:53.312 You can figure out what other people are thinking. You can talk to them. 00:18:53.312 --> 00:18:55.262 You can navigate space. You know about numbers. 00:18:55.262 --> 00:18:58.304 You know causal relationships. You know about moral reasoning. 00:18:58.304 --> 00:19:01.020 You do this effortlessly, so we don't see it, 00:19:01.020 --> 00:19:03.722 but that is how we get the world right, and it's a remarkable 00:19:03.722 --> 00:19:06.013 and very difficult-to-understand accomplishment. NOTE Paragraph 00:19:06.013 --> 00:19:08.288 CA: I suspect there are people in the audience who have 00:19:08.288 --> 00:19:10.726 this view of accelerating technological power 00:19:10.726 --> 00:19:13.364 who might dispute your statement that never in our lifetimes 00:19:13.364 --> 00:19:16.592 will a computer do what a three-year-old child can do, 00:19:16.592 --> 00:19:20.980 but what's clear is that in any scenario, 00:19:20.980 --> 00:19:24.750 our machines have so much to learn from our toddlers. 00:19:26.550 --> 00:19:29.446 LS: I think so. You'll have some machine learning folks up here. 00:19:29.446 --> 00:19:33.649 I mean, you should never bet against babies or chimpanzees 00:19:33.649 --> 00:19:37.294 or technology as a matter of practice, 00:19:37.294 --> 00:19:41.822 but it's not just a difference in quantity, 00:19:41.822 --> 00:19:43.586 it's a difference in kind. Right? 
00:19:43.586 --> 00:19:45.746 We have incredibly powerful computers, 00:19:45.746 --> 00:19:48.137 and they do do amazingly sophisticated things, 00:19:48.137 --> 00:19:51.341 often with very big amounts of data. 00:19:51.341 --> 00:19:53.408 Human minds do, I think, something quite different, 00:19:53.408 --> 00:19:57.843 and I think it's the structured, hierarchical nature of human knowledge 00:19:57.843 --> 00:20:00.165 that remains a real challenge. NOTE Paragraph 00:20:00.165 --> 00:20:02.556 CA: Laura Schulz, wonderful food for thought. Thank you so much. NOTE Paragraph 00:20:02.556 --> 00:20:05.858 LS: Thank you. (Applause)