0:00:00.835,0:00:02.990 Mark Twain summed up[br]what I take to be 0:00:02.990,0:00:06.110 one of the fundamental problems[br]of cognitive science 0:00:06.110,0:00:07.820 with a single witticism. 0:00:08.410,0:00:11.492 He said, "There's something[br]fascinating about science. 0:00:11.492,0:00:14.720 One gets such wholesale[br]returns of conjecture 0:00:14.720,0:00:17.924 out of such a trifling[br]investment in fact." 0:00:17.924,0:00:19.509 (Laughter) 0:00:20.199,0:00:22.803 Twain meant it as a joke,[br]of course, but he's right: 0:00:22.803,0:00:25.679 There's something[br]fascinating about science. 0:00:25.679,0:00:29.940 From a few bones, we infer[br]the existence of dinosaurs. 0:00:30.910,0:00:34.781 From spectral lines,[br]the composition of nebulae. 0:00:35.471,0:00:38.409 From fruit flies, 0:00:38.409,0:00:41.352 the mechanisms of heredity, 0:00:41.352,0:00:45.601 and from reconstructed images[br]of blood flowing through the brain, 0:00:45.601,0:00:49.949 or in my case, from the behavior[br]of very young children, 0:00:50.519,0:00:53.138 we try to say something about[br]the fundamental mechanisms 0:00:53.138,0:00:55.716 of human cognition. 0:00:55.716,0:01:00.475 In particular, in my lab in the Department[br]of Brain and Cognitive Sciences at MIT, 0:01:00.475,0:01:04.129 I have spent the past decade[br]trying to understand the mystery 0:01:04.129,0:01:08.106 of how children learn so much[br]from so little so quickly. 0:01:08.666,0:01:11.644 Because it turns out that[br]the fascinating thing about science 0:01:11.644,0:01:15.173 is also a fascinating[br]thing about children, 0:01:15.173,0:01:17.754 which, to put a gentler spin on Mark Twain, 0:01:17.754,0:01:22.404 is precisely their ability[br]to draw rich, abstract inferences 0:01:22.404,0:01:27.065 rapidly and accurately[br]from sparse, noisy data. 0:01:28.355,0:01:30.753 I'm going to give you[br]just two examples today. 0:01:30.753,0:01:33.040 One is about a problem of generalization, 0:01:33.040,0:01:35.890 and the other is about a problem[br]of causal reasoning. 0:01:35.890,0:01:38.415 And although I'm going to talk[br]about work in my lab, 0:01:38.415,0:01:41.875 this work is inspired by[br]and indebted to a field. 0:01:41.875,0:01:46.158 I'm grateful to mentors, colleagues,[br]and collaborators around the world. 0:01:47.308,0:01:50.282 Let me start with the problem[br]of generalization. 0:01:50.652,0:01:54.785 Generalizing from small samples of data[br]is the bread and butter of science. 0:01:54.785,0:01:57.339 We poll a tiny fraction of the electorate 0:01:57.339,0:02:00.550 and we predict the outcome[br]of national elections. 0:02:00.550,0:02:04.165 We see how a handful of patients[br]respond to treatment in a clinical trial, 0:02:04.165,0:02:07.230 and we bring drugs to a national market. 0:02:07.230,0:02:11.595 But this only works if our sample[br]is randomly drawn from the population. 0:02:11.595,0:02:14.330 If our sample is cherry-picked in some way 0:02:14.330,0:02:16.402 -- say, we poll only urban voters, 0:02:16.402,0:02:20.790 or say, in our clinical trials[br]for treatments for heart disease, 0:02:20.790,0:02:22.671 we include only men -- 0:02:22.671,0:02:25.829 the results may not generalize[br]to the broader population. 0:02:26.479,0:02:30.060 So scientists care whether evidence[br]is randomly sampled or not, 0:02:30.060,0:02:32.075 but what does that have to do with babies? 0:02:32.585,0:02:37.206 Well, babies have to generalize[br]from small samples of data all the time.
0:02:37.206,0:02:40.364 They see a few rubber ducks[br]and learn that they float, 0:02:40.364,0:02:43.939 or a few balls and learn that they bounce. 0:02:43.939,0:02:46.890 And they develop expectations[br]about ducks and balls 0:02:46.890,0:02:49.606 that they're going to extend[br]to rubber ducks and balls 0:02:49.606,0:02:51.485 for the rest of their lives. 0:02:51.485,0:02:55.224 And the kinds of generalizations[br]babies have to make about ducks and balls 0:02:55.224,0:02:57.313 they have to make about almost everything: 0:02:57.313,0:03:01.230 shoes and ships and sealing wax[br]and cabbages and kings. 0:03:02.200,0:03:05.161 So do babies care whether[br]the tiny bit of evidence they see 0:03:05.161,0:03:08.853 is plausibly representative[br]of a larger population? 0:03:09.763,0:03:11.663 Let's find out. 0:03:11.663,0:03:13.386 I'm going to show you two movies, 0:03:13.386,0:03:15.848 one from each of two conditions[br]of an experiment, 0:03:15.848,0:03:18.286 and because you're going to see[br]just two movies, 0:03:18.286,0:03:20.422 you're going to see just two babies, 0:03:20.422,0:03:24.369 and any two babies differ from each other[br]in innumerable ways. 0:03:24.369,0:03:27.420 But these babies, of course,[br]here stand in for groups of babies, 0:03:27.420,0:03:29.315 and the differences you're going to see 0:03:29.315,0:03:34.510 represent average group differences[br]in babies' behavior across conditions. 0:03:35.160,0:03:37.743 In each movie, you're going to see[br]a baby doing maybe 0:03:37.743,0:03:41.203 just exactly what you might[br]expect a baby to do, 0:03:41.203,0:03:45.220 and we can hardly make babies[br]more magical than they already are. 0:03:46.090,0:03:48.100 But to my mind the magical thing, 0:03:48.100,0:03:50.189 and what I want you to pay attention to, 0:03:50.189,0:03:53.300 is the contrast between[br]these two conditions, 0:03:53.300,0:03:56.829 because the only thing[br]that differs between these two movies 0:03:56.829,0:04:01.425 is the statistical evidence[br]the babies are going to observe. 0:04:01.425,0:04:04.608 We're going to show babies[br]a box of blue and yellow balls, 0:04:04.608,0:04:09.228 and my then-graduate student,[br]now colleague at Stanford, Hyowon Gweon, 0:04:09.228,0:04:12.305 is going to pull three blue balls[br]in a row out of this box, 0:04:12.305,0:04:15.428 and when she pulls those balls out,[br]she's going to squeeze them, 0:04:15.428,0:04:17.541 and the balls are going to squeak. 0:04:17.541,0:04:20.304 And if you're a baby,[br]that's like a TED Talk. 0:04:20.304,0:04:22.208 It doesn't get better than that. 0:04:22.208,0:04:24.769 (Laughter) 0:04:26.968,0:04:30.627 But the important point is it's really[br]easy to pull three blue balls in a row 0:04:30.627,0:04:32.932 out of a box of mostly blue balls. 0:04:32.932,0:04:34.992 You could do that with your eyes closed. 0:04:34.992,0:04:37.988 It's plausibly a random sample[br]from this population. 0:04:37.988,0:04:41.720 And if you can reach into a box at random[br]and pull out things that squeak, 0:04:41.720,0:04:44.559 then maybe everything in the box squeaks. 0:04:44.559,0:04:48.209 So maybe babies should expect[br]those yellow balls to squeak as well. 0:04:48.209,0:04:50.728 Now, those yellow balls[br]have funny sticks on the end, 0:04:50.728,0:04:53.585 so babies could do other things[br]with them if they wanted to. 0:04:53.585,0:04:55.416 They could pound them or whack them. 0:04:55.416,0:04:58.002 But let's see what the baby does.
0:05:00.548,0:05:03.891 (Video) Hyowon Gweon: See this?[br](Ball squeaks) 0:05:04.531,0:05:07.576 Did you see that?[br](Ball squeaks) 0:05:08.036,0:05:11.102 Cool. 0:05:12.706,0:05:14.656 See this one? 0:05:14.656,0:05:16.537 (Ball squeaks) 0:05:16.537,0:05:19.190 Wow. 0:05:21.854,0:05:23.967 Laura Schulz: Told you. (Laughter) 0:05:23.967,0:05:27.998 (Video) HG: See this one?[br](Ball squeaks) 0:05:27.998,0:05:32.617 Hey Clara, this one's for you.[br]You can go ahead and play. 0:05:39.854,0:05:44.219 (Laughter) 0:05:44.219,0:05:47.214 LS: I don't even have to talk, right? 0:05:47.214,0:05:50.113 All right, it's nice that babies[br]will generalize properties 0:05:50.113,0:05:51.641 of blue balls to yellow balls, 0:05:51.641,0:05:54.737 and it's impressive that babies[br]can learn from imitating us, 0:05:54.737,0:05:58.406 but we've known those things about babies[br]for a very long time. 0:05:58.406,0:06:00.217 The really interesting question 0:06:00.217,0:06:03.069 is what happens when we show babies[br]exactly the same thing, 0:06:03.069,0:06:06.680 and we can ensure it's exactly the same[br]because we have a secret compartment 0:06:06.680,0:06:08.790 and we actually pull the balls from there, 0:06:08.790,0:06:12.268 but this time, all we change[br]is the apparent population 0:06:12.268,0:06:15.170 from which that evidence was drawn. 0:06:15.780,0:06:18.723 This time, we're going to show babies[br]three blue balls 0:06:18.723,0:06:21.857 pulled out of a box[br]of mostly yellow balls, 0:06:22.357,0:06:23.599 and guess what? 0:06:23.599,0:06:26.269 You cannot randomly draw[br]three blue balls in a row 0:06:26.269,0:06:28.753 out of a box of mostly yellow balls. 0:06:28.753,0:06:32.500 That is not plausibly[br]randomly sampled evidence. 0:06:32.500,0:06:37.623 That evidence suggests that maybe Hyowon[br]was deliberately sampling the blue balls. 0:06:37.623,0:06:40.206 Maybe there's something special[br]about the blue balls. 0:06:40.846,0:06:43.822 Maybe only the blue balls squeak. 0:06:43.822,0:06:45.717 Let's see what the baby does. 0:06:45.717,0:06:48.621 (Video) HG: See this?[br](Ball squeaks) 0:06:50.851,0:06:53.496 See this toy?[br](Ball squeaks) 0:06:53.496,0:06:58.976 Oh, that was cool. See?[br](Ball squeaks) 0:06:58.976,0:07:03.370 Now this one's for you to play.[br]You can go ahead and play. 0:07:11.264,0:07:13.911 (Baby cries) 0:07:14.901,0:07:17.649 LS: So you just saw[br]two 15-month-old babies 0:07:17.649,0:07:19.591 do entirely different things 0:07:19.591,0:07:23.190 based only on the probability[br]of the sample they observed. 0:07:23.190,0:07:25.511 Let me show you the experimental results. 0:07:25.511,0:07:28.275 On the vertical axis, you'll see[br]the percentage of babies 0:07:28.275,0:07:30.805 who squeezed the ball in each condition, 0:07:30.805,0:07:34.520 and as you'll see, babies are much[br]more likely to generalize the evidence 0:07:34.520,0:07:37.655 when it's plausibly representative[br]of the population 0:07:37.655,0:07:41.393 than when the evidence[br]is clearly cherry-picked. 0:07:41.393,0:07:43.808 And this leads to a fun prediction: 0:07:43.808,0:07:48.676 suppose you pulled just one blue ball[br]out of the mostly yellow box. 0:07:48.896,0:07:52.765 Now, you just can't pull three blue balls[br]in a row at random out of a yellow box, 0:07:52.765,0:07:55.220 but you could randomly sample[br]just one blue ball. 0:07:55.220,0:07:57.190 That's not an improbable sample.
0:07:57.190,0:07:59.414 And if you could reach into[br]a box at random 0:07:59.414,0:08:03.401 and pull out something that squeaks,[br]maybe everything in the box squeaks. 0:08:03.875,0:08:08.320 So even though babies are going to see[br]much less evidence for squeaking, 0:08:08.320,0:08:10.562 and have many fewer actions to imitate 0:08:10.562,0:08:13.905 in this one-ball condition than in[br]the condition you just saw, 0:08:13.905,0:08:17.797 we predicted that babies themselves[br]would squeeze more, 0:08:17.797,0:08:20.691 and that's exactly what we found. 0:08:20.691,0:08:25.102 So 15-month-old babies,[br]in this respect, like scientists, 0:08:25.102,0:08:28.190 care whether evidence[br]is randomly sampled or not, 0:08:28.190,0:08:31.697 and they use this to develop[br]expectations about the world: 0:08:31.697,0:08:33.879 what squeaks and what doesn't, 0:08:33.879,0:08:37.024 what to explore and what to ignore. 0:08:38.384,0:08:40.450 Let me show you another example now, 0:08:40.450,0:08:43.180 this time about a problem[br]of causal reasoning. 0:08:43.180,0:08:45.619 And it starts with a problem[br]of confounded evidence 0:08:45.619,0:08:47.291 that all of us have, 0:08:47.291,0:08:49.311 which is that we are part of the world. 0:08:49.311,0:08:52.747 And this may not seem like a problem[br]to you, but like most problems, 0:08:52.747,0:08:55.084 it's only a problem when things go wrong. 0:08:55.464,0:08:57.275 Take this baby, for instance. 0:08:57.275,0:08:58.980 Things are going wrong for him. 0:08:58.980,0:09:01.251 He would like to make[br]this toy go, and he can't. 0:09:01.251,0:09:03.780 I'll show you a few-second clip. 0:09:09.340,0:09:11.260 And there's two possibilities, broadly: 0:09:11.260,0:09:13.894 maybe he's doing something wrong, 0:09:13.894,0:09:18.110 or maybe there's something[br]wrong with the toy. 0:09:18.110,0:09:20.221 So in this next experiment, 0:09:20.221,0:09:23.518 we're going to give babies[br]just a tiny bit of statistical data 0:09:23.518,0:09:26.100 supporting one hypothesis over the other, 0:09:26.100,0:09:29.555 and we're going to see if babies[br]can use that to make different decisions 0:09:29.555,0:09:31.389 about what to do. 0:09:31.389,0:09:33.411 Here's the setup. 0:09:34.071,0:09:37.101 Hyowon is going to try to make[br]the toy go and succeed. 0:09:37.101,0:09:40.421 I am then going to try twice[br]and fail both times, 0:09:40.421,0:09:43.533 and then Hyowon is going[br]to try again and succeed, 0:09:43.533,0:09:46.705 and this roughly sums up my relationship[br]to my graduate students 0:09:46.705,0:09:49.540 in technology across the board. 0:09:50.030,0:09:53.322 But the important point here is[br]it provides a little bit of evidence 0:09:53.322,0:09:57.010 that the problem isn't with the toy,[br]it's with the person. 0:09:57.610,0:09:59.340 Some people can make this toy go, 0:09:59.340,0:10:00.299 and some can't. 0:10:00.799,0:10:04.212 Now, when the baby gets the toy,[br]he's going to have a choice. 0:10:04.212,0:10:06.400 His mom is right there, 0:10:06.400,0:10:09.715 so he can go ahead and hand off the toy[br]and change the person, 0:10:09.715,0:10:12.873 but there's also going to be[br]another toy at the end of that cloth, 0:10:12.873,0:10:16.425 and he can pull the cloth towards him[br]and change the toy. 0:10:16.425,0:10:18.515 So let's see what the baby does. 0:10:18.515,0:10:21.278 (Video) HG: Two, three. Go! 0:10:21.278,0:10:25.829 (Video) LS: One, two, three, go!
0:10:25.829,0:10:33.211 Arthur, I'm going to try again.[br]One, two, three, go! 0:10:33.677,0:10:36.277 (Video) HG: Arthur,[br]let me try again, okay? 0:10:36.277,0:10:40.827 One, two, three, go! 0:10:41.583,0:10:43.466 Look at that. Remember these toys? 0:10:43.466,0:10:46.730 See these toys? Yeah, I'm going[br]to put this one over here, 0:10:46.730,0:10:48.792 and I'm going to give this one to you. 0:10:48.792,0:10:51.127 You can go ahead and play. 0:11:11.213,0:11:15.950 LS: Okay, Laura, but of course,[br]babies love their mommies. 0:11:15.950,0:11:18.132 Of course babies give toys[br]to their mommies 0:11:18.132,0:11:20.162 when they can't make them work. 0:11:20.162,0:11:23.755 So again, the really important question[br]is what happens when we change 0:11:23.755,0:11:26.909 the statistical data ever so slightly. 0:11:26.909,0:11:30.996 This time, babies are going to see the toy[br]work and fail in exactly the same order, 0:11:30.996,0:11:33.411 but we're changing[br]the distribution of evidence. 0:11:33.411,0:11:37.822 This time, Hyowon is going to succeed[br]once and fail once, and so am I. 0:11:37.822,0:11:43.459 And this suggests it doesn't matter[br]who tries this toy, the toy is broken. 0:11:43.459,0:11:45.345 It doesn't work all the time. 0:11:45.345,0:11:47.310 Again, the baby's going to have a choice. 0:11:47.310,0:11:50.706 Her mom is right next to her,[br]so she can change the person, 0:11:50.706,0:11:53.910 and there's going to be another toy[br]at the end of the cloth. 0:11:53.910,0:11:55.288 Let's watch what she does. 0:11:55.288,0:11:59.636 (Video) HG: Two, three, go! 0:11:59.636,0:12:04.620 Let me try one more time.[br]One, two, three, go! 0:12:05.460,0:12:07.157 Hmm. 0:12:08.870,0:12:10.642 (Video) LS: Let me try, Clara. 0:12:10.642,0:12:14.587 One, two, three, go! 0:12:15.935,0:12:17.770 Hmm, let me try again. 0:12:17.770,0:12:22.870 One, two, three, go! 0:12:23.009,0:12:25.242 (Video) HG: I'm going[br]to put this one over here, 0:12:25.242,0:12:27.243 and I'm going to give this one to you. 0:12:27.243,0:12:30.840 You can go ahead and play. 0:12:46.376,0:12:51.273 (Applause) 0:12:52.993,0:12:55.385 LS: Let me show you[br]the experimental results. 0:12:55.385,0:12:57.860 On the vertical axis,[br]you'll see the distribution 0:12:57.860,0:13:00.437 of children's choices in each condition, 0:13:00.437,0:13:04.988 and you'll see that the distribution[br]of the choices children make 0:13:04.988,0:13:07.775 depends on the evidence they observe. 0:13:07.775,0:13:09.632 So in the second year of life, 0:13:09.632,0:13:12.209 babies can use a tiny bit[br]of statistical data 0:13:12.209,0:13:15.576 to decide between two[br]fundamentally different strategies 0:13:15.576,0:13:17.457 for acting on the world: 0:13:17.457,0:13:20.200 asking for help and exploring. 0:13:21.700,0:13:25.134 I've just shown you[br]two laboratory experiments 0:13:25.134,0:13:28.825 out of literally hundreds in the field[br]that make similar points, 0:13:28.825,0:13:31.217 because the really critical point 0:13:31.217,0:13:36.325 is that children's ability[br]to make rich inferences from sparse data 0:13:36.325,0:13:41.666 underlies all the species-specific[br]cultural learning that we do. 0:13:41.666,0:13:46.263 Children learn about new tools[br]from just a few examples. 0:13:46.263,0:13:50.980 They learn new causal relationships[br]from just a few examples. 0:13:51.928,0:13:56.799 They even learn new words,[br]in this case in American Sign Language.
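[Editor's sketch, not part of the spoken talk: the sampling argument in the first experiment can be made concrete with a small probability calculation. The talk never states the exact box compositions, so the 75/25 splits below, and the three-draw versus one-draw comparison, are illustrative assumptions only.]

```python
def prob_all_blue(n_blue, n_yellow, draws):
    """Probability of drawing `draws` blue balls in a row, without
    replacement, from a box of n_blue blue and n_yellow yellow balls."""
    total = n_blue + n_yellow
    p = 1.0
    for i in range(draws):
        p *= (n_blue - i) / (total - i)
    return p

# Assumed compositions (the talk gives no exact counts):
# a "mostly blue" box of 75 blue / 25 yellow, and the reverse.
three_from_mostly_blue = prob_all_blue(75, 25, draws=3)    # ~0.42: easy to do at random
three_from_mostly_yellow = prob_all_blue(25, 75, draws=3)  # ~0.014: looks cherry-picked
one_from_mostly_yellow = prob_all_blue(25, 75, draws=1)    # 0.25: one blue draw is unsurprising

print(three_from_mostly_blue, three_from_mostly_yellow, one_from_mostly_yellow)
```

[Under these assumed numbers, three blue draws in a row are roughly thirty times more likely from the mostly-blue box than from the mostly-yellow box, while a single blue draw from the mostly-yellow box is unremarkable, which mirrors the pattern of squeezing the babies showed across conditions.]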
0:13:56.799,0:13:59.110 I want to close with just two points. 0:14:00.050,0:14:03.738 If you've been following my world,[br]the field of brain and cognitive sciences, 0:14:03.738,0:14:05.665 for the past few years, 0:14:05.665,0:14:08.080 three big ideas will have come[br]to your attention. 0:14:08.570,0:14:11.516 The first is that this is[br]the era of the brain. 0:14:11.516,0:14:15.185 And indeed, there have been[br]staggering discoveries in neuroscience: 0:14:15.185,0:14:18.621 localizing functionally specialized[br]regions of cortex, 0:14:18.621,0:14:21.222 turning mouse brains transparent, 0:14:21.222,0:14:24.998 activating neurons with light. 0:14:24.998,0:14:26.994 A second big idea 0:14:26.994,0:14:31.098 is that this is the era of big data[br]and machine learning, 0:14:31.098,0:14:34.239 and machine learning promises[br]to revolutionize our understanding 0:14:34.239,0:14:38.906 of everything from social networks[br]to epidemiology. 0:14:38.906,0:14:41.599 And maybe, as it tackles problems[br]of scene understanding 0:14:41.599,0:14:43.592 and natural language processing, 0:14:43.592,0:14:46.916 to tell us something[br]about human cognition. 0:14:47.756,0:14:49.693 And the final big idea you'll have heard 0:14:49.693,0:14:53.080 is that maybe it's a good idea we're going[br]to know so much about brains 0:14:53.080,0:14:54.997 and have so much access to big data, 0:14:54.997,0:14:57.504 because left to our own devices, 0:14:57.504,0:15:01.335 humans are fallible, we take shortcuts, 0:15:01.335,0:15:04.772 we err, we make mistakes, 0:15:04.772,0:15:08.246 we're biased, and in innumerable ways, 0:15:09.326,0:15:11.425 we get the world wrong. 0:15:12.843,0:15:15.792 I think these are all important stories, 0:15:15.792,0:15:19.577 and they have a lot to tell us[br]about what it means to be human, 0:15:19.577,0:15:23.106 but I want you to note that today[br]I told you a very different story. 0:15:23.966,0:15:27.773 It's a story about minds and not brains, 0:15:27.773,0:15:30.779 and in particular, it's a story[br]about the kinds of computations 0:15:30.779,0:15:33.369 that uniquely human minds can perform, 0:15:33.369,0:15:37.313 which involve rich, structured knowledge[br]and the ability to learn 0:15:37.313,0:15:42.581 from small amounts of data,[br]the evidence of just a few examples. 0:15:44.301,0:15:48.600 And fundamentally, it's a story[br]about how starting as very small children 0:15:48.600,0:15:52.780 and continuing out all the way[br]to the greatest accomplishments 0:15:52.780,0:15:55.223 of our culture, 0:15:56.623,0:15:58.620 we get the world right. 0:16:00.433,0:16:05.700 Folks, human minds do not only learn[br]from small amounts of data. 0:16:06.285,0:16:08.386 Human minds think[br]of altogether new ideas. 0:16:08.746,0:16:11.437 Human minds generate[br]research and discovery, 0:16:11.787,0:16:16.320 and human minds generate[br]art and literature and poetry and theater, 0:16:17.070,0:16:20.600 and human minds take care of other humans: 0:16:21.330,0:16:24.257 our old, our young, our sick. 0:16:24.517,0:16:26.884 We even heal them. 0:16:27.624,0:16:30.667 In the years to come, we're going[br]to see technological innovation 0:16:30.667,0:16:33.284 beyond anything I can even envision, 0:16:34.594,0:16:36.614 but we are very unlikely 0:16:36.614,0:16:42.323 to see anything even approximating[br]the computational power of a human child 0:16:42.323,0:16:46.621 in my lifetime or in yours.
0:16:46.621,0:16:51.668 If we invest in these most powerful[br]learners and their development, 0:16:51.668,0:16:54.585 in babies and children 0:16:54.585,0:16:56.411 and mothers and fathers 0:16:56.411,0:16:59.110 and caregivers and teachers, 0:16:59.110,0:17:03.280 the ways we invest in our other[br]most powerful and elegant forms 0:17:03.280,0:17:05.848 of technology, engineering, and design, 0:17:06.488,0:17:09.437 we will not just be dreaming[br]of a better future, 0:17:09.437,0:17:11.564 we will be planning for one. 0:17:11.564,0:17:13.909 Thank you very much. 0:17:13.909,0:17:17.810 (Applause) 0:17:17.810,0:17:22.236 Chris Anderson: Laura, thank you.[br]I do actually have a question for you. 0:17:22.236,0:17:24.595 First of all, the research is insane. 0:17:24.595,0:17:28.320 I mean, who would design[br]an experiment like that? (Laughter) 0:17:29.150,0:17:30.940 I've seen that a couple of times, 0:17:30.940,0:17:34.162 and I still don't honestly believe[br]that that can truly be happening, 0:17:34.162,0:17:37.320 but other people have done[br]similar experiments. It checks out. 0:17:37.320,0:17:38.953 They really are that genius. 0:17:38.953,0:17:41.960 LS: You know, they look really impressive[br]in our experiments, 0:17:41.960,0:17:44.592 but think about what they[br]look like in real life, right? 0:17:44.592,0:17:45.762 It starts out as a baby. 0:17:45.762,0:17:47.769 Eighteen months later,[br]it's talking to you, 0:17:47.769,0:17:50.810 and babies' first words aren't just[br]things like balls and ducks, 0:17:50.810,0:17:53.691 they're things like "all gone,"[br]which refer to disappearance, 0:17:53.691,0:17:55.974 or "uh oh," which refer[br]to unintentional actions. 0:17:55.974,0:17:57.536 It has to be that powerful. 0:17:57.536,0:18:00.311 It has to be much more powerful[br]than anything I showed you. 0:18:00.311,0:18:02.285 They're figuring out the entire world. 0:18:02.285,0:18:05.429 A four-year-old can talk to you[br]about almost anything. 0:18:05.429,0:18:07.030 (Applause) 0:18:07.030,0:18:10.444 CA: And if I understand you right,[br]the other key point you're making is, 0:18:10.444,0:18:13.198 we've been through these years[br]where there's all this talk 0:18:13.198,0:18:15.130 of how quirky and buggy our minds are, 0:18:15.130,0:18:17.997 that behavioral economics[br]and the whole theories behind that, 0:18:17.997,0:18:19.600 that we're not rational agents. 0:18:19.600,0:18:23.816 You're really saying that the biggest[br]story is how extraordinary, 0:18:23.816,0:18:28.760 and there really is genius there[br]that is underappreciated. 0:18:28.760,0:18:30.830 LS: One of my favorite[br]quotes in psychology 0:18:30.830,0:18:33.120 comes from the social[br]psychologist Solomon Asch, 0:18:33.120,0:18:35.927 and he said the fundamental task[br]of psychology is to remove 0:18:35.927,0:18:38.553 the veil of self-evidence from things. 0:18:38.553,0:18:43.104 There are a million orders of magnitude[br]more decisions you make every day 0:18:43.104,0:18:44.451 that get the world right. 0:18:44.451,0:18:46.583 You know about objects[br]and their properties. 0:18:46.583,0:18:49.612 You know them when they're occluded.[br]You know them in the dark. 0:18:49.612,0:18:50.920 You can walk through rooms. 0:18:50.920,0:18:54.452 You can figure out what other people[br]are thinking. You can talk to them. 0:18:54.452,0:18:56.682 You can navigate space.[br]You know about numbers. 0:18:56.682,0:18:59.704 You know causal relationships.[br]You know about moral reasoning.
0:18:59.704,0:19:02.060 You do this effortlessly,[br]so we don't see it, 0:19:02.060,0:19:04.972 but that is how we get the world right,[br]and it's a remarkable 0:19:04.972,0:19:07.290 and very difficult-to-understand[br]accomplishment. 0:19:07.290,0:19:09.918 CA: I suspect there are people[br]in the audience who have 0:19:09.918,0:19:12.156 this view of accelerating[br]technological power 0:19:12.156,0:19:15.114 who might dispute your statement[br]that never in our lifetimes 0:19:15.114,0:19:17.732 will a computer do what[br]a three-year old child can do, 0:19:17.732,0:19:20.980 but what's clear is that in any scenario, 0:19:20.980,0:19:24.750 our machines have so much to learn[br]from our toddlers. 0:19:26.230,0:19:29.446 LS: I think so. You'll have some[br]machine learning folks up here. 0:19:29.446,0:19:33.649 I mean, you should never bet[br]against babies or chimpanzees 0:19:33.649,0:19:37.294 or technology as a matter of practice, 0:19:37.294,0:19:41.822 but it's not just[br]a difference in quantity, 0:19:41.822,0:19:43.586 it's a difference in kind. Right? 0:19:43.586,0:19:45.746 We have incredibly powerful computers, 0:19:45.746,0:19:48.137 and they do do amazingly[br]sophisticated things, 0:19:48.137,0:19:51.341 often with very big amounts of data. 0:19:51.341,0:19:53.948 Human minds do, I think,[br]something quite different, 0:19:53.948,0:19:57.843 and I think it's the structured,[br]hierarchical nature of human knowledge 0:19:57.843,0:19:59.875 that remains a real challenge. 0:19:59.875,0:20:02.936 CA: Laura Schulz, wonderful[br]food for thought. Thank you so much. 0:20:02.936,0:20:05.858 LS: Thank you.[br](Applause)