I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. Okay? That response should worry you.

And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming President of the United States?

(Laughter)

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation.
Almost by definition, that would be the worst thing that's ever happened in human history.

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I.J. Good called an "intelligence explosion," that the process could get away from us.

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right?
There's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines. It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's Law to continue. We don't need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence, I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well documented. If only half the stories about him are half true, there's no question he is one of the smartest people who has ever lived.

So consider the spectrum of intelligence. We have John von Neumann. And then we have you and me. And then we have a chicken.

(Laughter)

Sorry, a chicken.

(Laughter)

There's no reason for me to make this talk more depressing than it needs to be.

(Laughter)

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.
And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?

The other thing that's worrying, frankly, is this: imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

So what would apes like ourselves do in this circumstance? Well, we'd be free to play frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

(Laughter)

Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order. It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario.
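A quick check of the arithmetic behind that 20,000-year figure, assuming the rough million-fold speed advantage described above holds:

\[
1~\text{week} \times 10^{6} \;\approx\; \frac{10^{6}}{52}~\text{years} \;\approx\; 19{,}000~\text{years} \;\approx\; 20{,}000~\text{years of human-level work per week of real time.}
\]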
To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So even mere rumors of this kind of breakthrough could cause our species to go berserk.

Now, one of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

(Laughter)

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.

Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

Another reason we're told not to worry is that these machines can't help but share our values, because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems.
Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

(Laughter)

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

Thank you very much.

(Applause)