WEBVTT 00:00:00.883 --> 00:00:03.430 I'm going to talk about a failure of intuition 00:00:03.430 --> 00:00:05.532 that many of us suffer from. 00:00:05.532 --> 00:00:09.478 It's really a failure to detect a certain kind of danger. 00:00:09.478 --> 00:00:11.314 I'm going to describe a scenario 00:00:11.314 --> 00:00:14.315 that I think is both terrifying 00:00:14.315 --> 00:00:16.947 and likely to occur, 00:00:16.947 --> 00:00:18.837 and that's not a good combination, 00:00:18.837 --> 00:00:20.333 as it turns out. 00:00:20.333 --> 00:00:22.685 And yet rather than be scared, most of you will feel 00:00:22.685 --> 00:00:25.499 that what I'm talking about is kind of cool. 00:00:25.499 --> 00:00:28.094 I'm going to describe how the gains we make 00:00:28.094 --> 00:00:30.018 in artificial intelligence 00:00:30.018 --> 00:00:32.057 could ultimately destroy us. 00:00:32.057 --> 00:00:35.049 And in fact, I think it's very difficult to see how they won't destroy us 00:00:35.049 --> 00:00:37.363 or inspire us to destroy ourselves. 00:00:37.363 --> 00:00:39.491 And yet if you're anything like me, 00:00:39.491 --> 00:00:42.201 you'll find that it's fun to think about these things. 00:00:42.201 --> 00:00:45.178 And that response is part of the problem. 00:00:45.178 --> 00:00:48.040 Okay? That response should worry you. NOTE Paragraph 00:00:48.040 --> 00:00:50.407 And if I were to convince you in this talk 00:00:50.407 --> 00:00:54.212 that we were likely to suffer a global famine, 00:00:54.212 --> 00:00:57.069 either because of climate change or some other catastrophe, 00:00:57.069 --> 00:01:00.716 and that your grandchildren, or their grandchildren, 00:01:00.716 --> 00:01:03.397 are very likely to live like this, 00:01:03.397 --> 00:01:05.107 you wouldn't think, 00:01:05.107 --> 00:01:07.088 "Interesting. 00:01:07.088 --> 00:01:09.070 I like this TEDTalk." 00:01:09.070 --> 00:01:11.898 Famine isn't fun. 00:01:11.898 --> 00:01:15.482 Death by science fiction, on the other hand, is fun, 00:01:15.482 --> 00:01:19.253 and one of the things that worries me most about the development of AI at this point 00:01:19.253 --> 00:01:21.393 is that we seem unable to marshal 00:01:21.393 --> 00:01:23.513 an appropriate emotional response 00:01:23.513 --> 00:01:25.456 to the dangers that lie ahead. 00:01:25.456 --> 00:01:27.524 I am unable to marshal this response, and I'm giving this talk. NOTE Paragraph 00:01:27.524 --> 00:01:32.725 It's as though we stand before two doors. 00:01:32.725 --> 00:01:34.628 Behind door number one, 00:01:34.628 --> 00:01:37.784 we stop making progress in building intelligent machines. 00:01:37.784 --> 00:01:41.665 Our computer hardware and software just stops getting better for some reason. 00:01:41.665 --> 00:01:43.605 Now take a moment to consider 00:01:43.605 --> 00:01:45.217 why this might happen. 00:01:45.217 --> 00:01:48.651 I mean, given how valuable intelligence and automation are, 00:01:48.651 --> 00:01:53.388 we will continue to improve our technology if we are at all able to. 00:01:53.388 --> 00:01:55.560 What could stop us from doing this? 00:01:55.560 --> 00:01:58.514 A full-scale nuclear war? 00:01:58.514 --> 00:02:01.874 A global pandemic? 00:02:01.874 --> 00:02:04.812 An asteroid impact? 00:02:04.812 --> 00:02:08.894 Justin Bieber becoming President of the United States? NOTE Paragraph 00:02:09.162 --> 00:02:11.524 (Laughter) NOTE Paragraph 00:02:13.038 --> 00:02:17.610 The point is, something would have to destroy civilization as we know it. 
00:02:17.610 --> 00:02:21.653 You have to imagine how bad it would have to be 00:02:21.653 --> 00:02:25.077 to prevent us from making improvements in our technology 00:02:25.077 --> 00:02:26.587 permanently, 00:02:26.587 --> 00:02:28.597 generation after generation. 00:02:28.597 --> 00:02:31.128 Almost by definition, this is the worst thing that's ever happened 00:02:31.128 --> 00:02:32.802 in human history. NOTE Paragraph 00:02:32.802 --> 00:02:34.150 So the only alternative, 00:02:34.150 --> 00:02:36.388 and this is what lies behind door number two, 00:02:36.388 --> 00:02:39.593 is that we continue to improve our intelligent machines 00:02:39.593 --> 00:02:41.978 year after year after year. 00:02:41.978 --> 00:02:46.400 At a certain point, we will build machines that are smarter than we are, 00:02:46.400 --> 00:02:48.275 and once we have machines that are smarter than we are, 00:02:48.275 --> 00:02:50.859 they will begin to improve themselves. 00:02:50.859 --> 00:02:53.750 And then we risk what the mathematician I.J. Good called 00:02:53.750 --> 00:02:55.562 an "intelligence explosion," 00:02:55.562 --> 00:02:58.376 that the process could get away from us. 00:02:58.376 --> 00:03:01.188 Now this is often caricatured, as I have here, 00:03:01.188 --> 00:03:04.213 as a fear that armies of malicious robots 00:03:04.213 --> 00:03:05.726 will attack us. 00:03:05.726 --> 00:03:07.683 But that isn't the most likely scenario. 00:03:07.683 --> 00:03:13.471 It's not that our machines will become spontaneously malevolent. 00:03:13.471 --> 00:03:16.446 The concern is really that we will build machines that are so much 00:03:16.446 --> 00:03:17.828 more competent than we are 00:03:17.828 --> 00:03:21.511 that the slightest divergence between their goals and our own 00:03:21.511 --> 00:03:23.485 could destroy us. NOTE Paragraph 00:03:23.485 --> 00:03:27.039 Just think about how we relate to ants. 00:03:27.039 --> 00:03:28.513 We don't hate them. 00:03:28.513 --> 00:03:30.668 We don't go out of our way to harm them. 00:03:30.668 --> 00:03:32.413 In fact, sometimes we take pains not to harm them. 00:03:32.413 --> 00:03:34.879 We step over them on the sidewalk. 00:03:34.879 --> 00:03:36.359 But whenever their presence 00:03:36.359 --> 00:03:39.552 seriously conflicts with one of our goals, 00:03:39.552 --> 00:03:42.149 let's say when constructing a building like this one, 00:03:42.149 --> 00:03:44.712 we annihilate them without a qualm. 00:03:44.712 --> 00:03:47.639 The concern is that we will one day build machines 00:03:47.639 --> 00:03:50.382 that, whether they're conscious or not, 00:03:50.382 --> 00:03:53.722 could treat us with similar disregard. NOTE Paragraph 00:03:53.722 --> 00:03:57.519 Now, I suspect this seems far-fetched to many of you. 00:03:57.519 --> 00:04:04.015 I bet there are those of you who doubt that super-intelligent AI is possible, 00:04:04.015 --> 00:04:05.726 much less inevitable. 00:04:05.726 --> 00:04:09.329 But then you must find something wrong with one of the following assumptions. 00:04:09.329 --> 00:04:11.463 And there are only three of them. NOTE Paragraph 00:04:11.463 --> 00:04:17.600 Intelligence is a matter of information processing in physical systems. 00:04:17.600 --> 00:04:20.622 Actually, this is a little bit more than an assumption. 00:04:20.622 --> 00:04:23.977 We have already built narrow intelligence into our machines, 00:04:23.977 --> 00:04:25.835 and many of these machines perform 00:04:25.835 --> 00:04:29.123 at a level of superhuman intelligence already.
00:04:29.123 --> 00:04:31.541 And we know that mere matter 00:04:31.541 --> 00:04:33.908 can give rise to what is called "general intelligence," 00:04:33.908 --> 00:04:37.463 an ability to think flexibly across multiple domains, 00:04:37.463 --> 00:04:40.698 because our brains have managed it. Right? 00:04:40.698 --> 00:04:44.711 There's just atoms in here, 00:04:44.711 --> 00:04:47.374 and as long as we continue to 00:04:47.374 --> 00:04:49.611 build systems of atoms 00:04:49.611 --> 00:04:52.214 that display more and more intelligent behavior, 00:04:52.214 --> 00:04:53.985 we will eventually, 00:04:53.985 --> 00:04:58.467 unless we are interrupted, we will eventually build general intelligence 00:04:58.467 --> 00:05:00.134 into our machines. 00:05:00.134 --> 00:05:03.308 It's crucial to realize that the rate of progress doesn't matter, 00:05:03.308 --> 00:05:06.563 because any progress is enough to get us into the end zone. 00:05:06.563 --> 00:05:09.714 We don't need Moore's Law to continue. We don't need exponential progress. 00:05:09.714 --> 00:05:13.772 We just need to keep going. NOTE Paragraph 00:05:13.772 --> 00:05:17.323 The second assumption is that we will keep going. 00:05:17.323 --> 00:05:21.043 We will continue to improve our intelligent machines. 00:05:21.043 --> 00:05:25.644 And given the value of intelligence, 00:05:25.644 --> 00:05:29.147 I mean, intelligence is either the source of everything we value 00:05:29.147 --> 00:05:32.001 or we need it to safeguard everything we value. 00:05:32.001 --> 00:05:34.136 It is our most valuable resource. 00:05:34.136 --> 00:05:35.911 So we want to do this. 00:05:35.911 --> 00:05:39.052 We have problems that we desperately need to solve. 00:05:39.052 --> 00:05:42.781 We want to cure diseases like Alzheimer's and cancer. 00:05:42.781 --> 00:05:47.124 We want to understand economic systems. We want to improve our climate science. 00:05:47.124 --> 00:05:49.526 So we will do this, if we can. 00:05:49.526 --> 00:05:54.130 The train is already out of the station, and there's no brake to pull. NOTE Paragraph 00:05:54.130 --> 00:05:59.654 Finally, we don't stand on a peak of intelligence, 00:05:59.654 --> 00:06:01.992 or anywhere near it, likely. 00:06:01.992 --> 00:06:03.470 And this really is the crucial insight. 00:06:03.470 --> 00:06:06.266 This is what makes our situation so precarious, 00:06:06.266 --> 00:06:10.934 and this is what makes our intuitions about risk so unreliable. 00:06:10.934 --> 00:06:14.223 Now, just consider the smartest person who has ever lived. 00:06:14.223 --> 00:06:18.432 On almost everyone's shortlist here is John von Neumann. 00:06:18.432 --> 00:06:21.557 I mean, the impression that von Neumann made on the people around him, 00:06:21.557 --> 00:06:25.963 and this included the greatest mathematicians and physicists of his time, 00:06:25.963 --> 00:06:27.755 is fairly well documented. 00:06:27.755 --> 00:06:31.375 If only half the stories about him are half true, 00:06:31.375 --> 00:06:35.039 there's no question he is one of the smartest people who has ever lived. 00:06:35.039 --> 00:06:38.344 So consider the spectrum of intelligence. 00:06:38.344 --> 00:06:41.255 We have John von Neumann. 00:06:41.255 --> 00:06:44.346 And then we have you and me. 00:06:44.346 --> 00:06:45.890 And then we have a chicken. NOTE Paragraph 00:06:45.890 --> 00:06:47.486 (Laughter) NOTE Paragraph 00:06:47.486 --> 00:06:50.221 Sorry, a chicken.
NOTE Paragraph 00:06:50.221 --> 00:06:50.471 (Laughter) NOTE Paragraph 00:06:50.503 --> 00:06:54.290 There's no reason for me to make this talk more depressing than it needs to be. NOTE Paragraph 00:06:54.290 --> 00:06:56.690 (Laughter) NOTE Paragraph 00:06:56.690 --> 00:07:00.094 It seems overwhelmingly likely, however, that the spectrum of intelligence 00:07:00.094 --> 00:07:04.090 extends much further than we currently conceive, 00:07:04.090 --> 00:07:07.323 and if we build machines that are more intelligent than we are, 00:07:07.323 --> 00:07:09.698 they will very likely explore this spectrum 00:07:09.698 --> 00:07:11.620 in ways that we can't imagine, 00:07:11.620 --> 00:07:15.224 and exceed us in ways that we can't imagine. NOTE Paragraph 00:07:15.224 --> 00:07:19.663 And it's important to recognize that this is true by virtue of speed alone. 00:07:19.663 --> 00:07:24.725 Right? So imagine if we just built a super-intelligent AI, right, 00:07:24.725 --> 00:07:27.931 that was no smarter than your average team of researchers 00:07:27.931 --> 00:07:30.479 at Stanford or at MIT. 00:07:30.479 --> 00:07:33.720 Well, electronic circuits function about a million times faster 00:07:33.720 --> 00:07:34.870 than biochemical ones, 00:07:34.870 --> 00:07:39.737 so this machine should think about a million times faster 00:07:39.737 --> 00:07:41.282 than the minds that built it. 00:07:41.282 --> 00:07:41.616 So you set it running for a week, 00:07:41.616 --> 00:07:46.545 and it will perform 20,000 years of human-level intellectual work, 00:07:46.545 --> 00:07:49.454 week after week after week. 00:07:49.454 --> 00:07:53.105 How could we even understand, much less constrain, 00:07:53.105 --> 00:07:56.142 a mind making this sort of progress? NOTE Paragraph 00:07:56.142 --> 00:07:59.719 The other thing that's worrying, frankly, 00:07:59.719 --> 00:08:04.336 is that, imagine the best-case scenario. 00:08:04.336 --> 00:08:08.330 So imagine we hit upon a design of super-intelligent AI 00:08:08.330 --> 00:08:09.778 that has no safety concerns. 00:08:09.778 --> 00:08:12.790 We have the perfect design the first time around. 00:08:12.790 --> 00:08:15.221 It's as though we've been handed an oracle 00:08:15.221 --> 00:08:17.440 that behaves exactly as intended. 00:08:17.440 --> 00:08:21.616 Well, this machine would be the perfect labor-saving device. 00:08:21.616 --> 00:08:24.017 It can design the machine that can build the machine 00:08:24.017 --> 00:08:25.579 that can do any physical work, 00:08:25.579 --> 00:08:27.618 powered by sunlight, 00:08:27.618 --> 00:08:30.233 more or less for the cost of raw materials. 00:08:30.233 --> 00:08:33.688 So we're talking about the end of human drudgery. 00:08:33.688 --> 00:08:37.105 We're also talking about the end of most intellectual work. NOTE Paragraph 00:08:37.105 --> 00:08:40.591 So what would apes like ourselves do in this circumstance? 00:08:40.591 --> 00:08:44.783 Well, we'd be free to play frisbee and give each other massages. 00:08:44.783 --> 00:08:48.847 Add some LSD and some questionable wardrobe choices, 00:08:48.847 --> 00:08:51.296 and the whole world could be like Burning Man. NOTE Paragraph 00:08:51.296 --> 00:08:54.471 (Laughter) NOTE Paragraph 00:08:54.471 --> 00:08:57.542 Now, that might sound pretty good, 00:08:57.542 --> 00:08:59.781 but ask yourself what would happen 00:08:59.781 --> 00:09:02.674 under our current economic and political order?
00:09:02.674 --> 00:09:06.982 It seems likely that we would witness a level of wealth inequality 00:09:06.982 --> 00:09:10.470 and unemployment that we have never seen before. 00:09:10.470 --> 00:09:13.115 Absent a willingness to immediately put this new wealth 00:09:13.115 --> 00:09:16.057 to the service of all humanity, 00:09:16.057 --> 00:09:19.446 a few trillionaires could grace the covers of our business magazines 00:09:19.446 --> 00:09:22.586 while the rest of the world would be free to starve. NOTE Paragraph 00:09:22.586 --> 00:09:24.855 And what would the Russians or the Chinese do 00:09:24.855 --> 00:09:27.454 if they heard that some company in Silicon Valley 00:09:27.454 --> 00:09:30.380 was about to deploy a super-intelligent AI? 00:09:30.380 --> 00:09:32.747 This machine would be capable of waging war, 00:09:32.747 --> 00:09:35.263 whether terrestrial or cyber, 00:09:35.263 --> 00:09:38.304 with unprecedented power. 00:09:38.304 --> 00:09:40.228 This is a winner-take-all scenario. 00:09:40.228 --> 00:09:43.402 To be six months ahead of the competition here 00:09:43.402 --> 00:09:47.530 is to be 500,000 years ahead, at a minimum. 00:09:47.530 --> 00:09:52.364 So even mere rumors of this kind of breakthrough 00:09:52.364 --> 00:09:54.945 could cause our species to go berserk. NOTE Paragraph 00:09:54.945 --> 00:09:57.016 Now, one of the most frightening things, 00:09:57.016 --> 00:09:59.849 in my view, at this moment, 00:09:59.849 --> 00:10:01.900 is the kind of thing 00:10:01.900 --> 00:10:03.660 that AI researchers say 00:10:03.660 --> 00:10:07.227 when they want to be reassuring. 00:10:07.227 --> 00:10:10.681 And the most common reason we're told not to worry is time. 00:10:10.681 --> 00:10:12.556 This is all a long way off, don't you know. 00:10:12.556 --> 00:10:16.023 This is probably 50 or 100 years away. 00:10:16.023 --> 00:10:17.173 One researcher has said, 00:10:17.173 --> 00:10:18.671 "Worrying about AI safety 00:10:18.671 --> 00:10:22.240 is like worrying about overpopulation on Mars." 00:10:22.240 --> 00:10:24.243 This is the Silicon Valley version of 00:10:24.243 --> 00:10:26.532 "don't worry your pretty little head about it." NOTE Paragraph 00:10:26.532 --> 00:10:27.830 (Laughter) NOTE Paragraph 00:10:27.830 --> 00:10:30.165 No one seems to notice 00:10:30.165 --> 00:10:32.418 that referencing the time horizon 00:10:32.418 --> 00:10:34.432 is a total non sequitur. 00:10:34.432 --> 00:10:37.943 If intelligence is just a matter of information processing, 00:10:37.943 --> 00:10:40.689 and we continue to improve our machines, 00:10:40.689 --> 00:10:44.407 we will produce some form of super-intelligence. 00:10:44.407 --> 00:10:46.360 And we have no idea 00:10:46.360 --> 00:10:48.088 how long it will take us 00:10:48.088 --> 00:10:51.064 to create the conditions to do that safely. 00:10:51.064 --> 00:10:53.533 Let me say that again. 00:10:53.533 --> 00:10:57.346 And we have no idea how long it will take us 00:10:57.346 --> 00:11:00.618 to create the conditions to do that safely. 00:11:00.618 --> 00:11:02.492 And if you haven't noticed, 00:11:02.492 --> 00:11:04.728 50 years is not what it used to be. 00:11:04.728 --> 00:11:06.816 This is 50 years in months. 00:11:06.816 --> 00:11:09.615 This is how long we've had the iPhone. 00:11:09.615 --> 00:11:12.933 This is how long "The Simpsons" has been on television. 00:11:12.933 --> 00:11:15.267 Fifty years is not that much time 00:11:15.267 --> 00:11:19.871 to meet one of the greatest challenges our species will ever face.
00:11:19.871 --> 00:11:23.423 Once again, we seem to be failing to have an appropriate emotional response 00:11:23.423 --> 00:11:26.699 to what we have every reason to believe is coming. 00:11:26.699 --> 00:11:30.674 The computer scientist Stuart Russell has a nice analogy here. 00:11:30.674 --> 00:11:35.016 He said, imagine that we received a message from an alien civilization, 00:11:35.016 --> 00:11:36.150 which read: 00:11:36.150 --> 00:11:38.703 "People of Earth, 00:11:38.703 --> 00:11:41.723 we will arrive on your planet in 50 years. 00:11:41.723 --> 00:11:43.664 Get ready." 00:11:43.664 --> 00:11:47.330 And now we're just counting down the months until the mothership lands? 00:11:47.330 --> 00:11:52.970 We would feel a little more urgency than we do. NOTE Paragraph 00:11:52.970 --> 00:11:54.813 Another reason we're told not to worry 00:11:54.813 --> 00:11:57.805 is that these machines can't help but share our values 00:11:57.805 --> 00:12:00.209 because they will be literally extensions of ourselves. 00:12:00.209 --> 00:12:02.081 They'll be grafted onto our brains, 00:12:02.081 --> 00:12:04.843 and we'll essentially become their limbic systems. 00:12:04.843 --> 00:12:09.446 Now take a moment to consider that the safest and only prudent path forward, 00:12:09.446 --> 00:12:11.255 recommended, 00:12:11.255 --> 00:12:14.938 is to implant this technology directly into our brains. 00:12:14.938 --> 00:12:18.161 Now, this may in fact be the safest and only prudent path forward, 00:12:18.161 --> 00:12:20.909 but usually one's safety concerns about a technology 00:12:20.909 --> 00:12:25.412 have to be pretty much worked out before you stick it inside your head. NOTE Paragraph 00:12:25.412 --> 00:12:27.450 (Laughter) NOTE Paragraph 00:12:27.450 --> 00:12:29.029 The deeper problem is that 00:12:29.029 --> 00:12:32.345 building super-intelligent AI on its own 00:12:32.345 --> 00:12:34.227 seems likely to be easier 00:12:34.227 --> 00:12:36.133 than building super-intelligent AI 00:12:36.133 --> 00:12:39.093 and having the completed neuroscience that allows us to seamlessly 00:12:39.093 --> 00:12:41.115 integrate our minds with it. 00:12:41.115 --> 00:12:44.074 And given that the companies and governments doing this work 00:12:44.074 --> 00:12:47.494 are likely to perceive themselves as being in a race against all others, 00:12:47.494 --> 00:12:51.275 given that to win this race is to win the world, 00:12:51.275 --> 00:12:53.515 provided you don't destroy it in the next moment, 00:12:53.515 --> 00:12:56.212 then it seems likely that whatever is easier to do 00:12:56.212 --> 00:12:58.807 will get done first. NOTE Paragraph 00:12:58.807 --> 00:13:01.323 Now, unfortunately, I don't have a solution to this problem, 00:13:01.323 --> 00:13:03.871 apart from recommending that more of us think about it. 00:13:03.871 --> 00:13:06.224 I think we need something like a Manhattan Project 00:13:06.224 --> 00:13:08.689 on the topic of artificial intelligence. 00:13:08.689 --> 00:13:11.057 Not to build it, because I think we'll inevitably do that, 00:13:11.057 --> 00:13:15.562 but to understand how to avoid an arms race and to build it 00:13:15.562 --> 00:13:18.357 in a way that is aligned with our interests. 
00:13:18.357 --> 00:13:20.380 When you're talking about super-intelligent AI 00:13:20.380 --> 00:13:22.388 that can make changes to itself, 00:13:22.388 --> 00:13:27.532 it seems that we only have one chance to get the initial conditions right, 00:13:27.532 --> 00:13:30.229 and even then we will need to absorb the economic 00:13:30.229 --> 00:13:33.714 and political consequences of getting them right. NOTE Paragraph 00:13:33.714 --> 00:13:35.884 But the moment we admit 00:13:35.884 --> 00:13:40.769 that information processing is the source of intelligence, 00:13:40.769 --> 00:13:46.408 that some appropriate computational system is the basis of intelligence, 00:13:46.408 --> 00:13:51.555 and we admit that we will improve these systems continuously, 00:13:51.555 --> 00:13:55.912 and we admit that the horizon of cognition very likely far exceeds 00:13:55.912 --> 00:13:58.427 what we currently know, 00:13:58.427 --> 00:14:00.795 then we have to admit that we are in the process of building 00:14:00.795 --> 00:14:03.672 some sort of god. 00:14:03.672 --> 00:14:05.317 Now would be a good time 00:14:05.317 --> 00:14:08.111 to make sure it's a god we can live with. NOTE Paragraph 00:14:08.111 --> 00:14:10.103 Thank you very much. NOTE Paragraph 00:14:10.103 --> 00:14:14.794 (Applause)