I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. Okay? That response should worry you.

If I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming President of the United States?
(Laughter)

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I.J. Good called an "intelligence explosion": that the process could get away from us.

Now this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption.
We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? There's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines.

It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's Law to continue. We don't need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence, I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well documented. If only half the stories about him are half true, there's no question he is one of the smartest people who has ever lived. So consider the spectrum of intelligence.
We have John von Neumann. And then we have you and me. And then we have a chicken.

(Laughter)

Sorry, a chicken.

(Laughter)

There's no reason for me to make this talk more depressing than it needs to be.

(Laughter)

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or at MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?

The other thing that's worrying, frankly, is this: imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

So what would apes like ourselves do in this circumstance? Well, we'd be free to play frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

(Laughter)
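For anyone checking the speed figures above: assuming only the stated million-fold speedup and nothing more, the rough arithmetic behind the 20,000-years claim is

\[
1~\text{week} \times 10^{6} \approx \frac{10^{6}}{52.18}~\text{years} \approx 19{,}165~\text{years} \approx 20{,}000~\text{years of human-level work per elapsed week.}
\]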
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order. It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So even mere rumors of this kind of breakthrough could cause our species to go berserk.

Now, one of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

(Laughter)

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.
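The winner-take-all figure quoted above follows from the same assumed million-fold speedup:

\[
6~\text{months} \times 10^{6} = 0.5~\text{years} \times 10^{6} = 500{,}000~\text{years of head start.}
\]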
And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." Would we just be counting down the months until the mothership lands? We would feel a little more urgency than we do.

Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

(Laughter)

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it.
I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

Thank you very much.

(Applause)