I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

(Laughter)

The point is, something would have to destroy civilization as we know it.
You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an "intelligence explosion," that the process could get away from us.

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption.
We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, unless we are interrupted, we will eventually build general intelligence into our machines. It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

Finally, the third assumption is that we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence.
Here we have John von Neumann. And then we have you and me. And then we have a chicken.

(Laughter)

Sorry, a chicken.

(Laughter)

There's no reason for me to make this talk more depressing than it needs to be.

(Laughter)

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?

The other thing that's worrying, frankly, is this: imagine the best-case scenario. Imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.
(Laughter)

Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order. It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
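That 500,000-year figure, like the 20,000 years of work per week mentioned a moment ago, is simply the assumed million-fold speed ratio applied to calendar time; as a rough check:

\[
1~\text{week} \times 10^{6} = 10^{6}~\text{weeks} \approx \frac{10^{6}}{52}~\text{years} \approx 1.9 \times 10^{4}~\text{years},
\qquad
6~\text{months} \times 10^{6} = 5 \times 10^{5}~\text{years}.
\]
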
Now, one of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

(Laughter)

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.

Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." Would we just count down the months until the mothership lands? We would feel a little more urgency than we do.

Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, so we're told, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

(Laughter)

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence.
Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

Thank you very much.

(Applause)