1 00:00:00,760 --> 00:00:05,176 After 13.8 billion years of cosmic history, 2 00:00:05,200 --> 00:00:07,296 our universe has woken up 3 00:00:07,320 --> 00:00:08,840 and become aware of itself. 4 00:00:09,480 --> 00:00:11,416 From a small blue planet, 5 00:00:11,440 --> 00:00:15,576 tiny, conscious parts of our universe have begun gazing out into the cosmos 6 00:00:15,600 --> 00:00:16,976 with telescopes, 7 00:00:17,000 --> 00:00:18,480 discovering something humbling. 8 00:00:19,320 --> 00:00:22,216 We've discovered that our universe is vastly grander 9 00:00:22,240 --> 00:00:23,576 than our ancestors imagined 10 00:00:23,600 --> 00:00:27,856 and that life seems to be an almost imperceptibly small perturbation 11 00:00:27,880 --> 00:00:29,600 on an otherwise dead universe. 12 00:00:30,320 --> 00:00:33,336 But we've also discovered something inspiring, 13 00:00:33,360 --> 00:00:36,336 which is that the technology we're developing has the potential 14 00:00:36,360 --> 00:00:39,216 to help life flourish like never before, 15 00:00:39,240 --> 00:00:42,336 not just for centuries but for billions of years, 16 00:00:42,360 --> 00:00:46,480 and not just on Earth but throughout much of this amazing cosmos. 17 00:00:47,680 --> 00:00:51,016 I think of the earliest life as "Life 1.0" 18 00:00:51,040 --> 00:00:52,416 because it was really dumb, 19 00:00:52,440 --> 00:00:56,736 like bacteria, unable to learn anything during its lifetime. 20 00:00:56,760 --> 00:01:00,136 I think of us humans as Life 2.0 because we can learn, 21 00:01:00,160 --> 00:01:01,656 which we, in nerdy geek speak, 22 00:01:01,680 --> 00:01:04,896 might think of as installing new software into our brains, 23 00:01:04,920 --> 00:01:07,040 like languages and job skills. 24 00:01:07,680 --> 00:01:11,976 Life 3.0, which can design not only its software but also its hardware, 25 00:01:12,000 --> 00:01:13,656 of course doesn't exist yet. 26 00:01:13,680 --> 00:01:17,456 But perhaps our technology has already made us Life 2.1, 27 00:01:17,480 --> 00:01:21,816 with our artificial knees, pacemakers and cochlear implants. 28 00:01:21,840 --> 00:01:25,720 So let's take a closer look at our relationship with technology, OK? 29 00:01:26,800 --> 00:01:28,016 As an example, 30 00:01:28,040 --> 00:01:33,336 the Apollo 11 moon mission was both successful and inspiring, 31 00:01:33,360 --> 00:01:36,376 showing that when we humans use technology wisely, 32 00:01:36,400 --> 00:01:40,336 we can accomplish things that our ancestors could only dream of. 33 00:01:40,360 --> 00:01:43,336 But there's an even more inspiring journey 34 00:01:43,360 --> 00:01:46,040 propelled by something more powerful than rocket engines, 35 00:01:47,200 --> 00:01:49,536 where the passengers aren't just three astronauts 36 00:01:49,560 --> 00:01:51,336 but all of humanity. 37 00:01:51,360 --> 00:01:54,296 Let's talk about our collective journey into the future 38 00:01:54,320 --> 00:01:56,320 with artificial intelligence. 39 00:01:56,960 --> 00:02:01,496 My friend Jaan Tallinn likes to point out that just as with rocketry, 40 00:02:01,520 --> 00:02:04,680 it's not enough to make our technology powerful. 41 00:02:05,560 --> 00:02:08,735 We also have to figure out, if we're going to be really ambitious, 42 00:02:08,759 --> 00:02:10,175 how to steer it 43 00:02:10,199 --> 00:02:11,880 and where we want to go with it.
44 00:02:12,880 --> 00:02:15,720 So let's talk about all three for artificial intelligence: 45 00:02:16,440 --> 00:02:19,496 the power, the steering and the destination. 46 00:02:19,520 --> 00:02:20,806 Let's start with the power. 47 00:02:21,600 --> 00:02:24,696 I define intelligence very inclusively -- 48 00:02:24,720 --> 00:02:29,056 simply as our ability to accomplish complex goals, 49 00:02:29,080 --> 00:02:32,896 because I want to include both biological and artificial intelligence 50 00:02:32,920 --> 00:02:36,936 and I want to avoid the silly carbon-chauvinism idea 51 00:02:36,960 --> 00:02:39,320 that you can only be smart if you're made of meat. 52 00:02:40,880 --> 00:02:45,056 It's really amazing how the power of AI has grown recently. 53 00:02:45,080 --> 00:02:46,336 Just think about it. 54 00:02:46,360 --> 00:02:49,560 Not long ago, robots couldn't walk. 55 00:02:51,040 --> 00:02:52,760 Now, they can do backflips. 56 00:02:54,080 --> 00:02:55,896 Not long ago, 57 00:02:55,920 --> 00:02:57,680 we didn't have self-driving cars. 58 00:02:58,920 --> 00:03:01,400 Now, we have self-flying rockets. 59 00:03:03,960 --> 00:03:05,376 Not long ago, 60 00:03:05,400 --> 00:03:08,016 AI couldn't do face recognition. 61 00:03:08,040 --> 00:03:11,016 Now, AI can generate fake faces 62 00:03:11,040 --> 00:03:15,200 and simulate your face saying stuff that you never said. 63 00:03:16,400 --> 00:03:17,976 Not long ago, 64 00:03:18,000 --> 00:03:19,880 AI couldn't beat us at the game of Go. 65 00:03:20,400 --> 00:03:25,496 Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games 66 00:03:25,520 --> 00:03:26,776 and Go wisdom, 67 00:03:26,800 --> 00:03:31,776 ignored it all and became the world's best player by just playing against itself. 68 00:03:31,800 --> 00:03:35,496 And the most impressive feat here wasn't that it crushed human gamers, 69 00:03:35,520 --> 00:03:38,096 but that it crushed human AI researchers 70 00:03:38,120 --> 00:03:41,800 who had spent decades handcrafting game-playing software. 71 00:03:42,200 --> 00:03:46,856 And AlphaZero crushed human AI researchers not just in Go but even at chess, 72 00:03:46,880 --> 00:03:49,360 which we have been working on since 1950. 73 00:03:50,000 --> 00:03:54,240 So all this amazing recent progress in AI really begs the question: 74 00:03:55,280 --> 00:03:56,840 how far will it go? 75 00:03:57,800 --> 00:03:59,496 I like to think about this question 76 00:03:59,520 --> 00:04:02,496 in terms of this abstract landscape of tasks, 77 00:04:02,520 --> 00:04:05,976 where the elevation represents how hard it is for AI to do each task 78 00:04:06,000 --> 00:04:07,216 at human level, 79 00:04:07,240 --> 00:04:10,000 and the sea level represents what AI can do today. 80 00:04:11,120 --> 00:04:13,176 The sea level is rising as the AI improves, 81 00:04:13,200 --> 00:04:16,640 so there's a kind of global warming going on here in the task landscape. 82 00:04:18,040 --> 00:04:21,375 And the obvious takeaway is to avoid careers at the waterfront -- 83 00:04:21,399 --> 00:04:22,656 (Laughter) 84 00:04:22,680 --> 00:04:25,536 which will soon be automated and disrupted. 85 00:04:25,560 --> 00:04:28,536 But there's a much bigger question as well. 86 00:04:28,560 --> 00:04:30,370 How high will the water end up rising? 87 00:04:31,440 --> 00:04:34,640 Will it eventually rise to flood everything, 88 00:04:35,840 --> 00:04:38,336 matching human intelligence at all tasks?
89 00:04:38,360 --> 00:04:42,096 This is the definition of artificial general intelligence -- 90 00:04:42,120 --> 00:04:43,416 AGI, 91 00:04:43,440 --> 00:04:46,520 which has been the holy grail of AI research since its inception. 92 00:04:47,000 --> 00:04:48,776 By this definition, people who say, 93 00:04:48,800 --> 00:04:52,216 "Ah, there will always be jobs that humans can do better than machines," 94 00:04:52,240 --> 00:04:55,160 are simply saying that we'll never get AGI. 95 00:04:55,680 --> 00:04:59,256 Sure, we might still choose to have some human jobs 96 00:04:59,280 --> 00:05:02,376 or to give humans income and purpose with our jobs, 97 00:05:02,400 --> 00:05:06,136 but AGI will in any case transform life as we know it 98 00:05:06,160 --> 00:05:08,896 with humans no longer being the most intelligent. 99 00:05:08,920 --> 00:05:12,616 Now, if the water level does reach AGI, 100 00:05:12,640 --> 00:05:17,936 then further AI progress will be driven mainly not by humans but by AI, 101 00:05:17,960 --> 00:05:19,816 which means that there's a possibility 102 00:05:19,840 --> 00:05:22,176 that further AI progress could be way faster 103 00:05:22,200 --> 00:05:25,576 than the typical human research and development timescale of years, 104 00:05:25,600 --> 00:05:29,616 raising the controversial possibility of an intelligence explosion 105 00:05:29,640 --> 00:05:31,936 where recursively self-improving AI 106 00:05:31,960 --> 00:05:35,376 rapidly leaves human intelligence far behind, 107 00:05:35,400 --> 00:05:37,840 creating what's known as superintelligence. 108 00:05:39,800 --> 00:05:42,080 Alright, reality check: 109 00:05:43,120 --> 00:05:45,560 are we going to get AGI any time soon? 110 00:05:46,360 --> 00:05:49,056 Some famous AI researchers, like Rodney Brooks, 111 00:05:49,080 --> 00:05:51,576 think it won't happen for hundreds of years. 112 00:05:51,600 --> 00:05:55,496 But others, like Google DeepMind founder Demis Hassabis, 113 00:05:55,520 --> 00:05:56,776 are more optimistic 114 00:05:56,800 --> 00:05:59,376 and are working to try to make it happen much sooner. 115 00:05:59,400 --> 00:06:02,696 And recent surveys have shown that most AI researchers 116 00:06:02,720 --> 00:06:05,576 actually share Demis's optimism, 117 00:06:05,600 --> 00:06:08,680 expecting that we will get AGI within decades, 118 00:06:09,640 --> 00:06:11,896 so within the lifetime of many of us, 119 00:06:11,920 --> 00:06:13,880 which begs the question -- and then what? 120 00:06:15,040 --> 00:06:17,256 What do we want the role of humans to be 121 00:06:17,280 --> 00:06:19,960 if machines can do everything better and cheaper than us? 122 00:06:23,000 --> 00:06:25,000 The way I see it, we face a choice. 123 00:06:26,000 --> 00:06:27,576 One option is to be complacent. 124 00:06:27,600 --> 00:06:31,376 We can say, "Oh, let's just build machines that can do everything we can do 125 00:06:31,400 --> 00:06:33,216 and not worry about the consequences. 126 00:06:33,240 --> 00:06:36,496 Come on, if we build technology that makes all humans obsolete, 127 00:06:36,520 --> 00:06:38,616 what could possibly go wrong?" 128 00:06:38,640 --> 00:06:40,296 (Laughter) 129 00:06:40,320 --> 00:06:43,080 But I think that would be embarrassingly lame. 130 00:06:44,080 --> 00:06:47,576 I think we should be more ambitious -- in the spirit of TED. 131 00:06:47,600 --> 00:06:51,096 Let's envision the truly inspiring high-tech future 132 00:06:51,120 --> 00:06:52,520 and try to steer towards it. 
133 00:06:53,720 --> 00:06:57,256 This brings us to the second part of our rocket metaphor: the steering. 134 00:06:57,280 --> 00:06:59,176 We're making AI more powerful, 135 00:06:59,200 --> 00:07:03,016 but how can we steer towards a future 136 00:07:03,040 --> 00:07:06,120 where AI helps humanity flourish rather than flounder? 137 00:07:06,760 --> 00:07:08,016 To help with this, 138 00:07:08,040 --> 00:07:10,016 I cofounded the Future of Life Institute. 139 00:07:10,040 --> 00:07:12,816 It's a small nonprofit promoting beneficial technology use 140 00:07:12,840 --> 00:07:15,576 and our goal is simply for the future of life to exist 141 00:07:15,600 --> 00:07:17,656 and to be as inspiring as possible. 142 00:07:17,680 --> 00:07:20,856 You know, I love technology. 143 00:07:20,880 --> 00:07:23,800 Technology is why today is better than the Stone Age. 144 00:07:24,600 --> 00:07:28,680 And I'm optimistic that we can create a really inspiring high-tech future ... 145 00:07:29,680 --> 00:07:31,136 if -- and this is a big if -- 146 00:07:31,160 --> 00:07:33,616 if we win the wisdom race -- 147 00:07:33,640 --> 00:07:36,496 the race between the growing power of our technology 148 00:07:36,520 --> 00:07:38,720 and the growing wisdom with which we manage it. 149 00:07:39,240 --> 00:07:41,536 But this is going to require a change of strategy 150 00:07:41,560 --> 00:07:44,600 because our old strategy has been learning from mistakes. 151 00:07:45,280 --> 00:07:46,816 We invented fire, 152 00:07:46,840 --> 00:07:48,376 screwed up a bunch of times -- 153 00:07:48,400 --> 00:07:50,216 invented the fire extinguisher. 154 00:07:50,240 --> 00:07:51,576 (Laughter) 155 00:07:51,600 --> 00:07:54,016 We invented the car, screwed up a bunch of times -- 156 00:07:54,040 --> 00:07:56,707 invented the traffic light, the seat belt and the airbag, 157 00:07:56,731 --> 00:08:00,576 but with more powerful technology like nuclear weapons and AGI, 158 00:08:00,600 --> 00:08:03,976 learning from mistakes is a lousy strategy, 159 00:08:04,000 --> 00:08:05,216 don't you think? 160 00:08:05,240 --> 00:08:06,256 (Laughter) 161 00:08:06,280 --> 00:08:08,856 It's much better to be proactive rather than reactive; 162 00:08:08,880 --> 00:08:11,176 plan ahead and get things right the first time 163 00:08:11,200 --> 00:08:13,696 because that might be the only time we'll get. 164 00:08:13,720 --> 00:08:16,056 But it is funny because sometimes people tell me, 165 00:08:16,080 --> 00:08:18,816 "Max, shhh, don't talk like that. 166 00:08:18,840 --> 00:08:20,560 That's Luddite scaremongering." 167 00:08:22,040 --> 00:08:23,576 But it's not scaremongering. 168 00:08:23,600 --> 00:08:26,480 It's what we at MIT call safety engineering. 169 00:08:27,200 --> 00:08:28,416 Think about it: 170 00:08:28,440 --> 00:08:30,656 before NASA launched the Apollo 11 mission, 171 00:08:30,680 --> 00:08:33,816 they systematically thought through everything that could go wrong 172 00:08:33,840 --> 00:08:36,216 when you put people on top of explosive fuel tanks 173 00:08:36,240 --> 00:08:38,856 and launch them somewhere where no one could help them. 174 00:08:38,880 --> 00:08:40,816 And there was a lot that could go wrong. 175 00:08:40,840 --> 00:08:42,320 Was that scaremongering? 176 00:08:43,159 --> 00:08:44,376 No. 
177 00:08:44,400 --> 00:08:46,416 That was precisely the safety engineering 178 00:08:46,440 --> 00:08:48,376 that ensured the success of the mission, 179 00:08:48,400 --> 00:08:52,576 and that is precisely the strategy I think we should take with AGI. 180 00:08:52,600 --> 00:08:56,656 Think through what can go wrong to make sure it goes right. 181 00:08:56,680 --> 00:08:59,216 So in this spirit, we've organized conferences, 182 00:08:59,240 --> 00:09:02,056 bringing together leading AI researchers and other thinkers 183 00:09:02,080 --> 00:09:05,816 to discuss how to grow this wisdom we need to keep AI beneficial. 184 00:09:05,840 --> 00:09:09,136 Our last conference was in Asilomar, California, last year 185 00:09:09,160 --> 00:09:12,216 and produced this list of 23 principles 186 00:09:12,240 --> 00:09:15,136 which have since been signed by over 1,000 AI researchers 187 00:09:15,160 --> 00:09:16,456 and key industry leaders, 188 00:09:16,480 --> 00:09:19,656 and I want to tell you about three of these principles. 189 00:09:19,680 --> 00:09:24,640 One is that we should avoid an arms race and lethal autonomous weapons. 190 00:09:25,480 --> 00:09:29,096 The idea here is that any science can be used for new ways of helping people 191 00:09:29,120 --> 00:09:30,656 or new ways of harming people. 192 00:09:30,680 --> 00:09:34,616 For example, biology and chemistry are much more likely to be used 193 00:09:34,640 --> 00:09:39,496 for new medicines or new cures than for new ways of killing people, 194 00:09:39,520 --> 00:09:41,696 because biologists and chemists pushed hard -- 195 00:09:41,720 --> 00:09:42,976 and successfully -- 196 00:09:43,000 --> 00:09:45,176 for bans on biological and chemical weapons. 197 00:09:45,200 --> 00:09:46,456 And in the same spirit, 198 00:09:46,480 --> 00:09:50,920 most AI researchers want to stigmatize and ban lethal autonomous weapons. 199 00:09:51,600 --> 00:09:53,416 Another Asilomar AI principle 200 00:09:53,440 --> 00:09:57,136 is that we should mitigate AI-fueled income inequality. 201 00:09:57,160 --> 00:10:01,616 I think that if we can grow the economic pie dramatically with AI 202 00:10:01,640 --> 00:10:04,096 and we still can't figure out how to divide this pie 203 00:10:04,120 --> 00:10:05,696 so that everyone is better off, 204 00:10:05,720 --> 00:10:06,976 then shame on us. 205 00:10:07,000 --> 00:10:11,096 (Applause) 206 00:10:11,120 --> 00:10:14,720 Alright, now raise your hand if your computer has ever crashed. 207 00:10:15,480 --> 00:10:16,736 (Laughter) 208 00:10:16,760 --> 00:10:18,416 Wow, that's a lot of hands. 209 00:10:18,440 --> 00:10:20,616 Well, then you'll appreciate this principle 210 00:10:20,640 --> 00:10:23,776 that we should invest much more in AI safety research, 211 00:10:23,800 --> 00:10:27,456 because as we put AI in charge of even more decisions and infrastructure, 212 00:10:27,480 --> 00:10:31,096 we need to figure out how to transform today's buggy and hackable computers 213 00:10:31,120 --> 00:10:33,536 into robust AI systems that we can really trust, 214 00:10:33,560 --> 00:10:34,776 because otherwise, 215 00:10:34,800 --> 00:10:37,616 all this awesome new technology can malfunction and harm us, 216 00:10:37,640 --> 00:10:39,616 or get hacked and be turned against us.
217 00:10:39,640 --> 00:10:45,336 And this AI safety work has to include work on AI value alignment, 218 00:10:45,360 --> 00:10:48,176 because the real threat from AGI isn't malice, 219 00:10:48,200 --> 00:10:49,856 like in silly Hollywood movies, 220 00:10:49,880 --> 00:10:51,616 but competence -- 221 00:10:51,640 --> 00:10:55,056 AGI accomplishing goals that just aren't aligned with ours. 222 00:10:55,080 --> 00:10:59,816 For example, when we humans drove the West African black rhino extinct, 223 00:10:59,840 --> 00:11:03,736 we didn't do it because we were a bunch of evil rhinoceros haters, did we? 224 00:11:03,760 --> 00:11:05,816 We did it because we were smarter than them 225 00:11:05,840 --> 00:11:08,416 and our goals weren't aligned with theirs. 226 00:11:08,440 --> 00:11:11,096 But AGI is by definition smarter than us, 227 00:11:11,120 --> 00:11:14,696 so to make sure that we don't put ourselves in the position of those rhinos 228 00:11:14,720 --> 00:11:16,696 if we create AGI, 229 00:11:16,720 --> 00:11:20,896 we need to figure out how to make machines understand our goals, 230 00:11:20,920 --> 00:11:24,080 adopt our goals and retain our goals. 231 00:11:25,320 --> 00:11:28,176 And whose goals should these be, anyway? 232 00:11:28,200 --> 00:11:30,096 Which goals should they be? 233 00:11:30,120 --> 00:11:33,680 This brings us to the third part of our rocket metaphor: the destination. 234 00:11:35,160 --> 00:11:37,016 We're making AI more powerful, 235 00:11:37,040 --> 00:11:38,856 trying to figure out how to steer it, 236 00:11:38,880 --> 00:11:40,560 but where do we want to go with it? 237 00:11:41,760 --> 00:11:45,416 This is the elephant in the room that almost nobody talks about -- 238 00:11:45,440 --> 00:11:47,296 not even here at TED -- 239 00:11:47,320 --> 00:11:51,400 because we're so fixated on short-term AI challenges. 240 00:11:52,080 --> 00:11:56,736 Look, our species is trying to build AGI, 241 00:11:56,760 --> 00:12:00,256 motivated by curiosity and economics, 242 00:12:00,280 --> 00:12:03,960 but what sort of future society are we hoping for if we succeed? 243 00:12:04,680 --> 00:12:06,616 We did an opinion poll on this recently, 244 00:12:06,640 --> 00:12:07,856 and I was struck to see 245 00:12:07,880 --> 00:12:10,776 that most people actually want us to build superintelligence: 246 00:12:10,800 --> 00:12:13,960 AI that's vastly smarter than us in all ways. 247 00:12:15,120 --> 00:12:18,536 What there was the greatest agreement on was that we should be ambitious 248 00:12:18,560 --> 00:12:20,576 and help life spread into the cosmos, 249 00:12:20,600 --> 00:12:25,096 but there was much less agreement about who or what should be in charge. 250 00:12:25,120 --> 00:12:26,856 And I was actually quite amused 251 00:12:26,880 --> 00:12:30,336 to see that there's some people who want it to be just machines. 252 00:12:30,360 --> 00:12:32,056 (Laughter) 253 00:12:32,080 --> 00:12:35,936 And there was total disagreement about what the role of humans should be, 254 00:12:35,960 --> 00:12:37,936 even at the most basic level, 255 00:12:37,960 --> 00:12:40,776 so let's take a closer look at possible futures 256 00:12:40,800 --> 00:12:43,536 that we might choose to steer toward, alright? 257 00:12:43,560 --> 00:12:44,896 So don't get me wrong here. 258 00:12:44,920 --> 00:12:46,976 I'm not talking about space travel, 259 00:12:47,000 --> 00:12:50,200 merely about humanity's metaphorical journey into the future.
260 00:12:50,920 --> 00:12:54,416 So one option that some of my AI colleagues like 261 00:12:54,440 --> 00:12:58,056 is to build superintelligence and keep it under human control, 262 00:12:58,080 --> 00:12:59,816 like an enslaved god, 263 00:12:59,840 --> 00:13:01,416 disconnected from the internet 264 00:13:01,440 --> 00:13:04,696 and used to create unimaginable technology and wealth 265 00:13:04,720 --> 00:13:05,960 for whoever controls it. 266 00:13:06,800 --> 00:13:08,256 But Lord Acton warned us 267 00:13:08,280 --> 00:13:11,896 that power corrupts and absolute power corrupts absolutely, 268 00:13:11,920 --> 00:13:15,976 so you might worry that maybe we humans just aren't smart enough, 269 00:13:16,000 --> 00:13:17,536 or wise enough rather, 270 00:13:17,560 --> 00:13:18,800 to handle this much power. 271 00:13:19,640 --> 00:13:22,176 Also, aside from any moral qualms you might have 272 00:13:22,200 --> 00:13:24,496 about enslaving superior minds, 273 00:13:24,520 --> 00:13:28,496 you might worry that maybe the superintelligence could outsmart us, 274 00:13:28,520 --> 00:13:30,760 break out and take over. 275 00:13:31,560 --> 00:13:34,976 But I also have colleagues who are fine with AI taking over 276 00:13:35,000 --> 00:13:37,296 and even causing human extinction, 277 00:13:37,320 --> 00:13:40,896 as long as we feel the AIs are our worthy descendants, 278 00:13:40,920 --> 00:13:42,656 like our children. 279 00:13:42,680 --> 00:13:48,296 But how would we know that the AIs have adopted our best values 280 00:13:48,320 --> 00:13:52,696 and aren't just unconscious zombies tricking us into anthropomorphizing them? 281 00:13:52,720 --> 00:13:55,576 Also, shouldn't those people who don't want human extinction 282 00:13:55,600 --> 00:13:57,040 have a say in the matter, too? 283 00:13:58,200 --> 00:14:01,576 Now, if you didn't like either of those two high-tech options, 284 00:14:01,600 --> 00:14:04,776 it's important to remember that low-tech is suicide 285 00:14:04,800 --> 00:14:06,056 from a cosmic perspective, 286 00:14:06,080 --> 00:14:08,576 because if we don't go far beyond today's technology, 287 00:14:08,600 --> 00:14:11,416 the question isn't whether humanity is going to go extinct, 288 00:14:11,440 --> 00:14:13,456 merely whether we're going to get taken out 289 00:14:13,480 --> 00:14:15,616 by the next killer asteroid, supervolcano, 290 00:14:15,640 --> 00:14:18,736 or some other problem that better technology could have solved. 291 00:14:18,760 --> 00:14:22,336 So, how about having our cake and eating it ... 292 00:14:22,360 --> 00:14:24,200 with AGI that's not enslaved 293 00:14:25,120 --> 00:14:28,296 but treats us well because its values are aligned with ours? 294 00:14:28,320 --> 00:14:32,496 This is the gist of what Eliezer Yudkowsky has called "friendly AI," 295 00:14:32,520 --> 00:14:35,200 and if we can do this, it could be awesome. 296 00:14:35,840 --> 00:14:40,656 It could not only eliminate negative experiences like disease, poverty, 297 00:14:40,680 --> 00:14:42,136 crime and other suffering, 298 00:14:42,160 --> 00:14:44,976 but it could also give us the freedom to choose 299 00:14:45,000 --> 00:14:49,056 from a fantastic new diversity of positive experiences -- 300 00:14:49,080 --> 00:14:52,240 basically making us the masters of our own destiny. 301 00:14:54,280 --> 00:14:55,656 So in summary, 302 00:14:55,680 --> 00:14:58,776 our situation with technology is complicated, 303 00:14:58,800 --> 00:15:01,216 but the big picture is rather simple.
304 00:15:01,240 --> 00:15:04,696 Most AI researchers expect AGI within decades 305 00:15:04,720 --> 00:15:07,856 and if we just bumble into this unprepared, 306 00:15:07,880 --> 00:15:11,216 it will probably be the biggest mistake in human history -- 307 00:15:11,240 --> 00:15:12,656 let's face it. 308 00:15:12,680 --> 00:15:15,256 It could enable a brutal global dictatorship 309 00:15:15,280 --> 00:15:18,816 with unprecedented inequality, surveillance and suffering, 310 00:15:18,840 --> 00:15:20,816 and maybe even human extinction. 311 00:15:20,840 --> 00:15:23,160 But if we steer carefully, 312 00:15:24,040 --> 00:15:27,936 we could end up in a fantastic future where everybody's better off: 313 00:15:27,960 --> 00:15:30,336 the poor are richer, the rich are richer, 314 00:15:30,360 --> 00:15:34,320 everybody is healthy and free to live out their dreams. 315 00:15:35,000 --> 00:15:36,536 Now, hang on. 316 00:15:36,560 --> 00:15:41,136 Do you folks want the future that's politically right or left? 317 00:15:41,160 --> 00:15:44,016 Do you want the pious society with strict moral rules, 318 00:15:44,040 --> 00:15:45,856 or do you want a hedonistic free-for-all, 319 00:15:45,880 --> 00:15:48,096 more like Burning Man 24-7? 320 00:15:48,120 --> 00:15:50,536 Do you want beautiful beaches, forests and lakes, 321 00:15:50,560 --> 00:15:53,976 or would you prefer to rearrange some of those atoms with the computers, 322 00:15:54,000 --> 00:15:55,715 so they can be virtual experiences? 323 00:15:55,739 --> 00:15:58,896 With friendly AI, we could simply build all of these societies 324 00:15:58,920 --> 00:16:02,136 and give people the freedom to choose which one they want to live in 325 00:16:02,160 --> 00:16:05,256 because we would no longer be limited by our intelligence, 326 00:16:05,280 --> 00:16:06,736 merely by the laws of physics. 327 00:16:06,760 --> 00:16:11,376 So the resources and space for this would be astronomical -- 328 00:16:11,400 --> 00:16:12,720 literally. 329 00:16:13,320 --> 00:16:14,520 So here's our choice. 330 00:16:15,880 --> 00:16:18,200 We can either be complacent about our future, 331 00:16:19,440 --> 00:16:22,096 taking as an article of blind faith 332 00:16:22,120 --> 00:16:26,136 that any new technology is guaranteed to be beneficial, 333 00:16:26,160 --> 00:16:30,296 and just repeat that to ourselves as a mantra over and over and over again 334 00:16:30,320 --> 00:16:34,000 as we drift like a rudderless ship towards our own obsolescence. 335 00:16:34,920 --> 00:16:36,800 Or we can be ambitious -- 336 00:16:37,840 --> 00:16:40,296 thinking hard about how to steer our technology 337 00:16:40,320 --> 00:16:42,256 and where we want to go with it 338 00:16:42,280 --> 00:16:44,040 to create the age of amazement. 339 00:16:45,000 --> 00:16:47,856 We're all here to celebrate the age of amazement 340 00:16:47,880 --> 00:16:52,320 and I feel that its essence should lie in becoming not overpowered 341 00:16:53,240 --> 00:16:55,856 but empowered by our technology. 342 00:16:55,880 --> 00:16:57,256 Thank you. 343 00:16:57,280 --> 00:17:00,360 (Applause)