WEBVTT

00:00:00.760 --> 00:00:05.176
After 13.8 billion years of cosmic history,

00:00:05.200 --> 00:00:07.296
our universe has woken up

00:00:07.320 --> 00:00:08.840
and become aware of itself.

00:00:09.480 --> 00:00:11.416
From a small blue planet,

00:00:11.440 --> 00:00:15.576
tiny, conscious parts of our universe have begun gazing out into the cosmos

00:00:15.600 --> 00:00:16.976
with telescopes,

00:00:17.000 --> 00:00:18.480
discovering something humbling.

00:00:19.320 --> 00:00:22.216
We've discovered that our universe is vastly grander

00:00:22.240 --> 00:00:23.576
than our ancestors imagined

00:00:23.600 --> 00:00:27.856
and that life seems to be an almost imperceptibly small perturbation

00:00:27.880 --> 00:00:29.600
on an otherwise dead universe.

00:00:30.320 --> 00:00:33.336
But we've also discovered something inspiring,

00:00:33.360 --> 00:00:36.336
which is that the technology we're developing has the potential

00:00:36.360 --> 00:00:39.216
to help life flourish like never before,

00:00:39.240 --> 00:00:42.336
not just for centuries but for billions of years,

00:00:42.360 --> 00:00:46.480
and not just on Earth but throughout much of this amazing cosmos.

00:00:47.680 --> 00:00:51.016
I think of the earliest life as "Life 1.0"

00:00:51.040 --> 00:00:52.416
because it was really dumb,

00:00:52.440 --> 00:00:56.736
like bacteria, unable to learn anything during its lifetime.

00:00:56.760 --> 00:01:00.136
I think of us humans as Life 2.0 because we can learn,

00:01:00.160 --> 00:01:01.656
which we, in nerdy geek speak,

00:01:01.680 --> 00:01:04.896
might think of as installing new software into our brains,

00:01:04.920 --> 00:01:07.040
like languages and job skills.

00:01:07.680 --> 00:01:11.976
Life 3.0, which can design not only its software but also its hardware,

00:01:12.000 --> 00:01:13.656
of course, doesn't exist yet.

00:01:13.680 --> 00:01:17.456
But perhaps our technology has already made us Life 2.1,

00:01:17.480 --> 00:01:21.816
with our artificial knees, pacemakers and cochlear implants.

NOTE Paragraph

00:01:21.840 --> 00:01:25.720
So let's take a closer look at our relationship with technology, OK?

00:01:26.800 --> 00:01:28.016
As an example,

00:01:28.040 --> 00:01:33.336
the Apollo 11 moon mission was both successful and inspiring,

00:01:33.360 --> 00:01:36.376
showing that when we humans use technology wisely,

00:01:36.400 --> 00:01:40.336
we can accomplish things that our ancestors could only dream of.

00:01:40.360 --> 00:01:43.336
But there's an even more inspiring journey

00:01:43.360 --> 00:01:46.040
propelled by something more powerful than rocket engines,

00:01:47.200 --> 00:01:49.536
where the passengers aren't just three astronauts

00:01:49.560 --> 00:01:51.336
but all of humanity.

00:01:51.360 --> 00:01:54.296
Let's talk about our collective journey into the future

00:01:54.320 --> 00:01:56.320
with artificial intelligence.

NOTE Paragraph

00:01:56.960 --> 00:02:01.496
My friend Jaan Tallinn likes to point out that just as with rocketry,

00:02:01.520 --> 00:02:04.680
it's not enough to make our technology powerful.

00:02:05.560 --> 00:02:08.735
We also have to figure out, if we're going to be really ambitious,

00:02:08.759 --> 00:02:10.175
how to steer it

00:02:10.199 --> 00:02:11.880
and where we want to go with it.

00:02:12.880 --> 00:02:15.720
So let's talk about all three for artificial intelligence:

00:02:16.440 --> 00:02:19.496
the power, the steering and the destination.

00:02:19.520 --> 00:02:20.806
Let's start with the power.

00:02:21.600 --> 00:02:24.696
I define intelligence very inclusively --

00:02:24.720 --> 00:02:29.056
simply as our ability to accomplish complex goals,

00:02:29.080 --> 00:02:32.896
because I want to include both biological and artificial intelligence

00:02:32.920 --> 00:02:36.936
and I want to avoid the silly carbon-chauvinism idea

00:02:36.960 --> 00:02:39.320
that you can only be smart if you're made of meat.

00:02:40.880 --> 00:02:45.056
It's really amazing how the power of AI has grown recently.

00:02:45.080 --> 00:02:46.336
Just think about it.

00:02:46.360 --> 00:02:49.560
Not long ago, robots couldn't walk.

00:02:51.040 --> 00:02:52.760
Now, they can do backflips.

00:02:54.080 --> 00:02:55.896
Not long ago,

00:02:55.920 --> 00:02:57.680
we didn't have self-driving cars.

00:02:58.920 --> 00:03:01.400
Now, we have self-flying rockets.

00:03:03.960 --> 00:03:05.376
Not long ago,

00:03:05.400 --> 00:03:08.016
AI couldn't do face recognition.

00:03:08.040 --> 00:03:11.016
Now, AI can generate fake faces

00:03:11.040 --> 00:03:15.200
and simulate your face saying stuff that you never said.

00:03:16.400 --> 00:03:17.976
Not long ago,

00:03:18.000 --> 00:03:19.880
AI couldn't beat us at the game of Go.

00:03:20.400 --> 00:03:25.496
Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games

00:03:25.520 --> 00:03:26.776
and Go wisdom,

00:03:26.800 --> 00:03:31.776
ignored it all and became the world's best player by just playing against itself.

00:03:31.800 --> 00:03:35.496
And the most impressive feat here wasn't that it crushed human gamers,

00:03:35.520 --> 00:03:38.096
but that it crushed human AI researchers

00:03:38.120 --> 00:03:41.800
who had spent decades handcrafting game-playing software.

00:03:42.200 --> 00:03:46.856
And AlphaZero crushed human AI researchers not just in Go but even at chess,

00:03:46.880 --> 00:03:49.360
which we have been working on since 1950.

NOTE Paragraph

00:03:50.000 --> 00:03:54.240
So all this amazing recent progress in AI really begs the question:

00:03:55.280 --> 00:03:56.840
how far will it go?

00:03:57.800 --> 00:03:59.496
I like to think about this question

00:03:59.520 --> 00:04:02.496
in terms of this abstract landscape of tasks,

00:04:02.520 --> 00:04:05.976
where the elevation represents how hard it is for AI to do each task

00:04:06.000 --> 00:04:07.216
at human level,

00:04:07.240 --> 00:04:10.000
and the sea level represents what AI can do today.

00:04:11.120 --> 00:04:13.176
The sea level is rising as the AI improves,

00:04:13.200 --> 00:04:16.640
so there's a kind of global warming going on here in the task landscape.

00:04:18.040 --> 00:04:21.375
And the obvious takeaway is to avoid careers at the waterfront --

NOTE Paragraph

00:04:21.399 --> 00:04:22.656
(Laughter)

NOTE Paragraph

00:04:22.680 --> 00:04:25.536
which will soon be automated and disrupted.

00:04:25.560 --> 00:04:28.536
But there's a much bigger question as well.

00:04:28.560 --> 00:04:30.370
How high will the water end up rising?

00:04:31.440 --> 00:04:34.640
Will it eventually rise to flood everything,

00:04:35.840 --> 00:04:38.336
matching human intelligence at all tasks?

00:04:38.360 --> 00:04:42.096
This is the definition of artificial general intelligence --

00:04:42.120 --> 00:04:43.416
AGI,

00:04:43.440 --> 00:04:46.520
which has been the holy grail of AI research since its inception.

00:04:47.000 --> 00:04:48.776
By this definition, people who say,

00:04:48.800 --> 00:04:52.216
"Ah, there will always be jobs that humans can do better than machines,"

00:04:52.240 --> 00:04:55.160
are simply saying that we'll never get AGI.

00:04:55.680 --> 00:04:59.256
Sure, we might still choose to have some human jobs

00:04:59.280 --> 00:05:02.376
or to give humans income and purpose with our jobs,

00:05:02.400 --> 00:05:06.136
but AGI will in any case transform life as we know it

00:05:06.160 --> 00:05:08.896
with humans no longer being the most intelligent.

00:05:08.920 --> 00:05:12.616
Now, if the water level does reach AGI,

00:05:12.640 --> 00:05:17.936
then further AI progress will be driven mainly not by humans but by AI,

00:05:17.960 --> 00:05:19.816
which means that there's a possibility

00:05:19.840 --> 00:05:22.176
that further AI progress could be way faster

00:05:22.200 --> 00:05:25.576
than the typical human research and development timescale of years,

00:05:25.600 --> 00:05:29.616
raising the controversial possibility of an intelligence explosion

00:05:29.640 --> 00:05:31.936
where recursively self-improving AI

00:05:31.960 --> 00:05:35.376
rapidly leaves human intelligence far behind,

00:05:35.400 --> 00:05:37.840
creating what's known as superintelligence.

NOTE Paragraph

00:05:39.800 --> 00:05:42.080
Alright, reality check:

00:05:43.120 --> 00:05:45.560
are we going to get AGI any time soon?

00:05:46.360 --> 00:05:49.056
Some famous AI researchers, like Rodney Brooks,

00:05:49.080 --> 00:05:51.576
think it won't happen for hundreds of years.

00:05:51.600 --> 00:05:55.496
But others, like Google DeepMind founder Demis Hassabis,

00:05:55.520 --> 00:05:56.776
are more optimistic

00:05:56.800 --> 00:05:59.376
and are working to try to make it happen much sooner.

00:05:59.400 --> 00:06:02.696
And recent surveys have shown that most AI researchers

00:06:02.720 --> 00:06:05.576
actually share Demis's optimism,

00:06:05.600 --> 00:06:08.680
expecting that we will get AGI within decades,

00:06:09.640 --> 00:06:11.896
so within the lifetime of many of us,

00:06:11.920 --> 00:06:13.880
which begs the question -- and then what?

00:06:15.040 --> 00:06:17.256
What do we want the role of humans to be

00:06:17.280 --> 00:06:19.960
if machines can do everything better and cheaper than us?

NOTE Paragraph

00:06:23.000 --> 00:06:25.000
The way I see it, we face a choice.

00:06:26.000 --> 00:06:27.576
One option is to be complacent.

00:06:27.600 --> 00:06:31.376
We can say, "Oh, let's just build machines that can do everything we can do

00:06:31.400 --> 00:06:33.216
and not worry about the consequences.

00:06:33.240 --> 00:06:36.496
Come on, if we build technology that makes all humans obsolete,

00:06:36.520 --> 00:06:38.616
what could possibly go wrong?"

NOTE Paragraph

00:06:38.640 --> 00:06:40.296
(Laughter)

NOTE Paragraph

00:06:40.320 --> 00:06:43.080
But I think that would be embarrassingly lame.

00:06:44.080 --> 00:06:47.576
I think we should be more ambitious -- in the spirit of TED.

00:06:47.600 --> 00:06:51.096
Let's envision the truly inspiring high-tech future

00:06:51.120 --> 00:06:52.520
and try to steer towards it.

00:06:53.720 --> 00:06:57.256
This brings us to the second part of our rocket metaphor: the steering.

00:06:57.280 --> 00:06:59.176
We're making AI more powerful,

00:06:59.200 --> 00:07:03.016
but how can we steer towards a future

00:07:03.040 --> 00:07:06.120
where AI helps humanity flourish rather than flounder?

00:07:06.760 --> 00:07:08.016
To help with this,

00:07:08.040 --> 00:07:10.016
I cofounded the Future of Life Institute.

00:07:10.040 --> 00:07:12.816
It's a small nonprofit promoting beneficial technology use

00:07:12.840 --> 00:07:15.576
and our goal is simply for the future of life to exist

00:07:15.600 --> 00:07:17.656
and to be as inspiring as possible.

00:07:17.680 --> 00:07:20.856
You know, I love technology.

00:07:20.880 --> 00:07:23.800
Technology is why today is better than the Stone Age.

00:07:24.600 --> 00:07:28.680
And I'm optimistic that we can create a really inspiring high-tech future ...

00:07:29.680 --> 00:07:31.136
if -- and this is a big if --

00:07:31.160 --> 00:07:33.616
if we win the wisdom race --

00:07:33.640 --> 00:07:36.496
the race between the growing power of our technology

00:07:36.520 --> 00:07:38.720
and the growing wisdom with which we manage it.

00:07:39.240 --> 00:07:41.536
But this is going to require a change of strategy

00:07:41.560 --> 00:07:44.600
because our old strategy has been learning from mistakes.

00:07:45.280 --> 00:07:46.816
We invented fire,

00:07:46.840 --> 00:07:48.376
screwed up a bunch of times --

00:07:48.400 --> 00:07:50.216
invented the fire extinguisher.

NOTE Paragraph

00:07:50.240 --> 00:07:51.576
(Laughter)

NOTE Paragraph

00:07:51.600 --> 00:07:54.016
We invented the car, screwed up a bunch of times --

00:07:54.040 --> 00:07:56.707
invented the traffic light, the seat belt and the airbag,

00:07:56.731 --> 00:08:00.576
but with more powerful technology like nuclear weapons and AGI,

00:08:00.600 --> 00:08:03.976
learning from mistakes is a lousy strategy,

00:08:04.000 --> 00:08:05.216
don't you think?

NOTE Paragraph

00:08:05.240 --> 00:08:06.256
(Laughter)

NOTE Paragraph

00:08:06.280 --> 00:08:08.856
It's much better to be proactive rather than reactive;

00:08:08.880 --> 00:08:11.176
plan ahead and get things right the first time

00:08:11.200 --> 00:08:13.696
because that might be the only time we'll get.

00:08:13.720 --> 00:08:16.056
But it is funny because sometimes people tell me,

00:08:16.080 --> 00:08:18.816
"Max, shhh, don't talk like that.

00:08:18.840 --> 00:08:20.560
That's Luddite scaremongering."

00:08:22.040 --> 00:08:23.576
But it's not scaremongering.

00:08:23.600 --> 00:08:26.480
It's what we at MIT call safety engineering.

00:08:27.200 --> 00:08:28.416
Think about it:

00:08:28.440 --> 00:08:30.656
before NASA launched the Apollo 11 mission,

00:08:30.680 --> 00:08:33.816
they systematically thought through everything that could go wrong

00:08:33.840 --> 00:08:36.216
when you put people on top of explosive fuel tanks

00:08:36.240 --> 00:08:38.856
and launch them somewhere where no one could help them.

00:08:38.880 --> 00:08:40.816
And there was a lot that could go wrong.

00:08:40.840 --> 00:08:42.320
Was that scaremongering?

00:08:43.159 --> 00:08:44.376
No.

00:08:44.400 --> 00:08:46.416
That was precisely the safety engineering

00:08:46.440 --> 00:08:48.376
that ensured the success of the mission,

00:08:48.400 --> 00:08:52.576
and that is precisely the strategy I think we should take with AGI.

00:08:52.600 --> 00:08:56.656
Think through what can go wrong to make sure it goes right.

NOTE Paragraph

00:08:56.680 --> 00:08:59.216
So in this spirit, we've organized conferences,

00:08:59.240 --> 00:09:02.056
bringing together leading AI researchers and other thinkers

00:09:02.080 --> 00:09:05.816
to discuss how to grow this wisdom we need to keep AI beneficial.

00:09:05.840 --> 00:09:09.136
Our last conference was in Asilomar, California last year

00:09:09.160 --> 00:09:12.216
and produced this list of 23 principles

00:09:12.240 --> 00:09:15.136
which have since been signed by over 1,000 AI researchers

00:09:15.160 --> 00:09:16.456
and key industry leaders,

00:09:16.480 --> 00:09:19.656
and I want to tell you about three of these principles.

00:09:19.680 --> 00:09:24.640
One is that we should avoid an arms race and lethal autonomous weapons.

00:09:25.480 --> 00:09:29.096
The idea here is that any science can be used for new ways of helping people

00:09:29.120 --> 00:09:30.656
or new ways of harming people.

00:09:30.680 --> 00:09:34.616
For example, biology and chemistry are much more likely to be used

00:09:34.640 --> 00:09:39.496
for new medicines or new cures than for new ways of killing people,

00:09:39.520 --> 00:09:41.696
because biologists and chemists pushed hard --

00:09:41.720 --> 00:09:42.976
and successfully --

00:09:43.000 --> 00:09:45.176
for bans on biological and chemical weapons.

00:09:45.200 --> 00:09:46.456
And in the same spirit,

00:09:46.480 --> 00:09:50.920
most AI researchers want to stigmatize and ban lethal autonomous weapons.

00:09:51.600 --> 00:09:53.416
Another Asilomar AI principle

00:09:53.440 --> 00:09:57.136
is that we should mitigate AI-fueled income inequality.

00:09:57.160 --> 00:10:01.616
I think that if we can grow the economic pie dramatically with AI

00:10:01.640 --> 00:10:04.096
and we still can't figure out how to divide this pie

00:10:04.120 --> 00:10:05.696
so that everyone is better off,

00:10:05.720 --> 00:10:06.976
then shame on us.

NOTE Paragraph

00:10:07.000 --> 00:10:11.096
(Applause)

NOTE Paragraph

00:10:11.120 --> 00:10:14.720
Alright, now raise your hand if your computer has ever crashed.

NOTE Paragraph

00:10:15.480 --> 00:10:16.736
(Laughter)

NOTE Paragraph

00:10:16.760 --> 00:10:18.416
Wow, that's a lot of hands.

00:10:18.440 --> 00:10:20.616
Well, then you'll appreciate this principle

00:10:20.640 --> 00:10:23.776
that we should invest much more in AI safety research,

00:10:23.800 --> 00:10:27.456
because as we put AI in charge of even more decisions and infrastructure,

00:10:27.480 --> 00:10:31.096
we need to figure out how to transform today's buggy and hackable computers

00:10:31.120 --> 00:10:33.536
into robust AI systems that we can really trust,

00:10:33.560 --> 00:10:34.776
because otherwise,

00:10:34.800 --> 00:10:37.616
all this awesome new technology can malfunction and harm us,

00:10:37.640 --> 00:10:39.616
or get hacked and be turned against us.

00:10:39.640 --> 00:10:45.336
And this AI safety work has to include work on AI value alignment,

00:10:45.360 --> 00:10:48.176
because the real threat from AGI isn't malice,

00:10:48.200 --> 00:10:49.856
like in silly Hollywood movies,

00:10:49.880 --> 00:10:51.616
but competence --

00:10:51.640 --> 00:10:55.056
AGI accomplishing goals that just aren't aligned with ours.

00:10:55.080 --> 00:10:59.816
For example, when we humans drove the West African black rhino extinct,

00:10:59.840 --> 00:11:03.736
we didn't do it because we were a bunch of evil rhinoceros haters, did we?

00:11:03.760 --> 00:11:05.816
We did it because we were smarter than them

00:11:05.840 --> 00:11:08.416
and our goals weren't aligned with theirs.

00:11:08.440 --> 00:11:11.096
But AGI is by definition smarter than us,

00:11:11.120 --> 00:11:14.696
so to make sure that we don't put ourselves in the position of those rhinos

00:11:14.720 --> 00:11:16.696
if we create AGI,

00:11:16.720 --> 00:11:20.896
we need to figure out how to make machines understand our goals,

00:11:20.920 --> 00:11:24.080
adopt our goals and retain our goals.

NOTE Paragraph

00:11:25.320 --> 00:11:28.176
And whose goals should these be, anyway?

00:11:28.200 --> 00:11:30.096
Which goals should they be?

00:11:30.120 --> 00:11:33.680
This brings us to the third part of our rocket metaphor: the destination.

00:11:35.160 --> 00:11:37.016
We're making AI more powerful,

00:11:37.040 --> 00:11:38.856
trying to figure out how to steer it,

00:11:38.880 --> 00:11:40.560
but where do we want to go with it?

00:11:41.760 --> 00:11:45.416
This is the elephant in the room that almost nobody talks about --

00:11:45.440 --> 00:11:47.296
not even here at TED --

00:11:47.320 --> 00:11:51.400
because we're so fixated on short-term AI challenges.

00:11:52.080 --> 00:11:56.736
Look, our species is trying to build AGI,

00:11:56.760 --> 00:12:00.256
motivated by curiosity and economics,

00:12:00.280 --> 00:12:03.960
but what sort of future society are we hoping for if we succeed?

00:12:04.680 --> 00:12:06.616
We did an opinion poll on this recently,

00:12:06.640 --> 00:12:07.856
and I was struck to see

00:12:07.880 --> 00:12:10.776
that most people actually want us to build superintelligence:

00:12:10.800 --> 00:12:13.960
AI that's vastly smarter than us in all ways.

00:12:15.120 --> 00:12:18.536
What there was the greatest agreement on was that we should be ambitious

00:12:18.560 --> 00:12:20.576
and help life spread into the cosmos,

00:12:20.600 --> 00:12:25.096
but there was much less agreement about who or what should be in charge.

00:12:25.120 --> 00:12:26.856
And I was actually quite amused

00:12:26.880 --> 00:12:30.336
to see that there's some people who want it to be just machines.

NOTE Paragraph

00:12:30.360 --> 00:12:32.056
(Laughter)

NOTE Paragraph

00:12:32.080 --> 00:12:35.936
And there was total disagreement about what the role of humans should be,

00:12:35.960 --> 00:12:37.936
even at the most basic level,

00:12:37.960 --> 00:12:40.776
so let's take a closer look at possible futures

00:12:40.800 --> 00:12:43.536
that we might choose to steer toward, alright?

NOTE Paragraph

00:12:43.560 --> 00:12:44.896
So don't get me wrong here.

00:12:44.920 --> 00:12:46.976
I'm not talking about space travel,

00:12:47.000 --> 00:12:50.200
merely about humanity's metaphorical journey into the future.

00:12:50.920 --> 00:12:54.416
So one option that some of my AI colleagues like

00:12:54.440 --> 00:12:58.056
is to build superintelligence and keep it under human control,

00:12:58.080 --> 00:12:59.816
like an enslaved god,

00:12:59.840 --> 00:13:01.416
disconnected from the internet

00:13:01.440 --> 00:13:04.696
and used to create unimaginable technology and wealth

00:13:04.720 --> 00:13:05.960
for whoever controls it.

00:13:06.800 --> 00:13:08.256
But Lord Acton warned us

00:13:08.280 --> 00:13:11.896
that power corrupts and absolute power corrupts absolutely,

00:13:11.920 --> 00:13:15.976
so you might worry that maybe we humans just aren't smart enough,

00:13:16.000 --> 00:13:17.536
or wise enough rather,

00:13:17.560 --> 00:13:18.800
to handle this much power.

00:13:19.640 --> 00:13:22.176
Also, aside from any moral qualms you might have

00:13:22.200 --> 00:13:24.496
about enslaving superior minds,

00:13:24.520 --> 00:13:28.496
you might worry that maybe the superintelligence could outsmart us,

00:13:28.520 --> 00:13:30.760
break out and take over.

00:13:31.560 --> 00:13:34.976
But I also have colleagues who are fine with AI taking over

00:13:35.000 --> 00:13:37.296
and even causing human extinction,

00:13:37.320 --> 00:13:40.896
as long as we feel the AIs are our worthy descendants,

00:13:40.920 --> 00:13:42.656
like our children.

00:13:42.680 --> 00:13:48.296
But how would we know that the AIs have adopted our best values

00:13:48.320 --> 00:13:52.696
and aren't just unconscious zombies tricking us into anthropomorphizing them?

00:13:52.720 --> 00:13:55.576
Also, shouldn't those people who don't want human extinction

00:13:55.600 --> 00:13:57.040
have a say in the matter, too?

00:13:58.200 --> 00:14:01.576
Now, if you didn't like either of those two high-tech options,

00:14:01.600 --> 00:14:04.776
it's important to remember that low-tech is suicide

00:14:04.800 --> 00:14:06.056
from a cosmic perspective,

00:14:06.080 --> 00:14:08.576
because if we don't go far beyond today's technology,

00:14:08.600 --> 00:14:11.416
the question isn't whether humanity is going to go extinct,

00:14:11.440 --> 00:14:13.456
merely whether we're going to get taken out

00:14:13.480 --> 00:14:15.616
by the next killer asteroid, supervolcano,

00:14:15.640 --> 00:14:18.736
or some other problem that better technology could have solved.

NOTE Paragraph

00:14:18.760 --> 00:14:22.336
So, how about having our cake and eating it ...

00:14:22.360 --> 00:14:24.200
with AGI that's not enslaved

00:14:25.120 --> 00:14:28.296
but treats us well because its values are aligned with ours?

00:14:28.320 --> 00:14:32.496
This is the gist of what Eliezer Yudkowsky has called "friendly AI,"

00:14:32.520 --> 00:14:35.200
and if we can do this, it could be awesome.

00:14:35.840 --> 00:14:40.656
It could not only eliminate negative experiences like disease, poverty,

00:14:40.680 --> 00:14:42.136
crime and other suffering,

00:14:42.160 --> 00:14:44.976
but it could also give us the freedom to choose

00:14:45.000 --> 00:14:49.056
from a fantastic new diversity of positive experiences --

00:14:49.080 --> 00:14:52.240
basically making us the masters of our own destiny.

NOTE Paragraph

00:14:54.280 --> 00:14:55.656
So in summary,

00:14:55.680 --> 00:14:58.776
our situation with technology is complicated,

00:14:58.800 --> 00:15:01.216
but the big picture is rather simple.

00:15:01.240 --> 00:15:04.696
Most AI researchers expect AGI within decades

00:15:04.720 --> 00:15:07.856
and if we just bumble into this unprepared,

00:15:07.880 --> 00:15:11.216
it will probably be the biggest mistake in human history --

00:15:11.240 --> 00:15:12.656
let's face it.

00:15:12.680 --> 00:15:15.256
It could enable brutal, global dictatorship

00:15:15.280 --> 00:15:18.816
with unprecedented inequality, surveillance and suffering,

00:15:18.840 --> 00:15:20.816
and maybe even human extinction.

00:15:20.840 --> 00:15:23.160
But if we steer carefully,

00:15:24.040 --> 00:15:27.936
we could end up in a fantastic future where everybody's better off:

00:15:27.960 --> 00:15:30.336
the poor are richer, the rich are richer,

00:15:30.360 --> 00:15:34.320
everybody is healthy and free to live out their dreams.

NOTE Paragraph

00:15:35.000 --> 00:15:36.536
Now, hang on.

00:15:36.560 --> 00:15:41.136
Do you folks want the future that's politically right or left?

00:15:41.160 --> 00:15:44.016
Do you want the pious society with strict moral rules,

00:15:44.040 --> 00:15:45.856
or do you want a hedonistic free-for-all,

00:15:45.880 --> 00:15:48.096
more like Burning Man 24-7?

00:15:48.120 --> 00:15:50.536
Do you want beautiful beaches, forests and lakes,

00:15:50.560 --> 00:15:53.976
or would you prefer to rearrange some of those atoms with the computers,

00:15:54.000 --> 00:15:55.715
so they can be virtual experiences?

00:15:55.739 --> 00:15:58.896
With friendly AI, we could simply build all of these societies

00:15:58.920 --> 00:16:02.136
and give people the freedom to choose which one they want to live in

00:16:02.160 --> 00:16:05.256
because we would no longer be limited by our intelligence,

00:16:05.280 --> 00:16:06.736
merely by the laws of physics.

00:16:06.760 --> 00:16:11.376
So the resources and space for this would be astronomical --

00:16:11.400 --> 00:16:12.720
literally.

NOTE Paragraph

00:16:13.320 --> 00:16:14.520
So here's our choice.

00:16:15.880 --> 00:16:18.200
We can either be complacent about our future,

00:16:19.440 --> 00:16:22.096
taking as an article of blind faith

00:16:22.120 --> 00:16:26.136
that any new technology is guaranteed to be beneficial,

00:16:26.160 --> 00:16:30.296
and just repeat that to ourselves as a mantra over and over and over again

00:16:30.320 --> 00:16:34.000
as we drift like a rudderless ship towards our own obsolescence.

00:16:34.920 --> 00:16:36.800
Or we can be ambitious --

00:16:37.840 --> 00:16:40.296
thinking hard about how to steer our technology

00:16:40.320 --> 00:16:42.256
and where we want to go with it

00:16:42.280 --> 00:16:44.040
to create the age of amazement.

00:16:45.000 --> 00:16:47.856
We're all here to celebrate the age of amazement

00:16:47.880 --> 00:16:52.320
and I feel that its essence should lie in becoming not overpowered

00:16:53.240 --> 00:16:55.856
but empowered by our technology.

NOTE Paragraph

00:16:55.880 --> 00:16:57.256
Thank you.

NOTE Paragraph

00:16:57.280 --> 00:17:00.360
(Applause)