After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling. We've discovered that our universe is vastly grander than our ancestors imagined and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe. But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for centuries but for billions of years, and not just on Earth but throughout much of this amazing cosmos.

I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as "Life 2.0" because we can learn, which we, in nerdy geek speak, might think of as installing new software into our brains, like languages and job skills. "Life 3.0," which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants. So let's take a closer look at our relationship with technology, OK?

As an example, the Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey propelled by something more powerful than rocket engines, where the passengers aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence.

My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination. Let's start with the power.

I define intelligence very inclusively -- simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence.
And I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat.

It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips. Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now, AI can generate fake faces and simulate your face saying stuff that you never said. Not long ago, AI couldn't beat us at the game of Go. Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades handcrafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.

So all this amazing recent progress in AI really begs the question: How far will it go?

I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront --

(Laughter)

which will soon be automated and disrupted. But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks? This is the definition of artificial general intelligence -- AGI, which has been the holy grail of AI research since its inception. By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI. Sure, we might still choose to have some human jobs or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it, with humans no longer being the most intelligent.
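If it helps to make the task-landscape picture above concrete, here is a minimal toy sketch in Python. The task names and elevation numbers are invented purely for illustration; they are not from the talk or from any real measurement.

# A minimal sketch of the "task landscape" metaphor: elevation = how hard a
# task is for AI to do at human level, sea level = what AI can do today.
# The task names and numbers below are invented purely for illustration.

TASK_ELEVATION = {
    "arithmetic": 1.0,
    "playing Go": 3.0,
    "driving a car": 5.0,
    "translation": 5.5,
    "scientific research": 8.0,
    "AI research itself": 9.0,
}

def flooded_tasks(sea_level):
    """Tasks at or below the current 'sea level' count as automatable."""
    return [task for task, height in TASK_ELEVATION.items() if height <= sea_level]

def is_agi(sea_level):
    """In this toy picture, AGI means the water covers every task."""
    return sea_level >= max(TASK_ELEVATION.values())

for level in (2.0, 6.0, 10.0):
    print(f"sea level {level}: automated={flooded_tasks(level)}, AGI={is_agi(level)}")

Running it simply shows more and more tasks slipping underwater as the "sea level" rises, with AGI corresponding to the level at which nothing is left above water.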
Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years, raising the controversial possibility of an intelligence explosion, where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.

Alright, reality check: Are we going to get AGI any time soon? Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question -- and then what? What do we want the role of humans to be if machines can do everything better and cheaper than us?

The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?"

(Laughter)

But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it.

This brings us to the second part of our rocket metaphor: the steering. We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder? To help with this, I cofounded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible. You know, I love technology. Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if -- if we win the wisdom race -- the race between the growing power of our technology and the growing wisdom with which we manage it.
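For readers who want a feel for the "intelligence explosion" possibility mentioned above, a common back-of-the-envelope toy model (an illustration under strong simplifying assumptions, not anything claimed in the talk) treats AI capability \(I(t)\) as growing at a rate set by the current capability itself:

\[
\frac{dI}{dt} = k\,I^{\alpha}, \qquad k > 0.
\]

For \(\alpha = 1\) this is ordinary exponential growth, \(I(t) = I_0 e^{kt}\). For \(\alpha > 1\), the case where each gain in capability makes the next gain easier (recursive self-improvement), separating variables gives

\[
I(t) = \frac{I_0}{\left(1 - (\alpha - 1)\,k\,I_0^{\,\alpha-1}\,t\right)^{1/(\alpha-1)}},
\]

which diverges at the finite time \(t^{*} = 1/\bigl((\alpha-1)\,k\,I_0^{\,\alpha-1}\bigr)\) -- the caricature behind the phrase "intelligence explosion." How seriously to take any such model is, of course, exactly what makes the possibility controversial.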
But this is going to require a change of strategy because our old strategy has been learning from mistakes. We invented fire, screwed up a bunch of times -- invented the fire extinguisher.

(Laughter)

We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag. But with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think?

(Laughter)

It's much better to be proactive rather than reactive; plan ahead and get things right the first time, because that might be the only time we'll get. But it is funny, because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI: think through what can go wrong to make sure it goes right.

So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California last year and produced this list of 23 principles, which have since been signed by over 1,000 AI researchers and key industry leaders, and I want to tell you about three of these principles.

One is that we should avoid an arms race and lethal autonomous weapons. The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.
Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us.

(Applause)

Alright, now raise your hand if your computer has ever crashed.

(Laughter)

Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us. And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.

And whose goals should these be, anyway? Which goals should they be? This brings us to the third part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it? This is the elephant in the room that almost nobody talks about -- not even here at TED -- because we're so fixated on short-term AI challenges.

Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed? We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways.
0:12:15.120,0:12:18.536 What there was the greatest agreement on[br]was that we should be ambitious 0:12:18.560,0:12:20.576 and help life spread into the cosmos, 0:12:20.600,0:12:25.096 but there was much less agreement[br]about who or what should be in charge. 0:12:25.120,0:12:26.856 And I was actually quite amused 0:12:26.880,0:12:30.336 to see that there's some some people[br]who want it to be just machines. 0:12:30.360,0:12:32.056 (Laughter) 0:12:32.080,0:12:35.936 And there was total disagreement[br]about what the role of humans should be, 0:12:35.960,0:12:37.936 even at the most basic level, 0:12:37.960,0:12:40.776 so let's take a closer look[br]at possible futures 0:12:40.800,0:12:43.536 that we might choose[br]to steer toward, alright? 0:12:43.560,0:12:44.896 So don't get be wrong here. 0:12:44.920,0:12:46.976 I'm not talking about space travel, 0:12:47.000,0:12:50.200 merely about humanity's[br]metaphorical journey into the future. 0:12:50.920,0:12:54.416 So one option that some[br]of my AI colleagues like 0:12:54.440,0:12:58.056 is to build superintelligence[br]and keep it under human control, 0:12:58.080,0:12:59.816 like an enslaved god, 0:12:59.840,0:13:01.416 disconnected from the internet 0:13:01.440,0:13:04.696 and used to create unimaginable[br]technology and wealth 0:13:04.720,0:13:05.960 for whoever controls it. 0:13:06.800,0:13:08.256 But Lord Acton warned us 0:13:08.280,0:13:11.896 that power corrupts,[br]and absolute power corrupts absolutely, 0:13:11.920,0:13:15.976 so you might worry that maybe[br]we humans just aren't smart enough, 0:13:16.000,0:13:17.536 or wise enough rather, 0:13:17.560,0:13:18.800 to handle this much power. 0:13:19.640,0:13:22.176 Also, aside from any[br]moral qualms you might have 0:13:22.200,0:13:24.496 about enslaving superior minds, 0:13:24.520,0:13:28.496 you might worry that maybe[br]the superintelligence could outsmart us, 0:13:28.520,0:13:30.760 break out and take over. 0:13:31.560,0:13:34.976 But I also have colleagues[br]who are fine with AI taking over 0:13:35.000,0:13:37.296 and even causing human extinction, 0:13:37.320,0:13:40.896 as long as we feel the the AIs[br]are our worthy descendants, 0:13:40.920,0:13:42.656 like our children. 0:13:42.680,0:13:48.296 But how would we know that the AIs[br]have adopted our best values 0:13:48.320,0:13:52.696 and aren't just unconscious zombies[br]tricking us into anthropomorphizing them? 0:13:52.720,0:13:55.576 Also, shouldn't those people[br]who don't want human extinction 0:13:55.600,0:13:57.040 have a say in the matter, too? 0:13:58.200,0:14:01.576 Now, if you didn't like either[br]of those two high-tech options, 0:14:01.600,0:14:04.776 it's important to remember[br]that low-tech is suicide 0:14:04.800,0:14:06.056 from a cosmic perspective, 0:14:06.080,0:14:08.576 because if we don't go far[br]beyond today's technology, 0:14:08.600,0:14:11.416 the question isn't whether humanity[br]is going to go extinct, 0:14:11.440,0:14:13.456 merely whether[br]we're going to get taken out 0:14:13.480,0:14:15.616 by the next killer asteroid, supervolcano 0:14:15.640,0:14:18.736 or some other problem[br]that better technology could have solved. 0:14:18.760,0:14:22.336 So, how about having[br]our cake and eating it ... 0:14:22.360,0:14:24.200 with AGI that's not enslaved 0:14:25.120,0:14:28.296 but treats us well because its values[br]are aligned with ours? 
This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences -- basically making us the masters of our own destiny.

So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history -- let's face it. It could enable brutal, global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.

Now, hang on. Do you folks want the future that's politically right or left? Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with the computers, enabling virtual experiences? With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in, because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical -- literally.

So here's our choice. We can either be complacent about our future, taking as an article of blind faith that any new technology is guaranteed to be beneficial, and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence. Or we can be ambitious -- thinking hard about how to steer our technology and where we want to go with it to create the age of amazement.

We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology.

Thank you.

(Applause)