After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling. We've discovered that our universe is vastly grander than our ancestors imagined, and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe. But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for centuries but for billions of years, and not just on Earth but throughout much of this amazing cosmos.

I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as "Life 2.0" because we can learn, which we, in nerdy geek speak, might think of as installing new software into our brains, like languages and job skills. "Life 3.0," which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us Life 2.1, with our artificial knees, pacemakers and cochlear implants. So let's take a closer look at our relationship with technology, OK?

As an example, the Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey, propelled by something more powerful than rocket engines, where the passengers aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence.

My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination. Let's start with the power.
I define intelligence very inclusively -- simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence, and I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat.

It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips. Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now, AI can generate fake faces and simulate your face saying stuff that you never said. Not long ago, AI couldn't beat us at the game of Go. Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades handcrafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.

So all this amazing recent progress in AI really begs the question: How far will it go?

I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as the AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront --

(Laughter)

which will soon be automated and disrupted. But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks? This is the definition of artificial general intelligence -- AGI -- which has been the holy grail of AI research since its inception. By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI.
Sure, we might still choose to have some human jobs or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it, with humans no longer being the most intelligent. Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years, raising the controversial possibility of an intelligence explosion, where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.

All right, reality check: Are we going to get AGI any time soon? Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question -- and then what? What do we want the role of humans to be if machines can do everything better and cheaper than us?

The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?"

(Laughter)

But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it.

This brings us to the second part of our rocket metaphor: the steering. We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder? To help with this, I cofounded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible. You know, I love technology.
Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if -- if we win the wisdom race -- the race between the growing power of our technology and the growing wisdom with which we manage it. But this is going to require a change of strategy, because our old strategy has been learning from mistakes. We invented fire, screwed up a bunch of times -- invented the fire extinguisher.

(Laughter)

We invented the car, screwed up a bunch of times -- invented the traffic light, the seatbelt and the airbag, but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think?

(Laughter)

It's much better to be proactive rather than reactive; plan ahead and get things right the first time, because that might be the only time we'll get. But it is funny, because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI: think through what can go wrong to make sure it goes right.

So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California, last year, and produced this list of 23 principles, which have since been signed by over 1,000 AI researchers and key industry leaders, and I want to tell you about three of these principles. One is that we should avoid an arms race and lethal autonomous weapons. The idea here is that any science can be used for new ways of helping people or new ways of harming people.
For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.

Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI, and we still can't figure out how to divide this pie so that everyone is better off, then shame on us.

(Applause)

All right, now raise your hand if your computer has ever crashed.

(Laughter)

Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us.

And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.

And whose goals should these be, anyway? Which goals should they be? This brings us to the third part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it? This is the elephant in the room that almost nobody talks about -- not even here at TED -- because we're so fixated on short-term AI challenges.
Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed?

We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways. What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos, but there was much less agreement about who or what should be in charge. And I was actually quite amused to see that there are some people who want it to be just the machines.

(Laughter)

And there was total disagreement about what the role of humans should be, even at the most basic level, so let's take a closer look at possible futures that we might choose to steer toward, all right? So don't get me wrong here; I'm not talking about space travel, merely about humanity's metaphorical journey into the future.

So one option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it. But Lord Acton warned us that power corrupts and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power. Also, aside from any moral qualms you might have about enslaving superior minds, you might worry that maybe the superintelligence could outsmart us, break out and take over.

But I also have colleagues who are fine with AI taking over and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children. But how would we know that the AIs have adopted our best values, and aren't just unconscious zombies tricking us into anthropomorphizing them? Also, shouldn't those people who don't want human extinction have a say in the matter, too?
Now, if you didn't like either of those two high-tech options, it's important to remember that low-tech is suicide from a cosmic perspective, because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct, merely whether we're going to get taken out by the next killer asteroid, supervolcano or some other problem that better technology could have solved. So, how about having our cake and eating it ... with AGI that's not enslaved but treats us well because its values are aligned with ours? This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences -- basically making us the masters of our own destiny.

So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history -- let's face it. It could enable brutal, global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.

Now, hang on. Do you folks want the future that's politically right or left? Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with computers, enabling virtual experiences? With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in, because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical -- literally.

So here's our choice.
We can either be complacent about our future, taking as an article of blind faith that any new technology is guaranteed to be beneficial, and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence. Or we can be ambitious -- thinking hard about how to steer our technology and where we want to go with it to create the age of amazement. We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology.

Thank you.

(Applause)