WEBVTT 00:00:01.075 --> 00:00:05.337 After 13.8 billion years of cosmic history, 00:00:05.337 --> 00:00:07.481 our universe has woken up 00:00:07.481 --> 00:00:08.935 and become aware of itself. 00:00:09.561 --> 00:00:11.598 From a small blue planet, 00:00:11.598 --> 00:00:15.701 tiny, conscious parts of our universe have begun gazing out into the cosmos 00:00:15.701 --> 00:00:17.090 with telescopes, 00:00:17.090 --> 00:00:18.690 discovering something humbling. 00:00:19.510 --> 00:00:22.277 We've discovered that our universe is vastly grander 00:00:22.277 --> 00:00:23.624 than our ancestors imagined 00:00:23.624 --> 00:00:27.803 and that life seems to be an almost imperceptibly small perturbation 00:00:27.803 --> 00:00:29.615 on an otherwise dead universe. 00:00:30.463 --> 00:00:33.434 But we've also discovered something inspiring, 00:00:33.434 --> 00:00:36.424 which is that the technology we're developing has the potential 00:00:36.424 --> 00:00:39.313 to help life flourish like never before, 00:00:39.313 --> 00:00:42.395 not just for centuries but for billions of years, 00:00:42.395 --> 00:00:46.759 and not just on Earth but throughout much of this amazing cosmos. 00:00:47.741 --> 00:00:51.172 I think of the earliest life as "Life 1.0" 00:00:51.172 --> 00:00:52.646 because it was really dumb, 00:00:52.646 --> 00:00:56.530 like bacteria, unable to learn anything during its lifetime. 00:00:56.837 --> 00:01:00.446 I think of us humans as Life 2.0 because we can learn, 00:01:00.446 --> 00:01:02.102 which we, in nerdy geek speak, 00:01:02.102 --> 00:01:05.011 might think of as installing new software into our brains, 00:01:05.011 --> 00:01:07.178 like languages and job skills. 00:01:07.730 --> 00:01:12.167 Life 3.0, which can design not only its software but also its hardware, 00:01:12.167 --> 00:01:13.577 of course, doesn't exist yet. 00:01:13.827 --> 00:01:17.694 But perhaps our technology has already made us Life 2.1, 00:01:17.694 --> 00:01:21.899 with our artificial knees, pacemakers and cochlear implants. NOTE Paragraph 00:01:21.899 --> 00:01:25.556 So let's take a closer look at our relationship with technology, OK? 00:01:26.888 --> 00:01:28.000 As an example, 00:01:28.000 --> 00:01:33.480 the Apollo 11 moon mission was both successful and inspiring, 00:01:33.480 --> 00:01:36.559 showing that when we humans use technology wisely, 00:01:36.559 --> 00:01:40.170 we can accomplish things that our ancestors could only dream of. 00:01:40.474 --> 00:01:43.439 But there's an even more inspiring journey 00:01:43.439 --> 00:01:47.442 propelled by something more powerful than rocket engines, 00:01:47.442 --> 00:01:49.709 where the passengers aren't just three astronauts 00:01:49.709 --> 00:01:50.965 but all of humanity. 00:01:51.553 --> 00:01:54.427 Let's talk about our collective journey into the future 00:01:54.427 --> 00:01:56.425 with artificial intelligence. NOTE Paragraph 00:01:57.168 --> 00:02:01.698 My friend Jaan Tallinn likes to point out that just as with rocketry, 00:02:01.698 --> 00:02:04.848 it's not enough to make our technology powerful. 00:02:05.612 --> 00:02:06.987 We also have to figure out, 00:02:06.987 --> 00:02:08.856 if we're going to be really ambitious, 00:02:08.856 --> 00:02:10.353 how to steer it 00:02:10.353 --> 00:02:12.063 and where we want to go with it. 00:02:12.932 --> 00:02:16.581 So let's talk about all three for artificial intelligence: 00:02:16.581 --> 00:02:19.527 the power, the steering and the destination. 00:02:19.708 --> 00:02:21.035 Let's start with the power.
00:02:21.691 --> 00:02:24.796 I define intelligence very inclusively -- 00:02:24.796 --> 00:02:29.307 simply as our ability to accomplish complex goals, 00:02:29.307 --> 00:02:33.142 because I want to include both biological and artificial intelligence 00:02:33.142 --> 00:02:37.096 and I want to avoid the silly carbon-chauvinism idea 00:02:37.096 --> 00:02:39.490 that you can only be smart if you're made of meat. 00:02:41.003 --> 00:02:45.167 It's really amazing how the power of AI has grown recently. 00:02:45.167 --> 00:02:46.291 Just think about it. 00:02:46.477 --> 00:02:48.086 Not long ago, 00:02:48.086 --> 00:02:49.669 robots couldn't walk. 00:02:51.112 --> 00:02:53.035 Now, they can do backflips. 00:02:54.235 --> 00:02:56.016 Not long ago, 00:02:56.016 --> 00:02:57.969 we didn't have self-driving cars. 00:02:59.101 --> 00:03:01.413 Now, we have self-flying rockets. 00:03:04.040 --> 00:03:05.516 Not long ago, 00:03:05.516 --> 00:03:07.665 AI couldn't do face recognition. 00:03:08.324 --> 00:03:11.178 Now, AI can generate fake faces 00:03:11.178 --> 00:03:15.072 and simulate your face saying stuff that you never said. 00:03:16.529 --> 00:03:18.104 Not long ago, 00:03:18.104 --> 00:03:20.138 AI couldn't beat us at the game of Go. 00:03:20.452 --> 00:03:25.602 Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games 00:03:25.602 --> 00:03:26.790 and Go wisdom, 00:03:26.790 --> 00:03:31.348 ignored it all and became the world's best player by just playing against itself. 00:03:31.905 --> 00:03:35.648 And the most impressive feat here wasn't that it crushed human gamers, 00:03:35.648 --> 00:03:38.242 but that it crushed human AI researchers 00:03:38.242 --> 00:03:41.525 who had spent decades handcrafting game-playing software. 00:03:42.336 --> 00:03:46.948 And AlphaZero crushed human AI researchers not just in Go but even at chess, 00:03:46.948 --> 00:03:49.371 which we have been working on since 1950. NOTE Paragraph 00:03:50.225 --> 00:03:55.473 So all this amazing recent progress in AI really begs the question: 00:03:55.473 --> 00:03:56.948 how far will it go? 00:03:57.869 --> 00:03:59.625 I like to think about this question 00:03:59.625 --> 00:04:02.697 in terms of this abstract landscape of tasks, 00:04:02.697 --> 00:04:06.193 where the elevation represents how hard it is for AI to do each task 00:04:06.193 --> 00:04:07.202 at human level, 00:04:07.202 --> 00:04:10.250 and the sea level represents what AI can do today. 00:04:11.241 --> 00:04:13.327 The sea level is rising as AI improves, 00:04:13.327 --> 00:04:16.808 so there's a kind of global warming going on here in the task landscape. 00:04:18.106 --> 00:04:21.686 And the obvious takeaway is to avoid careers at the waterfront -- NOTE Paragraph 00:04:21.686 --> 00:04:22.952 (Laughter) NOTE Paragraph 00:04:22.952 --> 00:04:25.307 which will soon be automated and disrupted. 00:04:25.649 --> 00:04:28.273 But there's a much bigger question as well. 00:04:28.589 --> 00:04:30.803 How high will the water end up rising? 00:04:31.541 --> 00:04:34.946 Will it eventually rise to flood everything, 00:04:35.959 --> 00:04:38.294 matching human intelligence at all tasks? 00:04:38.588 --> 00:04:42.216 This is the definition of artificial general intelligence -- 00:04:42.216 --> 00:04:43.499 AGI, 00:04:43.499 --> 00:04:46.564 which has been the holy grail of AI research since its inception.
00:04:47.068 --> 00:04:48.326 By this definition, 00:04:48.326 --> 00:04:50.899 people who say, "Ah, there will always be jobs 00:04:50.899 --> 00:04:52.877 that humans can do better than machines," 00:04:52.877 --> 00:04:55.306 are simply saying that we'll never get AGI. 00:04:55.903 --> 00:04:59.491 Sure, we might still choose to have some human jobs 00:04:59.491 --> 00:05:02.532 or to give humans income and purpose with our jobs, 00:05:02.532 --> 00:05:06.316 but AGI will in any case transform life as we know it, 00:05:06.316 --> 00:05:08.678 with humans no longer being the most intelligent. 00:05:08.983 --> 00:05:12.838 Now, if the water level does reach AGI, 00:05:12.838 --> 00:05:18.058 then further AI progress will be driven mainly not by humans but by AI, 00:05:18.058 --> 00:05:19.888 which means that there's a possibility 00:05:19.888 --> 00:05:22.400 that further AI progress could be way faster 00:05:22.400 --> 00:05:25.715 than the typical human research and development timescale of years, 00:05:25.715 --> 00:05:29.819 raising the controversial possibility of an intelligence explosion 00:05:29.819 --> 00:05:31.957 where recursively self-improving AI 00:05:31.957 --> 00:05:35.523 rapidly leaves human intelligence far behind, 00:05:35.523 --> 00:05:37.754 creating what's known as superintelligence. NOTE Paragraph 00:05:39.927 --> 00:05:43.207 All right, reality check: 00:05:43.207 --> 00:05:45.559 are we going to get AGI any time soon? 00:05:46.456 --> 00:05:47.974 Some famous AI researchers, 00:05:47.974 --> 00:05:49.184 like Rodney Brooks, 00:05:49.184 --> 00:05:51.260 think it won't happen for hundreds of years. 00:05:51.746 --> 00:05:55.729 But others, like Google DeepMind founder Demis Hassabis, 00:05:55.729 --> 00:05:56.913 are more optimistic 00:05:56.913 --> 00:05:59.413 and are working to try to make it happen much sooner. 00:05:59.699 --> 00:06:02.863 And recent surveys have shown that most AI researchers 00:06:02.863 --> 00:06:05.827 actually share Demis's optimism, 00:06:05.827 --> 00:06:09.638 expecting that we will get AGI within decades, 00:06:09.638 --> 00:06:12.012 so within the lifetime of many of us, 00:06:12.012 --> 00:06:13.315 which begs the question -- 00:06:13.315 --> 00:06:14.322 and then what? 00:06:15.177 --> 00:06:17.482 What do we want the role of humans to be 00:06:17.482 --> 00:06:20.265 if machines can do everything better and cheaper than us? NOTE Paragraph 00:06:23.062 --> 00:06:25.223 The way I see it, we face a choice. 00:06:26.049 --> 00:06:27.598 One option is to be complacent. 00:06:27.708 --> 00:06:31.595 We can say, "Oh, let's just build machines that can do everything we can do 00:06:31.595 --> 00:06:33.373 and not worry about the consequences. 00:06:33.373 --> 00:06:36.572 Come on, if we build technology that makes all humans obsolete, 00:06:36.572 --> 00:06:38.619 what could possibly go wrong?" NOTE Paragraph 00:06:38.935 --> 00:06:39.940 (Laughter) NOTE Paragraph 00:06:40.494 --> 00:06:43.083 But I think that would be embarrassingly lame. 00:06:44.090 --> 00:06:45.972 I think we should be more ambitious -- 00:06:45.972 --> 00:06:47.237 in the spirit of TED. 00:06:47.839 --> 00:06:51.149 Let's envision the truly inspiring high-tech future 00:06:51.149 --> 00:06:52.750 and try to steer towards it. 00:06:53.820 --> 00:06:56.492 This brings us to the second part of our rocket metaphor: 00:06:56.492 --> 00:06:57.486 the steering.
00:06:57.486 --> 00:06:59.219 We're making AI more powerful, 00:06:59.219 --> 00:07:03.167 but how can we steer towards a future 00:07:03.167 --> 00:07:06.356 where AI helps humanity flourish rather than flounder? 00:07:06.829 --> 00:07:07.840 To help with this, 00:07:07.840 --> 00:07:10.020 I cofounded the Future of Life Institute. 00:07:10.020 --> 00:07:12.961 It's a small nonprofit promoting beneficial technology use 00:07:12.961 --> 00:07:15.793 and our goal is simply for the future of life to exist 00:07:15.793 --> 00:07:17.722 and to be as inspiring as possible. 00:07:18.032 --> 00:07:20.685 You know, I love technology. 00:07:21.083 --> 00:07:23.800 Technology is why today is better than the Stone Age. 00:07:24.654 --> 00:07:29.806 And I'm optimistic that we can create a really inspiring high-tech future ... 00:07:29.806 --> 00:07:30.814 if -- 00:07:30.814 --> 00:07:32.006 and this is a big if -- 00:07:32.006 --> 00:07:33.772 if we win the wisdom race -- 00:07:33.772 --> 00:07:36.745 the race between the growing power of our technology 00:07:36.745 --> 00:07:38.976 and the growing wisdom with which we manage it. 00:07:39.411 --> 00:07:41.738 But this is going to require a change of strategy 00:07:41.738 --> 00:07:44.761 because our old strategy has been learning from mistakes. 00:07:45.331 --> 00:07:47.020 We invented fire, 00:07:47.020 --> 00:07:48.588 screwed up a bunch of times -- 00:07:48.588 --> 00:07:50.319 invented the fire extinguisher. NOTE Paragraph 00:07:50.511 --> 00:07:51.508 (Laughter) NOTE Paragraph 00:07:51.754 --> 00:07:52.960 We invented the car, 00:07:52.960 --> 00:07:54.458 screwed up a bunch of times -- 00:07:54.458 --> 00:07:57.098 invented the traffic light, the seatbelt and the airbag, 00:07:57.098 --> 00:08:00.749 but with more powerful technology like nuclear weapons and AGI, 00:08:00.749 --> 00:08:03.999 learning from mistakes is a lousy strategy, 00:08:03.999 --> 00:08:05.002 don't you think? NOTE Paragraph 00:08:05.002 --> 00:08:06.002 (Laughter) NOTE Paragraph 00:08:06.002 --> 00:08:08.988 It's much better to be proactive rather than reactive; 00:08:08.988 --> 00:08:11.438 plan ahead and get things right the first time 00:08:11.438 --> 00:08:13.703 because that might be the only time we'll get. 00:08:13.765 --> 00:08:16.094 But it is funny because sometimes people tell me, 00:08:16.094 --> 00:08:17.437 "Max, shhh, 00:08:17.437 --> 00:08:18.854 don't talk like that. 00:08:18.854 --> 00:08:20.869 That's Luddite scaremongering." 00:08:22.242 --> 00:08:23.822 But it's not scaremongering. 00:08:23.822 --> 00:08:26.605 It's what we at MIT call safety engineering. 00:08:27.250 --> 00:08:28.455 Think about it: 00:08:28.455 --> 00:08:30.843 before NASA launched the Apollo 11 mission, 00:08:30.843 --> 00:08:33.954 they systematically thought through everything that could go wrong 00:08:33.954 --> 00:08:36.461 when you put people on top of explosive fuel tanks 00:08:36.461 --> 00:08:39.041 and launch them somewhere where no one could help them. 00:08:39.041 --> 00:08:40.968 And there was a lot that could go wrong. 00:08:40.968 --> 00:08:42.317 Was that scaremongering? 00:08:43.280 --> 00:08:44.276 No. 00:08:44.276 --> 00:08:46.283 That was precisely the safety engineering 00:08:46.283 --> 00:08:48.192 that ensured the success of the mission, 00:08:48.192 --> 00:08:52.265 and that is precisely the strategy I think we should take with AGI. 00:08:52.670 --> 00:08:56.585 Think through what can go wrong to make sure it goes right.
NOTE Paragraph 00:08:56.849 --> 00:08:58.126 So in this spirit, 00:08:58.126 --> 00:08:59.474 we've organized conferences, 00:08:59.474 --> 00:09:02.275 bringing together leading AI researchers and other thinkers 00:09:02.275 --> 00:09:05.569 to discuss how to grow this wisdom we need to keep AI beneficial. 00:09:05.959 --> 00:09:09.162 Our last conference was in Asilomar, California last year 00:09:09.162 --> 00:09:12.302 and produced this list of 23 principles 00:09:12.302 --> 00:09:15.275 which have since been signed by over 1,000 AI researchers 00:09:15.275 --> 00:09:16.730 and key industry leaders, 00:09:16.730 --> 00:09:19.397 and I want to tell you about three of these principles. 00:09:19.866 --> 00:09:24.815 One is that we should avoid an arms race in lethal autonomous weapons. 00:09:25.609 --> 00:09:29.213 The idea here is that any science can be used for new ways of helping people 00:09:29.213 --> 00:09:30.754 or new ways of harming people. 00:09:30.754 --> 00:09:34.894 For example, biology and chemistry are much more likely to be used 00:09:34.894 --> 00:09:39.667 for new medicines or new cures than for new ways of killing people, 00:09:39.667 --> 00:09:41.871 because biologists and chemists pushed hard -- 00:09:41.871 --> 00:09:42.974 and successfully -- 00:09:42.974 --> 00:09:45.378 for bans on biological and chemical weapons. 00:09:45.378 --> 00:09:46.633 And in the same spirit, 00:09:46.633 --> 00:09:51.146 most AI researchers want to stigmatize and ban lethal autonomous weapons. 00:09:51.753 --> 00:09:53.796 Another Asilomar AI principle 00:09:53.796 --> 00:09:56.780 is that we should mitigate AI-fueled income inequality. 00:09:57.280 --> 00:10:01.777 I think that if we can grow the economic pie dramatically with AI, 00:10:01.777 --> 00:10:04.239 and we still can't figure out how to divide this pie 00:10:04.239 --> 00:10:05.897 so that everyone is better off, 00:10:05.897 --> 00:10:07.100 then shame on us. NOTE Paragraph 00:10:07.100 --> 00:10:10.130 (Applause) NOTE Paragraph 00:10:11.259 --> 00:10:14.794 All right, now raise your hand if your computer has ever crashed. NOTE Paragraph 00:10:15.970 --> 00:10:17.116 (Laughter) NOTE Paragraph 00:10:17.116 --> 00:10:18.438 Wow, that's a lot of hands. 00:10:18.608 --> 00:10:20.643 Well, then you'll appreciate this principle 00:10:20.643 --> 00:10:23.966 that we should invest much more in AI safety research, 00:10:23.966 --> 00:10:27.663 because as we put AI in charge of even more decisions and infrastructure, 00:10:27.663 --> 00:10:31.265 we need to figure out how to transform today's buggy and hackable computers 00:10:31.265 --> 00:10:33.785 into robust AI systems that we can really trust, 00:10:33.785 --> 00:10:34.792 because otherwise, 00:10:34.792 --> 00:10:37.580 all this awesome new technology can malfunction and harm us 00:10:37.580 --> 00:10:39.492 or get hacked and be turned against us. 00:10:39.705 --> 00:10:45.504 And this AI safety work has to include work on AI value alignment, 00:10:45.504 --> 00:10:48.304 because the real threat from AGI isn't malice, 00:10:48.304 --> 00:10:49.942 like in silly Hollywood movies, 00:10:49.942 --> 00:10:51.734 but competence -- 00:10:51.734 --> 00:10:54.785 AGI accomplishing goals that just aren't aligned with ours. 00:10:55.264 --> 00:10:56.306 For example, 00:10:56.306 --> 00:10:59.990 when we humans drove the West African black rhino extinct, 00:10:59.990 --> 00:11:03.068 we didn't do it because we're a bunch of evil rhinoceros haters, 00:11:03.068 --> 00:11:04.074 did we?
00:11:04.074 --> 00:11:06.169 We did it because we were smarter than them 00:11:06.169 --> 00:11:08.213 and our goals weren't aligned with theirs. 00:11:08.549 --> 00:11:11.358 But AGI is by definition smarter than us, 00:11:11.358 --> 00:11:14.902 so to make sure that we don't put ourselves in the position of those rhinos 00:11:14.902 --> 00:11:16.705 if we create AGI, 00:11:16.705 --> 00:11:21.030 we need to figure out how to make machines understand our goals, 00:11:21.030 --> 00:11:22.363 adopt our goals 00:11:22.363 --> 00:11:23.960 and retain our goals. NOTE Paragraph 00:11:25.518 --> 00:11:27.830 And whose goals should these be, anyway? 00:11:28.291 --> 00:11:29.862 Which goals should they be? 00:11:30.172 --> 00:11:33.092 This brings us to the third part of our rocket metaphor: 00:11:33.092 --> 00:11:34.352 the destination. 00:11:35.322 --> 00:11:37.172 We're making AI more powerful, 00:11:37.172 --> 00:11:38.958 trying to figure out how to steer it, 00:11:38.958 --> 00:11:40.769 but where do we want to go with it? 00:11:41.974 --> 00:11:45.596 This is the elephant in the room that almost nobody talks about -- 00:11:45.596 --> 00:11:47.279 not even here at TED -- 00:11:47.279 --> 00:11:51.347 because we're so fixated on short-term AI challenges. 00:11:52.161 --> 00:11:56.869 Look, our species is trying to build AGI, 00:11:56.869 --> 00:12:00.474 motivated by curiosity and economics, 00:12:00.474 --> 00:12:03.939 but what sort of future society are we hoping for if we succeed? 00:12:04.721 --> 00:12:06.660 We did an opinion poll on this recently, 00:12:06.660 --> 00:12:07.848 and I was struck to see 00:12:07.848 --> 00:12:10.709 that most people actually want us to build superintelligence: 00:12:10.709 --> 00:12:14.219 AI that's vastly smarter than us in all ways. 00:12:15.185 --> 00:12:18.719 What there was the greatest agreement on was that we should be ambitious 00:12:18.719 --> 00:12:20.727 and help life spread into the cosmos, 00:12:20.727 --> 00:12:25.080 but there was much less agreement about who or what should be in charge. 00:12:25.238 --> 00:12:27.116 And I was actually quite amused 00:12:27.116 --> 00:12:30.539 to see that there are some people who want it to be just the machines. NOTE Paragraph 00:12:30.539 --> 00:12:32.261 (Laughter) NOTE Paragraph 00:12:32.261 --> 00:12:36.161 And there was total disagreement about what the role of humans should be, 00:12:36.161 --> 00:12:38.076 even at the most basic level, 00:12:38.076 --> 00:12:40.965 so let's take a closer look at possible futures 00:12:40.965 --> 00:12:43.358 that we might choose to steer toward, all right? NOTE Paragraph 00:12:43.547 --> 00:12:44.919 So don't get me wrong here; 00:12:44.919 --> 00:12:47.125 I'm not talking about space travel, 00:12:47.125 --> 00:12:50.219 merely about humanity's metaphorical journey into the future. 00:12:51.007 --> 00:12:54.718 So one option that some of my AI colleagues like 00:12:54.718 --> 00:12:58.165 is to build superintelligence and keep it under human control, 00:12:58.165 --> 00:12:59.966 like an enslaved god, 00:12:59.966 --> 00:13:01.611 disconnected from the internet 00:13:01.611 --> 00:13:04.826 and used to create unimaginable technology and wealth 00:13:04.826 --> 00:13:06.326 for whoever controls it.
00:13:06.917 --> 00:13:08.654 But Lord Acton warned us 00:13:08.654 --> 00:13:12.098 that power corrupts and absolute power corrupts absolutely, 00:13:12.098 --> 00:13:16.072 so you might worry that maybe we humans just aren't smart enough, 00:13:16.072 --> 00:13:17.812 or wise enough rather, 00:13:17.812 --> 00:13:19.258 to handle this much power. 00:13:19.748 --> 00:13:22.517 Also, aside from any moral qualms you might have 00:13:22.517 --> 00:13:24.653 about enslaving superior minds, 00:13:24.653 --> 00:13:28.484 you might worry that maybe the superintelligence could outsmart us, 00:13:28.484 --> 00:13:29.509 break out 00:13:29.509 --> 00:13:31.026 and take over. 00:13:31.687 --> 00:13:35.168 But I also have colleagues who are fine with AI taking over 00:13:35.168 --> 00:13:37.311 and even causing human extinction, 00:13:37.311 --> 00:13:41.041 as long as we feel the AIs are our worthy descendants, 00:13:41.041 --> 00:13:42.305 like our children. 00:13:42.889 --> 00:13:48.527 But how would we know that the AIs have adopted our best values, 00:13:48.527 --> 00:13:52.261 and aren't just unconscious zombies tricking us into anthropomorphizing them? 00:13:52.832 --> 00:13:55.664 Also, shouldn't those people who don't want human extinction 00:13:55.664 --> 00:13:57.251 have a say in the matter, too? 00:13:58.298 --> 00:14:01.715 Now, if you didn't like either of those two high-tech options, 00:14:01.715 --> 00:14:04.974 it's important to remember that low-tech is suicide 00:14:04.974 --> 00:14:06.265 from a cosmic perspective, 00:14:06.265 --> 00:14:08.749 because if we don't go far beyond today's technology, 00:14:08.749 --> 00:14:11.536 the question isn't whether humanity is going to go extinct, 00:14:11.536 --> 00:14:14.932 merely whether we're going to get taken out by the next killer asteroid, 00:14:14.932 --> 00:14:15.941 super volcano 00:14:15.941 --> 00:14:18.939 or some other problem that better technology could have solved. NOTE Paragraph 00:14:19.046 --> 00:14:22.630 So, how about having our cake and eating it ... 00:14:22.630 --> 00:14:25.272 with AGI that's not enslaved 00:14:25.272 --> 00:14:28.374 but treats us well because its values are aligned with ours? 00:14:28.374 --> 00:14:32.620 This is the gist of what Eliezer Yudkowsky has called "friendly AI," 00:14:32.620 --> 00:14:34.169 and if we can do this, 00:14:34.169 --> 00:14:35.504 it could be awesome. 00:14:35.896 --> 00:14:40.722 It could not only eliminate negative experiences like disease, poverty, 00:14:40.722 --> 00:14:42.404 crime and other suffering, 00:14:42.404 --> 00:14:45.150 but it could also give us the freedom to choose 00:14:45.150 --> 00:14:49.116 from a fantastic new diversity of positive experiences -- 00:14:49.116 --> 00:14:52.703 basically making us the masters of our own destiny. NOTE Paragraph 00:14:54.405 --> 00:14:55.865 So in summary, 00:14:55.865 --> 00:14:59.018 our situation with technology is complicated, 00:14:59.018 --> 00:15:00.947 but the big picture is rather simple. 00:15:01.349 --> 00:15:04.854 Most AI researchers expect AGI within decades, 00:15:04.854 --> 00:15:08.018 and if we just bumble into this unprepared, 00:15:08.018 --> 00:15:11.417 it will probably be the biggest mistake in human history -- 00:15:11.417 --> 00:15:12.427 let's face it. 00:15:12.819 --> 00:15:15.497 It could enable brutal, global dictatorship 00:15:15.497 --> 00:15:18.968 with unprecedented inequality, surveillance and suffering, 00:15:18.968 --> 00:15:20.689 and maybe even human extinction.
00:15:20.912 --> 00:15:24.140 But if we steer carefully, 00:15:24.140 --> 00:15:26.274 we could end up in a fantastic future 00:15:26.274 --> 00:15:27.994 where everybody's better off: 00:15:27.994 --> 00:15:29.310 the poor are richer, 00:15:29.310 --> 00:15:30.516 the rich are richer, 00:15:30.516 --> 00:15:34.213 everybody is healthy and free to live out their dreams. NOTE Paragraph 00:15:35.176 --> 00:15:36.527 Now, hang on. 00:15:36.787 --> 00:15:40.970 Do you folks want the future that's politically right or left? 00:15:41.275 --> 00:15:44.178 Do you want the pious society with strict moral rules, 00:15:44.178 --> 00:15:45.996 or do you want a hedonistic free-for-all, 00:15:45.996 --> 00:15:48.043 more like Burning Man 24-7? 00:15:48.291 --> 00:15:50.717 Do you want beautiful beaches, forests and lakes 00:15:50.717 --> 00:15:53.202 or would you prefer to rearrange some of those atoms 00:15:53.202 --> 00:15:55.771 with the computers, enabling virtual experiences? 00:15:55.771 --> 00:15:56.766 With friendly AI, 00:15:56.766 --> 00:15:59.120 we could simply build all of these societies 00:15:59.120 --> 00:16:02.345 and give people the freedom to choose which one they want to live in 00:16:02.345 --> 00:16:05.457 because we would no longer be limited by our intelligence, 00:16:05.457 --> 00:16:06.988 merely by the laws of physics. 00:16:06.988 --> 00:16:11.627 So the resources and space for this would be astronomical -- 00:16:11.627 --> 00:16:12.664 literally. NOTE Paragraph 00:16:13.390 --> 00:16:14.696 So here's our choice. 00:16:16.048 --> 00:16:19.599 We can either be complacent about our future, 00:16:19.599 --> 00:16:22.175 taking as an article of blind faith 00:16:22.175 --> 00:16:26.290 that any new technology is guaranteed to be beneficial, 00:16:26.290 --> 00:16:30.491 and just repeat that to ourselves as a mantra over and over and over again 00:16:30.491 --> 00:16:34.297 as we drift like a rudderless ship towards our own obsolescence. 00:16:34.976 --> 00:16:38.007 Or we can be ambitious -- 00:16:38.007 --> 00:16:40.586 thinking hard about how to steer our technology 00:16:40.586 --> 00:16:42.423 and where we want to go with it 00:16:42.423 --> 00:16:44.497 to create the age of amazement. 00:16:45.173 --> 00:16:48.137 We're all here to celebrate the age of amazement, 00:16:48.137 --> 00:16:53.381 and I feel that its essence should lie in becoming not overpowered 00:16:53.381 --> 00:16:55.683 but empowered by our technology. NOTE Paragraph 00:16:56.049 --> 00:16:57.052 Thank you. NOTE Paragraph 00:16:57.356 --> 00:16:59.588 (Applause)