When I was a kid, I was the quintessential nerd. I think some of you were, too. (Laughter) And you, sir, who laughed the loudest, you probably still are. (Laughter)

I grew up in a small town in the dusty plains of north Texas, the son of a sheriff who was the son of a pastor. Getting into trouble was not an option. And so I started reading calculus books for fun. (Laughter) You did, too. That led me to building a laser and a computer and model rockets, and that led me to making rocket fuel in my bedroom. Now, in scientific terms, we call this a very bad idea. (Laughter)

Around that same time, Stanley Kubrick's "2001: A Space Odyssey" came to the theaters, and my life was forever changed. I loved everything about that movie, especially the HAL 9000. Now, HAL was a sentient computer designed to guide the Discovery spacecraft from the Earth to Jupiter. HAL was also a flawed character, for in the end he chose to value the mission over human life. Now, HAL was a fictional character, but nonetheless he speaks to our fears, our fears of being subjugated by some unfeeling, artificial intelligence who is indifferent to our humanity.

I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where, driven by refusal to accept the limits of our bodies and our minds, we are building machines of exquisite, beautiful complexity and grace that will extend the human experience in ways beyond our imagining.

After a career that led me from the Air Force Academy to Space Command to now, I became a systems engineer, and recently I was drawn into an engineering problem associated with NASA's mission to Mars. Now, in space flights to the Moon, we can rely upon mission control in Houston to watch over all aspects of a flight. However, Mars is 200 times further away, and as a result it takes on average 13 minutes for a signal to travel from the Earth to Mars. If there's trouble, there's not enough time. And so a reasonable engineering solution calls for us to put mission control inside the walls of the Orion spacecraft.
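As a rough sanity check on that 13-minute figure (not part of the talk itself), the one-way light-travel time can be estimated from an assumed average Earth-Mars distance of about 225 million kilometers; this is a minimal sketch under that assumption, not a mission calculation:

```python
# Back-of-the-envelope check of the one-way signal delay quoted above.
# The 225 million km figure is an assumed *average* Earth-Mars distance;
# the actual distance varies from roughly 55 million to 400 million km.
SPEED_OF_LIGHT_KM_S = 299_792.458          # speed of light in km/s
AVG_EARTH_MARS_DISTANCE_KM = 225_000_000   # assumed average distance

one_way_delay_s = AVG_EARTH_MARS_DISTANCE_KM / SPEED_OF_LIGHT_KM_S
print(f"One-way light delay: {one_way_delay_s / 60:.1f} minutes")
# -> about 12.5 minutes, consistent with the "on average 13 minutes" above
```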
Another fascinating idea in the mission profile places humanoid robots on the surface of Mars before the humans themselves arrive, first to build facilities and later to serve as collaborative members of the science team.

Now, as I looked at this from an engineering perspective, it became very clear to me that what I needed to architect was a smart, collaborative, socially intelligent artificial intelligence. In other words, I needed to build something very much like a HAL but without the homicidal tendencies. (Laughter)

Let's pause for a moment. Is it really possible to build an artificial intelligence like that? Actually, it is. In many ways, this is a hard engineering problem with elements of AI, not some wet hair ball of an AI problem that needs to be engineered. To paraphrase Alan Turing, I'm not interested in building a sentient machine. I'm not building a HAL. All I'm after is a simple brain, something that offers the illusion of intelligence.

The art and the science of computing have come a long way since HAL was onscreen, and I'd imagine if his inventor Dr. Chandra were here today, he'd have a whole lot of questions for us. Is it really possible for us to take a system of millions upon millions of devices, to read in their data streams, to predict their failures and act in advance? Yes. Can we build systems that converse with humans in natural language? Yes. Can we build systems that recognize objects, identify emotions, emote themselves, play games and even read lips? Yes. Can we build a system that sets goals, that carries out plans against those goals and learns along the way? Yes. Can we build systems that have a theory of mind? This we are learning to do. Can we build systems that have an ethical and moral foundation? This we must learn how to do.

So let's accept for a moment that it's possible to build such an artificial intelligence for this kind of mission and others. The next question you must ask yourself is, should we fear it? Now, every new technology brings with it some measure of trepidation. When we first saw cars, people lamented that we would see the destruction of the family.
When we first saw telephones come in, people were worried it would destroy all civil conversation. At a point in time when we saw the written word become pervasive, people thought we would lose our ability to memorize. These things are all true to a degree, but it's also the case that these technologies brought to us things that extended the human experience in some profound ways.

So let's take this a little further. I do not fear the creation of an AI like this, because it will eventually embody some of our values. Consider this: building a cognitive system is fundamentally different than building a traditional software-intensive system of the past. We don't program them. We teach them. In order to teach a system how to recognize flowers, I show it thousands of flowers of the kinds I like. In order to teach a system how to play a game -- Well, I would. You would, too. I like flowers. Come on.

To teach a system how to play a game like Go, I'd have it play thousands of games of Go, but in the process I also teach it how to discern a good game from a bad game. If I want to create an artificially intelligent legal assistant, I will teach it some corpus of law, but at the same time I am fusing with it the sense of mercy and justice that is part of that law. In scientific terms, this is what we call ground truth, and here's the important point: in producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence the same as, if not more than, a human who is well-trained.

But, you may ask, what about rogue agents, some well-funded nongovernment organization? I do not fear an artificial intelligence in the hand of a lone wolf. Clearly, we cannot protect ourselves against all random acts of violence, but the reality is such a system requires substantial and subtle training, far beyond the resources of an individual. And furthermore, it's far more than just injecting an internet virus into the world, where you push a button, all of a sudden it's in a million places and laptops start blowing up all over the place. Now, these kinds of systems are much larger, and we'll certainly see them coming.

Do I fear that such an artificial intelligence might threaten all of humanity?
If you look at movies such as "The Matrix," "Metropolis," "The Terminator," shows such as "Westworld," they all speak of this kind of fear. Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom, he picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity. Dr. Bostrom's basic argument is that such systems will eventually have such an insatiable thirst for information that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs. Dr. Bostrom has a number of followers. He is supported by people such as Elon Musk and Stephen Hawking.

With all due respect to these brilliant minds, I believe that they are fundamentally wrong. Now, there are a lot of pieces of Dr. Bostrom's argument to unpack, and I don't have time to unpack them all, but very briefly, consider this: super knowing is very different than super doing. HAL was a threat to the Discovery crew only insofar as HAL commanded all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion over all of our world. This is the stuff of Skynet from the movie "The Terminator," in which we had a superintelligence that commanded human will, that directed every device that was in every corner of the world. Practically speaking, it ain't gonna happen. We are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end -- don't tell Siri this -- we can always unplug them. (Laughter)

We are on an incredible journey of coevolution with our machines. The humans we are today are not the humans we will be then. To worry now about the rise of a superintelligence is in many ways a dangerous distraction, because the rise of computing itself brings to us a number of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes?
How can I bring understanding and education throughout the globe and still respect our differences? How might I extend and enhance human life through cognitive healthcare? How might I use computing to help take us to the stars?

And that's the exciting thing. The opportunities to use computing to advance the human experience are within our reach, here and now, and we are just beginning.

Thank you very much.

(Applause)