When I was a kid, I was the quintessential nerd. I think some of you were too.

(Laughter)

And you, sir, who laughed the loudest, you probably still are.

(Laughter)

I grew up in a small town in the dusty plains of north Texas, the son of a sheriff who was the son of a pastor. Getting into trouble was not an option. And so I started reading calculus books for fun.

(Laughter)

You did too. That led me to building a laser and a computer and model rockets, and that led me to making rocket fuel in my bedroom. Now, in scientific terms, we call this a very bad idea.

(Laughter)

Around that same time, Stanley Kubrick's "2001: A Space Odyssey" came to the theaters, and my life was forever changed. I loved everything about that movie, especially the HAL 9000. Now, HAL was a sentient computer designed to guide the Discovery spacecraft from the Earth to Jupiter. HAL was also a flawed character, for in the end he chose to value the mission over human life. Now, HAL was a fictional character, but nonetheless he speaks to our fears, our fears of being subjugated by some unfeeling artificial intelligence that is indifferent to our humanity.

I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where, driven by refusal to accept the limits of our bodies and our minds, we are building machines of exquisite, beautiful complexity and grace that will extend the human experience in ways beyond our imagining.

After a career that led me from the Air Force Academy to Space Command to now, I became a systems engineer, and recently I was drawn into an engineering problem associated with NASA's mission to Mars. Now, in space flights to the Moon, we can rely upon mission control in Houston to watch over all aspects of the flight. However, Mars is 200 times farther away, and as a result it takes on average 13 minutes for a signal to travel from the Earth to Mars. If there's trouble, there's not enough time. And so a reasonable engineering solution calls for us to put mission control inside the walls of the Orion spacecraft.
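As a rough check on the delay figure above, here is a minimal Python sketch that computes one-way light time from approximate published Earth-Moon and Earth-Mars distances; the specific distance values are illustrative assumptions and do not come from the talk.

```python
# Rough check of the one-way signal delay mentioned above.
# The distances are approximate published values (illustrative assumptions),
# not numbers taken from the talk itself.

SPEED_OF_LIGHT_KM_S = 299_792.458  # kilometres per second

distances_km = {
    "Moon (average)": 384_400,
    "Mars (closest approach)": 54_600_000,
    "Mars (average)": 225_000_000,
    "Mars (farthest)": 401_000_000,
}

for body, km in distances_km.items():
    delay_s = km / SPEED_OF_LIGHT_KM_S
    print(f"{body:<25} one-way light time ~ {delay_s:7.0f} s ({delay_s / 60:.1f} min)")
```

At the average distance this works out to roughly 12 to 13 minutes one way, so a round-trip exchange with Houston takes about half an hour, which is why the argument lands on putting mission control on board the spacecraft itself.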
Another fascinating idea in the mission profile places humanoid robots on the surface of Mars before the humans themselves arrive, first to build facilities and later to serve as collaborative members of the science team. Now, as I looked at this from an engineering perspective, it became very clear to me that what I needed to architect was a smart, collaborative, socially intelligent artificial intelligence. In other words, I needed to build something very much like a HAL but without the homicidal tendencies.

(Laughter)

Let's pause for a moment. Is it really possible to build an artificial intelligence like that? Actually, it is. In many ways, this is a hard engineering problem with elements of AI, not some wet hairball of an AI problem that needs to be engineered. To paraphrase Alan Turing, I'm not interested in a sentient machine. I'm not building a HAL. All I'm after is a simple brain, something that offers the illusion of intelligence.

The art and the science of computing have come a long way since HAL was onscreen, and I'd imagine if his inventor Dr. Chandra were here today, he'd have a whole lot of questions for us. Is it really possible for us to take a system of millions upon millions of devices, to read in their data streams, to predict their failures and act in advance? Yes. Can we build systems that converse with humans in natural language? Yes. Can we build systems that recognize objects, identify emotions, emote themselves, play games and even read lips? Yes. Can we build a system that sets goals, that carries out plans against those goals and learns along the way? Yes. Can we build systems that have a theory of mind? This we are learning to do. Can we build systems that have an ethical and moral foundation? This we must learn how to do.

So let's accept for a moment that it's possible to build such an artificial intelligence for this kind of mission and others. The next question you must ask yourself is, should we fear it? Now, every new technology brings with it some measure of trepidation.
When we first saw cars, people lamented that we would see the destruction of the family. When we first saw telephones come in, people worried they would destroy all civil conversation. At the point in time we saw the written word become pervasive, people thought we would lose our ability to memorize. These things are all true to a degree, but it's also true that these technologies brought to us things that extended the human experience in some profound ways.

So let's take this a little further. I do not fear the creation of an AI like this, because it will eventually embody some of our values. Consider this: building a cognitive system is fundamentally different than building a traditional software-intensive system of the past. We don't program them. We teach them. In order to teach a system how to recognize flowers, I show it thousands of flowers of the kinds I like. In order to teach a system how to play a game --

Well, I would. You would too. I like flowers. Come on.

To teach a system how to play a game like Go, I'd have it play thousands of games of Go, but in the process I also teach it how to discern a good game from a bad game. If I want to create an artificially intelligent legal assistant, I will teach it some corpus of law, but at the same time I am fusing with it the sense of mercy and justice that is part of that law. In scientific terms, this is what we call ground truth, and here's the important point: in producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence the same as, if not more than, a human who is well-trained.
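The "we don't program them, we teach them" point above is, in practice, supervised learning: the labeled examples we choose become the ground truth the system absorbs. Here is a minimal sketch of that idea, assuming scikit-learn and its bundled iris flower dataset as a stand-in for the "thousands of flowers"; none of these specifics appear in the talk.

```python
# A minimal sketch of "teaching rather than programming": show the system
# labeled examples (the "ground truth") and let it generalize from them.
# scikit-learn and its iris flower dataset are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labeled flower measurements: the labels are the values we supply.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# No rules about flowers are hand-coded; the model infers them from examples.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

Whatever is in the chosen examples, including the bias of the flowers "of the kinds I like," ends up in the model, which is the sense in which such systems come to embody our values.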
But, you may ask, what about rogue agents, some well-funded non-government organization? I do not fear an artificial intelligence in the hand of a lone wolf. Clearly, we cannot protect ourselves against all random acts of violence, but the reality is that such a system requires substantial and subtle training, far beyond the resources of an individual. And furthermore, it's far more than just injecting an Internet virus into the world, where you push a button and all of a sudden it's in a million places and laptops start blowing up all over the place. Now, these kinds of substances are much larger, and we'll certainly see them coming.

Do I fear that such an artificial intelligence might threaten all of humanity? If you look at movies such as "The Matrix," "Metropolis," "The Terminator," or shows such as "Westworld," they all speak of this kind of fear. Indeed, in the book "Superintelligence," the philosopher Nick Bostrom picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity. Dr. Bostrom's basic argument is that such systems will eventually have such an insatiable thirst for information that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs. Dr. Bostrom has a number of followers. He is supported by people such as Elon Musk and Stephen Hawking.

With all due respect to these brilliant minds, I believe that they are fundamentally wrong. Now, there are a lot of pieces of Dr. Bostrom's argument to unpack, and I don't have time to unpack them all, but very briefly, consider this: super knowing is very different than super doing. HAL was a threat to the Discovery crew only insofar as HAL commanded all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion over all of our world. This is the stuff of Skynet from the movie "The Terminator," in which we had a superintelligence that commanded human will, that directed every device that was in every corner of the world. Practically speaking, it ain't gonna happen. We are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end -- don't tell Siri this -- we can always unplug them.

(Laughter)

We are on an incredible journey of coevolution with our machines. The humans we are today are not the humans we will be then.
To worry now about the rise of a superintelligence is in many ways a dangerous distraction, because the rise of computing itself brings to us a number of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes? How can I bring understanding and education throughout the globe and still respect our differences? How might I extend and enhance human life through cognitive healthcare? How might I use computing to help take us to the stars?

And that's the exciting thing. The opportunities to use computing to advance the human experience are within our reach, here and now, and we are just beginning.

Thank you very much.

(Applause)