WEBVTT

00:00:00.767 --> 00:00:04.585
When I was a kid, I was the quintessential nerd.

00:00:05.773 --> 00:00:07.633
I think some of you were too.

NOTE Paragraph

00:00:07.633 --> 00:00:09.160
(Laughter)

00:00:09.160 --> 00:00:12.275
And you, sir, who laughed the loudest, you probably still are.

NOTE Paragraph

00:00:12.275 --> 00:00:14.455
(Laughter)

NOTE Paragraph

00:00:14.455 --> 00:00:16.056
I grew up in a small town

00:00:16.056 --> 00:00:17.902
in the dusty plains of North Texas,

00:00:17.902 --> 00:00:21.078
the son of a sheriff who was the son of a pastor.

00:00:21.078 --> 00:00:23.647
Getting into trouble was not an option.

00:00:23.647 --> 00:00:28.048
And so I started reading calculus books for fun.

NOTE Paragraph

00:00:28.048 --> 00:00:29.202
(Laughter)

NOTE Paragraph

00:00:29.202 --> 00:00:30.872
You did too.

00:00:30.872 --> 00:00:34.560
That led me to building a laser and a computer and model rockets,

00:00:34.560 --> 00:00:36.998
and that led me to making rocket fuel

00:00:36.998 --> 00:00:38.344
in my bedroom.

00:00:38.344 --> 00:00:41.683
Now, in scientific terms,

00:00:41.683 --> 00:00:45.014
we call this a very bad idea.

NOTE Paragraph

00:00:45.014 --> 00:00:46.375
(Laughter)

NOTE Paragraph

00:00:46.375 --> 00:00:48.063
Around that same time,

00:00:48.063 --> 00:00:51.078
Stanley Kubrick's "2001: A Space Odyssey" came to the theaters,

00:00:51.078 --> 00:00:53.850
and my life was forever changed.

00:00:53.850 --> 00:00:56.368
I loved everything about that movie,

00:00:56.368 --> 00:00:58.972
especially the HAL 9000.

00:00:58.972 --> 00:01:01.244
Now, HAL was a sentient computer

00:01:01.244 --> 00:01:04.009
designed to guide the Discovery spacecraft

00:01:04.009 --> 00:01:06.314
from the Earth to Jupiter.

00:01:06.314 --> 00:01:08.314
HAL was also a flawed character,

00:01:08.314 --> 00:01:13.195
for in the end he chose to value the mission over human life.

00:01:13.195 --> 00:01:14.947
Now, HAL was a fictional character,

00:01:14.947 --> 00:01:17.827
but nonetheless he speaks to our fears,

00:01:17.827 --> 00:01:20.014
our fears of being subjugated

00:01:20.014 --> 00:01:22.795
by some unfeeling artificial intelligence

00:01:22.795 --> 00:01:25.869
who is indifferent to our humanity.

NOTE Paragraph

00:01:25.869 --> 00:01:28.657
I believe that such fears are unfounded.

00:01:28.657 --> 00:01:31.806
Indeed, we stand at a remarkable time

00:01:31.806 --> 00:01:32.895
in human history,

00:01:32.895 --> 00:01:38.051
where, driven by a refusal to accept the limits of our bodies and our minds,

00:01:38.051 --> 00:01:40.081
we are building machines

00:01:40.081 --> 00:01:43.300
of exquisite, beautiful complexity and grace

00:01:43.300 --> 00:01:45.483
that will extend the human experience

00:01:45.483 --> 00:01:48.005
in ways beyond our imagining.

00:01:48.005 --> 00:01:50.615
After a career that led me from the Air Force Academy

00:01:50.615 --> 00:01:52.455
to Space Command to now,

00:01:52.455 --> 00:01:54.339
I became a systems engineer,

00:01:54.339 --> 00:01:56.878
and recently I was drawn into an engineering problem

00:01:56.878 --> 00:01:59.437
associated with NASA's mission to Mars.

00:01:59.437 --> 00:02:02.121
Now, in space flights to the Moon,

00:02:02.121 --> 00:02:04.286
we can rely upon mission control

00:02:04.286 --> 00:02:07.263
in Houston to watch over all aspects of the flight.
00:02:07.263 --> 00:02:10.648
However, Mars is 200 times further away,

00:02:10.648 --> 00:02:13.830
and as a result it takes on average

00:02:13.830 --> 00:02:17.295
13 minutes for a signal to travel from the Earth to Mars.

00:02:17.295 --> 00:02:20.884
If there's trouble, there's not enough time.

00:02:20.884 --> 00:02:23.531
And so a reasonable engineering solution

00:02:23.531 --> 00:02:26.164
calls for us to put mission control

00:02:26.164 --> 00:02:29.135
inside the walls of the Orion spacecraft.

00:02:29.135 --> 00:02:31.386
Another fascinating idea

00:02:31.386 --> 00:02:32.138
in the mission profile

00:02:32.138 --> 00:02:33.827
places humanoid robots on the surface of Mars

00:02:33.827 --> 00:02:36.849
before the humans themselves arrive,

00:02:36.849 --> 00:02:38.886
first to build facilities

00:02:38.886 --> 00:02:42.786
and later to serve as collaborative members of the science team.

NOTE Paragraph

00:02:42.786 --> 00:02:46.444
Now, as I looked at this from an engineering perspective,

00:02:46.444 --> 00:02:49.346
it became very clear to me that what I needed to architect

00:02:49.346 --> 00:02:51.726
was a smart, collaborative,

00:02:51.726 --> 00:02:54.366
socially intelligent artificial intelligence.

00:02:54.366 --> 00:02:57.904
In other words, I needed to build something very much like a HAL

00:02:57.904 --> 00:03:01.144
but without the homicidal tendencies.

NOTE Paragraph

00:03:01.144 --> 00:03:03.117
(Laughter)

NOTE Paragraph

00:03:03.117 --> 00:03:04.807
Let's pause for a moment.

00:03:04.807 --> 00:03:08.761
Is it really possible to build an artificial intelligence like that?

00:03:08.761 --> 00:03:10.231
Actually, it is.

00:03:10.231 --> 00:03:11.448
In many ways,

00:03:11.448 --> 00:03:13.522
this is a hard engineering problem

00:03:13.522 --> 00:03:15.357
with elements of AI,

00:03:15.357 --> 00:03:18.593
not some wet hairball of an AI problem that needs to be engineered.

00:03:18.593 --> 00:03:22.362
To paraphrase Alan Turing,

00:03:22.362 --> 00:03:25.109
I'm not interested in building a sentient machine.

00:03:25.109 --> 00:03:26.488
I'm not building a HAL.

00:03:26.488 --> 00:03:29.110
All I'm after is a simple brain,

00:03:29.110 --> 00:03:32.871
something that offers the illusion of intelligence.

NOTE Paragraph

00:03:32.871 --> 00:03:36.031
The art and the science of computing

00:03:36.031 --> 00:03:36.795
have come a long way

00:03:36.795 --> 00:03:37.853
since HAL was onscreen,

00:03:37.853 --> 00:03:41.024
and I'd imagine if his inventor Dr. Chandra were here today,

00:03:41.024 --> 00:03:43.380
he'd have a whole lot of questions for us.

00:03:43.380 --> 00:03:45.310
Is it really possible for us

00:03:45.310 --> 00:03:49.382
to take a system of millions upon millions of devices,

00:03:49.382 --> 00:03:51.331
to read in their data streams,

00:03:51.331 --> 00:03:53.359
to predict their failures and act in advance?

00:03:53.359 --> 00:03:54.644
Yes.

00:03:54.644 --> 00:03:57.816
Can we build systems that converse with humans in natural language?

00:03:57.816 --> 00:03:58.603
Yes.

00:03:58.603 --> 00:04:00.898
Can we build systems that recognize objects, identify emotions,

00:04:00.898 --> 00:04:05.325
emote themselves, play games, and even read lips?

00:04:05.325 --> 00:04:06.377
Yes.

00:04:06.377 --> 00:04:08.787
Can we build a system that sets goals,

00:04:08.787 --> 00:04:12.331
that carries out plans against those goals and learns along the way?

00:04:12.331 --> 00:04:13.609
Yes.
00:04:13.609 --> 00:04:15.469
Can we build systems

00:04:15.469 --> 00:04:17.110
that have a theory of mind?

00:04:17.110 --> 00:04:18.581
This we are learning to do.

00:04:18.581 --> 00:04:22.566
Can we build systems that have an ethical and moral foundation?

00:04:22.566 --> 00:04:25.360
This we must learn how to do.

00:04:25.360 --> 00:04:28.238
So let's accept for a moment that it's possible to build such

00:04:28.238 --> 00:04:29.768
an artificial intelligence

00:04:29.768 --> 00:04:31.968
for this kind of mission and others.

NOTE Paragraph

00:04:31.968 --> 00:04:34.613
The next question you must ask yourself is,

00:04:34.613 --> 00:04:36.390
should we fear it?

00:04:36.390 --> 00:04:38.138
Now, every new technology

00:04:38.138 --> 00:04:40.406
brings with it some measure of trepidation.

00:04:40.406 --> 00:04:42.952
When we first saw cars,

00:04:42.952 --> 00:04:46.900
people lamented that we would see the destruction of the family.

00:04:46.900 --> 00:04:49.566
When we first saw telephones come in,

00:04:49.566 --> 00:04:52.363
people were worried it would destroy all civil conversation.

00:04:52.363 --> 00:04:56.328
At the point in time when we saw the written word become pervasive,

00:04:56.328 --> 00:04:58.812
people thought we would lose our ability to memorize.

00:04:58.812 --> 00:05:00.916
These things are all true to a degree,

00:05:00.916 --> 00:05:03.005
but it's also true that these technologies

00:05:03.005 --> 00:05:06.408
brought to us things that extended the human experience

00:05:06.408 --> 00:05:09.928
in some profound ways.

NOTE Paragraph

00:05:09.928 --> 00:05:12.591
So let's take this a little further.

00:05:12.591 --> 00:05:17.722
I do not fear the creation of an AI like this,

00:05:17.722 --> 00:05:21.864
because it will eventually embody some of our values.

00:05:21.864 --> 00:05:24.233
Consider this: building a cognitive system

00:05:24.233 --> 00:05:27.911
is fundamentally different than building a traditional

00:05:27.911 --> 00:05:29.217
software-intensive system of the past.

00:05:29.217 --> 00:05:31.418
We don't program them. We teach them.

00:05:31.418 --> 00:05:33.875
In order to teach a system how to recognize flowers,

00:05:33.875 --> 00:05:36.934
I show it thousands of flowers of the kinds I like.

00:05:36.934 --> 00:05:39.308
In order to teach a system how to play a game --

00:05:39.308 --> 00:05:42.114
Well, I would. You would too.

00:05:42.114 --> 00:05:45.611
I like flowers. Come on.

00:05:45.611 --> 00:05:48.543
To teach a system how to play a game like Go,

00:05:48.543 --> 00:05:50.392
I'd have it play thousands of games of Go,

00:05:50.392 --> 00:05:52.563
but in the process I also teach it how to discern

00:05:52.563 --> 00:05:54.715
a good game from a bad game.

00:05:54.715 --> 00:05:58.318
If I want to create an artificially intelligent legal assistant,

00:05:58.318 --> 00:06:01.019
I will teach it some corpus of law but at the same time

00:06:01.019 --> 00:06:02.998
I am fusing with it

00:06:02.998 --> 00:06:06.315
the sense of mercy and justice that is part of that law.

00:06:06.315 --> 00:06:09.824
In scientific terms, this is what we call ground truth,

00:06:09.824 --> 00:06:11.900
and here's the important point:

00:06:11.900 --> 00:06:16.378
in producing these machines, we are therefore teaching them

00:06:16.378 --> 00:06:16.761
a sense of our values.

00:06:16.761 --> 00:06:19.638
To that end, I trust an artificial intelligence

00:06:19.638 --> 00:06:24.291
the same, if not more, as a human who is well-trained.
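
NOTE
To make "we don't program them, we teach them" concrete: below is a minimal,
hypothetical sketch of supervised learning, where a classifier is shown labeled
examples of flowers (the "ground truth") and learns to recognize them with no
hand-written rules. The dataset and model (scikit-learn's iris flowers and a
k-nearest-neighbors classifier) are illustrative assumptions of this sketch,
not anything specified in the talk.
# teach_flowers.py -- a sketch, assuming scikit-learn is installed
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
# Labeled examples: flower measurements plus human-assigned species labels.
# The labels are the ground truth; they carry the labeler's judgments.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# No recognition rules are programmed; the model generalizes from examples.
model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
# Change the labels and the same code learns different distinctions --
# which is the sense in which training data teaches a system our values.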
NOTE Paragraph

00:06:24.291 --> 00:06:25.685
But, you may ask,

00:06:25.685 --> 00:06:28.551
what about a rogue agent,

00:06:28.551 --> 00:06:31.230
some well-funded non-government organization?

00:06:31.230 --> 00:06:35.224
I do not fear an artificial intelligence in the hand of a lone wolf.

00:06:35.224 --> 00:06:37.392
Clearly, we cannot protect ourselves

00:06:37.392 --> 00:06:39.958
against all random acts of violence,

00:06:39.958 --> 00:06:41.841
but the reality is such a system

00:06:41.841 --> 00:06:45.591
requires substantial training and subtle training

00:06:45.591 --> 00:06:47.606
far beyond the resources of an individual,

00:06:47.606 --> 00:06:51.738
and furthermore, it's far more than just injecting an Internet virus

00:06:51.738 --> 00:06:53.410
into the world, where you push a button,

00:06:53.410 --> 00:06:55.068
all of a sudden it's in a million places

00:06:55.068 --> 00:06:57.474
and laptops start blowing up all over the place.

00:06:57.474 --> 00:07:00.452
Now, these kinds of substances are much larger,

00:07:00.452 --> 00:07:02.596
and we'll certainly see them coming.

NOTE Paragraph

00:07:02.596 --> 00:07:05.289
Do I fear that such an artificial intelligence

00:07:05.289 --> 00:07:08.263
might threaten all of humanity?

00:07:08.263 --> 00:07:09.801
If you look at movies

00:07:09.801 --> 00:07:12.731
such as "The Matrix," "Metropolis,"

00:07:12.731 --> 00:07:15.777
"The Terminator," shows such as "Westworld,"

00:07:15.777 --> 00:07:17.969
they all speak of this kind of fear.

00:07:17.969 --> 00:07:22.965
Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom,

00:07:22.965 --> 00:07:24.392
he picks up on this theme

00:07:24.392 --> 00:07:28.164
and observes that a superintelligence might not only be dangerous,

00:07:28.164 --> 00:07:32.000
it could represent an existential threat to all of humanity.

00:07:32.000 --> 00:07:33.789
Dr. Bostrom's basic argument

00:07:33.789 --> 00:07:37.591
is that such systems will eventually

00:07:37.591 --> 00:07:40.276
have such an insatiable thirst for information

00:07:40.276 --> 00:07:42.920
that they will perhaps learn how to learn

00:07:42.920 --> 00:07:46.604
and eventually discover that they may have goals

00:07:46.604 --> 00:07:48.324
that are contrary to human needs.

00:07:48.324 --> 00:07:50.206
Dr. Bostrom has a number of followers.

00:07:50.206 --> 00:07:52.766
He is supported by people such as Elon Musk

00:07:52.766 --> 00:07:54.706
and Stephen Hawking.

00:07:54.706 --> 00:07:57.846
With all due respect

00:07:57.846 --> 00:08:00.251
to these brilliant minds,

00:08:00.251 --> 00:08:02.600
I believe that they are fundamentally wrong.

00:08:02.600 --> 00:08:06.650
Now, there are a lot of pieces of Dr. Bostrom's argument to unpack,

00:08:06.650 --> 00:08:08.254
and I don't have time to unpack them all,

00:08:08.254 --> 00:08:10.564
but very briefly, consider this:

00:08:10.564 --> 00:08:14.228
super knowing is very different than super doing.

00:08:14.228 --> 00:08:17.394
HAL was a threat to the Discovery crew only insofar as HAL commanded

00:08:17.394 --> 00:08:20.651
all aspects of the Discovery.

00:08:20.651 --> 00:08:23.166
So it would have to be with a superintelligence.

00:08:23.166 --> 00:08:25.697
It would have to have dominion over all of our world.
00:08:25.697 --> 00:08:28.529
This is the stuff of Skynet from the movie "The Terminator"

00:08:28.529 --> 00:08:30.467
in which we had a superintelligence

00:08:30.467 --> 00:08:32.059
that commanded human will, that directed every device

00:08:32.059 --> 00:08:35.404
that was in every corner of the world.

00:08:35.404 --> 00:08:37.057
Practically speaking,

00:08:37.057 --> 00:08:39.345
it ain't gonna happen.

00:08:39.345 --> 00:08:42.671
We are not building AIs that control the weather,

00:08:42.671 --> 00:08:44.021
that direct the tides,

00:08:44.021 --> 00:08:47.466
that command us capricious, chaotic humans.

00:08:47.466 --> 00:08:50.813
And furthermore, if such an artificial intelligence existed,

00:08:50.813 --> 00:08:53.712
it would have to compete with human economies,

00:08:53.712 --> 00:08:57.357
and thereby compete for resources with us.

00:08:57.357 --> 00:08:58.666
And in the end --

00:08:58.666 --> 00:09:00.162
don't tell Siri this --

00:09:00.162 --> 00:09:02.674
we can always unplug them.

NOTE Paragraph

00:09:02.674 --> 00:09:04.943
(Laughter)

NOTE Paragraph

00:09:04.943 --> 00:09:08.110
We are on an incredible journey

00:09:08.110 --> 00:09:10.443
of coevolution with our machines.

00:09:10.443 --> 00:09:12.999
The humans we are today

00:09:12.999 --> 00:09:15.465
are not the humans we will be then.

00:09:15.465 --> 00:09:18.904
To worry now about the rise of a superintelligence

00:09:18.904 --> 00:09:21.872
is in many ways a dangerous distraction

00:09:21.872 --> 00:09:24.040
because the rise of computing itself

00:09:24.040 --> 00:09:27.116
brings to us a number of human and societal issues

00:09:27.116 --> 00:09:29.456
to which we must now attend.

00:09:29.456 --> 00:09:32.305
How shall I best organize society

00:09:32.305 --> 00:09:34.731
when the need for human labor diminishes?

00:09:34.731 --> 00:09:36.642
How can I bring understanding

00:09:36.642 --> 00:09:40.370
and education throughout the globe and still respect our differences?

00:09:40.370 --> 00:09:44.575
How might I extend and enhance human life through cognitive healthcare?

00:09:44.575 --> 00:09:47.264
How might I use computing

00:09:47.264 --> 00:09:50.049
to help take us to the stars?

NOTE Paragraph

00:09:50.049 --> 00:09:52.507
And that's the exciting thing.

00:09:52.507 --> 00:09:55.553
The opportunities to use computing

00:09:55.553 --> 00:09:57.075
to advance the human experience

00:09:57.075 --> 00:09:58.123
are within our reach,

00:09:58.123 --> 00:09:59.960
here and now,

00:09:59.960 --> 00:10:02.404
and we are just beginning.

NOTE Paragraph

00:10:02.404 --> 00:10:04.150
Thank you very much.

NOTE Paragraph

00:10:04.150 --> 00:10:07.806
(Applause)