When I was a kid, I was the quintessential nerd. I think some of you were, too. (Laughter) And you, sir, who laughed the loudest, you probably still are. (Laughter)

I grew up in a small town in the dusty plains of north Texas, the son of a sheriff who was the son of a pastor. Getting into trouble was not an option. And so I started reading calculus books for fun. (Laughter) You did, too. That led me to building a laser and a computer and model rockets, and that led me to making rocket fuel in my bedroom. Now, in scientific terms, we call this a very bad idea. (Laughter)

Around that same time, Stanley Kubrick's "2001: A Space Odyssey" came to the theaters, and my life was forever changed. I loved everything about that movie, especially the HAL 9000. Now, HAL was a sentient computer designed to guide the Discovery spacecraft from the Earth to Jupiter. HAL was also a flawed character, for in the end he chose to value the mission over human life. Now, HAL was a fictional character, but nonetheless he speaks to our fears, our fears of being subjugated by some unfeeling artificial intelligence that is indifferent to our humanity.

I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where, driven by a refusal to accept the limits of our bodies and our minds, we are building machines of exquisite, beautiful complexity and grace that will extend the human experience in ways beyond our imagining.

After a career that led me from the Air Force Academy to Space Command to now, I became a systems engineer, and recently I was drawn into an engineering problem associated with NASA's mission to Mars. Now, in space flights to the Moon, we can rely upon mission control in Houston to watch over all aspects of the flight. However, Mars is 200 times further away, and as a result it takes on average 13 minutes for a signal to travel from the Earth to Mars. If there's trouble, there's not enough time.
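A quick sanity check on that delay, as a minimal sketch assuming an average Earth-to-Mars distance of roughly 225 million kilometers (an assumed figure, not one stated above):

```python
# Rough one-way signal delay from Earth to Mars at the speed of light.
# The 225-million-km average distance is an assumption for illustration.
SPEED_OF_LIGHT_KM_S = 299_792.458
AVG_EARTH_MARS_DISTANCE_KM = 225_000_000

one_way_delay_min = AVG_EARTH_MARS_DISTANCE_KM / SPEED_OF_LIGHT_KM_S / 60
print(f"One-way signal delay: {one_way_delay_min:.1f} minutes")    # ~12.5 minutes
print(f"Round-trip delay:     {2 * one_way_delay_min:.1f} minutes")  # ~25 minutes
```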
And so a reasonable engineering solution calls for us to put mission control inside the walls of the Orion spacecraft. Another fascinating idea in the mission profile places humanoid robots on the surface of Mars before the humans themselves arrive, first to build facilities and later to serve as collaborative members of the science team.

Now, as I looked at this from an engineering perspective, it became very clear to me that what I needed to architect was a smart, collaborative, socially intelligent artificial intelligence. In other words, I needed to build something very much like a HAL but without the homicidal tendencies. (Laughter)

Let's pause for a moment. Is it really possible to build an artificial intelligence like that? Actually, it is. In many ways, this is a hard engineering problem with elements of AI, not some wet hairball of an AI problem that needs to be engineered. To paraphrase Alan Turing, I'm not interested in a sentient machine. I'm not building a HAL. All I'm after is a simple brain, something that offers the illusion of intelligence.

The art and the science of computing have come a long way since HAL was onscreen, and I'd imagine if his inventor Dr. Chandra were here today, he'd have a whole lot of questions for us. Is it really possible for us to take a system of millions upon millions of devices, to read in their data streams, to predict their failures and act in advance? Yes. Can we build systems that converse with humans in natural language? Yes. Can we build systems that recognize objects, identify emotions, emote themselves, play games and even read lips? Yes. Can we build a system that sets goals, that carries out plans against those goals and learns along the way? Yes.
Can we build systems that have a theory of mind? This we are learning to do. Can we build systems that have an ethical and moral foundation? This we must learn how to do.

So let's accept for a moment that it's possible to build such an artificial intelligence for this kind of mission and others. The next question you must ask yourself is, should we fear it? Now, every new technology brings with it some measure of trepidation. When we first saw cars, people lamented that we would see the destruction of the family. When we first saw telephones come in, people worried they would destroy all civil conversation. When the written word became pervasive, people thought we would lose our ability to memorize. These things are all true to a degree, but it's also true that these technologies brought to us things that extended the human experience in some profound ways.

So let's take this a little further. I do not fear the creation of an AI like this, because it will eventually embody some of our values. Consider this: building a cognitive system is fundamentally different from building a traditional software-intensive system of the past. We don't program them. We teach them. In order to teach a system how to recognize flowers, I show it thousands of flowers of the kinds I like. In order to teach a system how to play a game -- Well, I would. You would, too. I like flowers. Come on. To teach a system how to play a game like Go, I'd have it play thousands of games of Go, but in the process I also teach it how to discern a good game from a bad game. If I want to create an artificially intelligent legal assistant, I will teach it some corpus of law, but at the same time I am fusing with it the sense of mercy and justice that is part of that law.
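A minimal sketch of that teach-by-example idea, using scikit-learn's bundled iris measurements as stand-in flower data (the library, dataset, and model choice are illustrative assumptions, not anything prescribed above): no rules for telling flowers apart are written by hand; the system infers them from labeled examples.

```python
# A sketch of "we teach them, we don't program them": the model is never given
# hand-written rules for telling flowers apart; it learns from labeled examples.
# Library, dataset, and model choice are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # flower measurements and their species labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)        # "showing it thousands of flowers"

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out accuracy: {accuracy:.2f}")
```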
In scientific terms, this is what we call ground truth, and here's the important point: in producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence the same as, if not more than, a human who is well-trained.

But, you may ask, what about rogue agents, some well-funded non-governmental organization? I do not fear an artificial intelligence in the hand of a lone wolf. Clearly, we cannot protect ourselves against all random acts of violence, but the reality is such a system requires substantial and subtle training, far beyond the resources of an individual. And furthermore, it's far more than just injecting an Internet virus into the world, where you push a button and all of a sudden it's in a million places and laptops start blowing up all over the place. These kinds of systems are much larger undertakings, and we'll certainly see them coming.

Do I fear that such an artificial intelligence might threaten all of humanity? If you look at movies such as "The Matrix," "Metropolis," "The Terminator," shows such as "Westworld," they all speak of this kind of fear. Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom, he picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity. Dr. Bostrom's basic argument is that such systems will eventually have such an insatiable thirst for information that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs. Dr. Bostrom has a number of followers. He is supported by people such as Elon Musk and Stephen Hawking. With all due respect to these brilliant minds, I believe that they are fundamentally wrong.
Now, there are a lot of pieces of Dr. Bostrom's argument to unpack, and I don't have time to unpack them all, but very briefly, consider this: super knowing is very different from super doing. HAL was a threat to the Discovery crew only insofar as HAL commanded all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion over all of our world. This is the stuff of Skynet from the movie "The Terminator," in which we had a superintelligence that commanded human will, that directed every device that was in every corner of the world. Practically speaking, it ain't gonna happen. We are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end -- don't tell Siri this -- we can always unplug them. (Laughter)

We are on an incredible journey of coevolution with our machines. The humans we are today are not the humans we will be then. To worry now about the rise of a superintelligence is in many ways a dangerous distraction, because the rise of computing itself brings to us a number of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes? How can I bring understanding and education throughout the globe and still respect our differences? How might I extend and enhance human life through cognitive healthcare? How might I use computing to help take us to the stars?

And that's the exciting thing. The opportunities to use computing to advance the human experience are within our reach, here and now, and we are just beginning.

Thank you very much. (Applause)