When I was a kid, I was the quintessential nerd. I think some of you were, too. (Laughter) And you, sir, who laughed the loudest, you probably still are. (Laughter)

I grew up in a small town in the dusty plains of North Texas, the son of a sheriff who was the son of a pastor. Getting into trouble was not an option. And so I started reading calculus books for fun. (Laughter) You did, too. That led me to building a laser and a computer and model rockets, and that led me to making rocket fuel in my bedroom. Now, in scientific terms, we call this a very bad idea. (Laughter)

Around that same time, Stanley Kubrick's "2001: A Space Odyssey" came to the theaters, and my life was forever changed. I loved everything about that movie, especially the HAL 9000. Now, HAL was a sentient computer designed to guide the Discovery spacecraft from the Earth to Jupiter. HAL was also a flawed character, for in the end he chose to value the mission over human life. Now, HAL was a fictional character, but nonetheless he speaks to our fears, our fears of being subjugated by some unfeeling artificial intelligence who is indifferent to our humanity.

I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where, driven by refusal to accept the limits of our bodies and our minds, we are building machines of exquisite, beautiful complexity and grace that will extend the human experience in ways beyond our imagining.

After a career that led me from the Air Force Academy to Space Command to now, I became a systems engineer, and recently I was drawn into an engineering problem associated with NASA's mission to Mars. Now, in space flights to the Moon, we can rely upon mission control in Houston to watch over all aspects of a flight. However, Mars is 200 times further away, and as a result it takes on average 13 minutes for a signal to travel from the Earth to Mars.
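As a rough check of that 13-minute figure, here is a back-of-the-envelope calculation in Python; the 225-million-kilometre average Earth-Mars distance is an assumed round number for illustration, not one quoted in the talk.

    # Back-of-the-envelope check of the one-way signal delay to Mars.
    # The average distance below is an assumption; the real distance varies
    # from roughly 55 million km at closest approach to over 400 million km.
    AVG_EARTH_MARS_KM = 225e6      # assumed average Earth-Mars distance, km
    SPEED_OF_LIGHT_KM_S = 299_792  # speed of light, km/s

    delay_minutes = AVG_EARTH_MARS_KM / SPEED_OF_LIGHT_KM_S / 60
    print(f"One-way light time to Mars: about {delay_minutes:.1f} minutes")
    # -> about 12.5 minutes, in line with the "on average 13 minutes" above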
If there's trouble, there's not enough time. And so a reasonable engineering solution calls for us to put mission control inside the walls of the Orion spacecraft. Another fascinating idea in the mission profile places humanoid robots on the surface of Mars before the humans themselves arrive, first to build facilities and later to serve as collaborative members of the science team.

Now, as I looked at this from an engineering perspective, it became very clear to me that what I needed to architect was a smart, collaborative, socially intelligent artificial intelligence. In other words, I needed to build something very much like a HAL but without the homicidal tendencies. (Laughter)

Let's pause for a moment. Is it really possible to build an artificial intelligence like that? Actually, it is. In many ways, this is a hard engineering problem with elements of AI, not some wet hair ball of an AI problem that needs to be engineered. To paraphrase Alan Turing, I'm not interested in building a sentient machine. I'm not building a HAL. All I'm after is a simple brain, something that offers the illusion of intelligence.

The art and the science of computing have come a long way since HAL was onscreen, and I'd imagine if his inventor Dr. Chandra were here today, he'd have a whole lot of questions for us. Is it really possible for us to take a system of millions upon millions of devices, to read in their data streams, to predict their failures and act in advance? Yes. Can we build systems that converse with humans in natural language? Yes. Can we build systems that recognize objects, identify emotions, emote themselves, play games and even read lips? Yes. Can we build a system that sets goals, that carries out plans against those goals and learns along the way? Yes. Can we build systems that have a theory of mind? This we are learning to do. Can we build systems that have an ethical and moral foundation?
This we must learn how to do.

So let's accept for a moment that it's possible to build such an artificial intelligence for this kind of mission and others. The next question you must ask yourself is, should we fear it? Now, every new technology brings with it some measure of trepidation. When we first saw cars, people lamented that we would see the destruction of the family. When we first saw telephones come in, people were worried it would destroy all civil conversation. At the point when the written word became pervasive, people thought we would lose our ability to memorize. These things are all true to a degree, but it's also the case that these technologies brought to us things that extended the human experience in some profound ways.

So let's take this a little further. I do not fear the creation of an AI like this, because it will eventually embody some of our values. Consider this: building a cognitive system is fundamentally different from building a traditional software-intensive system of the past. We don't program them. We teach them. In order to teach a system how to recognize flowers, I show it thousands of flowers of the kinds I like. In order to teach a system how to play a game -- Well, I would. You would, too. I like flowers. Come on. To teach a system how to play a game like Go, I'd have it play thousands of games of Go, but in the process I also teach it how to discern a good game from a bad game. If I want to create an artificially intelligent legal assistant, I will teach it some corpus of law, but at the same time I am fusing with it the sense of mercy and justice that is part of that law.

In scientific terms, this is what we call ground truth, and here's the important point: in producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence as much as, if not more than, a human who is well-trained.
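To make the teach-don't-program idea concrete, here is a minimal sketch in Python, invented purely for illustration: the flower measurements and labels are made up, not from the talk. The point is that the code contains no explicit rules about flowers, only the labeled examples (the ground truth) it was shown, and those labels carry the teacher's own judgment.

    # Minimal illustration of "we don't program them, we teach them":
    # the classifier holds no rules about flowers, only labeled examples
    # (the ground truth), and it generalizes by comparing new inputs to them.
    # All data here is invented for the sketch.

    ground_truth = [
        # (petal length cm, petal width cm) -> label chosen by the teacher
        ((1.4, 0.2), "flower I like"),
        ((1.3, 0.3), "flower I like"),
        ((4.7, 1.4), "flower I don't"),
        ((5.1, 1.8), "flower I don't"),
    ]

    def classify(features):
        """Answer with the label of the nearest labeled example (1-nearest-neighbour)."""
        def squared_distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, label = min((squared_distance(f, features), lbl) for f, lbl in ground_truth)
        return label

    print(classify((1.5, 0.25)))  # -> "flower I like"
    print(classify((5.0, 1.6)))   # -> "flower I don't"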
But, you may ask, what about a rogue agent, some well-funded nongovernmental organization? I do not fear an artificial intelligence in the hands of a lone wolf. Clearly, we cannot protect ourselves against all random acts of violence, but the reality is such a system requires substantial and subtle training, far beyond the resources of an individual. And furthermore, it's far more than just injecting an internet virus into the world, where you push a button and all of a sudden it's in a million places and laptops start blowing up all over the place. Now, these kinds of systems are much larger, and we'll certainly see them coming.

Do I fear that such an artificial intelligence might threaten all of humanity? If you look at movies such as "The Matrix," "Metropolis," "The Terminator," or shows such as "Westworld," they all speak of this kind of fear. Indeed, in his book "Superintelligence," the philosopher Nick Bostrom picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity. Dr. Bostrom's basic argument is that such systems will eventually have such an insatiable thirst for information that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs. Dr. Bostrom has a number of followers. He is supported by people such as Elon Musk and Stephen Hawking.

With all due respect to these brilliant minds, I believe that they are fundamentally wrong. Now, there are a lot of pieces of Dr. Bostrom's argument to unpack, and I don't have time to unpack them all, but very briefly, consider this: super knowing is very different from super doing. HAL was a threat to the Discovery crew only insofar as HAL commanded all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion over all of our world.
This is the stuff of Skynet from the movie "The Terminator" in which we had a superintelligence that commanded human will, that directed every device that was in every corner of the world. Practically speaking, it ain't gonna happen. We are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end -- don't tell Siri this -- we can always unplug them. (Laughter)

We are on an incredible journey of coevolution with our machines. The humans we are today are not the humans we will be then. To worry now about the rise of a superintelligence is in many ways a dangerous distraction because the rise of computing itself brings to us a number of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes? How can I bring understanding and education throughout the globe and still respect our differences? How might I extend and enhance human life through cognitive healthcare? How might I use computing to help take us to the stars?

And that's the exciting thing. The opportunities to use computing to advance the human experience are within our reach, here and now, and we are just beginning.

Thank you very much.

(Applause)