So, artificial intelligence is known for disrupting all kinds of industries. What about ice cream? What kind of mind-blowing new flavors could we generate with the power of an advanced artificial intelligence? So I teamed up with a group of coders from Kealing Middle School to find out the answer to this question. They collected over 1,600 existing ice cream flavors, and together, we fed them to an algorithm to see what it would generate. And here are some of the flavors that the AI came up with.

[Pumpkin Trash Break]
(Laughter)
[Peanut Butter Slime]
[Strawberry Cream Disease]
(Laughter)

These flavors are not delicious, as we might have hoped they would be. So the question is: What happened? What went wrong? Is the AI trying to kill us? Or is it trying to do what we asked, and there was a problem?

In movies, when something goes wrong with AI, it's usually because the AI has decided that it doesn't want to obey the humans anymore, and it's got its own goals, thank you very much. In real life, though, the AI that we actually have is not nearly smart enough for that.
It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably maybe less. Like, we're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains. So today's AI can do a task like identify a pedestrian in a picture, but it doesn't have a concept of what the pedestrian is beyond that it's a collection of lines and textures and things. It doesn't know what a human actually is.

So will today's AI do what we ask it to do? It will if it can, but it might not do what we actually want. So let's say that you were trying to get an AI to take this collection of robot parts and assemble them into some kind of robot to get from Point A to Point B. Now, if you were going to try and solve this problem by writing a traditional-style computer program, you would give the program step-by-step instructions on how to take these parts, how to assemble them into a robot with legs, and then how to use those legs to walk to Point B. But when you're using AI to solve the problem, it goes differently.
You don't tell it how to solve the problem, you just give it the goal, and it has to figure out for itself, via trial and error, how to reach that goal. And it turns out that the way AI tends to solve this particular problem is by doing this: it assembles itself into a tower and then falls over and lands at Point B. And technically, this solves the problem. Technically, it got to Point B. The danger of AI is not that it's going to rebel against us, it's that it's going to do exactly what we ask it to do. So then the trick of working with AI becomes: How do we set up the problem so that it actually does what we want?

So this little robot here is being controlled by an AI. The AI came up with a design for the robot legs and then figured out how to use them to get past all these obstacles. But when David Ha set up this experiment, he had to set it up with very, very strict limits on how big the AI was allowed to make the legs, because otherwise ...

(Laughter)

And technically, it got to the end of that obstacle course. So you see how hard it is to get AI to do something as simple as just walk.
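The tower-that-falls-over trick is an instance of what researchers call specification gaming: the optimizer maximizes exactly the score it was given, not the behavior we had in mind. Here is a minimal toy sketch of the idea — the "physics," part lengths, and scoring rule are all invented for illustration, not taken from any real experiment. Walking on legs is made deliberately inefficient, while a stacked tower tips over and covers its full height, so a blind trial-and-error search reliably discovers the tower, never the legs:

```python
import random

random.seed(0)

PARTS = [2, 3, 4, 5]  # available robot part lengths (toy units)

def score(assignment):
    """Distance covered toward Point B under invented toy physics:
    parts used as legs carry the robot half their length by walking;
    parts stacked into a tower tip over and land their full combined
    length away from Point A."""
    legs  = [p for p, role in zip(PARTS, assignment) if role == "leg"]
    tower = [p for p, role in zip(PARTS, assignment) if role == "tower"]
    return 0.5 * sum(legs) + sum(tower)

def random_search(steps=1000):
    """Blind trial and error: sample random designs, keep the best."""
    best, best_score = None, float("-inf")
    for _ in range(steps):
        cand = [random.choice(["leg", "tower"]) for _ in PARTS]
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

best, best_score = random_search()
print(best, best_score)  # every part ends up in the tower
```

The search was never told "walk" — only "maximize distance" — so the degenerate solution wins, which is exactly the failure mode described above.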
So seeing the AI do this, you may say, OK, no fair, you can't just be a tall tower and fall over, you have to actually, like, use legs to walk. And it turns out, that doesn't always work, either. This AI's job was to move fast. They didn't tell it that it had to run facing forward or that it couldn't use its arms. So this is what you get when you train AI to move fast: you get things like somersaulting and silly walks. It's really common. So is twitching along the floor in a heap.

(Laughter)

So in my opinion, you know what should have been a whole lot weirder is the "Terminator" robots. Hacking "The Matrix" is another thing that AI will do if you give it a chance. So if you train an AI in a simulation, it will learn how to do things like hack into the simulation's math errors and harvest them for energy. Or it will figure out how to move faster by glitching repeatedly into the floor. When you're working with AI, it's less like working with another human and a lot more like working with some kind of weird force of nature. And it's really easy to accidentally give AI the wrong problem to solve, and often we don't realize that until something has actually gone wrong.
So here's an experiment I did, where I wanted the AI to copy paint colors, to invent new paint colors, given the list like the ones here on the left. And here's what the AI actually came up with.

[Sindis Poop, Turdly, Suffer, Gray Pubic]
(Laughter)

So technically, it did what I asked it to. I thought I was asking it for, like, nice paint color names, but what I was actually asking it to do was just imitate the kinds of letter combinations that it had seen in the original. And I didn't tell it anything about what words mean, or that there are maybe some words that it should avoid using in these paint colors. So its entire world is the data that I gave it. Like with the ice cream flavors, it doesn't know about anything else.

So it is through the data that we often accidentally tell AI to do the wrong thing. This is a fish called a tench. And there was a group of researchers who trained an AI to identify this tench in pictures. But then when they asked it what part of the picture it was actually using to identify the fish, here's what it highlighted.
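"Imitate the kinds of letter combinations that it had seen in the original" can be sketched with a character-level Markov chain — a far simpler cousin of the neural network used in the actual experiment, and the training list below is an invented stand-in for the real paint-name dataset. The model only learns which character tends to follow the previous two, so its outputs are plausible-looking letter sequences with no notion of meaning:

```python
import random
from collections import defaultdict

random.seed(3)

# Invented stand-in for a real list of paint-color names.
NAMES = ["Misty Rose", "Dusty Teal", "Rustic Sand", "Sandy Taupe",
         "Teal Mist", "Rose Dust", "Taupe Haze", "Hazy Sage"]

ORDER = 2  # each next character depends only on the previous two

def build_model(names):
    """Map each 2-character context to the characters that followed it
    in training ('^' pads the start, '$' marks the end of a name)."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * ORDER + name.lower() + "$"
        for i in range(len(padded) - ORDER):
            model[padded[i:i + ORDER]].append(padded[i + ORDER])
    return model

def generate(model, max_len=20):
    """Sample characters one at a time until the end marker appears."""
    state, out = "^" * ORDER, []
    while len(out) < max_len:
        ch = random.choice(model[state])
        if ch == "$":
            break
        out.append(ch)
        state = state[1:] + ch
    return "".join(out).title()

model = build_model(NAMES)
for _ in range(5):
    print(generate(model))
```

Every output is built purely from letter statistics of the training list — which is why nothing in such a model can know that some perfectly plausible letter combinations spell words a paint company would rather avoid.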
Yes, those are human fingers. Why would it be looking for human fingers if it's trying to identify a fish? Well, it turns out that the tench is a trophy fish, and so in a lot of pictures that the AI had seen of this fish during training, the fish looked like this.

(Laughter)

And it didn't know that the fingers aren't part of the fish.

So you see why it is so hard to design an AI that actually can understand what it's looking at. And this is why designing the image recognition in self-driving cars is so hard, and why so many self-driving car failures are because the AI got confused. I want to talk about an example from 2016. There was a fatal accident when somebody was using Tesla's autopilot AI, but instead of using it on the highway like it was designed for, they used it on city streets. And what happened was, a truck drove out in front of the car and the car failed to brake. Now, the AI definitely was trained to recognize trucks in pictures. But what it looks like happened is the AI was trained to recognize trucks on highway driving, where you would expect to see trucks from behind.
Trucks seen from the side are not supposed to happen on a highway, and so when the AI saw this truck, it looks like the AI recognized it as most likely to be a road sign and therefore safe to drive underneath.

Here's an AI misstep from a different field. Amazon recently had to give up on a résumé-sorting algorithm that they were working on when they discovered that the algorithm had learned to discriminate against women. What happened is they had trained it on example résumés of people who they had hired in the past. And from these examples, the AI learned to avoid the résumés of people who had gone to women's colleges or who had the word "women" somewhere in their résumé, as in, "women's soccer team" or "Society of Women Engineers." The AI didn't know that it wasn't supposed to copy this particular thing that it had seen the humans do. And technically, it did what they asked it to do. They just accidentally asked it to do the wrong thing. And this happens all the time with AI.

AI can be really destructive and not know it. So the AIs that recommend new content in Facebook, in YouTube, they're optimized to increase the number of clicks and views.
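The résumé story is the same data trap in miniature: train on biased past decisions, and the model learns whatever token best separates the two piles, qualifications or not. This toy sketch (the résumé snippets and the simple count-ratio scoring are invented for illustration, not Amazon's actual system) shows how a word like "women's" picks up the lowest score purely because the historical labels were biased:

```python
from collections import Counter

# Invented toy data: past hiring was biased, so "women's" happens to
# appear only in the rejected pile.
HIRED    = ["chess club captain", "soccer team captain", "debate club"]
REJECTED = ["women's chess club", "women's soccer team",
            "women's debate club"]

def token_scores(hired, rejected):
    """Score each token by how much more often it appears in hired
    resumes than in rejected ones (with add-one smoothing). Scores
    above 1.0 favor hiring; below 1.0 favor rejection."""
    hired_counts = Counter(t for doc in hired for t in doc.split())
    rej_counts = Counter(t for doc in rejected for t in doc.split())
    vocab = set(hired_counts) | set(rej_counts)
    return {t: (hired_counts[t] + 1) / (rej_counts[t] + 1)
            for t in vocab}

scores = token_scores(HIRED, REJECTED)
# "women's" never appears in a hired example, so it gets the worst
# score of any token, even though it says nothing about skill.
print(scores["women's"])  # 0.25 -- the lowest score in the vocabulary
```

Nothing in the code "decided" to discriminate; the pattern was already in the training labels, and the model faithfully copied it — exactly the point made above.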
And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don't have any concept of what this content actually is, and they don't have any concept of what the consequences might be of recommending this content.

So, when we're working with AI, it's up to us to avoid problems. And avoiding things going wrong, that may come down to the age-old problem of communication, where we as humans have to learn how to communicate with AI. We have to learn what AI is capable of doing and what it's not, and to understand that, with its tiny little worm brain, AI doesn't really understand what we're trying to ask it to do.

So in other words, we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction. We have to be prepared to work with an AI that's the one that we actually have in the present day. And present-day AI is plenty weird enough.

Thank you.

(Applause)