So, artificial intelligence is known for disrupting all kinds of industries. What about ice cream?

What kind of mind-blowing new flavors could we generate with the power of an advanced artificial intelligence? So I teamed up with a group of coders from Kealing Middle School to find out the answer to this question. They collected over 1,600 existing ice cream flavors, and together, we fed them to an algorithm to see what it would generate. And here are some of the flavors that the AI came up with.

[Pumpkin Trash Break]

(Laughter)

[Peanut Butter Slime]

[Strawberry Cream Disease]

(Laughter)

These flavors are not delicious, as we might have hoped they would be. So the question is: What happened? What went wrong? Is the AI trying to kill us? Or is it trying to do what we asked, and there was a problem?

In movies, when something goes wrong with AI, it's usually because the AI has decided that it doesn't want to obey the humans anymore, and it's got its own goals, thank you very much.

In real life, though, the AI that we actually have is not nearly smart enough for that. It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably less. We're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains. So today's AI can do a task like identify a pedestrian in a picture, but it doesn't have a concept of what the pedestrian is beyond that it's a collection of lines and textures and things. It doesn't know what a human actually is.

So will today's AI do what we ask it to do? It will if it can, but it might not do what we actually want. So let's say that you were trying to get an AI to take this collection of robot parts and assemble them into some kind of robot to get from Point A to Point B. Now, if you were going to try and solve this problem by writing a traditional-style computer program, you would give the program step-by-step instructions on how to take these parts, how to assemble them into a robot with legs and then how to use those legs to walk to Point B. But when you're using AI to solve the problem, it goes differently. You don't tell it how to solve the problem, you just give it the goal, and it has to figure out for itself, via trial and error, how to reach that goal.
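To make that difference concrete, here is a minimal sketch of what "give it the goal and let it search" can look like: random-mutation hill climbing against a score. The design vector, the one-line "simulation," and the scoring function are invented stand-ins for illustration, not the setup from any of the experiments described here.

```python
# A minimal sketch of goal-only training: we never say how to move,
# we only score how close the result lands to Point B. The "design"
# vector and the one-line "simulation" are hypothetical stand-ins.

import random

POINT_B = 10.0

def score(design):
    # Rate the outcome of the toy "simulation". Higher is better;
    # 0 means the robot ended up exactly at Point B.
    final_x = sum(design)          # stand-in for real physics
    return -abs(final_x - POINT_B)

random.seed(0)
design = [random.uniform(-1, 1) for _ in range(8)]  # random initial "parts"

for _ in range(5000):
    # Trial and error: nudge the design, keep the nudge if it scores better.
    candidate = [g + random.gauss(0, 0.1) for g in design]
    if score(candidate) >= score(design):
        design = candidate

print(round(score(design), 4))  # near 0: the goal got satisfied, somehow
```

Notice that nothing in the score says anything about legs, or about walking: any body plan that ends up at Point B counts as a perfect solution.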
And it turns out that the way AI tends to solve this particular problem is by doing this: it assembles itself into a tower and then falls over and lands at Point B. And technically, this solves the problem. Technically, it got to Point B. The danger of AI is not that it's going to rebel against us, it's that it's going to do exactly what we ask it to do. So then the trick of working with AI becomes: How do we set up the problem so that it actually does what we want?

So this little robot here is being controlled by an AI. The AI came up with a design for the robot legs and then figured out how to use them to get past all these obstacles. But when David Ha set up this experiment, he had to set it up with very, very strict limits on how big the AI was allowed to make the legs, because otherwise ...

(Laughter)

And technically, it got to the end of that obstacle course. So you see how hard it is to get AI to do something as simple as just walk.

So seeing the AI do this, you may say, OK, no fair, you can't just be a tall tower and fall over, you have to actually, like, use legs to walk. And it turns out, that doesn't always work, either. This AI's job was to move fast. They didn't tell it that it had to run facing forward or that it couldn't use its arms. So this is what you get when you train AI to move fast: you get things like somersaulting and silly walks. It's really common. So is twitching along the floor in a heap.

(Laughter)

So in my opinion, you know what should have been a whole lot weirder is the "Terminator" robots.

Hacking "The Matrix" is another thing that AI will do if you give it a chance. So if you train an AI in a simulation, it will learn how to do things like hack into the simulation's math errors and harvest them for energy. Or it will figure out how to move faster by glitching repeatedly into the floor.

When you're working with AI, it's less like working with another human and a lot more like working with some kind of weird force of nature. And it's really easy to accidentally give AI the wrong problem to solve, and often we don't realize that until something has actually gone wrong.
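To see how little it takes, here is a toy version of the tower trick from earlier. The two strategies and their "physics" are made up for illustration; the point is only that the reward we wrote down never mentions legs.

```python
# Toy illustration of a misspecified objective: we *meant* "walk to
# Point B", but we *wrote* "end up close to Point B". Both strategies
# and their numbers are invented stand-ins.

import random

GOAL_X = 10.0

def stated_reward(final_x):
    return -abs(final_x - GOAL_X)   # what we actually asked for

def simulate(strategy, rng):
    if strategy == "walk on legs":  # what we meant: a noisy, imperfect gait
        return sum(rng.uniform(0.0, 0.7) for _ in range(20))
    if strategy == "tall tower, fall over":  # degenerate but perfectly legal
        return GOAL_X               # a body 10 units tall lands exactly on B

rng = random.Random(0)
for strategy in ["walk on legs", "tall tower, fall over"]:
    avg = sum(stated_reward(simulate(strategy, rng)) for _ in range(1000)) / 1000
    print(f"{strategy:>22}: average reward {avg:.3f}")
```

The degenerate strategy scores higher, so the optimizer has no reason to prefer the solution we had in mind: the objective, not the search, is what went wrong.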
So here's an experiment I did, where I wanted the AI to copy paint colors, to invent new paint colors, given a list like the ones here on the left. And here's what the AI actually came up with.

[Sindis Poop, Turdly, Suffer, Gray Pubic]

(Laughter)

So technically, it did what I asked it to. I thought I was asking it for, like, nice paint color names, but what I was actually asking it to do was just imitate the kinds of letter combinations that it had seen in the original. And I didn't tell it anything about what words mean, or that there are maybe some words that it should avoid using in these paint colors. So its entire world is the data that I gave it. Like with the ice cream flavors, it doesn't know about anything else.
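For the curious, here is a minimal sketch of that kind of "imitate the letter combinations" model: a character-level Markov chain. The handful of example names is a made-up stand-in for the real training lists, and the original experiments used a neural network rather than this simple chain, but the failure mode is the same: the model only ever sees letter statistics, never meanings.

```python
# A minimal sketch of "imitate the letter combinations you've seen":
# a character-level Markov chain trained on example names. The names
# below are made-up stand-ins; the real experiments used ~1,600
# flavors (or a long paint-color list) and a neural network.

import random
from collections import defaultdict

examples = ["Vanilla Bean", "Strawberry Swirl", "Peanut Butter Cup",
            "Chocolate Fudge", "Mint Chip", "Caramel Crunch"]

ORDER = 2  # how many previous characters the model gets to see

# Count which character tends to follow each 2-character context.
counts = defaultdict(list)
for name in examples:
    padded = "^" * ORDER + name + "$"   # ^ marks the start, $ the end
    for i in range(len(padded) - ORDER):
        counts[padded[i:i + ORDER]].append(padded[i + ORDER])

def generate():
    # Sample one character at a time. The model knows letter statistics
    # and nothing about taste, meaning, or which words to avoid.
    out, context = "", "^" * ORDER
    while True:
        nxt = random.choice(counts[context])
        if nxt == "$":
            return out
        out += nxt
        context = context[1:] + nxt

random.seed(1)
# With this tiny training set it mostly recombines or reproduces the
# examples; with thousands of names, the recombinations get stranger.
print([generate() for _ in range(5)])
```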
So it is through the data that we often accidentally tell AI to do the wrong thing.

This is a fish called a tench. And there was a group of researchers who trained an AI to identify this tench in pictures. But then when they asked it what part of the picture it was actually using to identify the fish, here's what it highlighted. Yes, those are human fingers. Why would it be looking for human fingers if it's trying to identify a fish? Well, it turns out that the tench is a trophy fish, and so in a lot of pictures that the AI had seen of this fish during training, the fish looked like this.

(Laughter)

And it didn't know that the fingers aren't part of the fish.

So you see why it is so hard to design an AI that actually can understand what it's looking at. And this is why designing the image recognition in self-driving cars is so hard, and why so many self-driving car failures are because the AI got confused. I want to talk about an example from 2016. There was a fatal accident when somebody was using Tesla's autopilot AI, but instead of using it on the highway like it was designed for, they used it on city streets. And what happened was, a truck drove out in front of the car and the car failed to brake.

Now, the AI definitely was trained to recognize trucks in pictures. But what it looks like happened is the AI was trained to recognize trucks on highway driving, where you would expect to see trucks from behind. Trucks seen from the side are not supposed to happen on a highway, and so when the AI saw this truck, it looks like the AI recognized it as most likely to be a road sign, and therefore safe to drive underneath.

Here's an AI misstep from a different field. Amazon recently had to give up on a résumé-sorting algorithm that they were working on when they discovered that the algorithm had learned to discriminate against women. What happened is they had trained it on example résumés of people who they had hired in the past. And from these examples, the AI learned to avoid the résumés of people who had gone to women's colleges or who had the word "women" somewhere in their résumé, as in "women's soccer team" or "Society of Women Engineers." The AI didn't know that it wasn't supposed to copy this particular thing that it had seen the humans do. And technically, it did what they asked it to do. They just accidentally asked it to do the wrong thing. And this happens all the time with AI.

AI can be really destructive and not know it. So the AIs that recommend new content in Facebook, in YouTube, they're optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don't have any concept of what this content actually is, and they don't have any concept of what the consequences might be of recommending this content.

So, when we're working with AI, it's up to us to avoid problems. And avoiding things going wrong, that may come down to the age-old problem of communication, where we as humans have to learn how to communicate with AI. We have to learn what AI is capable of doing and what it's not, and to understand that, with its tiny little worm brain, AI doesn't really understand what we're trying to ask it to do.

So in other words, we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction. We have to be prepared to work with an AI that's the one that we actually have in the present day. And present-day AI is plenty weird enough.

Thank you.

(Applause)