WEBVTT

00:00:13.298 --> 00:00:17.148
The rise of the machines!

00:00:17.928 --> 00:00:22.748
Who here is scared of killer robots?

00:00:23.140 --> 00:00:25.250
(Laughter)

00:00:25.612 --> 00:00:27.212
I am!

00:00:28.226 --> 00:00:31.896
I used to work in UAVs - Unmanned Aerial Vehicles -

00:00:31.896 --> 00:00:36.736
and all I could think seeing these things is that someday,

00:00:37.133 --> 00:00:40.903
somebody is going to strap a machine gun to these things,

00:00:40.903 --> 00:00:43.853
and they're going to hunt me down in swarms.

00:00:44.688 --> 00:00:49.848
I work in robotics at Brown University and I'm scared of robots.

00:00:50.541 --> 00:00:53.281
Actually, I'm kind of terrified,

00:00:53.761 --> 00:00:55.811
but can you blame me?

00:00:55.811 --> 00:00:59.501
Ever since I was a kid, all I've seen are movies

00:00:59.501 --> 00:01:03.011
that portrayed the ascendance of Artificial Intelligence

00:01:03.011 --> 00:01:05.811
and our inevitable conflict with it -

00:01:05.811 --> 00:01:11.041
2001: A Space Odyssey, The Terminator, The Matrix -

00:01:11.800 --> 00:01:16.200
and the stories they tell are pretty scary:

00:01:16.200 --> 00:01:20.918
rogue bands of humans running away from super-intelligent machines.

00:01:21.935 --> 00:01:26.795
That scares me. From the show of hands, it seems like it scares you as well.

00:01:26.795 --> 00:01:30.235
I know it is scary to Elon Musk.

00:01:30.825 --> 00:01:35.245
But, you know, we have a little bit of time before the robots rise up.

00:01:35.245 --> 00:01:38.571
Robots like the PR2 that I have at my initiative,

00:01:38.571 --> 00:01:41.381
they can't even open the door yet.

00:01:42.191 --> 00:01:46.707
So in my mind, this discussion of super-intelligent robots

00:01:46.707 --> 00:01:51.997
is a little bit of a distraction from something far more insidious

00:01:51.997 --> 00:01:56.217
that is going on with AI systems across the country.

00:01:56.917 --> 00:02:00.067
You see, right now, there are people -

00:02:00.067 --> 00:02:04.227
doctors, judges, accountants -

00:02:04.227 --> 00:02:07.957
who are getting information from an AI system

00:02:07.957 --> 00:02:12.717
and treating it as if it is information from a trusted colleague.

00:02:13.931 --> 00:02:16.901
It's this trust that bothers me,

00:02:17.141 --> 00:02:20.182
not because of how often AI gets it wrong.

00:02:20.182 --> 00:02:24.403
AI researchers pride themselves on the accuracy of their results.

00:02:24.869 --> 00:02:27.849
It's how badly it gets it wrong when it makes a mistake

00:02:27.849 --> 00:02:29.779
that has me worried.

00:02:29.779 --> 00:02:33.579
These systems do not fail gracefully.

00:02:34.240 --> 00:02:36.960
So, let's take a look at what this looks like.

00:02:37.120 --> 00:02:42.560
This is a dog that has been misidentified as a wolf by an AI algorithm.

00:02:43.233 --> 00:02:45.239
The researchers wanted to know:

00:02:45.239 --> 00:02:49.509
why did this particular husky get misidentified as a wolf?

00:02:49.751 --> 00:02:52.721
So they rewrote the algorithm to explain to them

00:02:52.721 --> 00:02:55.651
the parts of the picture it was paying attention to

00:02:55.651 --> 00:02:58.501
when the AI algorithm made its decision.

00:02:59.039 --> 00:03:02.749
In this picture, what do you think it paid attention to?

00:03:02.869 --> 00:03:05.099
What would you pay attention to?

00:03:05.359 --> 00:03:10.479
Maybe the eyes, maybe the ears, the snout ...
00:03:13.041 --> 00:03:16.531
This is what it paid attention to:

00:03:16.981 --> 00:03:20.391
mostly the snow and the background of the picture.

00:03:21.003 --> 00:03:25.853
You see, there was bias in the data set that was fed to this algorithm.

00:03:26.293 --> 00:03:30.373
Most of the pictures of wolves were in snow,

00:03:30.573 --> 00:03:34.783
so the AI algorithm conflated the presence or absence of snow

00:03:34.783 --> 00:03:38.373
with the presence or absence of a wolf.

00:03:39.912 --> 00:03:42.027
The scary thing about this

00:03:42.027 --> 00:03:46.287
is that the researchers had no idea this was happening

00:03:46.287 --> 00:03:50.107
until they rewrote the algorithm to explain itself.

00:03:50.836 --> 00:03:55.326
And that's the thing with AI algorithms, deep learning, machine learning.

00:03:55.326 --> 00:03:59.346
Even the developers who work on this stuff

00:03:59.346 --> 00:04:02.396
have no idea what it's doing.

00:04:03.001 --> 00:04:07.591
So, that might be a great example for research,

00:04:07.591 --> 00:04:10.281
but what does this mean in the real world?

00:04:10.611 --> 00:04:15.841
The COMPAS criminal sentencing algorithm is used in 13 states

00:04:15.841 --> 00:04:17.991
to determine criminal recidivism,

00:04:17.991 --> 00:04:22.471
or the risk of committing a crime again after you're released.

00:04:23.199 --> 00:04:26.959
ProPublica found that if you're African-American,

00:04:26.959 --> 00:04:32.023
COMPAS was 77% more likely to qualify you as a potentially violent offender

00:04:32.023 --> 00:04:34.123
than if you're Caucasian.

00:04:34.784 --> 00:04:39.404
This is a real system being used in the real world by real judges

00:04:39.404 --> 00:04:42.434
to make decisions about real people's lives.

00:04:44.115 --> 00:04:48.815
Why would the judges trust it if it seems to exhibit bias?

00:04:49.866 --> 00:04:55.176
Well, the reason they use COMPAS is that it is a model of efficiency.

00:04:55.622 --> 00:05:00.072
COMPAS lets them go through caseloads much faster

00:05:00.072 --> 00:05:02.992
in a backlogged criminal justice system.

00:05:04.877 --> 00:05:07.297
Why would they question their own software?

00:05:07.297 --> 00:05:10.957
It's been requisitioned by the state, approved by their IT department.

00:05:10.957 --> 00:05:13.357
Why would they question it?

00:05:13.357 --> 00:05:16.513
Well, the people sentenced by COMPAS have questioned it,

00:05:16.513 --> 00:05:18.853
and their lawsuits should chill us all.

00:05:19.243 --> 00:05:22.123
The Wisconsin Supreme Court ruled

00:05:22.123 --> 00:05:25.643
that COMPAS did not deny a defendant due process

00:05:25.643 --> 00:05:28.433
provided it was used "properly."

00:05:28.963 --> 00:05:30.688
In the same set of rulings, they ruled

00:05:30.688 --> 00:05:34.758
that the defendant could not inspect the source code of COMPAS.

00:05:35.700 --> 00:05:39.990
It has to be used properly, but you can't inspect the source code?

00:05:40.425 --> 00:05:43.425
This is a disturbing set of rulings when taken together

00:05:43.425 --> 00:05:46.175
for anyone facing criminal sentencing.
00:05:46.625 --> 00:05:50.705
You may not care about this because you're not facing criminal sentencing,

00:05:51.056 --> 00:05:55.056
but what if I told you that black-box AI algorithms like this

00:05:55.056 --> 00:05:59.376
are being used to decide whether or not you can get a loan for your house,

00:06:00.144 --> 00:06:02.844
whether you get a job interview,

00:06:03.364 --> 00:06:05.863
whether you get Medicaid,

00:06:05.954 --> 00:06:10.434
and are even driving cars and trucks down the highway.

00:06:10.831 --> 00:06:14.531
Would you want the public to be able to inspect the algorithm

00:06:14.531 --> 00:06:17.239
that's trying to make a decision between a shopping cart

00:06:17.239 --> 00:06:20.899
and a baby carriage in a self-driving truck,

00:06:20.899 --> 00:06:23.679
in the same way the dog/wolf algorithm was trying to decide

00:06:23.679 --> 00:06:26.069
between a dog and a wolf?

00:06:26.282 --> 00:06:31.462
Are you potentially a metaphorical dog who's been misidentified as a wolf

00:06:31.462 --> 00:06:34.262
by somebody's AI algorithm?

00:06:34.868 --> 00:06:38.718
Considering the complexity of people, it's possible.

00:06:38.811 --> 00:06:42.031
Is there anything you can do about it now?

00:06:42.031 --> 00:06:46.841
Probably not, and that's what we need to focus on.

00:06:47.487 --> 00:06:50.567
We need to demand standards of accountability,

00:06:50.567 --> 00:06:55.397
transparency and recourse in AI systems.

00:06:56.456 --> 00:07:01.034
ISO, the International Organization for Standardization, just formed a committee

00:07:01.034 --> 00:07:04.504
to make decisions about what to do for AI standards.

00:07:04.923 --> 00:07:08.739
They're about five years out from coming up with a standard.

00:07:08.989 --> 00:07:12.479
These systems are being used now,

00:07:13.671 --> 00:07:19.361
not just in loans, but in vehicles, like I was saying.

00:07:20.841 --> 00:07:25.273
They're being used in things like Cooperative Adaptive Cruise Control.

00:07:25.273 --> 00:07:27.973
It's funny that they call that "cruise control"

00:07:27.973 --> 00:07:32.703
because the type of controller used in cruise control, a PID controller,

00:07:32.703 --> 00:07:38.323
was used for 30 years in chemical plants before it ever made it into a car.

00:07:39.139 --> 00:07:41.138
The type of controller that's used

00:07:41.138 --> 00:07:44.628
to drive a self-driving car, machine learning,

00:07:44.628 --> 00:07:48.878
has only been used in research since 2007.

00:07:49.680 --> 00:07:52.230
These are new technologies.

00:07:52.470 --> 00:07:56.430
We need to demand the standards and we need to demand regulation

00:07:56.430 --> 00:08:00.340
so that we don't get snake oil in the marketplace.

00:08:00.819 --> 00:08:05.059
And we also have to have a little bit of skepticism.

00:08:05.861 --> 00:08:07.871
The experiments in authority

00:08:07.871 --> 00:08:11.121
done by Stanley Milgram after World War II

00:08:11.121 --> 00:08:16.031
showed that your average person would follow an authority figure's orders

00:08:16.031 --> 00:08:19.741
even if it meant harming their fellow citizen.
00:08:20.461 --> 00:08:22.850
In this experiment,

00:08:22.850 --> 00:08:27.130
everyday Americans would shock an actor

00:08:27.689 --> 00:08:31.269
past the point of him complaining about heart trouble,

00:08:31.577 --> 00:08:35.427
past the point of him screaming in pain,

00:08:35.934 --> 00:08:40.894
past the point of him going silent in simulated death,

00:08:41.613 --> 00:08:44.099
all because somebody

00:08:44.099 --> 00:08:47.969
with no credentials, in a lab coat,

00:08:47.969 --> 00:08:50.795
was saying some variation of the phrase

00:08:50.795 --> 00:08:54.475
"The experiment must continue."

00:08:56.945 --> 00:09:02.398
In AI, we have Milgram's ultimate authority figure.

00:09:03.656 --> 00:09:08.366
We have a dispassionate system that can't reflect,

00:09:09.370 --> 00:09:12.640
that can't make another decision,

00:09:12.920 --> 00:09:14.902
that there is no recourse to,

00:09:15.074 --> 00:09:20.454
that will always say "The system must continue" or "The process must continue."

00:09:23.313 --> 00:09:25.883
Now, I'm going to tell you a little story.

00:09:25.883 --> 00:09:29.723
It's about a car trip I took driving across the country.

00:09:30.790 --> 00:09:34.690
I was coming into Salt Lake City and it started raining.

00:09:35.211 --> 00:09:39.900
As I climbed into the mountains, that rain turned into snow,

00:09:40.380 --> 00:09:42.580
and pretty soon that snow was a whiteout.

00:09:42.580 --> 00:09:45.720
I couldn't see the taillights of the car in front of me.

00:09:46.153 --> 00:09:48.023
I started skidding.

00:09:48.023 --> 00:09:51.033
I went 360 one way, I went 360 the other way.

00:09:51.033 --> 00:09:52.773
I went off the highway.

00:09:52.773 --> 00:09:54.953
Mud coated my windows; I couldn't see a thing.

00:09:54.953 --> 00:09:59.103
I was terrified some car was going to come crashing into me.

00:09:59.924 --> 00:10:03.864
Now, I'm telling you this story to get you thinking

00:10:03.864 --> 00:10:07.324
about how something small and seemingly mundane,

00:10:07.324 --> 00:10:10.094
like a little bit of precipitation,

00:10:10.094 --> 00:10:14.584
can easily grow into something very dangerous.

00:10:15.409 --> 00:10:19.789
We are driving in the rain with AI right now,

00:10:20.442 --> 00:10:23.402
and that rain will turn to snow,

00:10:23.887 --> 00:10:27.457
and that snow could become a blizzard.

00:10:28.097 --> 00:10:30.347
We need to pause,

00:10:30.537 --> 00:10:32.857
check the conditions,

00:10:33.002 --> 00:10:35.502
put in place safety standards,

00:10:35.642 --> 00:10:41.232
and ask ourselves how far we want to go,

00:10:42.267 --> 00:10:46.437
because the economic incentives for AI and automation

00:10:46.437 --> 00:10:48.363
to replace human labor

00:10:48.363 --> 00:10:53.313
will be beyond anything we have seen since the Industrial Revolution.

00:10:54.043 --> 00:10:58.040
Human salary demands can't compete

00:10:58.040 --> 00:11:01.650
with the base cost of electricity.

00:11:02.480 --> 00:11:07.915
AIs and robots will replace fry cooks at fast-food joints

00:11:07.915 --> 00:11:10.415
and radiologists in hospitals.

00:11:11.135 --> 00:11:14.330
Someday, the AI will diagnose your cancer,

00:11:14.330 --> 00:11:17.430
and a robot will perform the surgery.

00:11:17.960 --> 00:11:22.420
Only a healthy skepticism of these systems

00:11:22.420 --> 00:11:25.981
is going to help keep people in the loop.
00:11:26.291 --> 00:11:31.473
And I'm confident, if we can keep people in the loop,

00:11:31.473 --> 00:11:36.393
if we can build transparent AI systems like the dog/wolf example,

00:11:36.393 --> 00:11:39.658
where the AI explained to people what it was doing,

00:11:39.658 --> 00:11:42.588
and people were able to spot-check it,

00:11:42.588 --> 00:11:47.028
we can create new jobs for people partnering with AI.

00:11:48.573 --> 00:11:51.043
If we work together with AI,

00:11:51.043 --> 00:11:55.563
we will probably be able to solve some of our greatest challenges.

00:11:56.764 --> 00:12:01.354
But to do that, we need to lead and not follow.

00:12:01.801 --> 00:12:05.491
We need to choose to be less like robots,

00:12:05.491 --> 00:12:10.131
and we need to build the robots to be more like people,

00:12:11.020 --> 00:12:13.350
because ultimately,

00:12:13.350 --> 00:12:18.160
the only thing we need to fear is not killer robots,

00:12:18.700 --> 00:12:21.641
it's our own intellectual laziness.

00:12:22.215 --> 00:12:26.655
The only thing we need to fear is ourselves.

00:12:26.989 --> 00:12:28.441
Thank you.

00:12:28.441 --> 00:12:29.831
(Applause)