[Script Info] Title: [Events] Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text Dialogue: 0,0:00:00.00,0:00:09.55,Default,,0000,0000,0000,,{\i1}34c3 preroll music{\i0} Dialogue: 0,0:00:15.56,0:00:18.23,Default,,0000,0000,0000,,Herald: ...and I will let Katherine take\Nthe stage now. Dialogue: 0,0:00:18.59,0:00:21.43,Default,,0000,0000,0000,,Katharine Jarmul, kjam: Awesome! Well,\Nthank you so much for the introduction and Dialogue: 0,0:00:21.43,0:00:25.31,Default,,0000,0000,0000,,thank you so much for being here, taking\Nyour time. I know that Congress is really Dialogue: 0,0:00:25.31,0:00:29.80,Default,,0000,0000,0000,,exciting, so I really appreciate you\Nspending some time with me today. It's my Dialogue: 0,0:00:29.80,0:00:34.47,Default,,0000,0000,0000,,first ever Congress, so I'm also really\Nexcited and I want to meet new people. So Dialogue: 0,0:00:34.47,0:00:39.93,Default,,0000,0000,0000,,if you wanna come say hi to me later, I'm\Nsomewhat friendly, so we can maybe be Dialogue: 0,0:00:39.93,0:00:44.68,Default,,0000,0000,0000,,friends later. Today what we're going to\Ntalk about is deep learning blind spots or Dialogue: 0,0:00:44.68,0:00:49.89,Default,,0000,0000,0000,,how to fool "artificial intelligence". I\Nlike to put "artificial intelligence" in Dialogue: 0,0:00:49.89,0:00:55.27,Default,,0000,0000,0000,,quotes, because.. yeah, we'll talk about\Nthat, but I think it should be in quotes. Dialogue: 0,0:00:55.27,0:00:59.57,Default,,0000,0000,0000,,And today we're going to talk a little bit\Nabout deep learning, how it works and how Dialogue: 0,0:00:59.57,0:01:07.64,Default,,0000,0000,0000,,you can maybe fool it. So I ask us: Is AI\Nbecoming more intelligent? 
Dialogue: 0,0:01:07.64,0:01:11.08,Default,,0000,0000,0000,,And I ask this because when I open a\Nbrowser and, of course, often it's Chrome Dialogue: 0,0:01:11.08,0:01:16.98,Default,,0000,0000,0000,,and Google is already prompting me\Nfor what I should look at Dialogue: 0,0:01:16.98,0:01:20.26,Default,,0000,0000,0000,,and it knows that I work with machine\Nlearning, right? Dialogue: 0,0:01:20.26,0:01:23.83,Default,,0000,0000,0000,,And these are the headlines\Nthat I see every day: Dialogue: 0,0:01:23.83,0:01:29.40,Default,,0000,0000,0000,,"Are Computers Already Smarter Than\NHumans?" Dialogue: 0,0:01:29.40,0:01:32.29,Default,,0000,0000,0000,,If so, I think we could just pack up and\Ngo home, right? Dialogue: 0,0:01:32.29,0:01:36.14,Default,,0000,0000,0000,,Like, we fixed computers,\Nright? If a computer is smarter than me, Dialogue: 0,0:01:36.14,0:01:39.78,Default,,0000,0000,0000,,then I already fixed it, we can go home,\Nthere's no need to talk about computers Dialogue: 0,0:01:39.78,0:01:47.75,Default,,0000,0000,0000,,anymore, let's just move on with life. But\Nthat's not true, right? We know, because Dialogue: 0,0:01:47.75,0:01:51.01,Default,,0000,0000,0000,,we work with computers and we know how\Nstupid computers are sometimes. They're Dialogue: 0,0:01:51.01,0:01:55.89,Default,,0000,0000,0000,,pretty bad. Computers do only what we tell\Nthem to do, generally, so I don't think a Dialogue: 0,0:01:55.89,0:02:01.09,Default,,0000,0000,0000,,computer can think and be smarter than me.\NSo with the same types of headlines that Dialogue: 0,0:02:01.09,0:02:11.69,Default,,0000,0000,0000,,you see this, then you also see this: And\Nyeah, so Apple recently released their Dialogue: 0,0:02:11.69,0:02:17.50,Default,,0000,0000,0000,,face ID and this unlocks your phone with\Nyour face and it seems like a great idea, Dialogue: 0,0:02:17.50,0:02:22.45,Default,,0000,0000,0000,,right? You have a unique face, you have a\Nface, nobody else can take your face. 
But Dialogue: 0,0:02:22.45,0:02:28.30,Default,,0000,0000,0000,,unfortunately what we find out about\Ncomputers is that they're awful sometimes, Dialogue: 0,0:02:28.30,0:02:32.48,Default,,0000,0000,0000,,and for these women.. for this Chinese\Nwoman that owned an iPhone, Dialogue: 0,0:02:32.48,0:02:35.96,Default,,0000,0000,0000,,her coworker was able to unlock her phone. Dialogue: 0,0:02:35.96,0:02:39.32,Default,,0000,0000,0000,,And I think Hendrick and Karin\Ntalked about, if you were here for the Dialogue: 0,0:02:39.32,0:02:41.59,Default,,0000,0000,0000,,last talk ("Beeinflussung durch künstliche\NIntelligenz"). We have a lot of problems Dialogue: 0,0:02:41.59,0:02:46.38,Default,,0000,0000,0000,,in machine learning and one of them is\Nstereotypes and prejudice that are within Dialogue: 0,0:02:46.38,0:02:52.34,Default,,0000,0000,0000,,our training data or within our minds that\Nleak into our models. And perhaps they Dialogue: 0,0:02:52.34,0:02:57.74,Default,,0000,0000,0000,,didn't do adequate training data on\Ndetermining different features of Chinese Dialogue: 0,0:02:57.74,0:03:03.16,Default,,0000,0000,0000,,folks. And perhaps it's other problems\Nwith their model or their training data or Dialogue: 0,0:03:03.16,0:03:07.50,Default,,0000,0000,0000,,whatever they're trying to do. But they\Nclearly have some issues, right? So when Dialogue: 0,0:03:07.50,0:03:12.05,Default,,0000,0000,0000,,somebody asked me: "Is AI gonna take over\Nthe world and is there a super robot Dialogue: 0,0:03:12.05,0:03:17.30,Default,,0000,0000,0000,,that's gonna come and be my new, you know,\Nleader or so to speak?" I tell them we Dialogue: 0,0:03:17.30,0:03:21.71,Default,,0000,0000,0000,,can't even figure out the stuff that we\Nalready have in production. 
So if we can't Dialogue: 0,0:03:21.71,0:03:25.69,Default,,0000,0000,0000,,even figure out the stuff we already have\Nin production, I'm a little bit less Dialogue: 0,0:03:25.69,0:03:33.21,Default,,0000,0000,0000,,worried about the super robot coming to kill\Nme. That said, unfortunately the powers Dialogue: 0,0:03:33.21,0:03:38.19,Default,,0000,0000,0000,,that be, a lot of times\Nthey believe in this and they believe Dialogue: 0,0:03:38.19,0:03:44.54,Default,,0000,0000,0000,,strongly in "artificial intelligence" and\Nmachine learning. They're collecting data Dialogue: 0,0:03:44.54,0:03:50.80,Default,,0000,0000,0000,,every day about you and me and everyone\Nelse. And they're gonna use this data to Dialogue: 0,0:03:50.80,0:03:56.35,Default,,0000,0000,0000,,build even better models. This is because\Nthe revolution that we're seeing now in Dialogue: 0,0:03:56.35,0:04:02.08,Default,,0000,0000,0000,,machine learning has really not much to do\Nwith new algorithms or architectures. It Dialogue: 0,0:04:02.08,0:04:09.63,Default,,0000,0000,0000,,has a lot more to do with heavy compute\Nand with massive, massive data sets. And Dialogue: 0,0:04:09.63,0:04:15.74,Default,,0000,0000,0000,,the more that we have training data of\Npetabytes per 24 hours or even less, the Dialogue: 0,0:04:15.74,0:04:22.69,Default,,0000,0000,0000,,more we're able to essentially fix up the\Nparts that don't work so well. The Dialogue: 0,0:04:22.69,0:04:25.98,Default,,0000,0000,0000,,companies that we see here are companies\Nthat are investing heavily in machine Dialogue: 0,0:04:25.98,0:04:30.98,Default,,0000,0000,0000,,learning and AI. Part of how they're\Ninvesting heavily is, they're collecting Dialogue: 0,0:04:30.98,0:04:38.00,Default,,0000,0000,0000,,more and more data about you and me and\Neveryone else. Google and Facebook, more Dialogue: 0,0:04:38.00,0:04:42.79,Default,,0000,0000,0000,,than 1 billion active users.
I was\Nsurprised to know that in Germany the Dialogue: 0,0:04:42.79,0:04:48.16,Default,,0000,0000,0000,,desktop search traffic for Google is\Nhigher than most of the rest of the world. Dialogue: 0,0:04:48.16,0:04:53.26,Default,,0000,0000,0000,,And for Baidu they're growing with the\Nspeed that broadband is available. And so, Dialogue: 0,0:04:53.26,0:04:56.97,Default,,0000,0000,0000,,what we see is, these people are\Ncollecting this data and they also are Dialogue: 0,0:04:56.97,0:05:02.78,Default,,0000,0000,0000,,using new technologies like GPUs and TPUs\Nin new ways to parallelize workflows Dialogue: 0,0:05:02.78,0:05:09.45,Default,,0000,0000,0000,,and with this they're able to mess up\Nless, right? They're still messing up, but Dialogue: 0,0:05:09.45,0:05:14.96,Default,,0000,0000,0000,,they mess up slightly less. And they're\Nnot going to get uninterested in this Dialogue: 0,0:05:14.96,0:05:20.55,Default,,0000,0000,0000,,topic, so we need to kind of start to\Nprepare how we respond to this type of Dialogue: 0,0:05:20.55,0:05:25.86,Default,,0000,0000,0000,,behavior. One of the things that has been\Na big area of research, actually also for Dialogue: 0,0:05:25.86,0:05:30.08,Default,,0000,0000,0000,,a lot of these companies, is what we'll\Ntalk about today and that's adversarial Dialogue: 0,0:05:30.08,0:05:36.80,Default,,0000,0000,0000,,machine learning. But the first thing that\Nwe'll start with is what is behind what we Dialogue: 0,0:05:36.80,0:05:44.01,Default,,0000,0000,0000,,call AI. So most of the time when you\Nthink of AI or something like Siri and so Dialogue: 0,0:05:44.01,0:05:48.98,Default,,0000,0000,0000,,forth, you are actually potentially\Ntalking about an old-school rule-based Dialogue: 0,0:05:48.98,0:05:53.93,Default,,0000,0000,0000,,system. This is a rule, like you say a\Nparticular thing and then Siri is like: Dialogue: 0,0:05:53.93,0:05:58.13,Default,,0000,0000,0000,,"Yes, I know how to respond to this". 
And\Nwe even hard program these types of things Dialogue: 0,0:05:58.13,0:06:02.88,Default,,0000,0000,0000,,in, right? That is one version of AI:\Nessentially, it's been pre-programmed to Dialogue: 0,0:06:02.88,0:06:08.84,Default,,0000,0000,0000,,do and understand certain things. Another\Nform that usually, for example for the Dialogue: 0,0:06:08.84,0:06:12.62,Default,,0000,0000,0000,,people that are trying to build AI robots\Nand the people that are trying to build Dialogue: 0,0:06:12.62,0:06:17.11,Default,,0000,0000,0000,,what we call "general AI", so this is\Nsomething that can maybe learn like a Dialogue: 0,0:06:17.11,0:06:20.19,Default,,0000,0000,0000,,human, they'll use reinforcement learning. Dialogue: 0,0:06:20.19,0:06:22.20,Default,,0000,0000,0000,,I don't specialize in reinforcement\Nlearning. Dialogue: 0,0:06:22.20,0:06:26.40,Default,,0000,0000,0000,,But what it does is it essentially\Ntries to reward you for Dialogue: 0,0:06:26.40,0:06:32.43,Default,,0000,0000,0000,,behaviour that you're expected to do. So\Nif you complete a task, you get a Dialogue: 0,0:06:32.43,0:06:36.10,Default,,0000,0000,0000,,cookie. You complete two other tasks, you\Nget two or three more cookies depending on Dialogue: 0,0:06:36.10,0:06:41.76,Default,,0000,0000,0000,,how important the task is. And this will\Nhelp you learn how to behave to get more Dialogue: 0,0:06:41.76,0:06:45.99,Default,,0000,0000,0000,,points and it's used a lot in robots and\Ngaming and so forth. And I'm not really Dialogue: 0,0:06:45.99,0:06:49.34,Default,,0000,0000,0000,,going to talk about that today because\Nmost of that is still not really something Dialogue: 0,0:06:49.34,0:06:54.88,Default,,0000,0000,0000,,that you or I interact with. Well, what I\Nam gonna talk about today is neural Dialogue: 0,0:06:54.88,0:06:59.68,Default,,0000,0000,0000,,networks, or as some people like to call\Nthem "deep learning", right?
So deep Dialogue: 0,0:06:59.68,0:07:04.12,Default,,0000,0000,0000,,learning won the neural network versus deep\Nlearning battle a while ago. So here's an Dialogue: 0,0:07:04.12,0:07:09.95,Default,,0000,0000,0000,,example neural network: we have an input\Nlayer and that's where we essentially make Dialogue: 0,0:07:09.95,0:07:14.55,Default,,0000,0000,0000,,a quantitative version of whatever our\Ndata is. So we need to make it into Dialogue: 0,0:07:14.55,0:07:19.89,Default,,0000,0000,0000,,numbers. Then we have a hidden layer and\Nwe might have multiple hidden layers. And Dialogue: 0,0:07:19.89,0:07:23.76,Default,,0000,0000,0000,,depending on how deep our network is, or a\Nnetwork inside a network, right, which is Dialogue: 0,0:07:23.76,0:07:28.18,Default,,0000,0000,0000,,possible. We might have very many\Ndifferent layers there and they may even Dialogue: 0,0:07:28.18,0:07:33.54,Default,,0000,0000,0000,,act in cyclical ways. And then that's\Nwhere all the weights and the variables Dialogue: 0,0:07:33.54,0:07:39.26,Default,,0000,0000,0000,,and the learning happens. So that\Nholds a lot of information and data that Dialogue: 0,0:07:39.26,0:07:43.98,Default,,0000,0000,0000,,we eventually want to train there. And\Nfinally we have an output layer. And Dialogue: 0,0:07:43.98,0:07:47.53,Default,,0000,0000,0000,,depending on the network and what we're\Ntrying to do the output layer can vary Dialogue: 0,0:07:47.53,0:07:51.54,Default,,0000,0000,0000,,between something that looks like the\Ninput, like for example if we want to Dialogue: 0,0:07:51.54,0:07:55.72,Default,,0000,0000,0000,,machine translate, then I want the output\Nto look like the input, right, I want it Dialogue: 0,0:07:55.72,0:07:59.91,Default,,0000,0000,0000,,to just be in a different language, or the\Noutput could be a different class. It can Dialogue: 0,0:07:59.91,0:08:05.75,Default,,0000,0000,0000,,be, you know, this is a car or this is a\Ntrain and so forth.
So it really depends Dialogue: 0,0:08:05.75,0:08:10.61,Default,,0000,0000,0000,,what you're trying to solve, but the\Noutput layer gives us the answer. And how Dialogue: 0,0:08:10.61,0:08:17.16,Default,,0000,0000,0000,,we train this is, we use backpropagation.\NBackpropagation is nothing new and neither Dialogue: 0,0:08:17.16,0:08:21.14,Default,,0000,0000,0000,,is one of the most popular methods to do\Nso, which is called stochastic gradient Dialogue: 0,0:08:21.14,0:08:26.46,Default,,0000,0000,0000,,descent. What we do when we go through\Nthat part of the training, is we go from Dialogue: 0,0:08:26.46,0:08:29.76,Default,,0000,0000,0000,,the output layer and we go backwards\Nthrough the network. That's why it's Dialogue: 0,0:08:29.76,0:08:34.83,Default,,0000,0000,0000,,called backpropagation, right? And as we\Ngo backwards through the network, in the Dialogue: 0,0:08:34.83,0:08:39.14,Default,,0000,0000,0000,,most simple way, we upvote and downvote\Nwhat's working and what's not working. So Dialogue: 0,0:08:39.14,0:08:42.73,Default,,0000,0000,0000,,we say: "oh you got it right, you get a\Nlittle bit more importance", or "you got Dialogue: 0,0:08:42.73,0:08:46.04,Default,,0000,0000,0000,,it wrong, you get a little bit less\Nimportance". And eventually we hope Dialogue: 0,0:08:46.04,0:08:50.48,Default,,0000,0000,0000,,over time, that they essentially correct\Neach other's errors enough that we get a Dialogue: 0,0:08:50.48,0:08:57.55,Default,,0000,0000,0000,,right answer. So that's a very general\Noverview of how it works and the cool Dialogue: 0,0:08:57.55,0:09:02.72,Default,,0000,0000,0000,,thing is: Because it works that way, we\Ncan fool it. And people have been Dialogue: 0,0:09:02.72,0:09:08.27,Default,,0000,0000,0000,,researching ways to fool it for quite some\Ntime. 
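The training loop just described (a forward pass through input, hidden, and output layers, then backpropagation with stochastic gradient descent upvoting and downvoting weights against the error) can be sketched in plain Python. This is a minimal toy illustration, not anything from the talk's slides; the 2-3-1 network size, the XOR task, the learning rate, and the epoch count are all made-up choices:

```python
import math
import random

random.seed(0)

# Tiny 2-3-1 network trained on XOR with plain stochastic gradient descent.
# All weights start random ("high error"); each backpropagation pass then
# upvotes and downvotes them against the error, as described in the talk.
XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
H = 3                                        # hidden units
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [math.tanh(sum(W1[i][j] * x[j] for j in range(2)) + b1[i])
         for i in range(H)]
    return h, sigmoid(sum(w2[i] * h[i] for i in range(H)) + b2)

def mean_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in XOR) / len(XOR)

initial_loss = mean_loss()
lr = 0.5
for _ in range(2000):
    for x, y in XOR:                         # one SGD step per example
        h, o = forward(x)
        d_o = 2 * (o - y) * o * (1 - o)      # error signal at the output
        for i in range(H):
            d_h = d_o * w2[i] * (1 - h[i] ** 2)  # error pushed back to unit i
            w2[i] -= lr * d_o * h[i]
            for j in range(2):
                W1[i][j] -= lr * d_h * x[j]
            b1[i] -= lr * d_h
        b2 -= lr * d_o
final_loss = mean_loss()
```

After training, the weights have drifted from their random starting point toward values that lower the mean squared error, which is all that "learning" means here.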
So I'll give you a brief overview of Dialogue: 0,0:09:08.27,0:09:13.29,Default,,0000,0000,0000,,the history of this field, so we can kind\Nof know where we're working from and maybe Dialogue: 0,0:09:13.29,0:09:19.22,Default,,0000,0000,0000,,hopefully then where we're going to. In\N2005 came one of the first and most important Dialogue: 0,0:09:19.22,0:09:24.74,Default,,0000,0000,0000,,papers to approach adversarial learning\Nand it was written by a group of Dialogue: 0,0:09:24.74,0:09:29.63,Default,,0000,0000,0000,,researchers and they wanted to see, if\Nthey could act as an informed attacker and Dialogue: 0,0:09:29.63,0:09:34.44,Default,,0000,0000,0000,,attack a linear classifier. So this is\Njust a spam filter and they're like: can I Dialogue: 0,0:09:34.44,0:09:37.85,Default,,0000,0000,0000,,send spam to my friend? I don't know why\Nthey would want to do this, but: "Can I Dialogue: 0,0:09:37.85,0:09:43.21,Default,,0000,0000,0000,,send spam to my friend, if I try testing\Nout a few ideas?" And what they were able Dialogue: 0,0:09:43.21,0:09:47.64,Default,,0000,0000,0000,,to show is: Yes, rather than just, you\Nknow, trial and error which anybody can do Dialogue: 0,0:09:47.64,0:09:52.12,Default,,0000,0000,0000,,or a brute force attack of just like send\Na thousand emails and see what happens, Dialogue: 0,0:09:52.12,0:09:56.37,Default,,0000,0000,0000,,they were able to craft a few algorithms\Nthat they could use to try and find Dialogue: 0,0:09:56.37,0:10:03.24,Default,,0000,0000,0000,,important words to change, to make it go\Nthrough the spam filter. In 2007 NIPS, Dialogue: 0,0:10:03.24,0:10:08.02,Default,,0000,0000,0000,,which is a very popular machine learning\Nconference, had one of their first all-day Dialogue: 0,0:10:08.02,0:10:12.93,Default,,0000,0000,0000,,workshops on computer security.
And when\Nthey did so, they had a bunch of different Dialogue: 0,0:10:12.93,0:10:16.78,Default,,0000,0000,0000,,people that were working on machine\Nlearning in computer security: From Dialogue: 0,0:10:16.78,0:10:21.43,Default,,0000,0000,0000,,malware detection, to network intrusion\Ndetection, to of course spam. And they Dialogue: 0,0:10:21.43,0:10:25.19,Default,,0000,0000,0000,,also had a few talks on this type of\Nadversarial learning. So how do you act as Dialogue: 0,0:10:25.19,0:10:29.98,Default,,0000,0000,0000,,an adversary to your own model? And then\Nhow do you learn how to counter that Dialogue: 0,0:10:29.98,0:10:35.65,Default,,0000,0000,0000,,adversary? In 2013 there was a really\Ngreat paper that got a lot of people's Dialogue: 0,0:10:35.65,0:10:40.00,Default,,0000,0000,0000,,attention called "Poisoning Attacks\Nagainst Support Vector Machines". Now Dialogue: 0,0:10:40.00,0:10:45.29,Default,,0000,0000,0000,,support vector machines are essentially\Nusually a linear classifier and we use Dialogue: 0,0:10:45.29,0:10:50.12,Default,,0000,0000,0000,,them a lot to say, "this is a member of\Nthis class, that, or another", when we Dialogue: 0,0:10:50.12,0:10:54.94,Default,,0000,0000,0000,,pertain to text. So I have a text and I\Nwant to know what the text is about or I Dialogue: 0,0:10:54.94,0:10:58.61,Default,,0000,0000,0000,,want to know if it's a positive or\Nnegative sentiment, a lot of times I'll Dialogue: 0,0:10:58.61,0:11:05.16,Default,,0000,0000,0000,,use a support vector machine. We call them\NSVM's as well. Battista Biggio was the Dialogue: 0,0:11:05.16,0:11:08.32,Default,,0000,0000,0000,,main researcher and he has actually\Nwritten quite a lot about these poisoning Dialogue: 0,0:11:08.32,0:11:15.57,Default,,0000,0000,0000,,attacks and he poisoned the training data.\NSo for a lot of these systems, sometimes Dialogue: 0,0:11:15.57,0:11:20.82,Default,,0000,0000,0000,,they have active learning. 
This means, you\Nor I, when we classify our emails as spam, Dialogue: 0,0:11:20.82,0:11:26.29,Default,,0000,0000,0000,,we're helping train the network. So he\Npoisoned the training data and was able to Dialogue: 0,0:11:26.29,0:11:32.36,Default,,0000,0000,0000,,show that by poisoning it in a particular\Nway, that he was able to then send spam Dialogue: 0,0:11:32.36,0:11:37.81,Default,,0000,0000,0000,,email because he knew what words were then\Nbenign, essentially. He went on to study a Dialogue: 0,0:11:37.81,0:11:43.22,Default,,0000,0000,0000,,few other things about biometric data if\Nyou're interested in biometrics. But then Dialogue: 0,0:11:43.22,0:11:49.33,Default,,0000,0000,0000,,in 2014 Christian Szegedy, Ian Goodfellow,\Nand a few other main researchers at Google Dialogue: 0,0:11:49.33,0:11:55.35,Default,,0000,0000,0000,,Brain released "Intriguing Properties of\NNeural Networks." That really became the Dialogue: 0,0:11:55.35,0:12:00.04,Default,,0000,0000,0000,,explosion of what we're seeing today in\Nadversarial learning. And what they were Dialogue: 0,0:12:00.04,0:12:04.63,Default,,0000,0000,0000,,able to do, is they were able to say "We\Nbelieve there's linear properties of these Dialogue: 0,0:12:04.63,0:12:08.79,Default,,0000,0000,0000,,neural networks, even if they're not\Nnecessarily linear networks. Dialogue: 0,0:12:08.79,0:12:15.56,Default,,0000,0000,0000,,And we believe we can exploit them to fool\Nthem". And they first introduced then the Dialogue: 0,0:12:15.56,0:12:23.19,Default,,0000,0000,0000,,fast gradient sign method, which we'll\Ntalk about later today. So how does it Dialogue: 0,0:12:23.19,0:12:28.83,Default,,0000,0000,0000,,work? First I want us to get a little bit\Nof an intuition around how this works. Dialogue: 0,0:12:28.83,0:12:35.31,Default,,0000,0000,0000,,Here's a graphic of gradient descent. And\Nin gradient descent we have this vertical Dialogue: 0,0:12:35.31,0:12:40.34,Default,,0000,0000,0000,,axis is our cost function. 
And what we're\Ntrying to do is: We're trying to minimize Dialogue: 0,0:12:40.34,0:12:47.40,Default,,0000,0000,0000,,cost, we want to minimize the error. And\Nso when we start out, we just chose random Dialogue: 0,0:12:47.40,0:12:51.79,Default,,0000,0000,0000,,weights and variables, so all of our\Nhidden layers, they just have maybe random Dialogue: 0,0:12:51.79,0:12:57.34,Default,,0000,0000,0000,,weights or random distribution. And then\Nwe want to get to a place where the Dialogue: 0,0:12:57.34,0:13:01.74,Default,,0000,0000,0000,,weights have meaning, right? We want our\Nnetwork to know something, even if it's Dialogue: 0,0:13:01.74,0:13:08.74,Default,,0000,0000,0000,,just a mathematical pattern, right? So we\Nstart in the high area of the graph, or Dialogue: 0,0:13:08.74,0:13:13.82,Default,,0000,0000,0000,,the reddish area, and that's where we\Nstarted, we have high error there. And Dialogue: 0,0:13:13.82,0:13:21.21,Default,,0000,0000,0000,,then we try to get to the lowest area of\Nthe graph, or here the dark blue that is Dialogue: 0,0:13:21.21,0:13:26.89,Default,,0000,0000,0000,,right about here. But sometimes what\Nhappens: As we learn, as we go through Dialogue: 0,0:13:26.89,0:13:33.30,Default,,0000,0000,0000,,epochs and training, we're moving slowly\Ndown and hopefully we're optimizing. But Dialogue: 0,0:13:33.30,0:13:37.37,Default,,0000,0000,0000,,what we might end up in instead of this\Nglobal minimum, we might end up in the Dialogue: 0,0:13:37.37,0:13:43.80,Default,,0000,0000,0000,,local minimum which is the other trail.\NAnd that's fine, because it's still zero Dialogue: 0,0:13:43.80,0:13:49.89,Default,,0000,0000,0000,,error, right? So we're still probably\Ngoing to be able to succeed, but we might Dialogue: 0,0:13:49.89,0:13:56.14,Default,,0000,0000,0000,,not get the best answer all the time. 
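The cost-surface story above can be reproduced in one dimension. In this sketch (the cost function and both starting points are invented purely for illustration), the same gradient descent update, started from two different "random" places, settles into a local minimum in one case and reaches the lower global minimum in the other:

```python
# One-dimensional picture of the cost surface from the talk: gradient
# descent from two different starting points finds either a local or the
# global minimum of the same cost function.
def cost(x):
    return (x * x - 1) ** 2 + 0.2 * x

def grad(x):                    # derivative of the cost
    return 4 * x * (x * x - 1) + 0.2

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):
        x -= lr * grad(x)       # step downhill along the slope
    return x

x_local = descend(1.5)          # trapped near x ~ +0.97 (local minimum)
x_global = descend(-1.5)        # reaches x ~ -1.03 (global minimum)
```

Both runs end where the gradient is zero, but only one of them found the best answer; the other is exactly the kind of weak spot an adversary can lean on.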
What\Nan adversarial attack does, in the most basic Dialogue: 0,0:13:56.14,0:14:01.98,Default,,0000,0000,0000,,of ways, is essentially try to push the\Nerror rate back up the hill for as many Dialogue: 0,0:14:01.98,0:14:07.71,Default,,0000,0000,0000,,units as it can. So it essentially tries\Nto increase the error slowly through Dialogue: 0,0:14:07.71,0:14:14.60,Default,,0000,0000,0000,,perturbations. And by disrupting, let's\Nsay, the weakest links like the one that Dialogue: 0,0:14:14.60,0:14:19.06,Default,,0000,0000,0000,,did not find the global minimum but\Ninstead found a local minimum, we can Dialogue: 0,0:14:19.06,0:14:23.07,Default,,0000,0000,0000,,hopefully fool the network, because we're\Nfinding those weak spots and we're Dialogue: 0,0:14:23.07,0:14:25.63,Default,,0000,0000,0000,,capitalizing on them, essentially. Dialogue: 0,0:14:31.25,0:14:34.14,Default,,0000,0000,0000,,So what does an adversarial example\Nactually look like? Dialogue: 0,0:14:34.14,0:14:37.43,Default,,0000,0000,0000,,You may have already seen this\Nbecause it's very popular on the Dialogue: 0,0:14:37.43,0:14:45.22,Default,,0000,0000,0000,,Twittersphere and a few other places, but\Nthis was a group of researchers at MIT. It Dialogue: 0,0:14:45.22,0:14:51.06,Default,,0000,0000,0000,,was debated whether you could do adverse..\Nadversarial learning in the real world. A Dialogue: 0,0:14:51.06,0:14:57.34,Default,,0000,0000,0000,,lot of the research has just been a still\Nimage. And what they were able to show: Dialogue: 0,0:14:57.34,0:15:03.08,Default,,0000,0000,0000,,They created a 3D-printed turtle. I mean\Nit looks like a turtle to you as well, Dialogue: 0,0:15:03.08,0:15:09.91,Default,,0000,0000,0000,,correct? And this 3D-printed turtle, according to the\NInception Network, which is a very popular Dialogue: 0,0:15:09.91,0:15:16.79,Default,,0000,0000,0000,,computer vision network, is a rifle and it\Nis a rifle in every angle that you can
And the way they were able to do this\Nand, I don't know the next time it goes Dialogue: 0,0:15:21.96,0:15:25.91,Default,,0000,0000,0000,,around you can see perhaps, and it's a\Nlittle bit easier on the video which I'll Dialogue: 0,0:15:25.91,0:15:29.79,Default,,0000,0000,0000,,have posted, I'll share at the end, you\Ncan see perhaps that there's a slight Dialogue: 0,0:15:29.79,0:15:35.53,Default,,0000,0000,0000,,discoloration of the shell. They messed\Nwith the texture. By messing with this Dialogue: 0,0:15:35.53,0:15:39.91,Default,,0000,0000,0000,,texture and the colors they were able to\Nfool the neural network, they were able to Dialogue: 0,0:15:39.91,0:15:45.26,Default,,0000,0000,0000,,activate different neurons that were not\Nsupposed to be activated. Units, I should Dialogue: 0,0:15:45.26,0:15:51.13,Default,,0000,0000,0000,,say. So what we see here is, yeah, it can\Nbe done in the real world, and when I saw Dialogue: 0,0:15:51.13,0:15:56.34,Default,,0000,0000,0000,,this I started getting really excited.\NBecause, video surveillance is a real Dialogue: 0,0:15:56.34,0:16:02.53,Default,,0000,0000,0000,,thing, right? So if we can start fooling\N3D objects, we can perhaps start fooling Dialogue: 0,0:16:02.53,0:16:08.04,Default,,0000,0000,0000,,other things in the real world that we\Nwould like to fool. Dialogue: 0,0:16:08.04,0:16:12.44,Default,,0000,0000,0000,,{\i1}applause{\i0} Dialogue: 0,0:16:12.44,0:16:19.15,Default,,0000,0000,0000,,kjam: So why do adversarial examples\Nexist? We're going to talk a little bit Dialogue: 0,0:16:19.15,0:16:23.88,Default,,0000,0000,0000,,about some things that are approximations\Nof what's actually happening, so please Dialogue: 0,0:16:23.88,0:16:27.61,Default,,0000,0000,0000,,forgive me for not being always exact, but\NI would rather us all have a general Dialogue: 0,0:16:27.61,0:16:33.66,Default,,0000,0000,0000,,understanding of what's happening. 
Across\Nthe top row we have an input layer and Dialogue: 0,0:16:33.66,0:16:39.48,Default,,0000,0000,0000,,these images to the left, we can see, are\Nthe source images and this source image is Dialogue: 0,0:16:39.48,0:16:43.38,Default,,0000,0000,0000,,like a piece of farming equipment or\Nsomething. And on the right we have our Dialogue: 0,0:16:43.38,0:16:48.80,Default,,0000,0000,0000,,guide image. This is what we're trying to\Nget the network to see: we want it to Dialogue: 0,0:16:48.80,0:16:55.07,Default,,0000,0000,0000,,misclassify this farm equipment as a pink\Nbird. So what these researchers did is Dialogue: 0,0:16:55.07,0:16:59.02,Default,,0000,0000,0000,,they targeted different layers of the\Nnetwork. And they said: "Okay, we're going Dialogue: 0,0:16:59.02,0:17:02.41,Default,,0000,0000,0000,,to use this method to target this\Nparticular layer and we'll see what Dialogue: 0,0:17:02.41,0:17:07.57,Default,,0000,0000,0000,,happens". And so as they targeted these\Ndifferent layers you can see what's Dialogue: 0,0:17:07.57,0:17:12.11,Default,,0000,0000,0000,,happening on the internal visualization.\NNow neural networks can't see, right? Dialogue: 0,0:17:12.11,0:17:17.94,Default,,0000,0000,0000,,They're looking at matrices of numbers but\Nwhat we can do is we can use those Dialogue: 0,0:17:17.94,0:17:26.56,Default,,0000,0000,0000,,internal values to try and see with our\Nhuman eyes what they are learning. And we Dialogue: 0,0:17:26.56,0:17:31.37,Default,,0000,0000,0000,,can see here clearly inside the network,\Nwe no longer see the farming equipment, Dialogue: 0,0:17:31.37,0:17:39.55,Default,,0000,0000,0000,,right? We see a pink bird. And this is not\Nvisible to our human eyes.
Now if you Dialogue: 0,0:17:39.55,0:17:43.57,Default,,0000,0000,0000,,really study and if you enlarge the image\Nyou can start to see okay there's a little Dialogue: 0,0:17:43.57,0:17:48.19,Default,,0000,0000,0000,,bit of pink here or greens, I don't know\Nwhat's happening, but we can still see it Dialogue: 0,0:17:48.19,0:17:56.51,Default,,0000,0000,0000,,in the neural network we have tricked. Now\Npeople don't exactly know yet why these Dialogue: 0,0:17:56.51,0:18:03.16,Default,,0000,0000,0000,,blind spots exist. So it's still an area\Nof active research exactly why we can fool Dialogue: 0,0:18:03.16,0:18:09.43,Default,,0000,0000,0000,,neural networks so easily. There are some\Nprominent researchers that believe that Dialogue: 0,0:18:09.43,0:18:14.45,Default,,0000,0000,0000,,neural networks are essentially very\Nlinear and that we can use this simple Dialogue: 0,0:18:14.45,0:18:20.84,Default,,0000,0000,0000,,linearity to misclassify to jump into\Nanother area. But there are others that Dialogue: 0,0:18:20.84,0:18:24.82,Default,,0000,0000,0000,,believe that there's these pockets or\Nblind spots and that we can then find Dialogue: 0,0:18:24.82,0:18:28.50,Default,,0000,0000,0000,,these blind spots where these neurons\Nreally are the weakest links and they Dialogue: 0,0:18:28.50,0:18:33.16,Default,,0000,0000,0000,,maybe even haven't learned anything and if\Nwe change their activation then we can Dialogue: 0,0:18:33.16,0:18:37.58,Default,,0000,0000,0000,,fool the network easily. So this is still\Nan area of active research and let's say Dialogue: 0,0:18:37.58,0:18:44.32,Default,,0000,0000,0000,,you're looking for your thesis, this would\Nbe a pretty neat thing to work on. So Dialogue: 0,0:18:44.32,0:18:49.40,Default,,0000,0000,0000,,we'll get into just a brief overview of\Nsome of the math behind the most popular Dialogue: 0,0:18:49.40,0:18:55.57,Default,,0000,0000,0000,,methods. 
First we have the fast gradient\Nsign method and that is what was used in the Dialogue: 0,0:18:55.57,0:18:59.95,Default,,0000,0000,0000,,initial paper and now there have been many\Niterations on it. And what we do is we Dialogue: 0,0:18:59.95,0:19:05.12,Default,,0000,0000,0000,,have our same cost function, so this is\Nthe same way that we're trying to train Dialogue: 0,0:19:05.12,0:19:13.11,Default,,0000,0000,0000,,our network and it's trying to learn. And\Nwe take the gradient sign of that. And it's Dialogue: 0,0:19:13.11,0:19:16.33,Default,,0000,0000,0000,,okay if you're not\Nused to doing vector calculus, and Dialogue: 0,0:19:16.33,0:19:20.25,Default,,0000,0000,0000,,especially not without a pen and paper in\Nfront of you, but the way to think of it is: we're Dialogue: 0,0:19:20.25,0:19:24.14,Default,,0000,0000,0000,,essentially trying to\Ncalculate some approximation of a Dialogue: 0,0:19:24.14,0:19:29.70,Default,,0000,0000,0000,,derivative of the function. And this can\Nkind of tell us where it is going. And if Dialogue: 0,0:19:29.70,0:19:37.30,Default,,0000,0000,0000,,we know where it's going, we can maybe\Nanticipate that and change. And then to Dialogue: 0,0:19:37.30,0:19:41.48,Default,,0000,0000,0000,,create the adversarial images, we\Ntake the original input plus a small Dialogue: 0,0:19:41.48,0:19:48.77,Default,,0000,0000,0000,,number epsilon times that gradient's sign.\NFor the Jacobian Saliency Map, this is a Dialogue: 0,0:19:48.77,0:19:55.01,Default,,0000,0000,0000,,newer method and it's a little bit more\Neffective, but it takes a little bit more Dialogue: 0,0:19:55.01,0:20:02.25,Default,,0000,0000,0000,,compute.
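The fast gradient sign method described above (original input plus a small epsilon times the sign of the cost gradient) can be sketched on a toy model. The logistic "network", its weights, the input, and the epsilon here are all made-up illustration values, not from the talk:

```python
import math

# Fast gradient sign method on a toy logistic classifier:
# x_adv = x + eps * sign(dJ/dx).
# The weights, the input, the label and eps are made-up illustration values.
w, b = [2.0, -1.0], 0.0

def predict(x):                          # probability of class 1
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

def fgsm(x, y, eps):
    p = predict(x)
    grad = [(p - y) * wi for wi in w]    # dJ/dx for the cross-entropy loss
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

x, y = [0.3, -0.2], 1
p_clean = predict(x)         # about 0.69: correctly classified as class 1
x_adv = fgsm(x, y, eps=0.5)
p_adv = predict(x_adv)       # about 0.33: the same model now says class 0
```

With a real image classifier, epsilon is kept tiny so the change stays invisible; this toy model needs a larger step, but the recipe is the same single line.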
This Jacobian Saliency Map uses a\NJacobian matrix and if you remember also, Dialogue: 0,0:20:02.25,0:20:07.65,Default,,0000,0000,0000,,and it's okay if you don't, a Jacobian\Nmatrix will look at the full derivative of Dialogue: 0,0:20:07.65,0:20:12.05,Default,,0000,0000,0000,,a function, so you take the full\Nderivative of a cost function Dialogue: 0,0:20:12.05,0:20:18.27,Default,,0000,0000,0000,,at that vector, and it gives you a matrix\Nthat is a pointwise approximation, Dialogue: 0,0:20:18.27,0:20:22.55,Default,,0000,0000,0000,,if the function is differentiable\Nat that input vector. Don't Dialogue: 0,0:20:22.55,0:20:28.32,Default,,0000,0000,0000,,worry, you can review this later too. We\Nthen use the Jacobian matrix to create Dialogue: 0,0:20:28.32,0:20:33.06,Default,,0000,0000,0000,,this saliency map the same way, where we're\Nessentially trying some sort of linear Dialogue: 0,0:20:33.06,0:20:38.83,Default,,0000,0000,0000,,approximation, or pointwise approximation,\Nand we then want to find two pixels that Dialogue: 0,0:20:38.83,0:20:43.86,Default,,0000,0000,0000,,we can perturb that cause the most\Ndisruption. And then we continue to the Dialogue: 0,0:20:43.86,0:20:48.97,Default,,0000,0000,0000,,next. Unfortunately this is currently an\NO(n²) problem, but there are a few people Dialogue: 0,0:20:48.97,0:20:53.91,Default,,0000,0000,0000,,that are trying to essentially find ways\Nthat we can approximate this and make it Dialogue: 0,0:20:53.91,0:21:01.32,Default,,0000,0000,0000,,faster. So maybe now you want to fool a\Nnetwork too and I hope you do, because Dialogue: 0,0:21:01.32,0:21:06.58,Default,,0000,0000,0000,,that's what we're going to talk about.\NFirst you need to pick a problem or a Dialogue: 0,0:21:06.58,0:21:13.46,Default,,0000,0000,0000,,network type you may already know.
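The saliency-map idea above, heavily simplified, looks like the sketch below. Real JSMA builds its saliency map from the full Jacobian and scores pairs of input features, which is what makes it O(n²); this toy version just nudges the single most salient feature each step, and the linear model, the input, and the step size theta are invented for illustration:

```python
import math

# Heavily simplified saliency-map attack: repeatedly perturb the single
# most salient input feature until the class flips. (Real JSMA scores
# pairs of features from the full Jacobian, hence O(n^2).)
# Model weights, input and step size theta are made-up illustration values.
w, b = [2.0, -1.0, 0.5, 0.1], 0.0

def p_class1(x):
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

def saliency_attack(x, theta=0.5, max_steps=10):
    x = list(x)
    for _ in range(max_steps):
        if p_class1(x) < 0.5:            # misclassified as class 0: done
            break
        # For this linear model dP(class 1)/dx_i is proportional to w_i,
        # so the most salient feature is simply the one with largest |w_i|.
        i = max(range(len(x)), key=lambda k: abs(w[k]))
        x[i] -= theta * (1 if w[i] > 0 else -1)   # push class-1 score down
    return x

x_clean = [0.5, 0.0, 0.5, 0.5]
x_adv = saliency_attack(x_clean)   # only the most salient feature moved
```

The contrast with the fast gradient sign method is that only a few salient features change by a large amount, instead of every feature changing by a small amount.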
But you\Nmay want to investigate what perhaps is Dialogue: 0,0:21:13.46,0:21:19.02,Default,,0000,0000,0000,,this company using, what perhaps is this\Nmethod using and do a little bit of Dialogue: 0,0:21:19.02,0:21:23.73,Default,,0000,0000,0000,,research, because that's going to help\Nyou. Then you want to research state-of- Dialogue: 0,0:21:23.73,0:21:28.61,Default,,0000,0000,0000,,the-art methods and this is like a typical\Nresearch statement that you have a new Dialogue: 0,0:21:28.61,0:21:32.36,Default,,0000,0000,0000,,state-of-the-art method, but the good news\Nis that the state-of-the-art from two to Dialogue: 0,0:21:32.36,0:21:38.18,Default,,0000,0000,0000,,three years ago is most likely in\Nproduction or in systems today. So once Dialogue: 0,0:21:38.18,0:21:44.48,Default,,0000,0000,0000,,they find ways to speed it up, some\Napproximation of that is deployed. And a Dialogue: 0,0:21:44.48,0:21:48.28,Default,,0000,0000,0000,,lot of times these are then publicly\Navailable models, so a lot of times, if Dialogue: 0,0:21:48.28,0:21:51.48,Default,,0000,0000,0000,,you're already working with a deep\Nlearning framework it'll come Dialogue: 0,0:21:51.48,0:21:56.45,Default,,0000,0000,0000,,prepackaged with a few of the different\Npopular models, so you can even use that. Dialogue: 0,0:21:56.45,0:22:00.69,Default,,0000,0000,0000,,If you're already building neural networks,\Nof course you can build your own.
An Dialogue: 0,0:22:00.69,0:22:05.51,Default,,0000,0000,0000,,optional step, but one that might be\Nrecommended, is to fine-tune your model Dialogue: 0,0:22:05.51,0:22:10.75,Default,,0000,0000,0000,,and what this means is to essentially take\Na new training data set, maybe data that Dialogue: 0,0:22:10.75,0:22:15.49,Default,,0000,0000,0000,,you think this company is using or that\Nyou think this network is using, and Dialogue: 0,0:22:15.49,0:22:19.30,Default,,0000,0000,0000,,you're going to remove the last few layers\Nof the neural network and you're going to Dialogue: 0,0:22:19.30,0:22:24.81,Default,,0000,0000,0000,,retrain it. So you essentially are nicely\Npiggybacking on the work of the Dialogue: 0,0:22:24.81,0:22:30.65,Default,,0000,0000,0000,,pretrained model and you're using the final\Nlayers to create finesse. This essentially Dialogue: 0,0:22:30.65,0:22:37.17,Default,,0000,0000,0000,,makes your model better at the task that\Nyou have for it. Finally you then use a Dialogue: 0,0:22:37.17,0:22:40.26,Default,,0000,0000,0000,,library, and we'll go through a few of\Nthem, but some of the ones that I have Dialogue: 0,0:22:40.26,0:22:46.45,Default,,0000,0000,0000,,used myself are cleverhans, DeepFool and\Ndeep-pwning, and these all come with nice Dialogue: 0,0:22:46.45,0:22:51.58,Default,,0000,0000,0000,,built-in features for you to use for, let's\Nsay, the fast gradient sign method, the Dialogue: 0,0:22:51.58,0:22:56.74,Default,,0000,0000,0000,,Jacobian saliency map and a few other\Nmethods that are available. Finally, it's Dialogue: 0,0:22:56.74,0:23:01.55,Default,,0000,0000,0000,,not always going to work, so depending on\Nyour source and your target, you won't Dialogue: 0,0:23:01.55,0:23:05.84,Default,,0000,0000,0000,,always necessarily find a match.
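[Editor's note: the fine-tuning step she describes can be sketched in miniature. A real workflow would load, say, a pretrained Keras model, chop off its last layers, and retrain only a new head; in this toy stand-in the "pretrained" part is a frozen ReLU feature layer with fixed weights, and only the final logistic layer is trained on the new data. The frozen weights and the task are invented for illustration.]

```python
import math
import random

def sigmoid(z):
    # Numerically safe logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# The "pretrained" lower layers: frozen, never updated during training.
FROZEN = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]

def features(x):
    # Frozen ReLU feature extractor standing in for the kept layers.
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in FROZEN]

# New task data: label is 1 exactly when x0 + x1 > 0.
random.seed(1)
data = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(300)]
labels = [1 if x[0] + x[1] > 0 else 0 for x in data]

# Fine-tune: train only the new head (a logistic layer) with SGD.
head_w, head_b, lr = [0.0] * len(FROZEN), 0.0, 0.1
for _ in range(100):
    for x, y in zip(data, labels):
        f = features(x)
        p = sigmoid(sum(w * fi for w, fi in zip(head_w, f)) + head_b)
        err = p - y                 # cross-entropy gradient wrt the logit
        head_w = [w - lr * err * fi for w, fi in zip(head_w, f)]
        head_b -= lr * err

def predict(x):
    f = features(x)
    return sigmoid(sum(w * fi for w, fi in zip(head_w, f)) + head_b) > 0.5

accuracy = sum(predict(x) == (y == 1) for x, y in zip(data, labels)) / len(data)
```

Because the frozen features already carry the useful structure, only a small head needs retraining, which is exactly why fine-tuning is so much cheaper than training from scratch.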
What\Nresearchers have shown is it's a lot Dialogue: 0,0:23:05.84,0:23:10.95,Default,,0000,0000,0000,,easier to fool a network that a cat is a\Ndog than it is to fool a network that a Dialogue: 0,0:23:10.95,0:23:16.03,Default,,0000,0000,0000,,cat is an airplane. And we can make this\Nintuitive: you might Dialogue: 0,0:23:16.03,0:23:21.83,Default,,0000,0000,0000,,want to pick an input that's not super\Ndissimilar from where you want to go, but Dialogue: 0,0:23:21.83,0:23:28.26,Default,,0000,0000,0000,,is dissimilar enough. And you want to test\Nit locally and then finally test the ones Dialogue: 0,0:23:28.26,0:23:38.15,Default,,0000,0000,0000,,with the highest misclassification rates on\Nthe target network. And you might say Dialogue: 0,0:23:38.15,0:23:44.23,Default,,0000,0000,0000,,Katharine, or you can call me kjam, that's\Nokay. You might say: "I don't know what Dialogue: 0,0:23:44.23,0:23:50.05,Default,,0000,0000,0000,,the person is using", "I don't know what\Nthe company is using" and I will say "it's Dialogue: 0,0:23:50.05,0:23:56.75,Default,,0000,0000,0000,,okay", because what's been proven: you can\Nattack a black-box model, you do not have Dialogue: 0,0:23:56.75,0:24:01.95,Default,,0000,0000,0000,,to know what they're using, you do not\Nhave to know exactly how it works, you Dialogue: 0,0:24:01.95,0:24:06.76,Default,,0000,0000,0000,,don't even have to know their training\Ndata, because what you can do is if it Dialogue: 0,0:24:06.76,0:24:12.71,Default,,0000,0000,0000,,has... okay, addendum: it has to have some\NAPI you can interface with. But if it has Dialogue: 0,0:24:12.71,0:24:18.13,Default,,0000,0000,0000,,an API you can interface with, or even any\NAPI you can interact with that uses the Dialogue: 0,0:24:18.13,0:24:24.84,Default,,0000,0000,0000,,same type of learning, you can collect\Ntraining data by querying the API.
And Dialogue: 0,0:24:24.84,0:24:28.70,Default,,0000,0000,0000,,then you're training your local model on\Nthat data that you're collecting. So Dialogue: 0,0:24:28.70,0:24:32.89,Default,,0000,0000,0000,,you're collecting the data, you're\Ntraining your local model, and as your Dialogue: 0,0:24:32.89,0:24:37.30,Default,,0000,0000,0000,,local model gets more accurate and more\Nsimilar to the deployed black box that you Dialogue: 0,0:24:37.30,0:24:43.41,Default,,0000,0000,0000,,don't know how it works, you are then\Nstill able to fool it. And what this paper Dialogue: 0,0:24:43.41,0:24:49.73,Default,,0000,0000,0000,,proved, Nicolas Papernot and a few other\Ngreat researchers, is that with usually Dialogue: 0,0:24:49.73,0:24:56.53,Default,,0000,0000,0000,,fewer than six thousand queries they were\Nable to fool the network with between 84% and 97% certainty Dialogue: 0,0:24:59.30,0:25:03.42,Default,,0000,0000,0000,,And what the same group\Nof researchers also studied is the ability Dialogue: 0,0:25:03.42,0:25:09.24,Default,,0000,0000,0000,,to transfer the ability to fool one\Nnetwork into another network and they Dialogue: 0,0:25:09.24,0:25:14.91,Default,,0000,0000,0000,,called that transferability. So I can\Ntake a certain type of network and I can Dialogue: 0,0:25:14.91,0:25:19.32,Default,,0000,0000,0000,,use adversarial examples against this\Nnetwork to fool a different type of Dialogue: 0,0:25:19.32,0:25:26.27,Default,,0000,0000,0000,,machine learning technique. Here we have\Ntheir matrix, their heat map, that shows Dialogue: 0,0:25:26.27,0:25:32.73,Default,,0000,0000,0000,,us exactly what they were able to fool. So\Nwe have across the left-hand side here the Dialogue: 0,0:25:32.73,0:25:37.74,Default,,0000,0000,0000,,source machine learning technique, we have\Ndeep learning, logistic regression, SVMs Dialogue: 0,0:25:37.74,0:25:43.38,Default,,0000,0000,0000,,like we talked about, decision trees and\NK-nearest-neighbors.
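[Editor's note: the black-box recipe she describes (query the API, train a local substitute, attack the substitute, transfer the example) can be sketched end to end. Everything below is invented for illustration: the "oracle" function stands in for the remote API, and both models are single logistic units rather than real networks.]

```python
import math
import random

def sigmoid(z):
    # Numerically safe logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# The black box: we can only query its labels, never see its weights.
SECRET_W, SECRET_B = [2.0, -3.0], 0.5

def oracle(x):
    return 1 if sigmoid(dot(SECRET_W, x) + SECRET_B) > 0.5 else 0

# Step 1: collect a training set by querying the "API".
random.seed(0)
queries = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(500)]
labels = [oracle(q) for q in queries]

# Step 2: train a local substitute model on the collected labels.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(100):
    for q, y in zip(queries, labels):
        p = sigmoid(dot(w, q) + b)
        err = p - y                     # cross-entropy gradient wrt logit
        w = [wi - lr * err * qi for wi, qi in zip(w, q)]
        b -= lr * err

# Step 3: run FGSM against the *substitute*, then transfer the result.
x = [0.8, 0.2]                          # the oracle says class 1 here
p = sigmoid(dot(w, x) + b)
grad = [(p - oracle(x)) * wi for wi in w]
eps = 0.6
x_adv = [xi + eps * ((g > 0) - (g < 0)) for xi, g in zip(x, grad)]
# If the substitute mimics the oracle well enough, the adversarial
# example transfers: the oracle labels x_adv differently than x.
```

The key point is that gradients come from the local substitute only; the black box is never asked for anything but labels.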
And across the bottom Dialogue: 0,0:25:43.38,0:25:47.34,Default,,0000,0000,0000,,we have the target machine learning, so\Nwhat were they targeting. They created the Dialogue: 0,0:25:47.34,0:25:51.47,Default,,0000,0000,0000,,adversaries with the left-hand side and\Nthey targeted across the bottom. We Dialogue: 0,0:25:51.47,0:25:56.70,Default,,0000,0000,0000,,finally have an ensemble model at the end.\NAnd what they were able to show is, Dialogue: 0,0:25:56.70,0:26:03.13,Default,,0000,0000,0000,,for example, SVMs and decision trees are\Nquite easy to fool, but logistic Dialogue: 0,0:26:03.13,0:26:08.48,Default,,0000,0000,0000,,regression a little bit less so, but still\Nstrong; for deep learning and K-nearest- Dialogue: 0,0:26:08.48,0:26:13.46,Default,,0000,0000,0000,,neighbors, if you train a deep learning\Nmodel or a K-nearest-neighbor model, then Dialogue: 0,0:26:13.46,0:26:18.18,Default,,0000,0000,0000,,that performs fairly well against itself.\NAnd so what they were able to show is that Dialogue: 0,0:26:18.18,0:26:23.32,Default,,0000,0000,0000,,you don't necessarily need to know the\Ntarget machine and you don't even have to Dialogue: 0,0:26:23.32,0:26:28.05,Default,,0000,0000,0000,,get it right, even if you do know, you can\Nuse a different type of machine learning Dialogue: 0,0:26:28.05,0:26:30.44,Default,,0000,0000,0000,,technique to target the network. Dialogue: 0,0:26:34.31,0:26:39.20,Default,,0000,0000,0000,,So we'll\Nlook at six lines of Python here and in Dialogue: 0,0:26:39.20,0:26:44.56,Default,,0000,0000,0000,,these six lines of Python I'm using the\Ncleverhans library and in six lines of Dialogue: 0,0:26:44.56,0:26:52.42,Default,,0000,0000,0000,,Python I can both generate my adversarial\Ninput and I can even predict on it. So if Dialogue: 0,0:26:52.42,0:27:02.35,Default,,0000,0000,0000,,you don't code Python, it's pretty easy to\Nlearn and pick up.
And for example here we Dialogue: 0,0:27:02.35,0:27:06.83,Default,,0000,0000,0000,,have Keras and Keras is a very popular\Ndeep learning library in Python, it Dialogue: 0,0:27:06.83,0:27:12.07,Default,,0000,0000,0000,,usually works with a Theano or a\NTensorFlow backend and we can just wrap Dialogue: 0,0:27:12.07,0:27:19.25,Default,,0000,0000,0000,,our model, pass it to the fast gradient\Nmethod class and then set up some Dialogue: 0,0:27:19.25,0:27:24.63,Default,,0000,0000,0000,,parameters, so here's our epsilon and a\Nfew extra parameters, this is to tune our Dialogue: 0,0:27:24.63,0:27:30.86,Default,,0000,0000,0000,,adversary, and finally we can generate our\Nadversarial examples and then predict on Dialogue: 0,0:27:30.86,0:27:39.86,Default,,0000,0000,0000,,them. So in a very small amount of Python\Nwe're able to target and trick a network. Dialogue: 0,0:27:40.71,0:27:45.79,Default,,0000,0000,0000,,If you're already using TensorFlow or\NKeras, it already works with those libraries. Dialogue: 0,0:27:48.83,0:27:52.61,Default,,0000,0000,0000,,Deep-pwning is one of the first\Nlibraries that I heard about in this space Dialogue: 0,0:27:52.61,0:27:58.20,Default,,0000,0000,0000,,and it was presented at Def Con in 2016\Nand what it comes with is a bunch of Dialogue: 0,0:27:58.20,0:28:03.32,Default,,0000,0000,0000,,TensorFlow built-in code. It even comes\Nwith a way that you can train the model Dialogue: 0,0:28:03.32,0:28:06.73,Default,,0000,0000,0000,,yourself, so it has a few different\Nmodels, a few different convolutional Dialogue: 0,0:28:06.73,0:28:12.13,Default,,0000,0000,0000,,neural networks and these are\Npredominantly used in computer vision. Dialogue: 0,0:28:12.13,0:28:18.09,Default,,0000,0000,0000,,It also however has a semantic model and I\Nnormally work in NLP and I was pretty Dialogue: 0,0:28:18.09,0:28:24.24,Default,,0000,0000,0000,,excited to try it out.
What it comes built\Nwith is the Rotten Tomatoes sentiment, so Dialogue: 0,0:28:24.24,0:28:29.90,Default,,0000,0000,0000,,this is Rotten Tomatoes movie reviews,\Nwhere it tries to learn whether a review is positive or negative. Dialogue: 0,0:28:30.47,0:28:35.27,Default,,0000,0000,0000,,So the original text that I input, when\NI was generating my adversarial examples, Dialogue: 0,0:28:35.27,0:28:41.50,Default,,0000,0000,0000,,was "more trifle than triumph", which is a\Nreal review, and the adversarial text that Dialogue: 0,0:28:41.50,0:28:46.08,Default,,0000,0000,0000,,it gave me was "jonah refreshing haunting\Nleaky" Dialogue: 0,0:28:49.47,0:28:52.66,Default,,0000,0000,0000,,...Yeah.. so I was able to fool my network Dialogue: 0,0:28:52.66,0:28:57.56,Default,,0000,0000,0000,,but I lost any type of meaning and\Nthis is really the problem when we think Dialogue: 0,0:28:57.56,0:29:03.54,Default,,0000,0000,0000,,about how we apply adversarial learning to\Ndifferent tasks: it's easy for an image Dialogue: 0,0:29:03.54,0:29:08.96,Default,,0000,0000,0000,,to still look like the same image if we\Nmake a few changes, right? It's many, many pixels, Dialogue: 0,0:29:08.96,0:29:14.14,Default,,0000,0000,0000,,but when we start going into language, if\Nwe change one word and then another word Dialogue: 0,0:29:14.14,0:29:18.95,Default,,0000,0000,0000,,and another word, or maybe we changed all\Nof the words, we no longer understand as Dialogue: 0,0:29:18.95,0:29:23.12,Default,,0000,0000,0000,,humans. And I would say this is garbage\Nin, garbage out, this is not actual Dialogue: 0,0:29:23.12,0:29:28.76,Default,,0000,0000,0000,,adversarial learning. So we have a long\Nway to go when it comes to language tasks Dialogue: 0,0:29:28.76,0:29:32.74,Default,,0000,0000,0000,,and being able to do adversarial learning\Nand there is some research in this, but Dialogue: 0,0:29:32.74,0:29:37.28,Default,,0000,0000,0000,,it's not really advanced yet.
So hopefully\Nthis is something that we can continue to Dialogue: 0,0:29:37.28,0:29:42.43,Default,,0000,0000,0000,,work on and advance further and if so we\Nneed to support a few different types of Dialogue: 0,0:29:42.43,0:29:47.43,Default,,0000,0000,0000,,networks that are more common in NLP than\Nthey are in computer vision. Dialogue: 0,0:29:50.33,0:29:54.76,Default,,0000,0000,0000,,There are some other notable open-source libraries that\Nare available to you and I'll cover just a Dialogue: 0,0:29:54.76,0:29:59.61,Default,,0000,0000,0000,,few here. There's a "Vanderbilt\Ncomputational economics research lab" that Dialogue: 0,0:29:59.61,0:30:03.68,Default,,0000,0000,0000,,has adlib and this allows you to do\Npoisoning attacks. So if you want to Dialogue: 0,0:30:03.68,0:30:09.43,Default,,0000,0000,0000,,target training data and poison it, then\Nyou can do so with that and use scikit- Dialogue: 0,0:30:09.43,0:30:16.59,Default,,0000,0000,0000,,learn. DeepFool allows you to do the fast\Ngradient sign method, but it tries to do Dialogue: 0,0:30:16.59,0:30:21.59,Default,,0000,0000,0000,,smaller perturbations, it tries to be less\Ndetectable to us humans. Dialogue: 0,0:30:23.17,0:30:28.28,Default,,0000,0000,0000,,It's based on Theano, which is another library that I believe uses Lua as well as Python. Dialogue: 0,0:30:29.67,0:30:34.05,Default,,0000,0000,0000,,"FoolBox" is kind of neat because I only\Nheard about it last week, but it collects Dialogue: 0,0:30:34.05,0:30:39.31,Default,,0000,0000,0000,,a bunch of different techniques all in one\Nlibrary and you could use it with one Dialogue: 0,0:30:39.31,0:30:43.16,Default,,0000,0000,0000,,interface.
So if you want to experiment\Nwith a few different ones at once, I would Dialogue: 0,0:30:43.16,0:30:47.46,Default,,0000,0000,0000,,recommend taking a look at that. And\Nfinally, for something that we'll talk Dialogue: 0,0:30:47.46,0:30:53.60,Default,,0000,0000,0000,,about briefly in a moment, we\Nhave "Evolving AI Lab", which released a Dialogue: 0,0:30:53.60,0:30:59.71,Default,,0000,0000,0000,,fooling library and this fooling library\Nis able to generate images that you or I Dialogue: 0,0:30:59.71,0:31:04.57,Default,,0000,0000,0000,,can't tell what it is, but that the neural\Nnetwork is convinced it is something. Dialogue: 0,0:31:05.30,0:31:09.94,Default,,0000,0000,0000,,So we'll talk about maybe some\Napplications of this in a moment, but they Dialogue: 0,0:31:09.94,0:31:13.56,Default,,0000,0000,0000,,also open-sourced all of their code and\Nthey're researchers who open-sourced Dialogue: 0,0:31:13.56,0:31:19.65,Default,,0000,0000,0000,,their code, which is always very exciting.\NAs you may know from some of the Dialogue: 0,0:31:19.65,0:31:25.50,Default,,0000,0000,0000,,research I already cited, most of the\Nstudies and the research in this area have Dialogue: 0,0:31:25.50,0:31:29.83,Default,,0000,0000,0000,,been on malicious attacks. So there are very\Nfew people trying to figure out how to do Dialogue: 0,0:31:29.83,0:31:33.77,Default,,0000,0000,0000,,this for what I would call benevolent\Npurposes. Most of them are trying to act Dialogue: 0,0:31:33.77,0:31:39.54,Default,,0000,0000,0000,,as an adversary in the traditional\Ncomputer security sense. They're perhaps Dialogue: 0,0:31:39.54,0:31:43.89,Default,,0000,0000,0000,,studying spam filters and how spammers can\Nget by them. They're perhaps looking at Dialogue: 0,0:31:43.89,0:31:48.67,Default,,0000,0000,0000,,network intrusion or botnet attacks and so\Nforth.
They're perhaps looking at self- Dialogue: 0,0:31:48.67,0:31:53.39,Default,,0000,0000,0000,,driving cars, and I know that was\Nreferenced earlier as well in Henrick and Dialogue: 0,0:31:53.39,0:31:57.89,Default,,0000,0000,0000,,Karen's talk; they're perhaps trying to\Nmake a yield sign look like a stop sign or Dialogue: 0,0:31:57.89,0:32:02.76,Default,,0000,0000,0000,,a stop sign look like a yield sign or a\Nspeed limit, and so forth, and scarily Dialogue: 0,0:32:02.76,0:32:07.67,Default,,0000,0000,0000,,they are quite successful at this. Or\Nperhaps they're looking at data poisoning: Dialogue: 0,0:32:07.67,0:32:12.44,Default,,0000,0000,0000,,how do we poison the model so we render\Nit useless in a particular context, so we Dialogue: 0,0:32:12.44,0:32:17.99,Default,,0000,0000,0000,,can utilize that? And finally, malware.\NWhat a few researchers were able to Dialogue: 0,0:32:17.99,0:32:22.67,Default,,0000,0000,0000,,show is, by just changing a few things in\Nthe malware they were able to upload their Dialogue: 0,0:32:22.67,0:32:26.27,Default,,0000,0000,0000,,malware to Google Mail and send it to\Nsomeone and this was still fully Dialogue: 0,0:32:26.27,0:32:31.58,Default,,0000,0000,0000,,functional malware. In that same sense\Nthere's the malGAN project, which uses a Dialogue: 0,0:32:31.58,0:32:38.55,Default,,0000,0000,0000,,generative adversarial network to create\Nmalware that works, I guess. So there's a Dialogue: 0,0:32:38.55,0:32:43.33,Default,,0000,0000,0000,,lot of research on these kinds of malicious\Nattacks within adversarial learning. Dialogue: 0,0:32:44.98,0:32:51.93,Default,,0000,0000,0000,,But what I wonder is how we might use this for\Ngood. And I put "good" in quotation marks, Dialogue: 0,0:32:51.93,0:32:56.18,Default,,0000,0000,0000,,because we all have different ethical and\Nmoral systems we use. And what you may Dialogue: 0,0:32:56.18,0:33:00.29,Default,,0000,0000,0000,,decide is ethical for you might be\Ndifferent.
But I think as a community, Dialogue: 0,0:33:00.29,0:33:05.45,Default,,0000,0000,0000,,especially at a conference like this,\Nhopefully we can converge on some ethical, Dialogue: 0,0:33:05.45,0:33:10.18,Default,,0000,0000,0000,,privacy-concerned way of using these\Nnetworks. Dialogue: 0,0:33:13.24,0:33:20.99,Default,,0000,0000,0000,,So I've composed a few ideas and I hope that this is just a starting list of a longer conversation. Dialogue: 0,0:33:22.89,0:33:30.01,Default,,0000,0000,0000,,One idea is that we can perhaps use this type of adversarial learning to fool surveillance. Dialogue: 0,0:33:30.83,0:33:36.47,Default,,0000,0000,0000,,As much as surveillance affects you and me, it even\Ndisproportionately affects people that Dialogue: 0,0:33:36.47,0:33:41.87,Default,,0000,0000,0000,,most likely can't be here. So whether or\Nnot we're personally affected, we can care Dialogue: 0,0:33:41.87,0:33:46.42,Default,,0000,0000,0000,,about the many lives that are affected by\Nthis type of surveillance. And we can try Dialogue: 0,0:33:46.42,0:33:49.67,Default,,0000,0000,0000,,and build ways to fool surveillance\Nsystems. Dialogue: 0,0:33:50.94,0:33:52.12,Default,,0000,0000,0000,,Steganography: Dialogue: 0,0:33:52.12,0:33:55.22,Default,,0000,0000,0000,,So we could potentially, in a world where more and more people Dialogue: 0,0:33:55.22,0:33:58.78,Default,,0000,0000,0000,,have less of a private way of sending messages to one another, Dialogue: 0,0:33:58.78,0:34:03.08,Default,,0000,0000,0000,,perhaps use adversarial learning to send private messages. Dialogue: 0,0:34:03.83,0:34:08.31,Default,,0000,0000,0000,,Adware fooling: So\Nagain, where I might have quite a lot of Dialogue: 0,0:34:08.31,0:34:13.86,Default,,0000,0000,0000,,privilege and I don't actually see ads\Nthat are predatory on me as much, there are Dialogue: 0,0:34:13.86,0:34:19.45,Default,,0000,0000,0000,,a lot of people in the world who face\Npredatory advertising.
And so how can we Dialogue: 0,0:34:19.45,0:34:23.60,Default,,0000,0000,0000,,help with those problems by developing\Nadversarial techniques? Dialogue: 0,0:34:24.64,0:34:26.52,Default,,0000,0000,0000,,Poisoning your own private data: Dialogue: 0,0:34:27.39,0:34:30.60,Default,,0000,0000,0000,,This depends on whether you\Nactually need to use the service and Dialogue: 0,0:34:30.60,0:34:34.59,Default,,0000,0000,0000,,whether you like how the service is\Nhelping you with the machine learning, but Dialogue: 0,0:34:34.59,0:34:40.11,Default,,0000,0000,0000,,if you don't care, or if you need to\Nessentially have a burn box of your data, Dialogue: 0,0:34:40.11,0:34:45.76,Default,,0000,0000,0000,,then potentially you could poison your own\Nprivate data. Finally, I want us to use it Dialogue: 0,0:34:45.76,0:34:51.14,Default,,0000,0000,0000,,to investigate deployed models. So even\Nif we don't actually have a use for Dialogue: 0,0:34:51.14,0:34:56.01,Default,,0000,0000,0000,,fooling this particular network, the more\Nwe know about what's deployed and how we Dialogue: 0,0:34:56.01,0:35:00.35,Default,,0000,0000,0000,,can fool it, the more we're able to keep\Nup with this technology as it continues to Dialogue: 0,0:35:00.35,0:35:04.63,Default,,0000,0000,0000,,evolve. So the more that we're practicing,\Nthe more that we're ready for whatever Dialogue: 0,0:35:04.63,0:35:09.80,Default,,0000,0000,0000,,might happen next. And finally, I really\Nwant to hear your ideas as well. So I'll Dialogue: 0,0:35:09.80,0:35:13.94,Default,,0000,0000,0000,,be here throughout the whole Congress and\Nof course you can share during the Q&A Dialogue: 0,0:35:13.94,0:35:17.07,Default,,0000,0000,0000,,time. If you have great ideas, I really\Nwant to hear them. Dialogue: 0,0:35:20.64,0:35:26.08,Default,,0000,0000,0000,,So I decided to play around a little bit with some of my ideas. Dialogue: 0,0:35:26.81,0:35:32.72,Default,,0000,0000,0000,,\NAnd I was convinced perhaps that I could make Facebook think I was a cat.
Dialogue: 0,0:35:33.30,0:35:36.50,Default,,0000,0000,0000,,This is my goal. Can Facebook think I'm a cat? Dialogue: 0,0:35:37.82,0:35:40.70,Default,,0000,0000,0000,,Because nobody really likes Facebook. I\Nmean let's be honest, right? Dialogue: 0,0:35:41.55,0:35:44.17,Default,,0000,0000,0000,,But I have to be on it because my mom messages me there Dialogue: 0,0:35:44.17,0:35:46.02,Default,,0000,0000,0000,,and she doesn't use email anymore. Dialogue: 0,0:35:46.02,0:35:47.89,Default,,0000,0000,0000,,So I'm on Facebook. Anyways. Dialogue: 0,0:35:48.48,0:35:55.15,Default,,0000,0000,0000,,So I used a pre-trained Inception model in Keras and I fine-tuned the layers. Dialogue: 0,0:35:55.15,0:35:57.19,Default,,0000,0000,0000,,And I'm not a\Ncomputer vision person really. But it Dialogue: 0,0:35:57.19,0:36:01.77,Default,,0000,0000,0000,,took me like a day to figure out how\Ncomputer vision people transform their data Dialogue: 0,0:36:01.77,0:36:06.35,Default,,0000,0000,0000,,into something I can put inside of a\Nnetwork, and then I was able to Dialogue: 0,0:36:06.35,0:36:12.04,Default,,0000,0000,0000,,quickly train a model and the model could\Nonly distinguish between people and cats. Dialogue: 0,0:36:12.04,0:36:15.14,Default,,0000,0000,0000,,That's all the model knew how to do. I\Ngive it a picture, it says it's a person or Dialogue: 0,0:36:15.14,0:36:19.63,Default,,0000,0000,0000,,it's a cat. I actually didn't try just\Ngiving it an image of something else, it Dialogue: 0,0:36:19.63,0:36:25.38,Default,,0000,0000,0000,,would probably guess it's a person or a\Ncat maybe, 50/50, who knows.
What I did Dialogue: 0,0:36:25.38,0:36:31.93,Default,,0000,0000,0000,,was, I used an image of myself and\Neventually I had my fast gradient sign Dialogue: 0,0:36:31.93,0:36:37.70,Default,,0000,0000,0000,,method, I used cleverhans, and I was able\Nto slowly increase the epsilon. While the Dialogue: 0,0:36:37.70,0:36:44.10,Default,,0000,0000,0000,,epsilon is low, you and I can't see\Nthe perturbations, but the network Dialogue: 0,0:36:44.10,0:36:48.92,Default,,0000,0000,0000,,can't see the perturbations either. So we need to\Nincrease it, and of course as we increase Dialogue: 0,0:36:48.92,0:36:53.30,Default,,0000,0000,0000,,it, when we're using a technique like\NFGSM, we are also increasing the noise Dialogue: 0,0:36:53.30,0:37:00.83,Default,,0000,0000,0000,,that we see. And when I got to 2.21 epsilon,\NI kept uploading it to Facebook and Dialogue: 0,0:37:00.83,0:37:02.35,Default,,0000,0000,0000,,Facebook kept saying: "Yeah, do you want\Nto tag yourself?" and I'm like: Dialogue: 0,0:37:02.37,0:37:04.22,Default,,0000,0000,0000,,"No, I don't, I'm just testing". Dialogue: 0,0:37:05.12,0:37:11.38,Default,,0000,0000,0000,,Finally I got to an epsilon where Facebook no longer knew I was a face. Dialogue: 0,0:37:11.38,0:37:15.32,Default,,0000,0000,0000,,So I was just a\Nbook, I was a cat book, maybe. Dialogue: 0,0:37:15.34,0:37:19.59,Default,,0000,0000,0000,,{\i1}applause{\i0} Dialogue: 0,0:37:21.31,0:37:24.74,Default,,0000,0000,0000,,kjam: So, unfortunately, as we see, I\Ndidn't actually become a cat, because that Dialogue: 0,0:37:24.74,0:37:30.63,Default,,0000,0000,0000,,would be pretty neat. But I was able to\Nfool it.
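[Editor's note: the epsilon sweep she describes (keep increasing epsilon until the detector stops firing) can be sketched like this. Again a toy: a single logistic unit with made-up weights stands in for the face detector; it is not the Facebook system or the cleverhans code she used.]

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

w, b = [1.5, -1.0, 2.0], 0.0    # toy "face detector"
x, y = [0.6, -0.4, 0.7], 1.0    # input detected as a face (class 1)

def fgsm(x, eps):
    # One FGSM step against the toy detector.
    p = sigmoid(dot(w, x) + b)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * ((g > 0) - (g < 0)) for xi, g in zip(x, grad)]

# Sweep epsilon upward until the detector no longer fires. Small eps
# keeps the image looking clean but may not fool the model; larger
# eps fools it at the cost of visible noise.
eps, found = 0.0, None
while eps <= 2.0:
    x_adv = fgsm(x, eps)
    if sigmoid(dot(w, x_adv) + b) < 0.5:
        found = eps
        break
    eps += 0.05
```

The variable `found` ends up holding the smallest tested epsilon that flips the detector, which is exactly the trade-off she describes: just enough noise to fool the model.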
I spoke with a computer vision Dialogue: 0,0:37:30.63,0:37:34.76,Default,,0000,0000,0000,,specialist that I know, and she actually\Nworks in this, and I was like: "What Dialogue: 0,0:37:34.76,0:37:39.02,Default,,0000,0000,0000,,methods do you think Facebook was using?\NDid I really fool the neural network or Dialogue: 0,0:37:39.02,0:37:43.14,Default,,0000,0000,0000,,what did I do?" And she's convinced that most\Nlikely they're actually using a Dialogue: 0,0:37:43.14,0:37:47.58,Default,,0000,0000,0000,,statistical method called Viola-Jones,\Nwhich takes a look at the statistical Dialogue: 0,0:37:47.58,0:37:53.28,Default,,0000,0000,0000,,distribution of your face and tries to\Nguess if there's really a face there. But Dialogue: 0,0:37:53.28,0:37:58.80,Default,,0000,0000,0000,,what I was able to show is transferability:\NI can use my neural network even Dialogue: 0,0:37:58.80,0:38:05.38,Default,,0000,0000,0000,,to fool this statistical model, so now I\Nhave a very noisy but happy photo on FB Dialogue: 0,0:38:08.55,0:38:14.14,Default,,0000,0000,0000,,Another use case potentially is\Nadversarial steganography and I was really Dialogue: 0,0:38:14.14,0:38:18.59,Default,,0000,0000,0000,,excited reading this paper. What this\Npaper covered (and they actually released Dialogue: 0,0:38:18.59,0:38:22.86,Default,,0000,0000,0000,,the library, as I mentioned) is\Nthe ability of a neural network to be Dialogue: 0,0:38:22.86,0:38:26.31,Default,,0000,0000,0000,,convinced that something's there that's\Nnot actually there. Dialogue: 0,0:38:27.15,0:38:30.18,Default,,0000,0000,0000,,And what they used, they used the MNIST training set.
Dialogue: 0,0:38:30.24,0:38:33.42,Default,,0000,0000,0000,,I'm sorry if that's like a trigger word; Dialogue: 0,0:38:33.42,0:38:38.41,Default,,0000,0000,0000,,if you've used MNIST a million times, then\NI'm sorry for this, but what they use is Dialogue: 0,0:38:38.41,0:38:43.29,Default,,0000,0000,0000,,MNIST, which is the digits zero through nine,\Nand what they were able to show, Dialogue: 0,0:38:43.29,0:38:48.79,Default,,0000,0000,0000,,using evolutionary algorithms, is they were\Nable to generate things that to us look Dialogue: 0,0:38:48.79,0:38:53.28,Default,,0000,0000,0000,,maybe like art, and they actually used it\Non the CIFAR data set too, which has Dialogue: 0,0:38:53.28,0:38:57.32,Default,,0000,0000,0000,,colors, and it was quite beautiful. Some\Nof what they created they in fact showed Dialogue: 0,0:38:57.32,0:39:04.34,Default,,0000,0000,0000,,in a gallery. And what the network sees\Nhere is the digits across the top. They Dialogue: 0,0:39:04.34,0:39:12.17,Default,,0000,0000,0000,,see that digit, they are more than 99%\Nconvinced that that digit is there and Dialogue: 0,0:39:12.17,0:39:15.48,Default,,0000,0000,0000,,what we see is pretty patterns or just\Nnoise. Dialogue: 0,0:39:16.78,0:39:19.70,Default,,0000,0000,0000,,When I was reading this paper I was thinking, Dialogue: 0,0:39:19.70,0:39:23.62,Default,,0000,0000,0000,,how can we use this to send\Nmessages to each other that nobody else Dialogue: 0,0:39:23.62,0:39:28.51,Default,,0000,0000,0000,,will know is there?
I'm just sending\Nreally nice.., I'm an artist and this is Dialogue: 0,0:39:28.51,0:39:35.20,Default,,0000,0000,0000,,my art and I'm sharing it with my friend.\NAnd in a world where I'm afraid to go home Dialogue: 0,0:39:35.20,0:39:42.36,Default,,0000,0000,0000,,because there's a crazy person in charge\Nand I'm afraid that they might look at my Dialogue: 0,0:39:42.36,0:39:47.04,Default,,0000,0000,0000,,phone, my computer, and a million other\Nthings and I just want to make sure that Dialogue: 0,0:39:47.04,0:39:51.65,Default,,0000,0000,0000,,my friend has my PIN number or this or\Nthat or whatever. I see a use case for my Dialogue: 0,0:39:51.65,0:39:56.12,Default,,0000,0000,0000,,life, but again I live a fairly\Nprivileged life; there are other people Dialogue: 0,0:39:56.12,0:40:01.69,Default,,0000,0000,0000,,whose actual life and livelihood and\Nsecurity might depend on using a technique Dialogue: 0,0:40:01.69,0:40:06.15,Default,,0000,0000,0000,,like this. And I think we could use\Nadversarial learning to create a new form Dialogue: 0,0:40:06.15,0:40:07.36,Default,,0000,0000,0000,,of steganography. Dialogue: 0,0:40:11.29,0:40:17.07,Default,,0000,0000,0000,,Finally, I cannot stress\Nenough that the more information we have Dialogue: 0,0:40:17.07,0:40:20.62,Default,,0000,0000,0000,,about the systems that we interact with\Nevery day, our machine learning Dialogue: 0,0:40:20.62,0:40:24.85,Default,,0000,0000,0000,,systems, our AI systems, or whatever\Nyou want to call them, our deep Dialogue: 0,0:40:24.85,0:40:29.70,Default,,0000,0000,0000,,networks, the more information we have,\Nthe better we can fight them, right. We Dialogue: 0,0:40:29.70,0:40:33.92,Default,,0000,0000,0000,,don't need perfect knowledge, but the more\Nknowledge that we have, the better an Dialogue: 0,0:40:33.92,0:40:41.36,Default,,0000,0000,0000,,adversary we can be.
I thankfully now live\Nin Germany and if you are also a European Dialogue: 0,0:40:41.36,0:40:46.77,Default,,0000,0000,0000,,resident: We have GDPR, which is the\NGeneral Data Protection Regulation and it Dialogue: 0,0:40:46.77,0:40:55.65,Default,,0000,0000,0000,,goes into effect in May of 2018. We can\Nuse GDPR to make requests about our data, Dialogue: 0,0:40:55.65,0:41:00.45,Default,,0000,0000,0000,,we can use GDPR to make requests about\Nmachine learning systems that we interact Dialogue: 0,0:41:00.45,0:41:07.84,Default,,0000,0000,0000,,with, this is a right that we have. And in\Nrecital 71 of the GDPR it states: "The Dialogue: 0,0:41:07.84,0:41:12.55,Default,,0000,0000,0000,,data subject should have the right to not\Nbe subject to a decision, which may Dialogue: 0,0:41:12.55,0:41:17.73,Default,,0000,0000,0000,,include a measure, evaluating personal\Naspects relating to him or her which is Dialogue: 0,0:41:17.73,0:41:22.88,Default,,0000,0000,0000,,based solely on automated processing and\Nwhich produces legal effects concerning Dialogue: 0,0:41:22.88,0:41:28.01,Default,,0000,0000,0000,,him or her or similarly significantly\Naffects him or her, such as automatic Dialogue: 0,0:41:28.01,0:41:33.62,Default,,0000,0000,0000,,refusal of an online credit application or\Ne-recruiting practices without any human Dialogue: 0,0:41:33.62,0:41:39.27,Default,,0000,0000,0000,,intervention." And I'm not a lawyer and I\Ndon't know how this will be implemented Dialogue: 0,0:41:39.27,0:41:43.99,Default,,0000,0000,0000,,and it's a recital, so we don't even know\Nif it will be enforced the same way, but Dialogue: 0,0:41:43.99,0:41:50.72,Default,,0000,0000,0000,,the good news is: Pieces of this same\Nsentiment are in the actual amendments and Dialogue: 0,0:41:50.72,0:41:55.58,Default,,0000,0000,0000,,if they're in the amendments, then we can\Nlegally use them.
And what it also says Dialogue: 0,0:41:55.58,0:41:59.92,Default,,0000,0000,0000,,is, we can ask companies to port our data\Nother places, we can ask companies to Dialogue: 0,0:41:59.92,0:42:03.89,Default,,0000,0000,0000,,delete our data, we can ask for\Ninformation about how our data is Dialogue: 0,0:42:03.89,0:42:09.01,Default,,0000,0000,0000,,processed, we can ask for information\Nabout what different automated decisions Dialogue: 0,0:42:09.01,0:42:15.75,Default,,0000,0000,0000,,are being made, and the more we all here\Nask for that data, the more we can also Dialogue: 0,0:42:15.75,0:42:20.53,Default,,0000,0000,0000,,share that same information with people\Nworldwide. Because the systems that we Dialogue: 0,0:42:20.53,0:42:25.09,Default,,0000,0000,0000,,interact with, they're not special to us,\Nthey're the same types of systems that are Dialogue: 0,0:42:25.09,0:42:30.61,Default,,0000,0000,0000,,being deployed everywhere in the world. So\Nwe can help our fellow humans outside of Dialogue: 0,0:42:30.61,0:42:36.40,Default,,0000,0000,0000,,Europe by being good caretakers and using\Nour rights to make more information Dialogue: 0,0:42:36.40,0:42:41.96,Default,,0000,0000,0000,,available to the entire world and to use\Nthis information, to find ways to use Dialogue: 0,0:42:41.96,0:42:46.24,Default,,0000,0000,0000,,adversarial learning to fool these types\Nof systems. Dialogue: 0,0:42:47.51,0:42:56.50,Default,,0000,0000,0000,,{\i1}applause{\i0} Dialogue: 0,0:42:56.66,0:43:03.36,Default,,0000,0000,0000,,So how else might we be able to harness\Nthis for good? I cannot focus enough on Dialogue: 0,0:43:03.36,0:43:08.26,Default,,0000,0000,0000,,GDPR and our right to collect more\Ninformation about the information they're Dialogue: 0,0:43:08.26,0:43:14.11,Default,,0000,0000,0000,,already collecting about us and everyone\Nelse. So use it, let's find ways to share Dialogue: 0,0:43:14.11,0:43:17.74,Default,,0000,0000,0000,,the information we gain from it. 
So I\Ndon't want it to just be that one person Dialogue: 0,0:43:17.74,0:43:21.02,Default,,0000,0000,0000,,requests it and they learn something. We\Nhave to find ways to share this Dialogue: 0,0:43:21.02,0:43:28.08,Default,,0000,0000,0000,,information with one another. Test low-\Ntech ways. I'm so excited about the maker Dialogue: 0,0:43:28.08,0:43:32.85,Default,,0000,0000,0000,,space here and maker culture and other\Nlow-tech or human-crafted ways to fool Dialogue: 0,0:43:32.85,0:43:37.89,Default,,0000,0000,0000,,networks. We can use adversarial learning\Nperhaps to get good ideas on how to fool Dialogue: 0,0:43:37.89,0:43:43.35,Default,,0000,0000,0000,,networks, to get lower-tech ways. What if\NI painted red pixels all over my face? Dialogue: 0,0:43:43.35,0:43:48.60,Default,,0000,0000,0000,,Would I still be recognized? Would I not?\NLet's experiment with things that we learn Dialogue: 0,0:43:48.60,0:43:53.57,Default,,0000,0000,0000,,from adversarial learning and try to find\Nother lower-tech solutions to the same problem. Dialogue: 0,0:43:55.43,0:43:59.93,Default,,0000,0000,0000,,Finally, or nearly finally, we\Nneed to increase the research beyond just Dialogue: 0,0:43:59.93,0:44:04.01,Default,,0000,0000,0000,,computer vision. Quite a lot of\Nadversarial learning has been only in Dialogue: 0,0:44:04.01,0:44:08.22,Default,,0000,0000,0000,,computer vision and while I think that's\Nimportant and it's also been very Dialogue: 0,0:44:08.22,0:44:12.03,Default,,0000,0000,0000,,practical, because we can start to see how\Nwe can fool something, we need to figure Dialogue: 0,0:44:12.03,0:44:15.92,Default,,0000,0000,0000,,out natural language processing, we need\Nto figure out other ways that machine Dialogue: 0,0:44:15.92,0:44:19.93,Default,,0000,0000,0000,,learning systems are being used, and we\Nneed to come up with clever ways to fool them. Dialogue: 0,0:44:21.80,0:44:26.00,Default,,0000,0000,0000,,Finally, spread the word!
So I don't\Nwant the conversation to end here, I don't Dialogue: 0,0:44:26.00,0:44:30.95,Default,,0000,0000,0000,,want the conversation to end at Congress,\NI want you to go back to your hacker Dialogue: 0,0:44:30.95,0:44:36.53,Default,,0000,0000,0000,,collective, your local CCC, the people\Nthat you talk with, your co-workers and I Dialogue: 0,0:44:36.53,0:44:41.34,Default,,0000,0000,0000,,want you to spread the word. I want you to\Ndo workshops on adversarial learning, I Dialogue: 0,0:44:41.34,0:44:47.93,Default,,0000,0000,0000,,want more people to not treat this AI as\Nsomething mystical and powerful, because Dialogue: 0,0:44:47.93,0:44:52.34,Default,,0000,0000,0000,,unfortunately it is powerful, but it's not\Nmystical! So we need to demystify this Dialogue: 0,0:44:52.34,0:44:57.04,Default,,0000,0000,0000,,space, we need to experiment, we need to\Nhack on it and we need to find ways to Dialogue: 0,0:44:57.04,0:45:02.31,Default,,0000,0000,0000,,play with it and spread the word to other\Npeople. Finally, I really want to hear Dialogue: 0,0:45:02.31,0:45:10.48,Default,,0000,0000,0000,,your other ideas and before I leave today\NI have to say a little bit about why I Dialogue: 0,0:45:10.48,0:45:15.82,Default,,0000,0000,0000,,decided to join the resiliency track this\Nyear. I read about the resiliency track Dialogue: 0,0:45:15.82,0:45:21.91,Default,,0000,0000,0000,,and I was really excited. It spoke to me.\NAnd I said I want to live in a world Dialogue: 0,0:45:21.91,0:45:27.23,Default,,0000,0000,0000,,where, even if there's an entire burning\Ntrash fire around me, I know that there Dialogue: 0,0:45:27.23,0:45:32.01,Default,,0000,0000,0000,,are other people that I care about, that I\Ncan count on, that I can work with to try Dialogue: 0,0:45:32.01,0:45:37.84,Default,,0000,0000,0000,,and at least protect portions of our\Nworld.
To try and protect ourselves, to Dialogue: 0,0:45:37.84,0:45:43.94,Default,,0000,0000,0000,,try and protect people that do not have as\Nmuch privilege. So, what I want to be a Dialogue: 0,0:45:43.94,0:45:49.24,Default,,0000,0000,0000,,part of, is something that can use maybe\Nthe skills I have and the skills you have Dialogue: 0,0:45:49.24,0:45:56.59,Default,,0000,0000,0000,,to do something with that. And your data\Nis a big source of value for everyone. Dialogue: 0,0:45:56.59,0:46:02.82,Default,,0000,0000,0000,,Any free service you use, they are selling\Nyour data. OK, I don't know that for a Dialogue: 0,0:46:02.82,0:46:08.42,Default,,0000,0000,0000,,fact, but it is very certain, I feel very\Ncertain about the fact that they're most Dialogue: 0,0:46:08.42,0:46:12.56,Default,,0000,0000,0000,,likely selling your data. And if they're\Nselling your data, they might also be Dialogue: 0,0:46:12.56,0:46:17.73,Default,,0000,0000,0000,,buying your data. And there is a whole\Nmarket, that's legal, that's freely Dialogue: 0,0:46:17.73,0:46:22.67,Default,,0000,0000,0000,,available, to buy and sell your data. And\Nthey make money off of that, and they mine Dialogue: 0,0:46:22.67,0:46:28.91,Default,,0000,0000,0000,,more information, and make more money off\Nof that, and so forth. So, I will read a Dialogue: 0,0:46:28.91,0:46:35.41,Default,,0000,0000,0000,,little bit of my opinions that I put forth\Non this. Determine who you share your data Dialogue: 0,0:46:35.41,0:46:41.91,Default,,0000,0000,0000,,with and for what reasons. GDPR and data\Nportability give us European residents Dialogue: 0,0:46:41.91,0:46:44.41,Default,,0000,0000,0000,,stronger rights than most of the world.\N Dialogue: 0,0:46:44.92,0:46:47.94,Default,,0000,0000,0000,,Let's use them. Let's choose privacy Dialogue: 0,0:46:47.94,0:46:52.80,Default,,0000,0000,0000,,concerned ethical data companies over\Ncorporations that are entirely built on Dialogue: 0,0:46:52.80,0:46:58.26,Default,,0000,0000,0000,,selling ads. 
Let's build start-ups,\Norganizations, open-source tools and Dialogue: 0,0:46:58.26,0:47:05.69,Default,,0000,0000,0000,,systems that we can be truly proud of. And\Nlet's port our data to those. Dialogue: 0,0:47:05.91,0:47:15.31,Default,,0000,0000,0000,,{\i1}Applause{\i0} Dialogue: 0,0:47:15.41,0:47:18.94,Default,,0000,0000,0000,,Herald: Amazing. We have,\Nwe have time for a few questions. Dialogue: 0,0:47:18.94,0:47:21.86,Default,,0000,0000,0000,,K.J.: I'm not done yet, sorry, it's fine.\NHerald: I'm so sorry. Dialogue: 0,0:47:21.86,0:47:24.75,Default,,0000,0000,0000,,K.J.: {\i1}Laughs{\i0} It's cool.\NNo big deal. Dialogue: 0,0:47:24.75,0:47:31.52,Default,,0000,0000,0000,,So, machine learning. Closing remarks, a\Nbrief round-up. The point is Dialogue: 0,0:47:31.52,0:47:35.25,Default,,0000,0000,0000,,that machine learning is not very\Nintelligent. I think artificial Dialogue: 0,0:47:35.25,0:47:39.33,Default,,0000,0000,0000,,intelligence is a misnomer in a lot of\Nways, but this doesn't mean that people Dialogue: 0,0:47:39.33,0:47:43.83,Default,,0000,0000,0000,,are going to stop using it. In fact\Nthere's very smart, powerful, and rich Dialogue: 0,0:47:43.83,0:47:49.85,Default,,0000,0000,0000,,people that are investing more than ever\Nin it. So it's not going anywhere. And Dialogue: 0,0:47:49.85,0:47:53.62,Default,,0000,0000,0000,,it's going to be something that\Npotentially becomes more dangerous over Dialogue: 0,0:47:53.62,0:47:58.57,Default,,0000,0000,0000,,time. Because as we hand over more of\Nthese decisions to these systems, it could Dialogue: 0,0:47:58.57,0:48:04.24,Default,,0000,0000,0000,,potentially control more and more of our\Nlives. We can use, however, adversarial Dialogue: 0,0:48:04.24,0:48:09.32,Default,,0000,0000,0000,,machine learning techniques to find ways\Nto fool "black box" networks. So we can Dialogue: 0,0:48:09.32,0:48:14.40,Default,,0000,0000,0000,,use these and we know we don't have to\Nhave perfect knowledge.
However, Dialogue: 0,0:48:14.40,0:48:18.93,Default,,0000,0000,0000,,information is powerful. And the more\Ninformation that we do have, the more we're Dialogue: 0,0:48:18.93,0:48:25.86,Default,,0000,0000,0000,,able to become a good GDPR-based\Nadversary. So please use GDPR and let's Dialogue: 0,0:48:25.86,0:48:31.23,Default,,0000,0000,0000,,discuss ways where we can share\Ninformation. Finally, please support open- Dialogue: 0,0:48:31.23,0:48:35.59,Default,,0000,0000,0000,,source tools and research in this space,\Nbecause we need to keep up with where the Dialogue: 0,0:48:35.59,0:48:41.79,Default,,0000,0000,0000,,state of the art is. So we need to keep\Nourselves moving and open in that way. And Dialogue: 0,0:48:41.79,0:48:46.67,Default,,0000,0000,0000,,please, support ethical data companies. Or\Nstart one. If you come to me and you say Dialogue: 0,0:48:46.67,0:48:50.24,Default,,0000,0000,0000,,"Katharine, I'm going to charge you this\Nmuch money, but I will never sell your Dialogue: 0,0:48:50.24,0:48:56.52,Default,,0000,0000,0000,,data. And I will never buy your data." I\Nwould much rather you handle my data. So I Dialogue: 0,0:48:56.52,0:49:03.39,Default,,0000,0000,0000,,want us, especially those within the EU,\Nto start a new economy around trust, and Dialogue: 0,0:49:03.39,0:49:12.74,Default,,0000,0000,0000,,privacy, and ethical data use.\N{\i1}Applause{\i0} Dialogue: 0,0:49:12.74,0:49:15.83,Default,,0000,0000,0000,,Thank you very much.\NThank you. Dialogue: 0,0:49:15.83,0:49:18.05,Default,,0000,0000,0000,,Herald: OK. We still have time for a few\Nquestions. Dialogue: 0,0:49:18.05,0:49:20.39,Default,,0000,0000,0000,,K.J.: No, no, no. No worries, no worries.\NHerald: Less than the last time I walked Dialogue: 0,0:49:20.39,0:49:23.87,Default,,0000,0000,0000,,up here, but we do.\NK.J.: Yeah, now I'm really done. Dialogue: 0,0:49:23.87,0:49:27.73,Default,,0000,0000,0000,,Herald: Come up to one of the mics in the\Nfront section and raise your hand.
Can we Dialogue: 0,0:49:27.73,0:49:31.58,Default,,0000,0000,0000,,take a question from mic one?\NQuestion: Thank you very much for the very Dialogue: 0,0:49:31.58,0:49:37.86,Default,,0000,0000,0000,,interesting talk. One impression that I\Ngot during the talk was, with the Dialogue: 0,0:49:37.86,0:49:42.42,Default,,0000,0000,0000,,adversarial learning approach aren't we\Njust doing pen testing and Quality Dialogue: 0,0:49:42.42,0:49:47.92,Default,,0000,0000,0000,,Assurance for the AI companies? They're\Njust going to build better machines. Dialogue: 0,0:49:47.92,0:49:52.91,Default,,0000,0000,0000,,Answer: That's a very good question and of\Ncourse most of this research right now is Dialogue: 0,0:49:52.91,0:49:56.78,Default,,0000,0000,0000,,coming from those companies, because\Nthey're worried about this. What, however, Dialogue: 0,0:49:56.78,0:50:02.29,Default,,0000,0000,0000,,they've shown is, they don't really have a\Ngood way to fool, to learn how to fool Dialogue: 0,0:50:02.29,0:50:08.71,Default,,0000,0000,0000,,this. Most likely they will need to use a\Ndifferent type of network, eventually. So Dialogue: 0,0:50:08.71,0:50:13.44,Default,,0000,0000,0000,,probably, whether it's the blind spots or\Nthe linearity of these networks, they are Dialogue: 0,0:50:13.44,0:50:18.00,Default,,0000,0000,0000,,easy to fool and they will have to come up\Nwith a different method for generating Dialogue: 0,0:50:18.00,0:50:24.52,Default,,0000,0000,0000,,something that is robust enough to not be\Ntricked. So, to some degree yes, it's a Dialogue: 0,0:50:24.52,0:50:28.52,Default,,0000,0000,0000,,cat-and-mouse game, right. But that's why\NI want the research and the open source to Dialogue: 0,0:50:28.52,0:50:33.41,Default,,0000,0000,0000,,continue as well.
And I would be highly\Nsuspect if they all of a sudden figure out Dialogue: 0,0:50:33.41,0:50:38.17,Default,,0000,0000,0000,,a way to make a neural network which has\Nproven linear relationships, that we can Dialogue: 0,0:50:38.17,0:50:42.56,Default,,0000,0000,0000,,exploit, nonlinear. And if so, it's\Nusually a different type of network that's Dialogue: 0,0:50:42.56,0:50:47.43,Default,,0000,0000,0000,,a lot more expensive to train and that\Ndoesn't actually generalize well. So we're Dialogue: 0,0:50:47.43,0:50:51.28,Default,,0000,0000,0000,,going to really hit them in a way where\Nthey're going to have to be more specific, Dialogue: 0,0:50:51.28,0:50:59.62,Default,,0000,0000,0000,,try harder, and I would rather do that\Nthan just kind of give up. Dialogue: 0,0:50:59.62,0:51:02.56,Default,,0000,0000,0000,,Herald: Next one.\NMic 2 Dialogue: 0,0:51:02.56,0:51:07.84,Default,,0000,0000,0000,,Q: Hello. Thank you for the nice talk. I\Nwanted to ask, have you ever tried looking Dialogue: 0,0:51:07.84,0:51:14.72,Default,,0000,0000,0000,,at from the other direction? Like, just\Ntrying to feed the companies falsely Dialogue: 0,0:51:14.72,0:51:21.56,Default,,0000,0000,0000,,classified data. And just do it with so\Nmassive amounts of data, so that they Dialogue: 0,0:51:21.56,0:51:25.38,Default,,0000,0000,0000,,learn from it at a certain point.\NA: Yes, that's these poisoning attacks. So Dialogue: 0,0:51:25.38,0:51:30.02,Default,,0000,0000,0000,,when we talk about poison attacks, we are\Nessentially feeding bad training data and Dialogue: 0,0:51:30.02,0:51:35.12,Default,,0000,0000,0000,,we're trying to get them to learn bad\Nthings. Or I wouldn't say bad things, but Dialogue: 0,0:51:35.12,0:51:37.54,Default,,0000,0000,0000,,we're trying to get them to learn false\Ninformation. 
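The poisoning idea defined in this answer can be sketched in a few lines. Everything here is a hypothetical illustration, not anything from the talk: the logistic-regression learner, the synthetic two-feature data, and the 3:1 poison ratio are all invented to show how deliberately flipped labels degrade a model that keeps learning from submitted input.

```python
import numpy as np

# Hypothetical sketch of a poisoning attack: an attacker feeds deliberately
# mislabeled "training data" to a model that learns from user input.
# Learner, data, and ratios are invented for illustration only.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, epochs=200, lr=0.3):
    """Plain batch logistic regression with labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        grad = -((y * sigmoid(-margins))[:, None] * X).mean(axis=0)
        w -= lr * grad
    return w

def make_data(n, flip=False):
    """Feature 0 carries the signal; `flip` poisons the labels."""
    y = rng.choice([-1.0, 1.0], size=n)
    X = rng.normal(size=(n, 2))
    X[:, 0] += 1.5 * y              # true rule: label follows sign of x0
    return X, (-y if flip else y)

X_clean, y_clean = make_data(200)
w_clean = train_logreg(X_clean, y_clean)

# The attacker floods the training stream with 3x as much flipped-label data.
X_poison, y_poison = make_data(600, flip=True)
w_poisoned = train_logreg(np.vstack([X_clean, X_poison]),
                          np.concatenate([y_clean, y_poison]))

acc = lambda w: np.mean(np.sign(X_clean @ w) == y_clean)
print(f"clean model accuracy on clean data:    {acc(w_clean):.2f}")
print(f"poisoned model accuracy on clean data: {acc(w_poisoned):.2f}")
```

On this toy data the poisoned model's accuracy on clean inputs collapses, which is the whole point of the attack: the learner has absorbed false information.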
Dialogue: 0,0:51:37.54,0:51:42.78,Default,,0000,0000,0000,,And that already happens by accident all\Nthe time. So I think, the more we can... if Dialogue: 0,0:51:42.78,0:51:46.49,Default,,0000,0000,0000,,we share information and they have a\Npublicly available API, where they're Dialogue: 0,0:51:46.49,0:51:49.97,Default,,0000,0000,0000,,actually actively learning from our\Ninformation, then yes, I would say Dialogue: 0,0:51:49.97,0:51:55.18,Default,,0000,0000,0000,,poisoning is a great attack vector. And we\Ncan also share information about maybe how Dialogue: 0,0:51:55.18,0:51:58.36,Default,,0000,0000,0000,,that works.\NSo especially I would be intrigued if we Dialogue: 0,0:51:58.36,0:52:02.33,Default,,0000,0000,0000,,can do poisoning for adware and malicious\Nad targeting. Dialogue: 0,0:52:02.33,0:52:07.30,Default,,0000,0000,0000,,Mic 2: OK, thank you.\NHerald: One more question from the Dialogue: 0,0:52:07.30,0:52:12.30,Default,,0000,0000,0000,,internet and then we run out of time.\NK.J.: Oh no, sorry. Dialogue: 0,0:52:12.30,0:52:14.29,Default,,0000,0000,0000,,Herald: So you can find Katharine after.\NSignal-Angel: Thank you. One question from Dialogue: 0,0:52:14.29,0:52:18.21,Default,,0000,0000,0000,,the internet. What exactly can I do to\Nharden my model against adversarial Dialogue: 0,0:52:18.21,0:52:21.21,Default,,0000,0000,0000,,samples?\NK.J.: Sorry? Dialogue: 0,0:52:21.21,0:52:27.08,Default,,0000,0000,0000,,Signal: What exactly can I do to harden my\Nmodel against adversarial samples? Dialogue: 0,0:52:27.08,0:52:33.34,Default,,0000,0000,0000,,K.J.: Not much. What they have shown is,\Nthat if you train on a mixture of real Dialogue: 0,0:52:33.34,0:52:39.30,Default,,0000,0000,0000,,training data and adversarial data it's a\Nlittle bit harder to fool, but that just Dialogue: 0,0:52:39.30,0:52:44.72,Default,,0000,0000,0000,,means that you have to try more iterations\Nof adversarial input.
So right now, the Dialogue: 0,0:52:44.72,0:52:51.52,Default,,0000,0000,0000,,recommendation is to train on a mixture of\Nadversarial and real training data and to Dialogue: 0,0:52:51.52,0:52:56.33,Default,,0000,0000,0000,,continue to do that over time. And I would\Nargue that you need to maybe do data Dialogue: 0,0:52:56.33,0:53:00.40,Default,,0000,0000,0000,,validation on input. And if you do data\Nvalidation on input maybe you can Dialogue: 0,0:53:00.40,0:53:05.10,Default,,0000,0000,0000,,recognize abnormalities. But that's\Nbecause I come from mainly like production Dialogue: 0,0:53:05.10,0:53:09.22,Default,,0000,0000,0000,,levels, not theoretical, and I think maybe\Nyou should just test things, and if they Dialogue: 0,0:53:09.22,0:53:15.21,Default,,0000,0000,0000,,look weird you should maybe not take them\Ninto the system. Dialogue: 0,0:53:15.21,0:53:19.34,Default,,0000,0000,0000,,Herald: And that's all for the questions.\NI wish we had more time but we just don't. Dialogue: 0,0:53:19.34,0:53:21.66,Default,,0000,0000,0000,,Please give it up for Katharine Jarmul! Dialogue: 0,0:53:21.66,0:53:26.20,Default,,0000,0000,0000,,{\i1}Applause{\i0} Dialogue: 0,0:53:26.20,0:53:31.05,Default,,0000,0000,0000,,{\i1}34c3 postroll music{\i0} Dialogue: 0,0:53:31.05,0:53:47.95,Default,,0000,0000,0000,,subtitles created by c3subtitles.de\Nin the year 2019. Join, and help us!
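The "validate inputs and reject things that look weird" suggestion in that last answer can be sketched as a simple statistical check. Everything below is a hypothetical illustration: the synthetic training data, the per-feature z-score test, and the 4-sigma threshold are invented, and a real deployment would need a much more careful notion of "abnormal".

```python
import numpy as np

# Hypothetical sketch of input validation before a model sees the data:
# flag inputs that are statistically abnormal relative to the training set.
# Threshold and features are illustrative choices, not a hardened defense.

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 3))    # stand-in for real training data

mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

def looks_abnormal(x, z_threshold=4.0):
    """Reject inputs with any feature far outside the training range."""
    z = np.abs((np.asarray(x) - mu) / sigma)
    return bool(np.any(z > z_threshold))

print(looks_abnormal([0.5, -1.0, 0.2]))   # typical input -> False
print(looks_abnormal([0.5, -1.0, 9.0]))   # 9-sigma outlier -> True
```

This kind of check will not stop small, carefully crafted adversarial perturbations, which by design stay close to normal inputs; it only catches the crude, obviously out-of-distribution cases, matching the "test things, and if they look weird, don't take them in" spirit of the answer.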