0:00:00.099,0:00:16.600
Music[br]Herald: The next talk is about how risky
0:00:16.600,0:00:23.210
the software you use is. So you may have heard[br]about Trump versus a Russian security
0:00:23.210,0:00:30.949
company. We won't judge this, we won't[br]comment on it, but we dislike the
0:00:30.949,0:00:36.590
prejudgments in this case. Tim Carstens[br]and Parker Thompson will tell you a little
0:00:36.590,0:00:43.300
bit more about how risky the software you[br]use is. Tim Carstens is CITL's Acting
0:00:43.300,0:00:48.350
Director and Parker Thompson is CITL's[br]lead engineer. Please welcome with a very,
0:00:48.350,0:00:53.879
very warm applause: Tim and Parker![br]Thanks.
0:00:53.879,0:01:05.409
Applause[br]Tim Carstens: Howdy, howdy. So my name is
0:01:05.409,0:01:13.010
Tim Carstens. I'm the acting director of[br]the Cyber Independent Testing Lab. That's
0:01:13.010,0:01:19.039
four words there; we'll talk about all four[br]today, especially cyber. With me today is
0:01:19.039,0:01:25.760
our lead engineer, Parker Thompson. Not on[br]stage are our other collaborators: Patrick
0:01:25.760,0:01:32.929
Stach, Sarah Zatko, and present in the[br]room but not on stage - Mudge. So today
0:01:32.929,0:01:37.010
we're going to be talking about our work.[br]The lead-in, the introduction that was
0:01:37.010,0:01:40.289
given, is phrased in terms of Kaspersky and[br]all of that. I'm not gonna be speaking
0:01:40.289,0:01:45.370
about Kaspersky and I guarantee you I'm[br]not gonna be speaking about my president.
0:01:45.370,0:01:50.010
Right, yeah? Okay. Thank you.[br]Applause
0:01:50.010,0:01:55.289
All right, so why don't we go ahead and[br]kick off: I'll mention now parts of this
0:01:55.289,0:02:00.539
presentation are going to be quite[br]technical. Not most of it, and I will
0:02:00.539,0:02:04.030
always include analogies and all these[br]other things if you are here in security
0:02:04.030,0:02:10.530
but not a bit-twiddler. But if you do want[br]to be able to review some of the technical
0:02:10.530,0:02:14.810
material - if I go through it too fast, if you[br]like to read, if you're a mathematician or
0:02:14.810,0:02:20.510
if you are a computer scientist - our slides[br]are already available for download at this
0:02:20.510,0:02:25.400
site here. We thank our partners for[br]getting that set up for
0:02:25.400,0:02:31.630
us. Let's get started on the real[br]material here. Alright, so we are CITL: a
0:02:31.630,0:02:35.770
nonprofit organization based in the United[br]States founded by our chief scientist
0:02:35.770,0:02:43.020
Sarah Zatko and our board chair Mudge. And[br]our mission is a public good mission - we
0:02:43.020,0:02:47.400
are hackers but our mission here is[br]actually to look out for people who do not
0:02:47.400,0:02:50.460
know very much about machines[br]or as much as the other hackers do.
0:02:50.460,0:02:56.030
Specifically, we seek to improve the state[br]of software security by providing the
0:02:56.030,0:03:01.340
public with accurate reporting on the[br]security of popular software, right? And
0:03:01.340,0:03:05.520
so that was a mouthful for you. But no[br]doubt, no doubt, every single one of you
0:03:05.520,0:03:10.700
has received questions of the form: what[br]do I run on my phone, what do I do with
0:03:10.700,0:03:13.950
this, what do I do with that, how do I[br]protect myself - all these other things
0:03:13.950,0:03:19.770
lots of people in the general public[br]looking for agency in computing. No one's
0:03:19.770,0:03:25.000
offering it to them, and so we're trying[br]to go ahead and provide a forcing function
0:03:25.000,0:03:29.980
on the software field in order to, you[br]know, again be able to enable consumers
0:03:29.980,0:03:36.480
and users and all these things. Our social[br]good work is funded largely by charitable
0:03:36.480,0:03:40.820
monies from the Ford Foundation whom we[br]thank a great deal, but we also have major
0:03:40.820,0:03:44.920
partnerships with Consumer Reports, which[br]is a major organization in the United
0:03:44.920,0:03:51.620
States that generally, broadly, looks at[br]consumer goods for safety and performance.
0:03:51.620,0:03:55.690
But we also partner with The Digital[br]Standard, which probably would be of great
0:03:55.690,0:03:59.460
interest to many people here at Congress[br]as it is a holistic standard for
0:03:59.460,0:04:04.340
protecting user rights. We'll talk about[br]some of the work that goes into those
0:04:04.340,0:04:10.250
things here in a bit, but first I want to[br]give the big picture of what it is we're
0:04:10.250,0:04:17.940
really trying to do in one short[br]little sentence. Something like this but
0:04:17.940,0:04:23.710
for security, right? What are the[br]important facts, how does it rate, you
0:04:23.710,0:04:26.810
know, is it easy to consume, is it easy to[br]go ahead and look and say this thing is
0:04:26.810,0:04:31.300
good this thing is not good. Something[br]like this, but for software security.
0:04:33.120,0:04:39.190
Sounds hard, doesn't it? So I want to talk[br]a little bit about what I mean by
0:04:39.190,0:04:44.870
something like this.[br]There are lots of consumer outlook and
0:04:44.870,0:04:50.270
watchdog and protection groups - some[br]private, some government, which are
0:04:50.270,0:04:54.820
looking to do this for various things that[br]are not software security. And you can
0:04:54.820,0:04:58.210
see some examples here that are big in the[br]United States - I happen to not like these
0:04:58.210,0:05:02.120
as much as some of the newer consumer[br]labels coming out from the EU. But
0:05:02.120,0:05:05.080
nonetheless they are examples of the kinds[br]of things people have done in other
0:05:05.080,0:05:10.870
fields, fields that are not security to[br]try to achieve that same end. And when
0:05:10.870,0:05:17.410
these things work well, it is for three[br]reasons: One, it has to contain the
0:05:17.410,0:05:22.960
relevant information. Two: it has to be[br]based in fact, we're not talking opinions,
0:05:22.960,0:05:28.800
this is not a book club or something like[br]that. And three: it has to be actionable,
0:05:28.800,0:05:32.520
it has to be actionable - you have to be[br]able to know how to make a decision based
0:05:32.520,0:05:36.370
on it. How do you do that for software[br]security? How do you do that for
0:05:36.370,0:05:43.760
software security? So the rest of the talk[br]is going to go in three parts.
0:05:43.760,0:05:49.450
First, we're going to give a bit of an[br]overview for more of the consumer facing
0:05:49.450,0:05:52.820
side of things that we do: look at[br]some data that we have reported on earlier
0:05:52.820,0:05:57.260
and all these other kinds of good things.[br]We're then going to go ahead and get
0:05:57.940,0:06:06.011
terrifyingly, terrifyingly technical. And[br]then after that we'll talk about tools to
0:06:06.011,0:06:09.560
actually implement all this stuff. The[br]technical part comes before the tools. So
0:06:09.560,0:06:12.170
it just tells you how terrifyingly[br]technical we're gonna get. It's gonna be
0:06:12.170,0:06:19.680
fun right. So how do you do this for[br]software security: a consumer version. So,
0:06:19.680,0:06:25.010
if you set forth to the task of trying to[br]measure software security, right, many
0:06:25.010,0:06:27.680
people here probably do work in the[br]security field perhaps as consultants
0:06:27.680,0:06:32.281
doing reviews; certainly I used to. Then[br]probably what you're thinking to yourself
0:06:32.281,0:06:38.650
right now is that there are lots and lots[br]and lots and lots of things that affect
0:06:38.650,0:06:44.150
the security of a piece of software. Some[br]of which are, mmm, you're only gonna see
0:06:44.150,0:06:47.640
them if you go reversing. And some of[br]which are just you know kicking around on
0:06:47.640,0:06:51.580
the ground waiting for you to notice,[br]right. So we're going to talk about both
0:06:51.580,0:06:55.919
of those kinds of things that you might[br]measure. But here you see these giant
0:06:55.919,0:07:03.380
charts that basically go through on the[br]left - on the left we have Microsoft Excel
0:07:03.380,0:07:08.310
on OS X; on the right, Google Chrome for OS[br]X. This is a couple years old at this point,
0:07:08.310,0:07:12.850
maybe one and a half years old but over[br]here I'm not expecting you to be able to
0:07:12.850,0:07:16.250
read these - the real point is to say look[br]at all of the different things you can
0:07:16.250,0:07:20.490
measure very easily.[br]How do you distill it, how do you boil it
0:07:20.490,0:07:26.770
down, right. So this is the opposite of[br]a good consumer safety label. This is just,
0:07:26.770,0:07:29.780
um, if you've ever done any consulting, this is[br]the kind of report you hand a client to
0:07:29.780,0:07:32.870
tell them how good their software is,[br]right? It's the opposite of consumer
0:07:32.870,0:07:39.800
grade. But the reason I'm showing it here[br]is because, you know, I'm gonna call out
0:07:39.800,0:07:42.650
some things and maybe you can't process[br]all of this because it's too much
0:07:42.650,0:07:46.911
material, you know. But I'm gonna call out[br]some things and once I call them out, just
0:07:46.911,0:07:52.949
like NP you're gonna recognize them[br]instantly. So for example, Excel, at the
0:07:52.949,0:07:56.820
time of this review - look at this column[br]of dots. What are these dots telling you?
0:07:56.820,0:07:59.990
It's telling you: look at all these[br]libraries - all of them are 32-bit only.
0:07:59.990,0:08:07.180
Not 64 bits, not 64 bits. Take a look at[br]Chrome - exact opposite, exact opposite
0:08:07.180,0:08:14.021
64-bit binary, right? What are some other[br]things? Excel, again, on OS X - maybe you can
0:08:14.021,0:08:19.550
see these danger warning signs that go[br]straight up the whole thing.
0:08:19.550,0:08:27.520
That's the absence of major hardening[br]flags in the binary headers.
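As a rough illustration of what header-level hardening flags are, here is a minimal checksec-style sketch in Python using pyelftools. ELF is shown for brevity; the talk's example is Mach-O, and CITL's real analyzers are format-generic.

    # Reports bitness, NX stack, and PIE from the ELF headers alone.
    from elftools.elf.elffile import ELFFile

    def header_observables(path):
        with open(path, "rb") as f:
            elf = ELFFile(f)
            obs = {
                "64bit": elf.elfclass == 64,
                # ET_DYN means position-independent code, hence ASLR-friendly
                "pie": elf.header["e_type"] == "ET_DYN",
                "nx": True,
            }
            for seg in elf.iter_segments():
                if seg["p_type"] == "PT_GNU_STACK":
                    # the stack is executable if the X bit (0x1) is set
                    obs["nx"] = not (seg["p_flags"] & 1)
            return obs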
0:08:27.520,0:08:31.919
We'll talk about what that means[br]exactly in a bit. But also if you hop over
0:08:31.919,0:08:35.639
here you'll see like, yeah yeah yeah,[br]Chrome has all the different hardening
0:08:35.639,0:08:41.578
protections that a binary might enable - on[br]OS X, that is - but it also has more dots in
0:08:41.578,0:08:44.649
this column here off to the right. And[br]what do those dots represent?
0:08:44.649,0:08:52.050
Those dots represent functions, functions[br]that historically have been the source of
0:08:52.050,0:08:54.460
you know, problems - functions that are[br]very hard to call correctly. If you're a
0:08:54.460,0:08:59.029
C programmer the "gets" function is a good[br]example. But there are lots of them. And
0:08:59.029,0:09:03.279
And you can see here that Chrome doesn't mind;[br]it uses them all a bunch. And Excel, not so
0:09:03.279,0:09:08.360
much. And if you know the history of[br]Microsoft and the Trustworthy Computing
0:09:08.360,0:09:12.379
initiative and the SDL and all of that, you[br]will know that a very long time ago
0:09:12.379,0:09:17.180
Microsoft made the decision and they said[br]we're gonna start purging some of these
0:09:17.180,0:09:22.009
risky functions from our code bases[br]because we think it's easier to ban them
0:09:22.009,0:09:24.990
than teach our devs to use them correctly.[br]And you see that reverberating out in
0:09:24.990,0:09:28.980
their software. Google on the other hand[br]says yeah yeah yeah those functions can be
0:09:28.980,0:09:31.920
dangerous to use but if you know how to[br]use them they can be very good and so
0:09:31.920,0:09:38.959
they're permitted. The point all of this[br]is building to is that if you start by
0:09:38.959,0:09:42.540
just measuring every little thing that[br]like your static analyzers can detect in a
0:09:42.540,0:09:47.760
piece of software. Two things: one, you[br]wind up with way more data than you can
0:09:47.760,0:09:55.269
show in a slide. And two: the engineering[br]process, the software development life
0:09:55.269,0:09:59.779
cycle that went into the software will[br]leave behind artifacts that tell you
0:09:59.779,0:10:05.170
something about the decisions that went[br]into designing that engineering process.
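One concrete example of such an artifact, as a hedged sketch (this symbol check is a well-known trick, not necessarily CITL's method): if a binary references __stack_chk_fail, the compiler emitted stack canaries somewhere in it.

    # Crude process-artifact probe: does the binary carry glibc's stack
    # canary failure handler among its dynamic symbols?
    import subprocess

    def has_stack_guard(path):
        out = subprocess.run(["nm", "-D", path],
                             capture_output=True, text=True).stdout
        return "__stack_chk_fail" in out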
0:10:05.170,0:10:10.179
And so you know, Google for example:[br]quite rigorous as far as hitting you know
0:10:10.179,0:10:14.379
GCC with the flags that enable all of the[br]compiler protections. Microsoft may be
0:10:14.379,0:10:19.949
less good at that, but much more rigid in[br]things that were very popular ideas when
0:10:19.949,0:10:24.199
they introduced Trustworthy Computing,[br]alright. So the big takeaway from this
0:10:24.199,0:10:29.040
material is that again the software[br]engineering process results in artifacts
0:10:29.040,0:10:35.610
in the software that people can find.[br]Alright. Okay, so that's a whole
0:10:35.610,0:10:40.579
bunch of data, certainly it's not a[br]consumer-friendly label. So how do you
0:10:40.579,0:10:45.899
start to get in towards the consumer zone?[br]Well, the main defect of the big reports
0:10:45.899,0:10:51.240
that we just saw is that it's too much[br]information. It's a very dense on data but
0:10:51.240,0:10:55.649
it's very hard to distill it to the "so[br]what" of it, right?
0:10:55.649,0:11:00.470
And so this here is one of our earlier[br]attempts to go ahead and do that
0:11:00.470,0:11:04.990
distillation. What are these charts, how[br]did we come up with them? Well, on the
0:11:04.990,0:11:08.490
previous slide when we saw all these[br]different factors that you can analyze in
0:11:08.490,0:11:14.189
software - basically here's how[br]we arrived at this. For each of those
0:11:14.189,0:11:18.639
things: pick a weight. Go ahead and[br]compute a score, average against the
0:11:18.639,0:11:22.110
weight: tada, now you have some number.[br]You can do that for each of the libraries
0:11:22.110,0:11:25.819
in the piece of software. And if you do[br]that for each of the libraries in the
0:11:25.819,0:11:29.399
software you can then go ahead and produce[br]these histograms to show, you know, like
0:11:29.399,0:11:35.619
this percentage of the DLLs had a score in[br]this range. Boom, there's a bar, right.
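A toy version of that scoring, with made-up weights and observable names, just to make the arithmetic concrete:

    # Each observable gets a weight; a library's score is the weighted
    # fraction of observables it satisfies; scores are binned for the chart.
    WEIGHTS = {"64bit": 1.0, "pie": 3.0, "nx": 3.0, "stack_guard": 2.0}

    def library_score(obs):          # obs: e.g. {"pie": True, "nx": False}
        total = sum(WEIGHTS.values())
        return sum(w for k, w in WEIGHTS.items() if obs.get(k)) / total

    def histogram(scores, bins=10):
        counts = [0] * bins
        for s in scores:
            counts[min(int(s * bins), bins - 1)] += 1
        return counts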
0:11:35.619,0:11:39.269
How do you pick those weights? We'll talk[br]about that in a sec - it's very technical.
0:11:39.269,0:11:45.339
But the takeaway, though, is, you know,[br]that you wind up with these charts. Now
0:11:45.339,0:11:48.329
I've obscured the labels, I've obscured[br]the labels and the reason I've done that
0:11:48.329,0:11:52.329
is because I don't really care that much[br]about the actual counts. I want to talk
0:11:52.329,0:11:57.420
about the shapes, the shapes of these[br]charts: it's a qualitative thing.
0:11:57.420,0:12:02.540
So here: good scores appear on the right,[br]bad scores appear on the left. The
0:12:02.540,0:12:06.269
The histogram measures all the libraries and[br]components, and so a very secure piece of
0:12:06.269,0:12:12.879
software in this model manifests as a tall[br]bar far to the right. And you can see a
0:12:12.879,0:12:17.910
clear example in our custom Gentoo[br]build. Anyone here who is a Gentoo fan knows -
0:12:17.910,0:12:21.189
hey I'm going to install this thing, I[br]think I'm going to go ahead and turn on
0:12:21.189,0:12:25.120
every single one of those flags, and lo[br]and behold if you do that yeah you wind up
0:12:25.120,0:12:30.520
with a tall bar far to the right. Here's[br]Ubuntu 16 - I bet it's 16.04 but I don't
0:12:30.520,0:12:35.959
recall exactly, 16 LTS. Here you see a lot[br]of tall bars to the right - not quite as
0:12:35.959,0:12:39.619
consolidated as a custom Gentoo build, but[br]that makes sense doesn't it right? Because
0:12:39.619,0:12:44.769
then, you know, you don't build your whole[br]Ubuntu install yourself. Now I want to contrast. I
0:12:44.769,0:12:50.360
want to contrast. So over here on the[br]right we see in the same model, an
0:12:50.360,0:12:55.929
analysis of the firmware obtained from two[br]smart televisions. Last year's models from
0:12:55.929,0:12:59.920
Samsung and LG. And here are the model[br]numbers. We did this work in concert with
0:12:59.920,0:13:05.039
Consumer Reports. And what do you notice[br]about these histograms, right. Are the
0:13:05.039,0:13:11.790
bars tall and to the right? No, they look[br]almost normal, not quite, but that doesn't
0:13:11.790,0:13:16.619
really matter. The main thing that matters[br]is that this is the shape you would expect
0:13:16.619,0:13:23.649
to get if you were playing a random game[br]basically to decide what security features
0:13:23.649,0:13:27.879
to enable in your software. This is the[br]shape of not having a security program, is
0:13:27.879,0:13:33.540
my bet. That's my bet. And so what do you[br]see? You see heavy concentration here in
0:13:33.540,0:13:38.800
the middle, right, that seems fair, and[br]like it tails off. On the Samsung nothing
0:13:38.800,0:13:43.549
scored all that great, same on the LG.[br]Both of them are you know running their
0:13:43.549,0:13:46.639
respective operating systems and they're[br]basically just inheriting whatever
0:13:46.639,0:13:51.249
security came from whatever open source[br]thing they forked, right.
0:13:51.249,0:13:55.000
So this is this is the kind of message,[br]this right here is the kind of thing that
0:13:55.000,0:14:01.929
we exist for. This is us[br]producing charts showing that the current
0:14:01.929,0:14:08.019
practices in the not-so consumer-friendly[br]space of running your own Linux distros
0:14:08.019,0:14:13.290
far exceed the products being delivered,[br]certainly in this case in the smart TV
0:14:13.290,0:14:24.941
market. But I think you might agree with[br]me, it's much worse than this. So let's
0:14:24.941,0:14:28.319
dig into that a little bit more, I have a[br]different point that I want to make about
0:14:28.319,0:14:33.959
that same data set. So this table here[br]is again looking at the LG,
0:14:33.959,0:14:39.769
Samsung and Gentoo Linux installations.[br]And on this table we're just pulling out
0:14:39.769,0:14:43.839
some of the easy to identify security[br]features you might enable in a binary
0:14:43.839,0:14:49.989
right. So percentage of binaries with[br]address space layout randomization, right?
0:14:49.989,0:14:56.429
Let's talk about that: on our Gentoo build[br]it's over 99%. That also holds for the
0:14:56.429,0:15:02.699
Amazon Linux AMI - it holds in Ubuntu.[br]ASLR is incredibly common in modern Linux.
0:15:02.699,0:15:09.290
And despite that, fewer than 70 percent of[br]the binaries on the LG television had it
0:15:09.290,0:15:13.739
enabled. Fewer than 70 percent. And the[br]Samsung was doing, you know, better than
0:15:13.739,0:15:19.780
that I guess, but 80 percent is pretty[br]disappointing when a default Linux
0:15:19.780,0:15:25.190
install, you know, mainstream Linux distro[br]is going to get you 99, right? And it only
0:15:25.190,0:15:28.079
gets worse, it only gets worse right you[br]know?
0:15:28.079,0:15:32.379
RELRO support - if you don't know what that[br]is, that's OK, but if you do: look at this
0:15:32.379,0:15:37.809
abysmal coverage[br]coming out of these IoT devices. Very sad.
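A sketch of how such firmware-wide percentages can be computed - illustrative only, with a deliberately crude PIE test (ET_DYN also matches shared objects), not the lab's pipeline:

    import os, struct

    def is_pie(path):
        # ELF e_type lives at offset 0x10; ET_DYN (3) => position-independent.
        with open(path, "rb") as f:
            hdr = f.read(0x12)
        if len(hdr) < 0x12 or hdr[:4] != b"\x7fELF":
            return None                       # not an ELF file
        endian = "<" if hdr[5] == 1 else ">"  # EI_DATA: 1 little, 2 big
        return struct.unpack(endian + "H", hdr[0x10:0x12])[0] == 3

    def aslr_coverage(root):
        total = pie = 0
        for dirpath, _, names in os.walk(root):
            for n in names:
                try:
                    flag = is_pie(os.path.join(dirpath, n))
                except OSError:
                    continue
                if flag is None:
                    continue
                total += 1
                pie += flag
        return pie / total if total else 0.0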
0:15:37.809,0:15:40.749
And you see it over and over and over[br]again. I'm showing this because some
0:15:40.749,0:15:46.339
people in this room or watching this video[br]ship software - and I have a message, I
0:15:46.339,0:15:50.309
have a message to those people who ship[br]software who aren't working on say Chrome
0:15:50.309,0:15:58.609
or any of the other big-name Pwn2Own kinds[br]of targets. Look at this: you can be
0:15:58.609,0:16:02.480
leading the pack by mastering the[br]fundamentals. You can be leading the pack
0:16:02.480,0:16:07.079
by mastering the fundamentals. This is a[br]point that really as a security field we
0:16:07.079,0:16:11.179
really need to be driving home. You know,[br]one of the things that we're seeing here
0:16:11.179,0:16:15.709
in our data is that if you're the vendor[br]who is shipping the product everyone has
0:16:15.709,0:16:19.390
heard of in the security field, then maybe[br]your game is pretty decent, right? If
0:16:19.390,0:16:23.599
you're shipping say Windows or if you're[br]shipping Firefox or whatever. But if
0:16:23.599,0:16:26.149
you're if you're doing one of these things[br]where people are just kind of beating you
0:16:26.149,0:16:30.619
up for default passwords, then your[br]problems go way deeper than just default
0:16:30.619,0:16:35.399
passwords, right? Like, the house, the[br]house is messy, it needs to be cleaned,
0:16:35.399,0:16:43.190
needs to be cleaned. So the rest of the[br]talk like I said we're going to be
0:16:43.190,0:16:47.019
discussing a lot of other things that[br]amount to getting you know a peek behind
0:16:47.019,0:16:50.689
the curtain and where some of these things[br]come from and getting very specific about
0:16:50.689,0:16:54.420
how this business works, but if you're[br]interested in more of the high level
0:16:54.420,0:16:58.980
material - especially if you're interested[br]in interesting results and insights, some
0:16:58.980,0:17:01.949
of which I'm going to have here later. But[br]I really encourage you though to take a
0:17:01.949,0:17:06.750
look at the talk from this past summer by[br]our chief scientist Sarah Zatko, which is
0:17:06.750,0:17:11.217
predominantly on the topic of surprising[br]results in the data.
0:17:14.892,0:17:18.539
Today, though, this being our first time[br]presenting here in Europe, we figured we
0:17:18.539,0:17:22.869
would take more of an overarching kind of[br]view. What we're doing and why we're
0:17:22.869,0:17:26.619
excited about it and where it's headed. So[br]we're about to move into a little bit of
0:17:26.619,0:17:31.600
the underlying theory, you know. Why do I[br]think it's reasonable to even try to
0:17:31.600,0:17:35.429
measure the security of software from a[br]technical perspective. But before we can
0:17:35.429,0:17:39.310
get into that I need to talk a little bit[br]about our goals, so that the decisions and
0:17:39.310,0:17:45.380
the theory; the motivation is clear,[br]right. Our goals are really simple: it's a
0:17:45.380,0:17:51.399
very easy organization to run because of[br]that. Goal number one: remain independent
0:17:51.399,0:17:56.260
of vendor influence. We are not the first[br]organization to purport to be looking out
0:17:56.260,0:18:02.470
for the consumer. But unlike many of our[br]predecessors, we are not taking money from
0:18:02.470,0:18:09.920
the people we review, right? Seems like[br]some basic stuff. Seems like some basic
0:18:09.920,0:18:17.539
stuff right? Thank you, okay.[br]Two: automated, comparable, quantitative
0:18:17.539,0:18:23.790
analysis. Why automated? Well, we need our[br]test results to be reproducible. If Tim
0:18:23.790,0:18:27.720
goes in, opens up your software in IDA, and[br]finds a bunch of stuff
0:18:27.720,0:18:32.620
- that's not a very repeatable kind[br]of standard for things. And so we're
0:18:32.620,0:18:36.440
interested in things which are automated.[br]We'll talk about, maybe a few hackers in
0:18:36.440,0:18:39.940
here know how hard that is. We'll talk[br]about that. But then, last, we're also
0:18:39.940,0:18:43.539
acting as a watchdog - we're[br]protecting the interests of the user, the
0:18:43.539,0:18:47.630
consumer, however you would like to look[br]at it. But we also have three non-goals,
0:18:47.630,0:18:52.510
three non-goals that are equally[br]important. One: we have a non-goal of
0:18:52.510,0:18:56.859
finding and disclosing vulnerabilities. I[br]reserve the right to find and disclose
0:18:56.859,0:19:01.370
vulnerabilities. But that's not my goal,[br]it's not my goal. Another non-goal is to
0:19:01.370,0:19:04.840
tell software vendors what to do. If a[br]vendor asks me how to remediate their
0:19:04.840,0:19:08.500
terrible score, I will tell them what we[br]are measuring but I'm not there to help
0:19:08.500,0:19:11.950
them remediate. It's on them to be able to[br]ship a secure product without me holding
0:19:11.950,0:19:19.049
their hand. We'll see. And then three:[br]non-goal, perform free security testing
0:19:19.049,0:19:24.090
for vendors. Our testing happens after you[br]release. Because when you release your
0:19:24.090,0:19:28.980
software you are telling people it is[br]ready to be used. Is it really though, is
0:19:28.980,0:19:31.799
it really though, right?[br]Applause
0:19:31.799,0:19:37.309
Yeah, thank you. Yeah, so we are not there[br]to give you a preview of what your score
0:19:37.309,0:19:42.270
will be. There is no sum of money you can[br]hand me that will get you an early preview
0:19:42.270,0:19:46.169
of what your score is - you can try me,[br]you can try me: there's a fee for trying
0:19:46.169,0:19:50.260
me. There's a fee for trying me. But I'm[br]not gonna look at your stuff until I'm
0:19:50.260,0:19:58.549
ready to drop it, right. Yeah - you're welcome - yeah.[br]All right. So moving into this theory
0:19:58.549,0:20:02.770
territory. Three big questions, three big[br]questions that need to be addressed if you
0:20:02.770,0:20:06.990
want to do our work efficiently: what[br]works, what works for improving security,
0:20:06.990,0:20:13.030
what are the things that you need, or that you[br]really want to see, in software. Two: how
0:20:13.030,0:20:17.120
do you recognize when it's being done?[br]It's no good if someone hands you a piece
0:20:17.120,0:20:20.169
of software and says, "I've done all the[br]latest things" and it's a complete black
0:20:20.169,0:20:24.529
box. If you can't check the claim, the[br]claim is as good as false, in practical
0:20:24.529,0:20:30.210
terms, period, right. Software has to be[br]reviewable, or, a priori, I'll think you're
0:20:30.210,0:20:35.730
full of it. And then three: who's doing it[br]- of all the things that work, that you
0:20:35.730,0:20:39.820
can recognize, who's actually doing it.[br]You know, let's go ahead - our field is
0:20:39.820,0:20:47.429
famous for ruining people's holidays and[br]weekends over Friday bug disclosures, you
0:20:47.429,0:20:51.799
know New Year's Eve bug disclosures. I[br]would like us to also be famous for
0:20:51.799,0:20:59.250
calling out those teams and those software[br]organizations which are being as good as
0:20:59.250,0:21:04.240
the bad guys are being bad, yeah? So[br]provide someone an incentive to be maybe
0:21:04.240,0:21:19.460
happy to see us for a change, right. Okay,[br]so thank you. Yeah, all right. So how do
0:21:19.460,0:21:26.120
we actually pull these things off; the[br]basic idea. So, I'm going to get into some
0:21:26.120,0:21:29.470
deeper theory: if you're not a theorist I[br]want you to focus on this slide.
0:21:29.470,0:21:33.429
And I'm gonna bring it back, it's not all[br]theory from here on out after this but if
0:21:33.429,0:21:39.289
you're not a theorist I really want you to[br]focus on this slide. The basic motivation,
0:21:39.289,0:21:42.560
the basic motivation behind what we're[br]doing; the technical motivation - why we
0:21:42.560,0:21:47.020
think that it's possible to measure and[br]report on security. It all boils down to
0:21:47.020,0:21:53.020
this, right. So we start with a thought[br]experiment, a Gedankenexperiment, right? Given a
0:21:53.020,0:21:58.650
piece of software we can ask: overall, how[br]secure is it? Kind of a vague question but
0:21:58.650,0:22:03.000
you could imagine you know there's[br]versions of that question. And two: what
0:22:03.000,0:22:07.820
are its vulnerabilities. Maybe you want to[br]nitpick with me about what the word
0:22:07.820,0:22:11.240
vulnerability means but broadly you know[br]this is a much more specific question
0:22:11.240,0:22:18.850
right. And here's the enticing[br]thing: the first question appears to ask
0:22:18.850,0:22:24.929
for less information than the second[br]question. And maybe if we were taking bets
0:22:24.929,0:22:28.520
I would put my money on, yes, it actually[br]does ask for less information. What do I
0:22:28.520,0:22:33.240
mean by that what do I mean by that? Well,[br]let's say that someone told you all of the
0:22:33.240,0:22:38.389
vulnerabilities in a system right? They[br]said, "Hey, I got them all", right? You're
0:22:38.389,0:22:41.580
like all right that's cool, that's cool.[br]And if someone asks you hey how secure is
0:22:41.580,0:22:45.440
this system you can give them a very[br]precise answer. You can say it has N
0:22:45.440,0:22:48.620
vulnerabilities, and they're of this kind[br]and like all this stuff right so certainly
0:22:48.620,0:22:54.630
the second question is enough to answer[br]the first. But, is the reverse true?
0:22:54.630,0:22:58.470
Namely, if someone were to tell you, for[br]example, "hey, this piece of software has
0:22:58.470,0:23:06.210
exactly 32 vulnerabilities in it." Does[br]that make it easier to find any of them?
0:23:06.210,0:23:12.320
Right, there's room to maybe do that[br]using some algorithms that are not yet in
0:23:12.320,0:23:15.840
existence.[br]Certainly the computer scientists in here
0:23:15.840,0:23:19.450
are saying, "well, you know, yeah maybe[br]counting the number of SAT solutions
0:23:19.450,0:23:22.700
doesn't help you practically find[br]solutions. But it might and we just don't
0:23:22.700,0:23:27.120
know." Okay fine fine fine. Maybe these[br]things are the same, but the my experience
0:23:27.120,0:23:30.410
in security, and the experience of many[br]others perhaps is that they probably
0:23:30.410,0:23:36.510
aren't the same question. And this[br]motivates what I'm calling here Zatko's
0:23:36.510,0:23:40.870
question, which is basically asking for an[br]algorithm that demonstrates that the first
0:23:40.870,0:23:45.970
question is easier than the second[br]question, right. So Zatko's question:
0:23:45.970,0:23:49.360
develop a heuristic which can[br]efficiently answer one, but not
0:23:49.360,0:23:53.549
necessarily two. If you're looking for a[br]metaphor, if you want to know why I care
0:23:53.549,0:23:56.640
about this distinction, I want you to[br]think about some certain controversial
0:23:56.640,0:24:00.990
technologies: maybe think about say[br]nuclear technology, right. An algorithm
0:24:00.990,0:24:04.529
that answers one, but not two, it's a very[br]safe algorithm to publish. Very safe
0:24:04.529,0:24:11.369
algorithm to publish indeed. Okay, Claude[br]Shannon would like more information. Happy
0:24:11.369,0:24:16.039
to oblige. Let's take a look at this[br]question from a different perspective
0:24:16.039,0:24:19.379
maybe a more hands-on perspective: the[br]hacker perspective, right? If you're a
0:24:19.379,0:24:22.389
hacker and you're watching me up here and[br]I'm waving my hands around and I'm showing
0:24:22.389,0:24:25.930
you charts maybe you're thinking to[br]yourself yeah boy, what do you got? Right,
0:24:25.930,0:24:29.730
how does this actually go. And maybe what[br]you're thinking to yourself is that, you
0:24:29.730,0:24:34.350
know, finding good vulns: that's an[br]artisan craft right? You're in IDA, you
0:24:34.350,0:24:37.250
know, you're reversing all day, you're doing[br]all these things, I don't
0:24:37.250,0:24:41.429
know all that stuff. And like, you know,[br]this kind of clever game; cleverness is
0:24:41.429,0:24:47.210
not like this thing that feels very[br]automatable. But you know on the other
0:24:47.210,0:24:51.360
hand there are a lot of tools that do[br]automate things and so it's not completely
0:24:51.360,0:24:57.110
not automatable.[br]And if you're into fuzzing then perhaps
0:24:57.110,0:25:01.500
you are aware of this very simple[br]observation, which is that if your harness
0:25:01.500,0:25:04.940
is perfect if you really know what you're[br]doing if you have a decent fuzzer then in
0:25:04.940,0:25:08.840
principle fuzzing can find every single[br]problem. You have to be able to look for
0:25:08.840,0:25:13.870
it, you have to be able to harness for it, but[br]in principle it will, right. So the hacker
0:25:13.870,0:25:19.210
perspective on Zatko's question is maybe[br]of two minds on the one hand assessing
0:25:19.210,0:25:22.399
security is a game of cleverness, but on[br]the other hand we're kind of right now at
0:25:22.399,0:25:25.880
the cusp of having some game-changing tech[br]really go - maybe you're saying like
0:25:25.880,0:25:29.580
fuzzing is not at the cusp, I promise it's[br]just at the cusp. We haven't seen all the
0:25:29.580,0:25:33.690
fuzzing has to offer right and so maybe[br]there's room maybe there's room for some
0:25:33.690,0:25:41.200
automation to be possible in pursuit of[br]Zatko's question. Of course, there are
0:25:41.200,0:25:45.920
many challenges still in, you know, using[br]existing hacker technology. Mostly of the
0:25:45.920,0:25:49.570
form of various open questions. For[br]example if you're into fuzzing, you know,
0:25:49.570,0:25:53.039
hey: identifying unique crashes. There's[br]an open question. We'll talk about some of
0:25:53.039,0:25:57.060
those, we'll talk about some of those. But[br]I'm going to offer another perspective
0:25:57.060,0:26:01.490
here: so maybe you're not in the business[br]of doing software reviews but you know a
0:26:01.490,0:26:05.929
little computer science. And maybe that[br]computer science has you wondering what's
0:26:05.929,0:26:12.679
this guy talking about, right. I'm here to[br]acknowledge that. So whatever you think
0:26:12.679,0:26:16.610
the word security means: I've got a list[br]of questions up here. Whatever you think
0:26:16.610,0:26:19.502
the word security means, probably, some of[br]these questions are relevant to your
0:26:19.502,0:26:23.299
definition. Right.[br]Does the software have a hidden backdoor
0:26:23.299,0:26:26.600
or any kind of hidden functionality, does[br]it handle crypto material correctly, etc,
0:26:26.600,0:26:30.429
so forth. Anyone in here who knows some[br]computability theory knows that
0:26:30.429,0:26:34.240
every single one of these questions and[br]many others like them are undecidable due
0:26:34.240,0:26:37.960
to reasons essentially no different than[br]the reason the halting problem is
0:26:37.960,0:26:41.330
undecidable, which is to say due to[br]reasons essentially first identified and
0:26:41.330,0:26:46.019
studied by Alan Turing a long time before[br]we had microarchitectures and all these
0:26:46.019,0:26:50.350
other things. And so, the computability[br]perspective says that, you know, whatever
0:26:50.350,0:26:54.640
your definition of security is ultimately[br]you have this recognizability problem:
0:26:54.640,0:26:57.900
fancy way of saying that algorithms won't[br]be able to recognize secure software
0:26:57.900,0:27:02.690
because of these undecidability[br]issues. The takeaway, the takeaway is that
0:27:02.690,0:27:07.090
the computability angle on all of this[br]says: anyone who's in the business that
0:27:07.090,0:27:12.394
we're in has to use heuristics. You have[br]to, you have to.
0:27:15.006,0:27:24.850
All right, this guy gets it. All right, so[br]on the tech side our last technical
0:27:24.850,0:27:28.380
perspective that we're going to take now[br]is certainly the most abstract which is
0:27:28.380,0:27:32.220
the Bayesian perspective, right. So if[br]you're a frequentist, you need to get with
0:27:32.220,0:27:37.490
the times you know it's everything[br]Bayesian now. So, let's talk about this
0:27:37.490,0:27:43.899
for a bit. Only two slides of math, I[br]promise, only two! So, let's say that I
0:27:43.899,0:27:47.120
have some corpus of software. Perhaps it's[br]a collection of all modern browsers,
0:27:47.120,0:27:50.510
perhaps it's the collection of all the[br]packages in the Debian repository, perhaps
0:27:50.510,0:27:53.990
it's everything on github that builds on[br]this system, perhaps it's a hard drive
0:27:53.990,0:27:58.159
full of warez that some guy mailed you,[br]right? You have some corpus of software
0:27:58.159,0:28:02.980
and for a random program in that corpus we[br]can consider this probability: the
0:28:02.980,0:28:07.180
probability distribution of which software[br]is secure versus which is not. For reasons
0:28:07.180,0:28:11.080
described in the computability[br]perspective, this number is not a
0:28:11.080,0:28:17.130
computable number for any reasonable[br]definition of security. So that's neat,
0:28:17.130,0:28:21.220
and so, in practical terms, if you want[br]to do some probabilistic reasoning, you
0:28:21.220,0:28:27.509
need some surrogate for that and so we[br]consider this here. So, instead of
0:28:27.509,0:28:31.000
considering the probability that a piece[br]of software is secure, a non computable
0:28:31.000,0:28:35.960
non verifiable claim, we take a look here[br]at this indexed collection of
0:28:35.960,0:28:38.840
probabilities. This is a countably[br]infinite family of probability
0:28:38.840,0:28:44.330
distributions, basically P sub h,k is just[br]the probability that for a random piece of
0:28:44.330,0:28:50.330
software in the corpus, h work units of[br]fuzzing will find no more than k unique
0:28:50.330,0:28:56.130
crashes, right. And why is this relevant?[br]Well, at the bottom we have this analytic
0:28:56.130,0:28:59.389
observation, which is that in the limit as[br]h goes to infinity you're basically
0:28:59.389,0:29:03.679
saying: "Hey, you know, if I fuzz this[br]thing for infinity times, you know, what's
0:29:03.679,0:29:07.549
that look like?" And, essentially, here we[br]have analytically that this should
0:29:07.549,0:29:12.970
converge. The P sub h,1 should converge to[br]the probability that a piece of software
0:29:12.970,0:29:16.331
just simply cannot be made to crash. Not[br]the same thing as being secure, but
0:29:16.331,0:29:23.730
certainly not a small concern relevant to[br]security.
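In symbols, one way to write the definitions just described (the "fewer than k" reading is chosen here so that the limit claim comes out as stated in the talk):

    P_{h,k} \;=\; \Pr_{s \in S}\!\left[\, h \text{ work units of fuzzing } s \text{ yield fewer than } k \text{ unique crashes} \,\right]

    \lim_{h \to \infty} P_{h,1} \;=\; \Pr_{s \in S}\!\left[\, s \text{ cannot be made to crash at all} \,\right]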
0:29:23.730,0:29:30.620
So, none of that stuff actually[br]was Bayesian yet; we need to get there. And so here we go: the previous
0:29:30.620,0:29:35.080
slide described a probability distribution[br]measured based on fuzzing. But fuzzing is
0:29:35.080,0:29:39.130
expensive and it is also not an answer to[br]Zatko's question because it finds
0:29:39.130,0:29:43.759
vulnerabilities, it doesn't measure[br]security in the general sense and so
0:29:43.759,0:29:47.110
here's where we make the jump to[br]conditional probabilities: Let M be some
0:29:47.110,0:29:51.929
observable property of software: has ASLR,[br]has RELRO, calls these functions, doesn't
0:29:51.929,0:29:56.770
call those functions... take your pick.[br]For random s in S we now consider these
0:29:56.770,0:30:02.070
conditional probability distributions and[br]this is the same kind of probability as we
0:30:02.070,0:30:08.379
had on the previous slide but conditioned[br]on this observable being true, and this
0:30:08.379,0:30:11.480
leads to the refined, the CITL[br]variant of Zatko's question:
0:30:11.480,0:30:17.120
Which observable properties of software[br]satisfy that, when the software has
0:30:17.120,0:30:22.590
property m, the probability of fuzzing[br]being hard is very high? That's what this
0:30:22.590,0:30:27.121
version of the question asks, and here[br]we say, you know, with large log(h)/k - in
0:30:27.121,0:30:31.590
other words, exponentially more fuzzing[br]than you expect to find bugs.
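Written out, in the same hedged notation as before, the conditional distributions are:

    P_{h,k \mid M} \;=\; \Pr_{s \in S}\!\left[\, h \text{ work units of fuzzing find fewer than } k \text{ unique crashes} \;\middle|\; M(s) \,\right]

and the CITL variant of Zatko's question asks for observables M that keep P_{h,k|M} close to 1 even when log(h)/k is large.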
0:30:31.590,0:30:36.350
So this is the technical version of what[br]we're after. All of this can be explored; you can
0:30:36.350,0:30:40.340
brute-force your way to finding all of[br]this stuff, and that's exactly what we're
0:30:40.340,0:30:48.050
doing. So we're looking for all kinds of[br]things, we're looking for all kinds of
0:30:48.050,0:30:53.840
things that correlate with fuzzing having[br]low yield on a piece of software, and
0:30:53.840,0:30:57.360
there's a lot of ways in which that can[br]happen. It could be that you are looking
0:30:57.360,0:31:01.409
at a feature of software that literally[br]prevents crashes. Maybe it's the never
0:31:01.409,0:31:08.210
crash flag, I don't know. But most of the[br]things I've talked about - ASLR, RELRO, etc. -
0:31:08.210,0:31:12.169
don't prevent crashes. In fact, ASLR can[br]take non-crashing programs and make them
0:31:12.169,0:31:16.849
crash. It's the number one reason[br]vendors don't enable it, right? So why am
0:31:16.849,0:31:20.079
I talking about ASLR? Why am I talking[br]about RELRO? Why am I talking about all
0:31:20.079,0:31:22.899
these things that have nothing to do with[br]stopping crashes and I'm claiming I'm
0:31:22.899,0:31:27.399
measuring crashes? This is because, in the[br]Bayesian perspective, correlation is not
0:31:27.399,0:31:31.730
the same thing as causation, right?[br]Correlation is not the same thing as
0:31:31.730,0:31:35.340
causation. It could be that M's presence[br]literally prevents crashes, but it could
0:31:35.340,0:31:39.749
also be that, by some underlying[br]coincidence, the things we're looking for
0:31:39.749,0:31:43.600
are mostly only found in software that's[br]robust against crashing.
0:31:43.600,0:31:48.799
If you're looking for security, I submit[br]to you that the difference doesn't matter.
0:31:48.799,0:31:54.929
Okay, end of my math - thank you. I will now go[br]ahead and do this like really nice analogy
0:31:54.929,0:31:59.279
of all those things that I just described,[br]right. So we're looking for indicators of
0:31:59.279,0:32:03.640
a piece of software being secure enough to[br]be good for consumers, right. So here's an
0:32:03.640,0:32:08.131
analogy. Let's say you're a geologist, you[br]study minerals and all of that and you're
0:32:08.131,0:32:13.560
looking for diamonds. Who isn't, right?[br]Want those diamonds! And like how do you
0:32:13.560,0:32:18.270
find diamonds? Even in places that are[br]rich in diamonds, diamonds are not common.
0:32:18.270,0:32:21.279
You don't just go walking around in your[br]boots, kicking until your toe stubs on a
0:32:21.279,0:32:27.049
diamond. You don't do that. Instead you[br]look for other minerals that are mostly
0:32:27.049,0:32:31.860
only found near diamonds but are much more[br]abundant in those locations than the
0:32:31.860,0:32:37.960
diamonds. So, this is mineral science 101,[br]I guess, I don't know. So, for example,
0:32:37.960,0:32:41.389
you want to go find diamonds: put on your[br]boots and go kicking until you find some
0:32:41.389,0:32:46.110
chromite, look for some diopside, you[br]know, look for some garnet. None of these
0:32:46.110,0:32:50.340
things turn into diamonds, none of these[br]things cause diamonds but if you're
0:32:50.340,0:32:54.020
finding good concentrations of these[br]things, then, statistically, there's
0:32:54.020,0:32:58.249
probably diamonds nearby. That's what[br]we're doing. We're not looking for the
0:32:58.249,0:33:02.570
things that cause good security per se.[br]Rather, we're looking for the indicators
0:33:02.570,0:33:08.349
that you have put the effort into your[br]software, right? How's that working out
0:33:08.349,0:33:15.070
for us? How's that working out for us?[br]Well, we're still doing studies. It's, you
0:33:15.070,0:33:18.490
know, early to say exactly but we do have[br]the following interesting coincidence: and
0:33:18.490,0:33:24.789
so, here presented I have a collection of[br]prices that somebody would pay for so-
0:33:24.789,0:33:30.369
called underground exploits. And I can[br]tell you these prices are maybe a little
0:33:30.369,0:33:34.450
low these days, but if you work in that[br]business, if you go to SyScan, if you do
0:33:34.450,0:33:39.009
that kind of stuff, maybe you know that[br]this is ballpark, it's ballpark.
0:33:39.009,0:33:44.080
Alright, and, just a coincidence, maybe it[br]means we're on the right track, I don't
0:33:44.080,0:33:48.740
know, but it's an encouraging sign: When[br]we run these programs through our
0:33:48.740,0:33:53.060
analysis, our rankings more or less[br]correspond to the actual prices that you
0:33:53.060,0:33:58.279
encounter in the wild for access via these[br]applications. Up above, I have one of our
0:33:58.279,0:34:02.059
histogram charts. You can see here that[br]Chrome and Edge in this particular model
0:34:02.059,0:34:06.149
scored very close to the same and it's a[br]test model, so, let's say they're
0:34:06.149,0:34:11.480
basically the same.[br]Firefox, you know, behind there a little
0:34:11.480,0:34:15.040
bit. I don't have Safari on this chart[br]because these are all Windows applications
0:34:15.040,0:34:21.091
- but the Safari score falls in between.[br]So, lots of theory, lots of theory, lots
0:34:21.091,0:34:27.920
of theory and then we have this. So, we're[br]going to go ahead now and hand off to our
0:34:27.920,0:34:31.679
lead engineer, Parker, who is going to[br]talk about some of the concrete stuff, the
0:34:31.679,0:34:35.210
non-chalkboard stuff, the software stuff[br]that actually makes this work.
0:34:35.956,0:34:40.980
Thompson: Yeah, so I want to talk about[br]the process of actually doing it. Building
0:34:40.980,0:34:45.050
the tooling that's required to collect[br]these observables. Effectively, how do you
0:34:45.050,0:34:50.560
go mining for indicator[br]minerals? But first, the progression of
0:34:50.560,0:34:55.810
where we are and where we're going. We[br]initially broke this out into three major
0:34:55.810,0:35:00.360
tracks of our technology. We have our[br]static analysis engine, which started as a
0:35:00.360,0:35:05.790
prototype, and we have now recently[br]completed a much more mature and solid
0:35:05.790,0:35:09.930
engine that's allowing us to be much more[br]extensible and digging deeper into
0:35:09.930,0:35:16.320
programs, and provide much deeper[br]observables. Then, we have the data
0:35:16.320,0:35:21.510
collection and data reporting. Tim showed[br]some of our early stabs at this. We're
0:35:21.510,0:35:25.450
right now in the process of building new[br]engines to make the data more accessible
0:35:25.450,0:35:30.150
and easy to work with and hopefully more[br]of that will be available soon. Finally,
0:35:30.150,0:35:35.910
we have our fuzzer track. We needed to get[br]some early data, so we played with some
0:35:35.910,0:35:40.680
existing off-the-shelf fuzzers, including[br]AFL, and, while that was fun,
0:35:40.680,0:35:44.190
unfortunately it's a lot of work to[br]manually instrument a lot of fuzzers for
0:35:44.190,0:35:48.830
hundreds of binaries.[br]So, we then built an automated solution
0:35:48.830,0:35:52.930
that started to get us closer to having a[br]fuzzing harness that could autogenerate
0:35:52.930,0:35:57.840
itself, depending on the software, the[br]software's behavior. But, right now,
0:35:57.840,0:36:01.650
unfortunately that technology showed us[br]more deficiencies than it showed
0:36:01.650,0:36:07.360
successes. So, we are now working on a[br]much more mature fuzzer that will allow us
0:36:07.360,0:36:12.780
to dig deeper into programs as we're[br]running and collect very specific things
0:36:12.780,0:36:21.260
that we need for our model and our[br]analysis. But on to our analytic pipeline
0:36:21.260,0:36:25.831
today. This is one of the most concrete[br]components of our engine and one of the
0:36:25.831,0:36:29.000
most fun![br]We effectively wanted some type of
0:36:29.000,0:36:34.550
software hopper, where you could just pour[br]programs in, installers and then, on the
0:36:34.550,0:36:39.560
other end, come reports: Fully annotated[br]and actionable information that we can
0:36:39.560,0:36:45.320
present to people. So, we went about the[br]process of building a large-scale engine.
0:36:45.320,0:36:50.500
It starts off with a simple REST API,[br]where we can push software in, which then
0:36:50.500,0:36:55.600
gets moved over to our computation cluster[br]that effectively provides us a fabric to
0:36:55.600,0:37:00.310
work with. It makes is made up of a lot of[br]different software suites, starting off
0:37:00.310,0:37:06.730
with our data processing, which is done by[br]apache spark and then moves over into data
0:37:06.730,0:37:12.910
data handling and data analysis in spark,[br]and then we have a common HDFS layer to
0:37:12.910,0:37:17.530
provide a place for the data to be stored[br]and then a resource manager and Yarn. All
0:37:17.530,0:37:22.500
of that is backed by our compute and data[br]nodes, which scale out linearly. That then
0:37:22.500,0:37:27.590
moves into our data science engine, which[br]is effectively Spark with Apache Zeppelin,
0:37:27.590,0:37:30.480
which provides us a really fun interface[br]where we can work with the data in an
0:37:30.480,0:37:35.830
interactive manner but be kicking off[br]large-scale jobs into the cluster. And
0:37:35.830,0:37:40.110
finally, this goes into our report[br]generation engine. What this bought us,
0:37:40.110,0:37:46.030
was the ability to linearly scale and make[br]that hopper bigger and bigger as we need,
0:37:46.030,0:37:50.740
but also provide us a way to process data[br]that doesn't fit in a single machine's
0:37:50.740,0:37:54.110
RAM. You can push the instance sizes as[br]large as you want, but we have
0:37:54.110,0:38:00.300
datasets that blow past any single host's[br]RAM. So this allows us to work with
0:38:00.300,0:38:08.690
really large collections of observables.
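For flavor, a hypothetical shape such a cluster job could take - the schema, paths, and field names here are invented, not CITL's actual layout:

    # PySpark sketch: aggregate per-binary observables stored on HDFS into
    # per-product ASLR coverage, letting Spark scale past single-host RAM.
    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    spark = SparkSession.builder.appName("observables").getOrCreate()
    obs = spark.read.json("hdfs:///citl/observables/")   # assumed layout
    report = (obs.groupBy("product")
                 .agg(F.avg(F.col("aslr").cast("int")).alias("aslr_coverage"),
                      F.count("*").alias("binaries")))
    report.write.parquet("hdfs:///citl/reports/aslr")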
0:38:08.690,0:38:13.160
I want to dive down now into our actual[br]static analysis. But first we have to explore the problem space, because it's a
0:38:13.160,0:38:19.490
nasty one. Effectively, CITL's mission[br]is to process as much software as
0:38:19.490,0:38:25.790
possible. Hopefully all of it, but it's[br]hard to get your hands on all the binaries
0:38:25.790,0:38:29.260
that are out there. When you start to look[br]at that problem you understand there's a
0:38:29.260,0:38:34.830
lot of combinations: there's a lot of CPU[br]architectures, there's a lot of operating
0:38:34.830,0:38:38.610
systems, there's a lot of file formats,[br]there's a lot of environments the software
0:38:38.610,0:38:43.160
gets deployed into, and every single one[br]of them has its own app
0:38:43.160,0:38:47.320
armoring features. And a feature can be[br]specific to one combination
0:38:47.320,0:38:51.671
but not another, and you don't want to[br]penalize a developer for not turning on a
0:38:51.671,0:38:56.290
feature they had no access to ever turn[br]on. So effectively we need to solve this
0:38:56.290,0:39:01.050
in a much more generic way. And so what we[br]did is our static analysis engine
0:39:01.050,0:39:04.630
effectively looks like a gigantic[br]collection of abstraction libraries to
0:39:04.630,0:39:12.390
handle binary programs. You take in some[br]type of input file - be it ELF, PE, Mach-O -
0:39:12.390,0:39:17.730
and then the pipeline splits. It goes off[br]into two major analyzer classes, our
0:39:17.730,0:39:22.360
format analyzers, which look at the[br]software much like how a linker or loader
0:39:22.360,0:39:26.600
would look at it. I want to understand how[br]it's going to be loaded up, what type of
0:39:26.600,0:39:30.680
armoring features are going to be applied, and[br]then we can run analyzers over that. In
0:39:30.680,0:39:34.520
order to achieve that we need abstraction[br]libraries that can provide us an abstract
0:39:34.520,0:39:40.900
memory map, a symbol resolver, generic[br]section properties. So all that feeds in
0:39:40.900,0:39:46.060
and then we run over a collection of[br]analyzers to collect data and observables.
0:39:46.060,0:39:49.650
Next we have our code analyzers, these are[br]the analyzers that run over the code
0:39:49.650,0:39:57.600
itself. I need to be able to look at every[br]possible executable path. In order to do
0:39:57.600,0:40:02.400
that we need to do function discovery,[br]feed that into a control flow recovery
0:40:02.400,0:40:07.880
engine, and then as a post-processing step[br]dig through all of the possible metadata
0:40:07.880,0:40:12.820
in the software, such as a switch[br]table or something like that, to get even
0:40:12.820,0:40:20.770
deeper into the software. Then this[br]provides us a basic list of basic blocks,
0:40:20.770,0:40:24.470
functions, instruction ranges. And does so[br]in an efficient manner so we can process a
0:40:24.470,0:40:30.550
lot of software as it goes. Then all that[br]gets fed over into the main modular
0:40:30.550,0:40:36.570
analyzers. Finally, all of this comes[br]together and gets put into a gigantic blob
0:40:36.570,0:40:41.850
of observables and fed up to the pipeline.
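To make the idea concrete, an illustrative walk over a recovered control flow graph - the data model (functions, blocks, instructions) is invented here, not CITL's API. It checks whether stack canaries were actually wired in, per function, rather than trusting a build flag:

    # Hypothetical CFG model: cfg.functions -> func.blocks -> block.instructions.
    def functions_missing_canary(cfg):
        missing = []
        for func in cfg.functions:
            calls_guard = any(
                ins.is_call and ins.target_name == "__stack_chk_fail"
                for block in func.blocks
                for ins in block.instructions)
            if func.has_stack_buffer and not calls_guard:
                missing.append(func.name)
        return missing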
0:40:41.850,0:40:46.920
We really want to thank the Ford[br]Foundation for supporting our work on this, because the pipeline and the static
0:40:46.920,0:40:51.840
analysis has been a massive boon for our[br]project and we're only beginning now to
0:40:51.840,0:40:58.920
really get our engine running and we're[br]having a great time with it. So digging
0:40:58.920,0:41:03.760
into the observables themselves: what are[br]we looking at? Let's break them apart.
0:41:03.760,0:41:08.980
So the format structure components, things[br]like ASLR, DEP, RELRO.
0:41:08.980,0:41:13.370
basic app armoring - features that are[br]going to be enabled at the OS
0:41:13.370,0:41:17.830
layer when it gets loaded up or linked.[br]And we also collect other metadata about
0:41:17.830,0:41:22.000
the program such as like: "What libraries[br]are linked in?", "What's its dependency
0:41:22.000,0:41:26.400
tree look like – completely?", "How did[br]those software, how did those library
0:41:26.400,0:41:32.040
score?", because that can affect your main[br]software. Interesting example on Linux, if
0:41:32.040,0:41:35.840
you link a library that requires an[br]executable stack, guess what your software
0:41:35.840,0:41:39.990
now has an executable stack, even if you[br]didn't mark that. So we need to be able
0:41:39.990,0:41:44.700
to understand what ecosystem the software[br]is gonna live in. And the code structure
0:41:44.700,0:41:47.590
analyzers look at things like[br]functionality: "What's the software
0:41:47.590,0:41:52.600
doing?", "What type of app armory is[br]getting injected into the code?". A great
0:41:52.600,0:41:55.850
example of that is something like stack[br]guards or FORTIFY_SOURCE. These are
0:41:55.850,0:42:01.550
armoring features that only really apply, and[br]can be observed, inside of the control flow
0:42:01.550,0:42:08.240
or inside of the actual instructions[br]themselves. This is why control
0:42:08.240,0:42:10.880
flow graphs are key.[br]We played around with a number of
0:42:10.880,0:42:15.980
different ways of analyzing software that[br]we could scale out and ultimately we had
0:42:15.980,0:42:20.171
to come down to working with control[br]flow graphs. Provided here is a basic
0:42:20.171,0:42:23.400
visualization of what I'm talking about[br]with a control flow graph, provided by
0:42:23.400,0:42:28.690
Binja (Binary Ninja), which has wonderful visualization[br]tools - hence this screenshot, and not our
0:42:28.690,0:42:33.170
engine, because we don't build very[br]many visualization engines. But you
0:42:33.170,0:42:38.470
basically have a function that's broken up[br]into basic blocks, which are broken up into
0:42:38.470,0:42:42.910
instructions, and then you have basic flow[br]between them. Having this as an iterable
0:42:42.910,0:42:47.650
structure that we can work with, allows us[br]to walk over that and walk every single
0:42:47.650,0:42:50.790
instruction, understand the references,[br]understand where code and data is being
0:42:50.790,0:42:54.500
referenced, and how it is being[br]referenced.
0:42:54.500,0:42:57.640
And then what type of functionality is[br]being used. So this is a great way to check
0:42:57.640,0:43:02.530
something like whether or not your stack[br]guards are being applied on every function
0:43:02.530,0:43:08.340
that needs them, how deep are they being[br]applied, and is the compiler possibly
0:43:08.340,0:43:11.850
introducing errors into your armoring[br]features - which are interesting side
0:43:11.850,0:43:19.590
studies. Another reason we did this is because[br]we want to push the concept of what type
0:43:19.590,0:43:28.339
of observables even farther. Take[br]this example: you want to be able to
0:43:28.339,0:43:34.340
use instruction abstractions. For all[br]major architectures you can break
0:43:34.340,0:43:38.690
them up into major categories. Be it[br]arithmetic instructions, data manipulation
0:43:38.690,0:43:45.850
instructions like loads and stores, and then[br]control flow instructions. Then with these
0:43:45.850,0:43:52.830
basic fundamental building blocks you can[br]make artifacts. Think of them like a unit
0:43:52.830,0:43:56.400
of functionality: has some type of input,[br]some type of output, it provides some type
0:43:56.400,0:44:01.280
of operation on it. And then with these[br]little units of functionality, you can
0:44:01.280,0:44:05.210
link them together and think of these[br]artifacts as maybe sub-basic-block or
0:44:05.210,0:44:09.440
crossing a few basic blocks, but a[br]different way to break up the software.
0:44:09.440,0:44:13.130
Because a basic block is just a branch[br]break, but we want to look at
0:44:13.130,0:44:18.680
functionality breaks, because these[br]artifacts can provide the basic
0:44:18.680,0:44:24.891
fundamental building blocks of the[br]software itself. That's more important when
0:44:24.891,0:44:28.840
we want to start doing symbolic lifting,[br]so that we can lift the entire software up
0:44:28.840,0:44:35.250
into a generic representation that we can[br]slice and dice as needed.
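A toy version of that abstraction step might look like the following. The mnemonic table is a made-up fragment, and the artifact boundary here is simply "split at control flow", which is cruder than what is described above:

    # Map concrete mnemonics onto abstract categories.
    CATEGORY = {
        'add': 'arith', 'sub': 'arith', 'mul': 'arith', 'xor': 'arith',
        'mov': 'data', 'load': 'data', 'store': 'data',
        'push': 'data', 'pop': 'data',
        'jmp': 'flow', 'je': 'flow', 'call': 'flow', 'ret': 'flow',
    }

    def abstract(mnemonics):
        return [CATEGORY.get(m, 'other') for m in mnemonics]

    def artifacts(mnemonics):
        """Group an instruction stream into little units of functionality,
        here naively delimited by control-flow instructions."""
        unit, units = [], []
        for m, cat in zip(mnemonics, abstract(mnemonics)):
            unit.append((m, cat))
            if cat == 'flow':
                units.append(unit)
                unit = []
        if unit:
            units.append(unit)
        return units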
0:44:38.642,0:44:42.760
Moving from there, I want to talk about[br]fuzzing a little bit more. Fuzzing is
0:44:42.760,0:44:47.370
effectively at the heart of our project.[br]It provides us the rich dataset that we
0:44:47.370,0:44:52.040
can use to derive a model. It also[br]provides us awesome other metadata on the
0:44:52.040,0:44:58.060
side. But why? Why do we care about[br]fuzzing? Why is fuzzing the metric that
0:44:58.060,0:45:04.680
you build an engine around, that you build a[br]model around and derive some type of reasoning
0:45:04.680,0:45:11.560
from? So think of the set of bugs,[br]vulnerabilities, and exploitable
0:45:11.560,0:45:16.930
vulnerabilities. In an ideal world you'd[br]want to just have a machine that pulls out
0:45:16.930,0:45:20.250
exploitable vulnerabilities.[br]Unfortunately, this is exceedingly costly
0:45:20.250,0:45:25.690
because of a series of decision problems that sit[br]between these sets. So now consider the
0:45:25.690,0:45:31.900
superset of bugs or faults. A fuzzer, or[br]other software, can
0:45:31.900,0:45:37.400
easily recognize faults, but if you want[br]to move down the sets you unfortunately
0:45:37.400,0:45:42.770
need to jump through a lot of decision[br]hoops. For example, if you want to move to
0:45:42.770,0:45:45.760
a vulnerability you have to understand:[br]Does the attacker have some type of
0:45:45.760,0:45:51.150
control? Is there a trust boundary being[br]crossed? Is this software configured in
0:45:51.150,0:45:55.000
the right way for this to be vulnerable[br]right now? So there are human factors that
0:45:55.000,0:45:59.280
are not deducible from the outside. You[br]then amplify this decision problem even
0:45:59.280,0:46:05.320
further when going to exploitable[br]vulnerabilities. So if we collect the
0:46:05.320,0:46:11.360
superset of bugs, we will know that there[br]is some proportion of subsets in there.
0:46:11.360,0:46:15.830
And this provides us a dataset that is easily[br]recognizable and that we can collect in a cost-
0:46:15.830,0:46:22.170
efficient manner. Finally, fuzzing is key[br]and we're investing a lot of our time
0:46:22.170,0:46:26.570
right now in working on a new fuzzing[br]engine, because there are some key things
0:46:26.570,0:46:32.290
we want to do.[br]We want to be able to understand all of
0:46:32.290,0:46:35.340
the different paths the software could be[br]taking, and as you're fuzzing you're
0:46:35.340,0:46:40.010
effectively driving the software down as[br]many unique paths while referencing as
0:46:40.010,0:46:47.760
many unique data manipulations as[br]possible. So if we save off every path,
0:46:47.760,0:46:51.840
annotate the ones that are faulting, we[br]now have this beautiful rich data set of
0:46:51.840,0:46:57.060
exactly where the software went as we were[br]driving it in specific ways. Then we feed
0:46:57.060,0:47:02.010
that back into our static analysis engine[br]and begin to generate those instruction
0:47:02.010,0:47:07.680
abstractions, those artifacts, out of the[br]paths. And with that, imagine we
0:47:07.680,0:47:14.560
have these gigantic traces of instruction[br]abstractions. From there we can then begin
0:47:14.560,0:47:20.990
to train the model to explore around the[br]fault location and begin to understand and
0:47:20.990,0:47:27.300
try and study the fundamental building[br]blocks of what a bug looks like in an
0:47:27.300,0:47:32.990
abstract, instruction-agnostic way. This is[br]why we're spending a lot of time on our
0:47:32.990,0:47:36.980
fuzzing engine right now. But hopefully[br]soon we'll be able to talk about that more,
0:47:36.980,0:47:40.381
maybe in a tech track and not the policy[br]track.
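For a flavor of the path-saving idea, a hedged sketch: run_traced below is a hypothetical harness standing in for whatever instrumentation produces a basic-block trace; the point is just that each unique path is keyed, kept, and annotated if it faulted.

    import hashlib

    corpus = {}  # path id -> {'input': bytes, 'faulted': bool}

    def record(sample, trace, faulted):
        # Key the path by a hash of the block-address trace.
        key = hashlib.sha256(','.join(map(hex, trace)).encode()).hexdigest()
        if key not in corpus or faulted:
            corpus[key] = {'input': sample, 'faulted': faulted}

    # for sample in inputs:
    #     trace, faulted = run_traced(target, sample)  # hypothetical harness
    #     record(sample, trace, faulted)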
0:47:44.748,0:47:49.170
C: Yeah, so from then on when anything[br]went wrong with the computer we said it
0:47:49.170,0:47:55.700
had bugs in it. laughs All right, I[br]promised you a technical journey, I
0:47:55.700,0:47:59.461
promised you a technical journey into the[br]dark abyss, as deep as you want to get
0:47:59.461,0:48:03.460
with it. So let's go ahead and bring it[br]up. Let's wrap it up and bring it up a
0:48:03.460,0:48:07.340
little bit here. We've talked a great deal[br]today about some theory. We've talked
0:48:07.340,0:48:09.970
about development in our tooling and[br]everything else and so I figured I should
0:48:09.970,0:48:14.010
end with some things that are not in[br]progress, but in fact which are done and in
0:48:14.010,0:48:20.630
yesterday's news. Just to go ahead and[br]share that here with Europe. So in
0:48:20.630,0:48:24.140
the midst of all of our development we[br]have been discovering and reporting bugs,
0:48:24.140,0:48:28.680
again this is not our primary purpose, really.[br]But you know, you can't help but do it. You
0:48:28.680,0:48:32.170
know how computers are these days. You[br]find bugs just from turning them on, right?
0:48:32.170,0:48:38.610
So we disclosed all of that a[br]little while ago. At DEFCON and Black Hat
0:48:38.610,0:48:43.030
our chief scientist Sarah together with[br]Mudge went ahead and dropped this
0:48:43.030,0:48:47.840
bombshell on the Firefox team which is[br]that for some period of time they had ASLR
0:48:47.840,0:48:54.310
disabled on OS X. When we first found it[br]we assumed it was a bug in our tools. When
0:48:54.310,0:48:57.720
we first mentioned it in a talk, they came[br]to us and said it's definitely a bug in
0:48:57.720,0:49:03.140
our tools, or it might be, with some level of[br]surprise. And then people started looking
0:49:03.140,0:49:08.840
into it and in fact at one point it had[br]been enabled and then temporarily
0:49:08.840,0:49:12.960
disabled. No one knew, everyone thought it[br]was on. It takes someone looking to notice
0:49:12.960,0:49:18.010
that kind of stuff, right. Major shout out[br]though, they fixed it immediately despite
0:49:18.010,0:49:23.950
our full disclosure on stage and[br]everything. So, very impressed.
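For the curious: the Firefox finding was on OS X, but on Linux the analogous spot check, whether the main executable was even built to benefit from ASLR, is tiny. A sketch, again assuming pyelftools; position-independent executables show up as ELF type ET_DYN:

    from elftools.elf.elffile import ELFFile

    def is_pie(path):
        # Only position-independent executables get their base randomized.
        with open(path, 'rb') as f:
            return ELFFile(f).header.e_type == 'ET_DYN'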
0:49:23.950,0:49:27.870
But in addition to popping surprises on people,[br]we've also been doing the usual process of
0:49:27.870,0:49:32.890
submitting patches and bugs, particularly[br]to LLVM and QEMU, and if you work in
0:49:32.890,0:49:35.810
software analysis you could probably guess[br]why.
0:49:36.510,0:49:39.280
Incidentally, if you're looking for a[br]target to fuzz: if you want to go home from
0:49:39.280,0:49:45.870
CCC and you want to find a ton of findings[br]LLVM comes with a bunch of parsers. You
0:49:45.870,0:49:50.060
should fuzz them, you should fuzz them and[br]I say that because I know for a fact you
0:49:50.060,0:49:53.170
are gonna get a bunch of findings and it'd[br]be really nice. I would appreciate it if I
0:49:53.170,0:49:56.360
didn't have to pay the people to fix them.[br]So if you wouldn't mind disclosing them
0:49:56.360,0:50:00.240
that would help.
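If you do take that home, even a crude black-box fuzzer will find something. A self-contained sketch; the target command and seed path are placeholders, and a negative return code means the process died on a signal:

    import os, random, subprocess, tempfile

    def fuzz(target_cmd, seed_path, iterations=10000):
        seed = open(seed_path, 'rb').read()
        assert seed, "need a non-empty seed input"
        crashes = []
        for _ in range(iterations):
            data = bytearray(seed)
            for _ in range(random.randint(1, 8)):  # flip a few random bytes
                data[random.randrange(len(data))] = random.randrange(256)
            with tempfile.NamedTemporaryFile(delete=False) as tmp:
                tmp.write(data)
            proc = subprocess.run(target_cmd + [tmp.name], capture_output=True)
            if proc.returncode < 0:  # killed by SIGSEGV, SIGABRT, ...
                crashes.append((tmp.name, -proc.returncode))
            else:
                os.unlink(tmp.name)
        return crashes

    # e.g. fuzz(['some-parser-tool'], 'seed_input')  # placeholder target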
0:50:00.240,0:50:04.210
But besides these bug reports and all these[br]other things, we've also been working with lots of others.
0:50:04.210,0:50:06.910
Sarah gave a talk earlier this summer[br]about these things and she presented
0:50:06.910,0:50:11.830
findings on comparing some of these base[br]scores of different Linux distributions.
0:50:11.830,0:50:16.320
And based on those findings there was a[br]person on the Fedora Red Team, Jason
0:50:16.320,0:50:20.470
Calloway, who sat there and well I can't[br]read his mind but I'm sure that he was
0:50:20.470,0:50:24.700
thinking to himself: golly it would be[br]nice to not, you know, be surprised at the
0:50:24.700,0:50:28.560
next one of these talks. They score very[br]well by the way. They were leading in
0:50:28.560,0:50:33.660
many, many of our metrics. Well, in any[br]case, he left Vegas and he went back home
0:50:33.660,0:50:36.850
and he and his colleagues have been[br]working on essentially re-implementing
0:50:36.850,0:50:41.570
much of our tooling so that they can check[br]the stuff that we check before they
0:50:41.570,0:50:47.530
release. Before they release. Looking for[br]security before you release. So that would
0:50:47.530,0:50:51.520
be a good thing for others to do and I'm[br]hoping that that idea really catches on.
0:50:51.520,0:50:58.990
laughs Yeah, yeah right, that would be[br]nice. That would be nice.
0:50:58.990,0:51:04.310
But in addition to that, in addition to[br]that our mission really is to get results
0:51:04.310,0:51:08.220
out to the public and so in order to[br]achieve that, we have broad partnerships
0:51:08.220,0:51:12.340
with Consumer Reports and the Digital[br]Standard. Especially if you're into cyber
0:51:12.340,0:51:16.410
policy, I really encourage you to take a[br]look at the proposed Digital Standard,
0:51:16.410,0:51:21.220
which encompasses the things we[br]look for and so much more: URLs,
0:51:21.220,0:51:25.720
data traffic in motion, cryptography,[br]update mechanisms and all that good stuff.
0:51:25.720,0:51:31.951
So, where we are and where we're going:[br]the big takeaways here, if you're
0:51:31.951,0:51:36.310
looking for the "so what", three points[br]for you: one, we are building the tooling
0:51:36.310,0:51:39.750
necessary to do larger and larger and[br]larger studies regarding these surrogate
0:51:39.750,0:51:44.980
security scores. My hope is that in some[br]period of the not-too-distant future, I
0:51:44.980,0:51:48.600
would like to be able to, with my[br]colleagues, publish some really nice
0:51:48.600,0:51:51.640
findings about what are the things that[br]you can observe in software, which have a
0:51:51.640,0:51:57.390
suspiciously high correlation with the[br]software being good. Right, nobody really
0:51:57.390,0:52:00.390
knows right now. It's an empirical[br]question. As far as I know, the study
0:52:00.390,0:52:03.080
hasn't been done. We've been running it on[br]the small scale. We're building the
0:52:03.080,0:52:06.620
tooling to do it on a much larger scale.[br]We are hoping that this winds up being a
0:52:06.620,0:52:11.480
useful field in security as that[br]technology develops. In the meantime our
0:52:11.480,0:52:15.560
static analyzers are already making[br]surprising discoveries: hit YouTube and
0:52:15.560,0:52:21.300
take a look for Sarah Zatko's recent talks[br]at DEFCON/Blackhat. Lots of fun findings
0:52:21.300,0:52:25.910
in there. Lots of things that anyone who[br]looks would have found. Lots of that.
0:52:25.910,0:52:29.080
And then lastly, if you were in the[br]business of shipping software and you are
0:52:29.080,0:52:32.620
thinking to yourself: okay, so these guys,[br]someone gave them some money to mess up my
0:52:32.620,0:52:36.840
day and you're wondering: what can I do to[br]not have my day messed up? One simple
0:52:36.840,0:52:40.870
piece of advice, one simple piece of[br]advice: make sure your software employs
0:52:40.870,0:52:45.920
every exploit mitigation technique Mudge[br]has ever or will ever hear of. And he's
0:52:45.920,0:52:49.500
heard of a lot of them. You know all that:[br]turn all those things
0:52:49.500,0:52:52.280
on and if you don't know anything about[br]that stuff, if nobody on your team knows
0:52:52.280,0:52:57.370
anything about that stuff... I don't[br]even know why I'm saying this: if you're here, you
0:52:57.370,0:53:00.972
know about that stuff so do that. If[br]you're not here, then you should be here.
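In that spirit, one last sketch, pyelftools again, with deliberately partial coverage (a real audit checks far more), of asking an ELF binary about a few of those mitigations:

    from elftools.elf.elffile import ELFFile
    from elftools.elf.constants import P_FLAGS

    def mitigations(path):
        with open(path, 'rb') as f:
            elf = ELFFile(f)
            segs = list(elf.iter_segments())
            stack = [s for s in segs if s.header.p_type == 'PT_GNU_STACK']
            dynsym = elf.get_section_by_name('.dynsym')
            return {
                # Non-executable stack: PT_GNU_STACK present without PF_X.
                'nx_stack': bool(stack) and not (stack[0].header.p_flags
                                                 & P_FLAGS.PF_X),
                # RELRO: a PT_GNU_RELRO segment exists
                # (full RELRO additionally needs BIND_NOW).
                'relro': any(s.header.p_type == 'PT_GNU_RELRO' for s in segs),
                # Stack canaries leave a reference to __stack_chk_fail behind.
                'canary': dynsym is not None and any(
                    sym.name == '__stack_chk_fail'
                    for sym in dynsym.iter_symbols()),
                # ASLR-friendliness: built as a position-independent executable.
                'pie': elf.header.e_type == 'ET_DYN',
            }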
0:53:04.428,0:53:16.330
Danke, Danke.[br]Herald Angel: Thank you, Tim and Parker.
0:53:17.501,0:53:23.630
Do we have any questions from the[br]audience? It's really hard to see you with
0:53:23.630,0:53:30.120
that bright light in my face. I think the[br]signal angel has a question. Signal Angel:
0:53:30.120,0:53:34.550
So the IRC channel was impressed by your[br]tools and your models that you wrote. And
0:53:34.550,0:53:38.050
they are wondering what's going to happen[br]to that, because you do have funding from
0:53:38.050,0:53:42.040
the Ford foundation now and so what are[br]your plans with this? Do you plan on
0:53:42.040,0:53:46.080
commercializing this or is it going to be[br]open source or how do we get our hands on
0:53:46.080,0:53:49.150
this?[br]C: It's an excellent question. So for the
0:53:49.150,0:53:53.550
time being the money that we are receiving[br]is to develop the tooling, pay for the AWS
0:53:53.550,0:53:57.790
instances, pay for the engineers and all[br]that stuff. The direction as an
0:53:57.790,0:54:01.410
organization that we would like to take[br]things: I have no interest in running a
0:54:01.410,0:54:05.410
monopoly. That sounds like a fantastic[br]amount of work and I really don't want to
0:54:05.410,0:54:09.430
do it. However, I have a great deal of[br]interest in taking the gains that we are
0:54:09.430,0:54:13.860
making in the technology and releasing the[br]data so that other competent researchers
0:54:13.860,0:54:19.020
can go through and find useful things that[br]we may not have noticed ourselves. So
0:54:19.020,0:54:22.150
we're not at a point where we are[br]releasing data in bulk just yet, but that
0:54:22.150,0:54:26.432
is simply a matter of engineering; our[br]tools are still in flux, as we, you know...
0:54:26.432,0:54:29.230
When we do that, we want to make sure the[br]data is correct and so our software has to
0:54:29.230,0:54:33.640
have its own low bug counts and all these[br]other things. But ultimately for the
0:54:33.640,0:54:37.950
scientific aspect of our mission, though[br]the science is not our primary mission.
0:54:37.950,0:54:41.920
Our primary mission is to apply it to help[br]consumers. At the same time, it is our
0:54:41.920,0:54:47.590
belief that an opaque model is as good as[br]crap, no one should trust an opaque model,
0:54:47.590,0:54:50.940
if somebody is telling you that they have[br]some statistics and they do not provide
0:54:50.940,0:54:54.540
you with any underlying data and it is not[br]reproducible you should ignore them.
0:54:54.540,0:54:58.360
Consequently what we are working towards[br]right now is getting to a point where we
0:54:58.360,0:55:02.730
will be able to share all of those[br]findings. The surrogate scores, the
0:55:02.730,0:55:06.000
interesting correlations between[br]observables and fuzzing. All that will be
0:55:06.000,0:55:09.200
public as the material comes online.[br]Signal Angel: Thank you.
0:55:09.200,0:55:11.870
C: Thank you.[br]Herald Angel: Thank you. And microphone
0:55:11.870,0:55:14.860
number three please.[br]Mic3: Hi, thanks. So, some really
0:55:14.860,0:55:18.450
interesting work you presented here. So[br]there's something I'm not sure I
0:55:18.450,0:55:22.910
understand about the approach that you're[br]taking. If you are evaluating the security
0:55:22.910,0:55:26.320
of say a library function or the[br]implementation of a network protocol for
0:55:26.320,0:55:29.780
example you know there'd be a precise[br]specification you could check that
0:55:29.780,0:55:35.190
against. And the techniques you're using[br]would make sense to me. But it's not so
0:55:35.190,0:55:37.970
clear, since the goal that[br]you've set for yourself is to evaluate
0:55:37.970,0:55:43.580
security of consumer software. It's not[br]clear to me whether it's fair to call
0:55:43.580,0:55:47.430
these results security scores in the[br]absence of a threat model. So my
0:55:47.430,0:55:50.350
question is, you know, how is it[br]meaningful to make a claim that a piece of
0:55:50.350,0:55:52.240
software is secure if you don't have a[br]threat model for it?
0:55:52.240,0:55:56.090
C: This is an excellent question, and[br]anyone who disagrees is in the
0:55:56.090,0:56:01.330
wrong. Security without a threat model is[br]not security at all. It's absolutely a
0:56:01.330,0:56:05.560
true point. So the things that we are[br]looking for, most of them are things that
0:56:05.560,0:56:08.800
you will already find present in your[br]threat model. And so for example we were
0:56:08.800,0:56:12.390
reporting on the presence of things like[br]ASLR and lots of other things that get to
0:56:12.390,0:56:17.030
the heart of exploitability of a piece of[br]software. So for example if we are
0:56:17.030,0:56:19.870
reviewing a piece of software, that has no[br]attack surface
0:56:19.870,0:56:24.160
then it is canonically not in the threat[br]model and in that sense it makes no sense
0:56:24.160,0:56:29.270
to report on its overall security. On the[br]other hand, if we're talking about
0:56:29.270,0:56:33.470
software like say a word processor, a[br]browser, anything on your phone, anything
0:56:33.470,0:56:36.120
that talks on the network, we're talking[br]about those kinds of applications then I
0:56:36.120,0:56:39.280
would argue that exploit mitigations and[br]the other things that we are measuring are
0:56:39.280,0:56:44.330
almost certainly very relevant. So there's[br]a sense in which what we are measuring is
0:56:44.330,0:56:48.411
the lowest common denominator among what[br]we imagine are the dominant threat models
0:56:48.411,0:56:53.180
for the applications. That's a hand-wavy[br]answer, but I promised heuristics, so there
0:56:53.180,0:56:55.180
you go.[br]Mic3: Thanks.
0:56:55.180,0:57:01.620
C: Thank you.[br]Herald Angel: Any questions? No raising
0:57:01.620,0:57:07.060
hands, okay. And then the herald can ask a[br]question, because I never can. So the
0:57:07.060,0:57:11.920
question is: you mentioned earlier these[br]security labels and for example what
0:57:11.920,0:57:15.880
institution could give out the security[br]labels? Because obviously the vendor
0:57:15.880,0:57:21.740
has no interest in IT security?[br]C: Yes it's a very good question. So our
0:57:21.740,0:57:25.580
partnership with Consumer Reports. I don't[br]know if you're familiar with them, but in
0:57:25.580,0:57:31.340
the United States Consumer Reports is a[br]major huge consumer watchdog organization.
0:57:31.340,0:57:36.550
They test the safety of automobiles, they[br]test you know lots of consumer appliances.
0:57:36.550,0:57:40.070
All kinds of things both to see if they[br]function more or less as advertised but
0:57:40.070,0:57:45.210
most importantly they're checking for[br]quality, reliability and safety. So our
0:57:45.210,0:57:49.840
partnership with Consumer Reports is all[br]about us doing our work and then
0:57:49.840,0:57:54.060
publishing that. And so for example the[br]televisions that we presented the data on
0:57:54.060,0:57:58.290
all of that was collected and published in[br]partnership with Consumer Reports.
0:57:58.290,0:58:00.970
Herald: Thank you.[br]C: Thank you.
0:58:02.630,0:58:12.430
Herald: Any other questions from the stream? I[br]hear a no. Well, in this case, people, thank
0:58:12.430,0:58:16.440
you.[br]Thank Tim and Parker for their nice talk
0:58:16.440,0:58:19.964
and please give them a very, very warm[br]round of applause.
0:58:19.964,0:58:24.694
applause[br]C: Thank you. T: Thank you.
0:58:24.694,0:58:51.000
subtitles created by c3subtitles.de[br]in the year 2017. Join, and help us!