0:00:00.000,0:00:09.520
36c3 preroll music
0:00:18.410,0:00:23.250
Herald: So, the next talk for this[br]afternoon is about high speed binary
0:00:23.250,0:00:28.110
fuzzing. We have two researchers that will[br]be presenting the product of their latest
0:00:28.110,0:00:33.640
work, which is a framework for static[br]binary rewriting. Our speakers are—the
0:00:33.640,0:00:38.580
first one is a computer science master's[br]student at EPFL and the second one is a
0:00:38.580,0:00:42.730
security researcher and assistant[br]professor at EPFL. Please give a big round
0:00:42.730,0:00:45.048
of applause to Nspace and gannimo.
0:00:45.048,0:00:50.280
Applause
0:00:50.280,0:00:52.610
gannimo (Mathias Payer): Thanks for the[br]introduction. It's a pleasure to be here,
0:00:52.610,0:00:57.850
as always. We're going to talk about[br]different ways to speed up your fuzzing
0:00:57.850,0:01:02.050
and to find different kinds of[br]vulnerabilities or to tweak your binaries
0:01:02.050,0:01:08.070
in somewhat unintended ways. I'm Mathias[br]Payer or I go by gannimo on Twitter and I
0:01:08.070,0:01:14.440
am an assistant professor at EPFL working[br]on different forms of software security:
0:01:14.440,0:01:18.700
fuzzing, sanitization, but also different[br]kinds of mitigations. And Matteo over
0:01:18.700,0:01:24.160
there is working on his master's thesis on[br]different forms of binary rewriting for
0:01:24.160,0:01:27.820
the kernel. And today we're going to take[br]you on a journey on how to actually
0:01:27.820,0:01:32.180
develop very fast and very efficient[br]binary rewriting mechanisms that allow you
0:01:32.180,0:01:37.710
to do unintended modifications to the[br]binaries and allow you to explore
0:01:37.710,0:01:45.700
different kinds of unintended features in[br]binaries. So about this talk. What we
0:01:45.700,0:01:49.729
discovered or the reason why we set out on[br]this journey was that fuzzing binaries is
0:01:49.729,0:01:56.460
really, really hard. There are very few[br]tools in user space, and it's
0:01:56.460,0:01:59.680
extremely hard to set it up and it's[br]extremely hard to set it up in a
0:01:59.680,0:02:04.479
performant way. The setup is complex. You[br]have to compile different tools. You have
0:02:04.479,0:02:08.520
to modify them. And the results are not[br]really that satisfactory. As soon as you
0:02:08.520,0:02:13.320
move to the kernel, fuzzing binaries in the[br]kernel is even harder. There's no tooling
0:02:13.320,0:02:16.880
whatsoever, there's very few users[br]actually working with binary code in the
0:02:16.880,0:02:22.630
kernel or modifying binary code, and it's[br]just a nightmare to work with. So what we
0:02:22.630,0:02:26.850
are presenting today is a new approach[br]that allows you to instrument any form of
0:02:26.850,0:02:31.920
binary code or modern binary code based on[br]static rewriting, which gives you full
0:02:31.920,0:02:36.819
native performance. You only pay for the[br]instrumentation that you add, and you can
0:02:36.819,0:02:41.690
do very heavyweight transformations on top[br]of it. The picture, if you look at the
0:02:41.690,0:02:47.470
modern system, say you're looking
0:02:47.470,0:02:52.700
at cat pictures in your browser: Chrome[br]plus the kernel plus the libc plus the
0:02:52.700,0:02:57.920
graphical user interface together clock in[br]at about 100 million lines of code.
0:02:57.920,0:03:02.670
Instrumenting all of this for some form of[br]security analysis is a nightmare,
0:03:02.670,0:03:06.690
especially along this large stack of[br]software. There's quite a bit of different
0:03:06.690,0:03:11.260
compilers involved. There's different[br]linkers. It may be compiled on a different
0:03:11.260,0:03:14.620
system, with different settings and so on.[br]And then getting your instrumentation
0:03:14.620,0:03:18.569
across all of this is pretty much[br]impossible and extremely hard to work
0:03:18.569,0:03:24.269
with. And we want to enable you to select[br]those different parts that you're actually
0:03:24.269,0:03:29.629
interested in. Modify those and then focus[br]your fuzzing or analysis approaches on
0:03:29.629,0:03:35.040
those small subsets of the code, giving[br]you a much better and stronger capability
0:03:35.040,0:03:38.690
to test those[br]parts of the system that you're really,
0:03:38.690,0:03:45.659
really interested in. Who's worked on[br]fuzzing before? Quick show of hands. Wow,
0:03:45.659,0:03:54.379
that's a bunch of you. Do you use AFL?[br]Yeah, most of you, AFL. Libfuzzer? Cool,
0:03:54.379,0:03:59.760
about 10, 15 percent libFuzzer, 30 percent[br]fuzzing, and AFL. There's quite good
0:03:59.760,0:04:03.980
knowledge of fuzzing, so I'm not going to[br]spend too much time on fuzzing, but for
0:04:03.980,0:04:07.500
those that haven't really run their[br]fuzzing campaigns yet, it's a very simple
0:04:07.500,0:04:12.060
software testing technique. You're[br]effectively taking a binary, let's say
0:04:12.060,0:04:16.480
Chrome, as a target and you're running[br]this in some form of execution
0:04:16.480,0:04:20.959
environment. And fuzzing then consists of[br]some form of input generation that creates
0:04:20.959,0:04:26.620
new test cases, throws them at your[br]program, and checks what is
0:04:26.620,0:04:31.310
happening with your program. And either[br]everything is OK, and your code is being
0:04:31.310,0:04:35.640
executed, the program[br]terminates, everything is fine, or you
0:04:35.640,0:04:39.773
have a bug report. If you have a bug[br]report, you can use this. Find the
0:04:39.773,0:04:44.520
vulnerability, maybe develop a PoC and[br]then come up with some form of either
0:04:44.520,0:04:49.240
exploit or patch or anything else. Right.[br]So this is pretty much fuzzing in a
0:04:49.240,0:04:55.560
nutshell. How do you get fuzzing to be[br]effective? How can you cover large source
0:04:55.560,0:05:00.419
bases, complex code, and complex[br]environment? Well, there's a couple of
0:05:00.419,0:05:04.979
simple steps that you can take. And let's[br]walk quickly through effective fuzzing
0:05:04.979,0:05:12.630
101. Well, first, you want to be able to[br]create test cases that actually trigger
0:05:12.630,0:05:18.100
bugs. And this is a very, very[br]complicated part. And we need
0:05:18.100,0:05:22.800
to have some notion of the inputs that a[br]program accepts. And we need to have some
0:05:22.800,0:05:27.780
notion of how we can explore different[br]parts of the program, right? Different
0:05:27.780,0:05:30.870
parts of functionality. Well, on one hand,[br]we could have a developer write all the
0:05:30.870,0:05:34.370
test cases by hand, but this would be kind[br]of boring. It would also require a lot of
0:05:34.370,0:05:40.220
human effort in creating these different[br]inputs and so on. So coverage guided
0:05:40.220,0:05:46.990
fuzzing has evolved as a very simple way[br]to guide the fuzzing process, leveraging
0:05:46.990,0:05:51.220
the information on which parts of the code[br]have been executed by simply tracing the
0:05:51.220,0:05:58.500
individual path through the program based[br]on the execution flow. The
0:05:58.500,0:06:03.460
fuzzer can use this feedback to then[br]modify the inputs that are being thrown at
0:06:03.460,0:06:09.830
the fuzzing process. The second step is[br]that the fuzzer must be able to detect bugs. If
0:06:09.830,0:06:13.080
you've ever looked at a memory corruption,[br]if you're just writing one byte after the
0:06:13.080,0:06:18.490
end of a buffer, it's highly likely that[br]your software is not going to crash. But
0:06:18.490,0:06:21.180
it's still a bug, and it may still be[br]exploitable based on the underlying
0:06:21.180,0:06:26.690
conditions. So we want to be able to[br]detect violations as soon as they happen,
0:06:26.690,0:06:31.600
for example, based on some form of[br]sanitization that we add, some form of
0:06:31.600,0:06:35.400
instrumentation that we add to the[br]binary, that then tells us, hey, there's a
0:06:35.400,0:06:39.729
violation of the memory safety property,[br]and we terminate the application right
0:06:39.729,0:06:45.300
away as feedback to the fuzzer. Third,[br]and last but not least: Speed is
0:06:45.300,0:06:49.569
key, right? If you're running a[br]fuzzing campaign, you have a fixed
0:06:49.569,0:06:54.639
resource budget. You have a couple of[br]cores, and you want to run for 24 hours,
0:06:54.639,0:06:59.470
48 hours, a couple of days. But either[br]way, whatever your constraints are, you
0:06:59.470,0:07:04.210
have a fixed amount of instructions that[br]you can actually execute. And you have to
0:07:04.210,0:07:08.699
decide, am I spending my instructions on[br]generating new inputs, tracking
0:07:08.699,0:07:14.139
constraints, finding bugs, running[br]sanitization or executing the program? And
0:07:14.139,0:07:17.790
you need to find a balance between all of[br]them, as it is a zero sum game. You have a
0:07:17.790,0:07:20.870
fixed amount of resources and you're[br]trying to make the best with these
0:07:20.870,0:07:26.890
resources. So any overhead is slowing you[br]down. And again, this becomes an
0:07:26.890,0:07:30.819
optimization problem. How can you most[br]effectively use the resources that you
0:07:30.819,0:07:37.580
have available? As we are fuzzing with[br]source code, it's quite easy to actually
0:07:37.580,0:07:41.770
leverage existing mechanisms, and we add[br]all that instrumentation at compile time.
0:07:41.770,0:07:45.630
We take source code, we pipe it through[br]the compiler and modern compiler
0:07:45.630,0:07:51.169
platforms, allow you to instrument and add[br]little code snippets during the
0:07:51.169,0:07:55.419
compilation process that then carry out[br]all these tasks that are useful for
0:07:55.419,0:08:00.270
fuzzing. For example, modern compilers can[br]add short snippets of code for coverage
0:08:00.270,0:08:03.990
tracking that will record which parts of[br]the code you have executed, or for
0:08:03.990,0:08:08.770
sanitization which record and check every[br]single memory access if it is safe or not.
0:08:08.770,0:08:12.360
And then when you're running the[br]instrumented binary, everything is fine
0:08:12.360,0:08:17.380
and you can detect the policy violations[br]as you go along. Now if you would have
0:08:17.380,0:08:21.330
source code for everything, this would be[br]amazing. But it's often not the case,
0:08:21.330,0:08:28.129
right? We may be able on Linux to cover a[br]large part of the software stack by
0:08:28.129,0:08:33.940
focusing only on source-code-based[br]approaches. But there may be applications
0:08:33.940,0:08:39.300
where no source code is available. If we[br]move to Android or other mobile systems,
0:08:39.300,0:08:43.199
there's many drivers that are not[br]available as open source or just available
0:08:43.199,0:08:48.630
as binary blobs, or the full software[br]stack may be closed-source and we only get
0:08:48.630,0:08:52.329
the binaries. And we still want to find[br]vulnerabilities in these complex software
0:08:52.329,0:08:59.530
stacks that span hundreds of millions of[br]lines of code in a very efficient way. The
0:08:59.530,0:09:04.620
only solution to cover this part of the[br]massive code base is to actually rewrite
0:09:04.620,0:09:08.990
and focus on binaries. A very simple[br]approach could be black box fuzzing, but
0:09:08.990,0:09:11.620
this doesn't really get you[br]anywhere because you don't get any
0:09:11.620,0:09:16.100
feedback; you don't get any information if[br]you're triggering bugs. So one simple
0:09:16.100,0:09:20.290
approach, and this is the approach that is[br]most dominantly used today, is to rewrite
0:09:20.290,0:09:26.040
the program or the binary dynamically. So[br]you're taking the binary and during
0:09:26.040,0:09:32.010
execution you use some form of dynamic[br]binary instrumentation based on Pin, angr,
0:09:32.010,0:09:37.140
or some other binary rewriting tool and[br]translate the target at runtime, adding
0:09:37.140,0:09:43.330
this binary instrumentation on top of it[br]as you're executing it. It's simple, it's
0:09:43.330,0:09:46.930
straightforward, but it comes at a[br]terrible performance cost of ten to a
0:09:46.930,0:09:51.600
hundred x slow down, which is not really[br]effective. And you're spending all your
0:09:51.600,0:09:57.600
cores and your cycles on just executing[br]the binary instrumentation. So we don't
0:09:57.600,0:10:01.790
really want to do this and we want to have[br]something that's more effective than that.
0:10:01.790,0:10:07.360
So what we are focusing on is to do static[br]rewriting. It involves a much more complex
0:10:07.360,0:10:12.380
analysis as we are rewriting the binary[br]before it is being executed, and we have
0:10:12.380,0:10:17.880
to recover all of the control flow, all of[br]the different mechanisms, but it results
0:10:17.880,0:10:24.690
in much better performance. And we can[br]get more bang for our buck. So why is
0:10:24.690,0:10:30.830
static rewriting so challenging? Well,[br]first, simply adding code will break the
0:10:30.830,0:10:35.320
target. So if you are disassembling this[br]piece of code here, which is a simple loop
0:10:35.320,0:10:40.620
that loads data, decrements the registers,[br]and then jumps if you're not at the end of
0:10:40.620,0:10:46.470
the array and keeps iterating through this[br]array. Now, as you look at the jump-not-
0:10:46.470,0:10:52.100
zero instruction, the last instruction of[br]the snippet, it is a relative offset. So
0:10:52.100,0:10:57.990
it jumps backward seven bytes. Which is[br]nice if you just execute the code as is.
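To make the offset problem concrete, here is a small Python model of that backward jump (toy offsets; the only x86 fact assumed is that relative branches resolve against the address of the next instruction):

```python
def branch_target(branch_off, insn_len, disp):
    # x86 relative branches resolve against the *next* instruction.
    return branch_off + insn_len + disp

# Toy layout of the loop: head at offset 0, a 2-byte "jnz -7" at offset 5.
JNZ_OFF, JNZ_LEN, DISP = 5, 2, -7
assert branch_target(JNZ_OFF, JNZ_LEN, DISP) == 0  # jumps to the loop head

# Insert 2 bytes of instrumentation at offset 2, between head and jnz.
INSERTED = 2
moved_jnz = JNZ_OFF + INSERTED                     # the jnz slides to 7
stale = branch_target(moved_jnz, JNZ_LEN, DISP)
print(stale)            # 2: the unpatched jump now lands inside the
                        # inserted code instead of at the loop head
fixed_disp = 0 - (moved_jnz + JNZ_LEN)
print(fixed_disp)       # -9: the displacement the rewriter must emit
```

Every branch whose source and target end up on opposite sides of an insertion point needs exactly this kind of fixup, which is why missing a single reference breaks the binary.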
0:10:57.990,0:11:02.040
But as soon as you want to insert new[br]code, you change the offsets in the
0:11:02.040,0:11:07.110
program, and you're modifying all these[br]different offsets. And simply adding new
0:11:07.110,0:11:12.769
code somewhere in between will break the[br]target. So a core feature that we need to
0:11:12.769,0:11:18.170
enforce, or core property that we need to[br]enforce, is that we must find all the
0:11:18.170,0:11:24.050
references and properly adjust them, both[br]relative offsets and absolute offsets as
0:11:24.050,0:11:29.800
well. Getting a single one wrong will[br]break everything. What makes this problem
0:11:29.800,0:11:34.520
really, really hard is that if you're[br]looking at the binary, a byte is a byte,
0:11:34.520,0:11:38.320
right? There's no way for us to[br]distinguish between scalars and
0:11:38.320,0:11:43.649
references, and in fact they are[br]indistinguishable. Getting a single
0:11:43.649,0:11:50.400
reference wrong breaks the target and[br]would introduce arbitrary crashes. So we
0:11:50.400,0:11:54.460
have to come up with ways that allow us to[br]distinguish between the two. So for
0:11:54.460,0:11:59.899
example, if you have this code here, it[br]takes a value and stores it somewhere on
0:11:59.899,0:12:07.060
the stack. This could come from two[br]different kind of high-level constructs.
0:12:07.060,0:12:12.170
On one hand, it could be taking the[br]address of a function and storing this
0:12:12.170,0:12:16.540
function address in a stack[br]variable. Or it could be just storing a
0:12:16.540,0:12:21.579
scalar in a stack variable. And these two[br]are indistinguishable, and rewriting them,
0:12:21.579,0:12:25.220
as soon as we add new code, the offsets[br]will change. If it is a function, we would
0:12:25.220,0:12:31.800
have to modify the value; if it is a[br]scalar, we have to keep the same value. So
0:12:31.800,0:12:35.510
how can we come up with a way that allows[br]us to distinguish between the two and
0:12:35.510,0:12:44.610
rewrite binaries by recovering this[br]missing information? So let us
0:12:44.610,0:12:48.120
take you on a journey[br]towards instrumenting binaries in the
0:12:48.120,0:12:53.070
kernel. This is what we aim for. We'll[br]start with the simple case of
0:12:53.070,0:12:57.410
instrumenting binaries in user land, talk[br]about different kinds of coverage guided
0:12:57.410,0:13:01.750
fuzzing and what kind of instrumentation[br]we can add, what kind of sanitization we
0:13:01.750,0:13:06.390
can add, and then focusing on taking it[br]all together and applying it to kernel
0:13:06.390,0:13:11.480
binaries to see what will fall out of[br]it. Let's start with instrumenting
0:13:11.480,0:13:17.019
binaries first. I will now talk a little[br]bit about RetroWrite, our mechanism and
0:13:17.019,0:13:24.560
our tool that enables static binary[br]instrumentation by symbolizing existing
0:13:24.560,0:13:30.800
binaries. So we recover the information[br]and we translate relative offsets and
0:13:30.800,0:13:39.710
absolute offsets into actual labels that[br]are added to the assembly file. The
0:13:39.710,0:13:42.760
instrumentation can then work on the[br]recovered assembly file, which can then be
0:13:42.760,0:13:48.110
reassembled into a binary that can then be[br]executed for fuzzing. We implement
0:13:48.110,0:13:52.459
coverage tracking and a binary address[br]sanitizer on top of this, leveraging
0:13:52.459,0:13:57.970
abstraction as we go forward. The key to[br]enabling this kind of binary rewriting is
0:13:57.970,0:14:02.170
position-independent code. And position-[br]independent code has become the de-facto
0:14:02.170,0:14:07.420
standard for any code that is being[br]executed on a modern system. And it
0:14:07.420,0:14:12.019
effectively says that it is code that can[br]be loaded at any arbitrary address in your
0:14:12.019,0:14:15.600
address space as you are executing[br]binaries. It is essential and a
0:14:15.600,0:14:19.010
requirement if you want to have address[br]space layout randomization or if you want
0:14:19.010,0:14:22.269
to use shared libraries, which de facto[br]you want to use in all these different
0:14:22.269,0:14:26.090
systems. So since a couple of years, all[br]the code that you're executing on your
0:14:26.090,0:14:33.079
phones, on your desktops, on your laptops[br]is position-independent code. And the idea
0:14:33.079,0:14:36.680
behind position-independent code is[br]that you can load it anywhere in your
0:14:36.680,0:14:41.040
address space and you can therefore not[br]use any hard-coded static addresses and
0:14:41.040,0:14:44.420
you have to inform the system, through[br]relocations or relative
0:14:44.420,0:14:52.920
addresses, of how it can[br]relocate these different references. On
0:14:52.920,0:14:58.540
x86_64, position-independent code[br]leverages addressing that is relative to
0:14:58.540,0:15:03.440
the instruction pointer. So for example,[br]it uses the current instruction pointer
0:15:03.440,0:15:07.519
and then a relative offset to that[br]instruction pointer to reference global
0:15:07.519,0:15:12.030
variables, other functions and so on. And[br]this is a very easy way for us to
0:15:12.030,0:15:17.710
distinguish references from constants,[br]especially in PIE binaries. If it is RIP-
0:15:17.710,0:15:21.360
relative, it is a reference; everything[br]else is a constant. And we can build our
0:15:21.360,0:15:25.690
translation algorithm and our translation[br]mechanism based on this fundamental
0:15:25.690,0:15:30.130
finding to remove any form of heuristic[br]that is needed by focusing especially on
0:15:30.130,0:15:35.030
position-independent code. So we support[br]position-independent code; we
0:15:35.030,0:15:38.920
don't support non-position-[br]independent code, but in return we get the
0:15:38.920,0:15:43.200
guarantee that we can rewrite all such[br]code that is out there. So
0:15:43.200,0:15:48.449
symbolization works as follows: If you[br]have the little bit of code on the lower
0:15:48.449,0:15:54.030
right, symbolization replaces first all[br]the references with assembler labels. So
0:15:54.030,0:15:57.700
look at the call instruction and the jump-[br]not-zero instruction; the call instruction
0:15:57.700,0:16:02.399
references an absolute address and the[br]jump-not-zero instruction jumps backward
0:16:02.399,0:16:08.259
relative 15 bytes. So by focusing on these[br]relative jumps and calls, we can replace
0:16:08.259,0:16:12.020
them with actual labels and rewrite the[br]binary as follows: so we're calling a
0:16:12.020,0:16:15.839
function, replacing it with the actual[br]label, and for the jump-not-zero we are
0:16:15.839,0:16:21.020
inserting an actual label in the assembly[br]code and adding a backward reference. For
0:16:21.020,0:16:26.089
PC-relative addresses, for example the[br]data load, we can then replace it with the
0:16:26.089,0:16:30.329
name of the actual data that we have[br]recovered, and we can then add all the
0:16:30.329,0:16:35.630
different relocations and use that as[br]auxiliary information on top of it. After
0:16:35.630,0:16:43.480
these three steps, we can insert any new[br]code in between, and can therefore add
0:16:43.480,0:16:47.420
different forms of instrumentation or run[br]some higher-level analysis on top of
0:16:47.420,0:16:53.940
it, and then reassemble the file for[br]fuzzing or coverage-guided tracking,
0:16:53.940,0:16:59.100
address sanitization or whatever else you[br]want to do. I will now hand over to
0:16:59.100,0:17:04.490
Matteo, who will cover coverage-guided[br]fuzzing and sanitization and then
0:17:04.490,0:17:07.260
instrumenting the binaries in the kernel.[br]Go ahead.
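As a toy Python illustration of the symbolization pass just described (a deliberate simplification; RetroWrite itself also handles RIP-relative data references, relocations, and many corner cases):

```python
import re

# Input: disassembly as (address, instruction) pairs with numeric targets.
disasm = [
    (0x400400, "mov al, byte ptr [rdi]"),
    (0x400402, "dec rcx"),
    (0x400405, "jnz 0x400400"),
    (0x400407, "call 0x400402"),
]

BRANCH = re.compile(r"^(jnz|jmp|call)\s+0x([0-9a-f]+)$")

def symbolize(insns):
    """Replace numeric branch targets with labels and emit those labels,
    so new code can be inserted and the file reassembled."""
    targets, rewritten = set(), []
    for addr, text in insns:
        m = BRANCH.match(text)
        if m:
            target = int(m.group(2), 16)
            targets.add(target)
            text = f"{m.group(1)} .L{target:x}"
        rewritten.append((addr, text))
    lines = []
    for addr, text in rewritten:
        if addr in targets:
            lines.append(f".L{addr:x}:")   # label replaces the raw offset
        lines.append(f"    {text}")
    return "\n".join(lines)

print(symbolize(disasm))
```

Once targets are labels, the assembler recomputes every displacement when the file is reassembled, so instrumentation can be inserted between any two lines without breaking the branches.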
0:17:07.260,0:17:11.300
Nspace (Matteo Rizzo): So, now that we[br]have this really nice framework to rewrite
0:17:11.300,0:17:16.500
binaries, one of the things that we want[br]to add to actually get to fuzzing is this
0:17:16.500,0:17:22.960
coverage-tracking instrumentation. So[br]coverage-guided fuzzing is a way, a
0:17:22.960,0:17:27.549
method to let the fuzzer discover[br]interesting inputs and interesting paths through
0:17:27.549,0:17:35.520
the target by itself. So the basic idea is[br]that the fuzzer will track coverage—the
0:17:35.520,0:17:39.190
parts of the program that are covered by[br]different inputs by inserting some kind of
0:17:39.190,0:17:43.419
instrumentation. So, for example, here we[br]have this target program that checks if
0:17:43.419,0:17:48.651
the input contains the string "PNG" at the[br]beginning, and if it does, then it does
0:17:48.651,0:17:53.559
something interesting, otherwise it just[br]bails out and fails. So if we track the
0:17:53.559,0:17:58.240
parts of the program that each input[br]executes, the fuzzer can figure out that
0:17:58.240,0:18:03.100
an input that contains "P" will have[br]discovered a different path through the
0:18:03.100,0:18:08.080
program than an input that doesn't contain[br]it. And so on, it can, one byte at a
0:18:08.080,0:18:13.360
time, discover that this program expects[br]this magic sequence "PNG" at the start of
0:18:13.360,0:18:19.280
the input. So the way that the fuzzer does[br]this is that every time a new input
0:18:19.280,0:18:23.730
discovers a new path though the target, it[br]is considered interesting and added to a
0:18:23.730,0:18:28.890
corpus of interesting inputs. And every[br]time the fuzzer needs to generate a new
0:18:28.890,0:18:35.610
input, it will select something from the[br]corpus, mutate it randomly, and then use
0:18:35.610,0:18:39.830
it as the new input. So this is, like,[br]conceptually pretty
0:18:39.830,0:18:43.150
simple, but in practice it works really[br]well and it really lets the fuzzer
0:18:43.150,0:18:47.740
discover the format that the target[br]expects in an unsupervised way. So as an
0:18:47.740,0:18:53.010
example, this is an experiment that was[br]run by the author of AFL—AFL is the fuzzer
0:18:53.010,0:18:58.049
that sort of popularized this[br]technique—where he was fuzzing a JPEG-
0:18:58.049,0:19:02.160
parsing library, starting from a corpus[br]that only contained the string "hello". So
0:19:02.160,0:19:07.650
now clearly "hello" is not a valid JPEG[br]image, but still, like, the fuzzer
0:19:07.650,0:19:12.070
was still able to discover the[br]correct format. So after a while it
0:19:12.070,0:19:17.580
started generating some grayscale images,[br]on the top left, and as it generated more
0:19:17.580,0:19:20.720
and more inputs, it started generating[br]more interesting images, such as some
0:19:20.720,0:19:25.120
grayscale gradients, and later on even[br]some color images. So as you can see, this
0:19:25.120,0:19:30.630
really works, and it allows us to fuzz a[br]program without really teaching the fuzzer
0:19:30.630,0:19:34.600
what the input should look like. So that's[br]it for coverage-guided fuzzing. Now we'll
0:19:34.600,0:19:38.190
talk a bit about sanitizations. As a[br]reminder, the core idea behind
0:19:38.190,0:19:42.330
sanitization is that just looking for[br]crashes is likely to miss some of the
0:19:42.330,0:19:45.919
bugs. So, for example, if you have this[br]out-of-bounds one-byte read, that will
0:19:45.919,0:19:49.590
probably not crash the target, but you[br]would still like to catch it because it
0:19:49.590,0:19:53.080
could be used for an info leak, for[br]example. So one of the most popular
0:19:53.080,0:19:59.030
sanitizers is Address Sanitizer. So[br]Address Sanitizer will instrument all the
0:19:59.030,0:20:04.630
memory accesses in your program and check[br]for memory corruption. So, memory
0:20:04.630,0:20:08.809
corruption is a pretty dangerous class of[br]bugs that unfortunately still plagues C
0:20:08.809,0:20:16.770
and C++ programs and unsafe languages in[br]general. And ASan tries to catch it by
0:20:16.770,0:20:21.220
instrumenting the target. It is very[br]popular; it has been used to find
0:20:21.220,0:20:26.900
thousands of bugs in complex software like[br]Chrome and Linux, and even though it has,
0:20:26.900,0:20:31.500
like, a bit of a slowdown—like about 2x—it[br]is still really popular because it lets
0:20:31.500,0:20:37.120
you find many, many more bugs. So how does[br]it work? The basic idea is that ASan will
0:20:37.120,0:20:41.790
insert some special regions of memory[br]called 'red zones' around every object in
0:20:41.790,0:20:47.270
memory. So we have a small example here[br]where we declare a 4-byte array on the
0:20:47.270,0:20:53.700
stack. So ASan will allocate the array[br]"buf" and then add a red zone before it
0:20:53.700,0:20:59.060
and a red zone after it. Whenever the[br]program accesses the red zones, it is
0:20:59.060,0:21:02.660
terminated with a security violation. So[br]the instrumentation just prints a bug
0:21:02.660,0:21:07.419
report and then crashes the target. This[br]is very useful for detecting, for example,
0:21:07.419,0:21:11.400
buffer overflows or underflows and many[br]other kinds of bugs such as use-after-free
0:21:11.400,0:21:16.230
and so on. So, as an example here, we are[br]trying to copy 5 bytes into a 4-byte
0:21:16.230,0:21:22.580
buffer, and ASan will check each of the[br]accesses one by one. And when it sees that
0:21:22.580,0:21:26.810
the last byte writes to a red zone, it[br]detects the violation and crashes the
0:21:26.810,0:21:32.370
program. So this is good for us because[br]this bug might not have been found by
0:21:32.370,0:21:36.120
simply looking for crashes, but it's[br]definitely found if we use ASan. So this
0:21:36.120,0:21:40.750
is something we want for fuzzing. So now[br]that we've covered—briefly covered ASan we
0:21:40.750,0:21:45.970
can talk about instrumenting binaries in[br]the kernel. So Mathias left us with
0:21:45.970,0:21:52.580
RetroWrite, and with RetroWrite we can add[br]both coverage tracking and ASan to
0:21:52.580,0:21:57.410
binaries. So it's a really[br]simple idea: now that we can rewrite this
0:21:57.410,0:22:02.760
binary and add instructions wherever we[br]want, we can implement both coverage
0:22:02.760,0:22:07.390
tracking and ASan. In order to implement[br]coverage tracking, we simply have to
0:22:07.390,0:22:11.710
identify the start of every basic block[br]and add a little piece of instrumentation
0:22:11.710,0:22:15.789
at the start of the basic block that tells[br]the fuzzer 'hey, we've reached this part
0:22:15.789,0:22:19.400
of the program'—'hey, we've reached this[br]other part of the program'. Then the
0:22:19.400,0:22:25.039
fuzzer can figure out whether that's a new[br]part or not. ASan is also, like, you know,
0:22:25.039,0:22:29.240
it can also be[br]implemented in this way by finding all
0:22:29.240,0:22:33.929
memory accesses, and then linking with[br]libASan. libASan is a sort of runtime for
0:22:33.929,0:22:38.820
ASan that takes care of inserting the red[br]zones and, you know,
0:22:38.820,0:22:43.340
like, keeping around all the metadata that[br]ASan needs to know where the red zones
0:22:43.340,0:22:48.419
are, and detecting whether a memory access[br]is invalid. So, how can we apply all of
0:22:48.419,0:22:52.309
this in the kernel? Well, first of all,[br]fuzzing the kernel is not as easy as
0:22:52.309,0:22:57.920
fuzzing some userspace program. There's[br]some issues here. So first of all, there's
0:22:57.920,0:23:01.950
crash handling. So whenever you're fuzzing[br]a userspace program, you expect crashes,
0:23:01.950,0:23:06.289
well, because that's what we're after. And[br]if a userspace program crashes, then the
0:23:06.289,0:23:11.410
OS simply handles the crash gracefully.[br]And so the fuzzer can detect this, and
0:23:11.410,0:23:16.270
save the input as a crashing input, and so[br]on. And this is all fine. But when you're
0:23:16.270,0:23:19.470
fuzzing the kernel, so—if you were fuzzing[br]the kernel of the machine that you were
0:23:19.470,0:23:23.040
using for fuzzing, after a while, the[br]machine would just go down. Because, after
0:23:23.040,0:23:27.180
all, the kernel runs the machine, and if[br]it starts misbehaving, then all of it can
0:23:27.180,0:23:31.720
go wrong. And more importantly, you can[br]lose your crashes, because if the
0:23:31.720,0:23:35.450
machine crashes, then the state of the[br]fuzzer is lost and you have no idea what
0:23:35.450,0:23:39.590
your crashing input was. So what most[br]kernel fuzzers have to do is that they
0:23:39.590,0:23:43.419
resort to some kind of VM to keep the[br]system stable. So they fuzz the kernel in
0:23:43.419,0:23:48.500
a VM and then run the fuzzing agent[br]outside the VM. On top of that is tooling.
0:23:48.500,0:23:52.710
So, if you want to fuzz a user space[br]program, you can just download AFL or use
0:23:52.710,0:23:57.540
libfuzzer; there's plenty of tutorials[br]online, it's really easy to set up and
0:23:57.540,0:24:01.200
just, like—compile your program, you start[br]fuzzing and you're good to go. If you want
0:24:01.200,0:24:05.240
to fuzz the kernel, it's already much more[br]complicated. So, for example, if you want
0:24:05.240,0:24:09.390
to fuzz Linux with, say, syzkaller, which[br]is a popular kernel fuzzer, you have to
0:24:09.390,0:24:14.030
compile the kernel, you have to use a[br]special config that supports syzkaller,
0:24:14.030,0:24:20.100
you have way less guides available than[br]for userspace fuzzing, and in general it's
0:24:20.100,0:24:24.940
just much more complex and less intuitive[br]than just fuzzing userspace. And lastly,
0:24:24.940,0:24:29.330
we have the issue of determinism. So in[br]general, if you have a single threaded
0:24:29.330,0:24:32.770
userspace program, unless it uses some[br]kind of random number generator, it is
0:24:32.770,0:24:38.210
more or less deterministic. There's[br]nothing that affects the execution of the
0:24:38.210,0:24:42.299
program. But—and this is really nice if[br]you want to try to reproduce a test case,
0:24:42.299,0:24:46.340
because if you have a non-deterministic[br]test case, then it's really hard to know
0:24:46.340,0:24:50.680
whether this is really a crash or if it's[br]just something that you should ignore, and
0:24:50.680,0:24:56.280
in the kernel this is even harder, because[br]you don't only have concurrency, like
0:24:56.280,0:25:01.200
multi-processing, you also have interrupts.[br]So interrupts can happen at any time, and
0:25:01.200,0:25:05.850
if one time you got an interrupt while[br]executing your test case and the second
0:25:05.850,0:25:09.947
time you didn't, then maybe it only[br]crashes one time - you don't really know,
0:25:09.947,0:25:15.910
it's not pretty. And so again, we[br]have several approaches to fuzzing
0:25:15.910,0:25:20.550
binaries in the kernel. First one is to do[br]black box fuzzing. We don't really
0:25:20.550,0:25:23.677
like this because it doesn't find much,[br]especially in something complex
0:25:23.677,0:25:27.380
like a kernel. Approach 1 is to[br]use dynamic translation,
0:25:27.380,0:25:32.620
so, use something[br]like QEMU or—you name it. This works, and
0:25:32.620,0:25:35.121
people have used it successfully; the[br]problem is that it is really, really,
0:25:35.121,0:25:41.500
really slow. Like, we're talking about[br]10x-plus overhead. And as we said before,
0:25:41.500,0:25:45.570
the more iterations, the more test cases[br]you can execute in the same amount of
0:25:45.570,0:25:50.700
time, the better, because you find more[br]bugs. And on top of that, there's no
0:25:50.700,0:25:57.520
currently available sanitizer for[br]kernel binaries that is based on
0:25:57.520,0:26:01.309
this approach. So in userspace you have[br]something like valgrind; in the kernel,
0:26:01.309,0:26:05.071
you don't have anything, at least that we[br]know of. There is another approach, which
0:26:05.071,0:26:09.951
is to use Intel Processor Trace. This has[br]been, like—there's been some research
0:26:09.951,0:26:14.240
papers on this recently, and this is nice[br]because it allows you to collect coverage
0:26:14.240,0:26:18.040
at nearly zero overhead. It's, like,[br]really fast, but the problem is that it
0:26:18.040,0:26:23.020
requires hardware support, so it requires[br]a fairly new x86 CPU, and if you want to
0:26:23.020,0:26:27.159
fuzz something on ARM, say, like, your[br]Android driver, or if you want to use an
0:26:27.159,0:26:32.120
older CPU, then you're out of luck. And[br]what's worse, you cannot really use it for
0:26:32.120,0:26:36.490
sanitization, or at least not the kind of[br]sanitization that ASan does, because it
0:26:36.490,0:26:41.770
just traces the execution; it doesn't[br]allow you to do checks on memory accesses.
0:26:41.770,0:26:47.350
So Approach 3, which is what we will use,[br]is static rewriting. So, we had this very
0:26:47.350,0:26:50.750
nice framework for rewriting userspace[br]binaries, and then we asked ourselves, can
0:26:50.750,0:26:56.659
we make this work in the kernel? So we[br]took the system, the original RetroWrite,
0:26:56.659,0:27:02.650
we modified it, we implemented support for[br]Linux modules, and... it works! So we have
0:27:02.650,0:27:08.110
implemented it—we have used it to fuzz[br]some kernel modules, and it really shows
0:27:08.110,0:27:11.640
that this approach doesn't only work for[br]userspace; it can also be applied to the
0:27:11.640,0:27:18.510
kernel. So as for some implementation, the[br]nice thing about kernel modules is that
0:27:18.510,0:27:22.170
they're always position independent. So[br]you cannot have position—like, fixed-
0:27:22.170,0:27:26.370
position kernel modules because Linux just[br]doesn't allow it. So we sort of get that
0:27:26.370,0:27:32.220
for free, which is nice. And Linux modules[br]are also a special class of ELF files,
0:27:32.220,0:27:35.890
which means that the format is—even though[br]it's not the same as userspace binaries,
0:27:35.890,0:27:40.310
it's still somewhat similar, so we didn't[br]have to change the symbolizer that much,
0:27:40.310,0:27:46.539
which is also nice. And we implemented[br]symbolization with this, and we used it to
0:27:46.539,0:27:54.490
implement both code coverage and binary[br]ASan for kernel binary modules. So for
0:27:54.490,0:27:59.039
coverage: The idea behind the whole[br]RetroWrite project was that we wanted to
0:27:59.039,0:28:03.500
integrate with existing tools. So existing[br]fuzzing tools. We didn't want to force our
0:28:03.500,0:28:08.770
users to write their own fuzzer that is[br]compatible with RetroWrite. So for—in
0:28:08.770,0:28:13.470
userspace we had AFL-style coverage[br]tracking, and binary ASan which is
0:28:13.470,0:28:16.490
compatible with source-based ASan, and we[br]wanted to follow the same principle in the
0:28:16.490,0:28:21.900
kernel. So it turns out that Linux has[br]this built-in coverage-tracking framework
0:28:21.900,0:28:26.529
called kCov that is used by several[br]popular kernel fuzzers like syzkaller, and
0:28:26.529,0:28:31.049
we wanted to use it ourselves. So we[br]designed our coverage instrumentation so
0:28:31.049,0:28:36.590
that it integrates with kCov. The downside[br]is that you need to compile the kernel
0:28:36.590,0:28:40.690
with kCov, but then again, Linux is open[br]source, so you can sort of always do that;
0:28:40.690,0:28:44.279
it's usually not the[br]kernel that is a binary blob, it's
0:28:44.279,0:28:48.929
usually only the modules. So that's just[br]still fine. And the way you do this is—the
0:28:48.929,0:28:53.370
way you implement kCov for binary modules[br]is that you just have to find the start of
0:28:53.370,0:28:58.539
every basic block, and add a call to some[br]function that then stores the collected
0:28:58.539,0:29:02.530
coverage. So here's an example: we have a[br]short snippet of code with three basic
0:29:02.530,0:29:07.620
blocks, and all we have to do is add a[br]call to "trace_pc" to the start of the
0:29:07.620,0:29:11.940
basic block. "trace_pc" is a function that[br]is part of the main kernel image that then
0:29:11.940,0:29:17.230
collects this coverage and makes it[br]available to a userspace fuzzing agent. So
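As just described, the rewriter only has to prepend one call per basic block, and kCov collects the hit program counters in a shared buffer whose first slot holds the count. A toy Python model of both halves (the names and data structures are illustrative, not the real kernel interface):

```python
# Simplified model of kCov-style coverage: slot 0 of the shared buffer
# holds the number of PCs recorded so far, followed by the PCs themselves.
class KcovBuffer:
    def __init__(self, size):
        self.area = [0] * size  # area[0] = count, area[1:] = recorded PCs

    def trace_pc(self, pc):
        # Called at the start of every instrumented basic block.
        n = self.area[0]
        if n + 1 < len(self.area):        # drop PCs once the buffer is full
            self.area[n + 1] = pc
            self.area[0] = n + 1

def instrument(basic_blocks):
    # basic_blocks: {start_address: [instruction, ...]}
    # The static rewriting step: prepend the coverage call to every block,
    # like the "trace_pc" call shown on the slide.
    return {addr: ["call trace_pc"] + instrs
            for addr, instrs in basic_blocks.items()}

kcov = KcovBuffer(64)
for pc in (0x1000, 0x1020, 0x1000):
    kcov.trace_pc(pc)
print(kcov.area[:4])  # [3, 4096, 4128, 4096]
```

A userspace fuzzing agent would then read the buffer back after each test case to learn which blocks were covered.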
0:29:17.230,0:29:21.210
this is all really easy and it works. And[br]let's now see how we implemented binary
0:29:21.210,0:29:25.600
ASan. So as I mentioned before, when we[br]instrument the program with binary ASan in
0:29:25.600,0:29:29.690
userspace we link with libASan, which[br]takes care of setting up the metadata,
0:29:29.690,0:29:33.880
takes care of putting the red zones around[br]our allocations, and so on. So we had to
0:29:33.880,0:29:37.330
do something similar in the kernel; of[br]course, you cannot link with libASan in
0:29:37.330,0:29:42.630
the kernel, because that doesn't work, but[br]what we can do instead is, again, compile
0:29:42.630,0:29:47.240
the kernel with kASan support. So this[br]instruments the allocator, kmalloc, to add
0:29:47.240,0:29:52.110
the red zones; it allocates space for the[br]metadata, it keeps this metadata around,
0:29:52.110,0:29:56.279
does this all for us, which is really[br]nice. And again, the big advantage of
0:29:56.279,0:30:00.580
using this approach is that we can[br]integrate seamlessly with a kASan-
0:30:00.580,0:30:05.800
instrumented kernel and with fuzzers that[br]rely on kASan such as syzkaller. So we see
0:30:05.800,0:30:11.500
this as more of a plus than, like, a[br]limitation. And how do you implement ASan?
0:30:11.500,0:30:16.561
Well, you have to find every memory access[br]and instrument it to check
0:30:16.561,0:30:22.370
whether this is accessing a red zone. And[br]if it does then you just call this bug
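A simplified model of that per-access check, using ASan's shadow-byte convention (one shadow byte per 8 bytes of memory; 0 means fully valid, 1 to 7 means only that many leading bytes of the granule are valid, 0xfc marks a red zone). The Python form and the function names are illustrative; the real kASan check is emitted as a handful of inline x86 instructions:

```python
# Rough model of the red-zone check inserted before every memory access.
SHADOW_SCALE = 8   # one shadow byte covers 8 bytes of real memory
REDZONE = 0xfc     # the "fc" bytes visible in kASan bug reports

def check_access(shadow, addr, size):
    # Returns True if the access is allowed, False if it hits a red zone.
    shadow_byte = shadow[addr // SHADOW_SCALE]
    if shadow_byte == 0:
        return True            # whole 8-byte granule is addressable
    if shadow_byte == REDZONE:
        return False           # access lands in a red zone
    # Partially valid granule: the offset inside the granule plus the
    # access size must stay within the valid prefix.
    return (addr % SHADOW_SCALE) + size <= shadow_byte

def instrumented_load(shadow, addr, size=1):
    # Stand-in for the instrumented load: report instead of oopsing.
    if not check_access(shadow, addr, size):
        raise RuntimeError("kasan: out-of-bounds access at %#x" % addr)

# A 16-byte allocation followed by a red zone, like the demo later on:
shadow = [0, 0, REDZONE]
print(check_access(shadow, 15, 1))  # True: last valid byte
print(check_access(shadow, 16, 1))  # False: one byte past the end
```

In the rewritten module, the bug-report path additionally dumps a stack trace and crashes the kernel so the fuzzer notices the hit.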
0:30:22.370,0:30:26.010
report function that produces a stack[br]trace, a bug report, and crashes the
0:30:26.010,0:30:29.649
kernel, so that the fuzzer can detect it.[br]Again, this is compatible with source-
0:30:29.649,0:30:36.990
based kASan, so we're happy. We can simply[br]load the rewritten module with added
0:30:36.990,0:30:40.220
instrumentation into a kernel, as long as[br]you have compiled the kernel with the
0:30:40.220,0:30:44.340
right flags, and we can use a standard[br]kernel fuzzer. Here for our
0:30:44.340,0:30:49.910
evaluation, we used syzkaller, a popular[br]kernel fuzzer by some folks at Google, and
0:30:49.910,0:30:55.460
it worked really well. So we've finally[br]reached the end of our journey, and now we
0:30:55.460,0:31:00.470
wanted to present some experiments we did[br]to see if this really works. So for
0:31:00.470,0:31:05.289
userspace, we wanted to compare the[br]performance of our binary ASan with
0:31:05.289,0:31:10.360
source-based ASan and with existing[br]solutions that also work on binaries. So
0:31:10.360,0:31:15.860
for userspace, you can use valgrind[br]memcheck. It's a memory sanitizer that is
0:31:15.860,0:31:20.850
based on binary translation and dynamic[br]binary translation and works on binaries.
0:31:20.850,0:31:25.460
We compared it with source ASan and[br]RetroWrite ASan on the SPEC CPU benchmark
0:31:25.460,0:31:31.100
and saw how fast it was. And for the[br]kernel we decided to fuzz some file
0:31:31.100,0:31:37.519
systems and some drivers with syzkaller[br]using both source-based KASan and kCov and
0:31:37.519,0:31:44.671
kRetroWrite-based KASan and kCov. So these[br]are our results for userspace. So the red
0:31:44.671,0:31:48.990
bar is valgrind. We can see that the[br]execution time of valgrind is the highest.
0:31:48.990,0:31:55.892
It is really, really slow—like, 3, 10, 30x[br]overhead, way too slow for fuzzing. Then
0:31:55.892,0:32:02.580
in green, we have our binary ASan, which[br]is, like, already a large improvement. In
0:32:02.580,0:32:07.059
orange we have source-based ASan. And then[br]finally in blue we have the original code
0:32:07.059,0:32:11.090
without any instrumentation whatsoever. So[br]we can see that source-based ASan has,
0:32:11.090,0:32:16.659
like, 2x or 3x overhead, and binary ASan[br]is a bit higher, like, a bit less
0:32:16.659,0:32:21.312
efficient, but still somewhat close. So[br]that's for userspace, and for the kernel,
0:32:21.312,0:32:25.440
we—these are some preliminary results, so,[br]this is, like—I'm doing this work as part
0:32:25.440,0:32:29.897
of my master's thesis, and so I'm still,[br]like, running the evaluation. Here we can
0:32:29.897,0:32:33.419
see that the overhead is already, like, a[br]bit lower. So the reason for this is that
0:32:33.419,0:32:39.690
SPEC is a pure CPU benchmark; it doesn't[br]interact with the system that much. And so
0:32:39.690,0:32:44.416
any instrumentation that you add is going[br]to massively slow down, or, like,
0:32:44.416,0:32:49.320
considerably slow down the execution. By[br]contrast, when you fuzz a file system with
0:32:49.320,0:32:56.460
syzkaller, not only every test case has to[br]go from the host to the guest and
0:32:56.460,0:33:01.770
then do multiple syscalls and so on, but[br]also every system call has to go through
0:33:01.770,0:33:05.368
several layers of abstraction before it[br]gets to the actual file system. And all
0:33:05.368,0:33:09.610
these—like, all of this takes a lot of[br]time, and so in practice the overhead of
0:33:09.610,0:33:15.581
our instrumentation seems to be pretty[br]reasonable. So, since we know that you
0:33:15.581,0:33:32.838
like demos, we've prepared a small demo of[br]kRetroWrite. So. Let's see. Yep. Okay. All
0:33:32.838,0:33:40.470
right, so we've prepared a small kernel[br]module. And this module is just, like,
0:33:40.470,0:33:45.669
really simple; it contains a[br]vulnerability, and what it does is that it
0:33:45.669,0:33:49.929
creates a character device. So if you're[br]not familiar with this, a character device
0:33:49.929,0:33:55.130
is like a fake file that is exposed by a[br]kernel driver and that it can read to and
0:33:55.130,0:34:01.630
write from. And instead of going to a[br]file, the data that you read—that you, in
0:34:01.630,0:34:05.590
this case, write to the fake file—goes to[br]the driver and is handled by this demo
0:34:05.590,0:34:10.481
write function. So as we can see, this[br]function allocates a buffer, a 16-byte
0:34:10.481,0:34:14.850
buffer on the heap, and then copies some[br]data into it, and then it checks if the
0:34:14.850,0:34:19.970
data contains the string "1337". If it[br]does, then it accesses the buffer out of
0:34:19.970,0:34:23.446
bounds; you can see "alloc[16]" and the[br]buffer is sixteen bytes; this is an out-
0:34:23.446,0:34:27.550
of-bounds read by one byte. And if it[br]doesn't then it just accesses the buffer
0:34:27.550,0:34:33.050
in bounds, which is fine, and it's not a[br]vulnerability. So we can compile this
0:34:33.050,0:34:47.450
driver. OK, um... OK, and then so we have[br]our module, and then we will instrument it
0:34:47.450,0:35:01.495
using kRetroWrite. So, instrument... Yes,[br]please. OK. Right. So kRetroWrite did some
0:35:01.495,0:35:07.329
processing, and it produced an[br]instrumented module with ASan or kASan and
0:35:07.329,0:35:09.770
a symbolized assembly file. We can[br]actually have a look at the symbolized
0:35:09.770,0:35:17.740
assembly file to see what it looks like.[br]Yes. Yes. OK. So, is this big enough?
0:35:17.740,0:35:22.900
Yeah... As you can see, so—we can actually[br]see here the ASan instrumentation. Ah,
0:35:22.900,0:35:29.329
shouldn't—yeah. So, we—this is the ASan[br]instrumentation. The original code loads
0:35:29.329,0:35:33.290
some data from this address. And as you[br]can see, the ASan instrumentation first
0:35:33.290,0:35:38.240
computes the actual address, and then does[br]some checking—basically, this is checking
0:35:38.240,0:35:44.430
some metadata that ASan stores to check if[br]the address is in a red zone or not, and
0:35:44.430,0:35:49.430
then if the check fails, it[br]calls this ASan report which produces a
0:35:49.430,0:35:54.829
stack trace and crashes the kernel. So[br]this is fine. We can actually even look at
0:35:54.829,0:36:17.820
the disassembly of both modules, so...[br]object dump and then demo... Ah, nope. OK,
0:36:17.820,0:36:21.830
so on the left, we have the original[br]module without any instrumentation; on the
0:36:21.830,0:36:27.070
right, we have the module instrumented[br]with ASan. So as you can see, the original
0:36:27.070,0:36:33.160
module has "push r13" and then has this[br]memory load here; on the right in the
0:36:33.160,0:36:38.559
instrumented module, kRetroWrite inserted[br]the ASan instrumentation. So the original
0:36:38.559,0:36:43.940
load is still down here, but between that,[br]between the first instruction and this
0:36:43.940,0:36:47.851
instruction, we have—now have the kASan[br]instrumentation that does our check. So
0:36:47.851,0:36:56.700
this is all fine. Now we can actually test[br]it and see what it does. So we can—we will
0:36:56.700,0:37:02.210
boot a very simple, a very minimal Linux[br]system, and try to target the
0:37:02.210,0:37:05.793
vulnerability first with the non-[br]instrumented module and then with the
0:37:05.793,0:37:10.410
instrumented module. And we can—we will[br]see that in the—with the non-instrumented
0:37:10.410,0:37:14.550
module, the kernel will not crash, but[br]with the instrumented module it will crash
0:37:14.550,0:37:22.434
and produce a bug report. So. Let's see.[br]Yeah, this is a QEMU VM, I have no idea
0:37:22.434,0:37:27.481
why it's taking so long to boot. I'll[br]blame the demo gods not being kind to
0:37:27.481,0:37:39.730
us. Yeah, I guess we just have to wait.[br]OK. So. All right, so we loaded the
0:37:39.730,0:37:47.334
module. We will see that it has created a[br]fake file character device in /dev/demo.
0:37:47.334,0:37:59.020
Yep. We can write this file. Yep. So this[br]will—this accesses the array in bounds,
0:37:59.020,0:38:04.410
and so this is fine. Then what we can also[br]do is write "1337" to it so it will access
0:38:04.410,0:38:08.968
the array out of bounds. So this is the[br]non-instrumented module, so this will not
0:38:08.968,0:38:14.050
crash. It will just print some garbage[br]value. Okay, that's it. Now we can load
0:38:14.050,0:38:25.890
the instrumented module instead... and do[br]the same experiment again. All right. We
0:38:25.890,0:38:31.640
can see that /dev/demo is still here. So[br]the module still works. Let's try to write
0:38:31.640,0:38:38.540
"1234" into it. This, again, doesn't[br]crash. But when we try to write "1337",
0:38:38.540,0:38:47.940
this will produce a bug report.[br]applause
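For reference, the logic of the demo module's write handler described before the demo boils down to something like this sketch. It is userspace Python rather than the module's kernel C, the names are made up, and the out-of-bounds index is returned instead of dereferenced so the sketch itself stays memory-safe:

```python
# Reconstruction of the buggy demo write handler: a 16-byte heap buffer,
# and input containing "1337" selects an index one past the end, which is
# exactly the one-byte out-of-bounds read that kASan flags.
ALLOC_SIZE = 16

def demo_write(data: bytes) -> int:
    alloc = bytearray(ALLOC_SIZE)       # models the 16-byte kmalloc buffer
    n = min(len(data), ALLOC_SIZE)
    alloc[:n] = data[:n]                # copy the user data in
    # The bug: "1337" anywhere in the input picks index 16 of alloc[0..15].
    index = ALLOC_SIZE if b"1337" in data else ALLOC_SIZE - 1
    # In the kernel module this is a real read of alloc[index]; here we
    # just return the index to show which access would happen.
    return index

print(demo_write(b"1234"))  # 15: in bounds, no crash
print(demo_write(b"1337"))  # 16: one past the end, caught by kASan
```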
0:38:47.940,0:38:51.129
So this has quite a lot of information. We
0:38:51.129,0:38:55.700
can see, like, the—where the memory was[br]allocated, there's a stack trace for that;
0:38:55.700,0:39:02.150
it wasn't freed, so there's no stack trace[br]for the free. And we see that the cache
0:39:02.150,0:39:06.760
size of the memory, like, it was a 16-byte[br]allocation. We can see the shape of the
0:39:06.760,0:39:10.900
memory. We see that these two zeros means[br]that there's two 8-byte chunks of valid
0:39:10.900,0:39:15.550
memory. And then these "fc fc fc" is[br]the—are the red zones that I was talking
0:39:15.550,0:39:19.980
about before. All right, so that's it for[br]the demo. We will switch back to our
0:39:19.980,0:39:24.630
presentation now. So... hope you enjoyed[br]it.
0:39:24.630,0:39:30.530
gannimo: Cool. So after applying this to a[br]demo module, we also wanted to see what
0:39:30.530,0:39:35.365
happens if we apply this to a real file[br]system. After a couple of hours we
0:39:35.365,0:39:41.390
were—when we came back and checked on the[br]results, we saw a couple of issues popping
0:39:41.390,0:39:48.720
up, including a nice set of use-after-free[br]reads, a set of use-after-free writes, and
0:39:48.720,0:39:56.220
we checked the bug reports and we saw a[br]whole bunch of Linux kernel issues popping
0:39:56.220,0:40:02.640
up one after the other in this nondescript[br]module that we fuzzed. We're in the
0:40:02.640,0:40:06.930
process of reporting it. This will take[br]some time until it is fixed; that's why
0:40:06.930,0:40:13.470
you see the blurry lines. But as you see,[br]there's still quite a bit of opportunity
0:40:13.470,0:40:19.190
in the Linux kernel where you can apply[br]different forms of targeted fuzzing into
0:40:19.190,0:40:26.349
different modules, leverage these modules[br]on top of a kASan instrumented kernel and
0:40:26.349,0:40:31.720
then leverage this as part of your[br]fuzzing toolchain to find interesting
0:40:31.720,0:40:39.080
kernel 0days that... yeah. You can then[br]develop further, or report, or do whatever
0:40:39.080,0:40:44.766
you want with them. Now, we've shown you[br]how you can take existing binary-only
0:40:44.766,0:40:51.250
modules, think different binary-only[br]drivers, or even existing modules where
0:40:51.250,0:40:55.800
you don't want to instrument a full set of[br]the Linux kernel, but only focus fuzzing
0:40:55.800,0:41:02.130
and exploration on a small,[br]limited piece of code and then do security
0:41:02.130,0:41:09.247
tests on those. We've shown you how we can[br]do coverage-based tracking and address
0:41:09.247,0:41:13.500
sanitization. But this is also up to you[br]on what kind of other instrumentation you
0:41:13.500,0:41:17.890
want. Like this is just a tool, a[br]framework that allows you to do arbitrary
0:41:17.890,0:41:23.780
forms of instrumentation. So we've taken[br]you on a journey from instrumenting
0:41:23.780,0:41:29.380
binaries over coverage-guided fuzzing and[br]sanitization to instrumenting modules in
0:41:29.380,0:41:36.692
the kernel and then finding crashes in the[br]kernel. Let me wrap up the talk. So, this
0:41:36.692,0:41:41.581
is one of the fun pieces of work that[br]we do in the hexhive lab at EPFL. So if
0:41:41.581,0:41:45.740
you're looking for postdoc opportunities[br]or if you're thinking about a PhD, come
0:41:45.740,0:41:51.809
talk to us. We're always hiring. The tools[br]will be released as open source. A large
0:41:51.809,0:41:57.319
chunk of the userspace work is already[br]open source. We're working on a set of
0:41:57.319,0:42:02.350
additional demos and so on so that you can[br]get started faster, leveraging the
0:42:02.350,0:42:07.810
different existing instrumentation that is[br]already out there. The userspace work is
0:42:07.810,0:42:12.139
already available. The kernel work will be[br]available in a couple of weeks. This
0:42:12.139,0:42:16.770
allows you to instrument real-world[br]binaries for fuzzing, leveraging existing
0:42:16.770,0:42:21.200
transformations for coverage tracking to[br]enable fast and effective fuzzing and
0:42:21.200,0:42:26.490
memory checking to detect the actual bugs[br]that exist there. The key takeaway from
0:42:26.490,0:42:32.430
this talk is that RetroWrite and[br]kRetroWrite enables static binary
0:42:32.430,0:42:38.300
rewriting at zero instrumentation cost. We[br]take the limitation of focusing only on
0:42:38.300,0:42:43.240
position-independent code, which is not a[br]real limitation, but we get the
0:42:43.240,0:42:47.800
advantage of being able to symbolize[br]without actually relying on heuristics, so
0:42:47.800,0:42:55.380
we can even symbolize large, complex[br]applications and
0:42:55.380,0:43:01.090
effectively rewrite those aspects and then[br]you can focus fuzzing on these parts.
0:43:01.090,0:43:06.329
Another point I want to mention is that[br]this enables you to reuse existing tooling
0:43:06.329,0:43:10.981
so you can take a binary blob, instrument[br]it, and then reuse, for example, Address
0:43:10.981,0:43:15.966
Sanitizer or existing fuzzing tools, as it[br]integrates really, really nicely. As I said,
0:43:15.966,0:43:22.700
all the code is open source. Check it out.[br]Try it. Let us know if it breaks. We're
0:43:22.700,0:43:27.521
happy to fix. We are committed to open[br]source. And let us know if there are any
0:43:27.521,0:43:36.750
questions. Thank you.[br]applause
0:43:36.750,0:43:42.250
Herald: So, thanks, guys, for an[br]interesting talk. We have some time for
0:43:42.250,0:43:47.180
questions, so we have microphones along[br]the aisles. We'll start from question from
0:43:47.180,0:43:51.079
microphone number two.[br]Q: Hi. Thanks for your talk and for the
0:43:51.079,0:43:59.400
demo. I'm not sure about the use-case you[br]showed for the kernel RetroWrite. 'Cause
0:43:59.400,0:44:05.579
you're usually interested in fuzzing[br]binary in kernelspace when you don't have
0:44:05.579,0:44:13.980
source code for the kernel. For example,[br]for IoT or Android and so on. But you just
0:44:13.980,0:44:22.260
reuse the kCov and kASan in the kernel,[br]and you never have the kernel in IoT or
0:44:22.260,0:44:28.599
Android which is compiled with that. So[br]are you—do you have any plans to binary
0:44:28.599,0:44:31.666
instrument the kernel itself, not the[br]modules?
0:44:31.666,0:44:39.390
Nspace: So we thought about that. I think[br]that there's some additional problems that
0:44:39.390,0:44:43.910
we would have to solve in order to be able[br]to instrument the full kernel. So other
0:44:43.910,0:44:47.819
than the fact that it gives us[br]compatibility with, like, existing tools,
0:44:47.819,0:44:51.720
the reason why we decided to go with[br]compiling the kernel with kASan and kCov
0:44:51.720,0:44:56.757
is that otherwise you would[br]have to do all of this yourself. You
0:44:56.757,0:45:01.540
have to instrument the memory allocator to[br]add red zones, which is, like, already
0:45:01.540,0:45:07.069
somewhat complex. You have to instrument[br]the exception handlers to catch, like, any
0:45:07.069,0:45:12.240
faults that the instrumentation detects.[br]You would have to, like, set up some
0:45:12.240,0:45:17.480
memory for the ASan shadow. So this is,[br]like—I think you should be able to do it,
0:45:17.480,0:45:21.690
but it would require a lot of additional[br]work. So this is, like—this was like four
0:45:21.690,0:45:25.510
months' thesis. So we decided to start[br]small and prove that it works in
0:45:25.510,0:45:30.470
the kernel for modules, and then leave it[br]to future work to actually extend it to
0:45:30.470,0:45:37.558
the full kernel. Also, like, I think for[br]Android—so in the case of Linux, the
0:45:37.558,0:45:42.072
kernel is GPL, right, so if the[br]manufacturer ships a custom kernel, they
0:45:42.072,0:45:44.614
have to release the source code, right?[br]Q: They never do.
0:45:44.614,0:45:47.220
Nspace: They never—well, that's a[br]different issue. Right?
0:45:47.220,0:45:49.009
gannimo: Right.[br]Q: So that's why I ask, because I don't
0:45:49.009,0:45:51.839
see how it just can be used in the real[br]world.
0:45:51.839,0:45:57.122
gannimo: Well, let me try to put this into[br]perspective a little bit as well. Right.
0:45:57.122,0:46:02.030
So there's the—what we did so far is we[br]leveraged existing tools, like kASan or
0:46:02.030,0:46:09.440
kCov, and integrated into these existing[br]tools. Now, doing heap-based allocation is
0:46:09.440,0:46:13.572
fairly simple and replacing those with[br]additional red zones—that instrumentation
0:46:13.572,0:46:20.203
you can carry out fairly well by focusing[br]on the different allocators. Second to
0:46:20.203,0:46:24.972
that, simply oopsing the kernel and[br]printing the stack trace is also fairly
0:46:24.972,0:46:29.250
straightforward. So it's not a lot of[br]additional effort. So it is—it involves
0:46:29.250,0:46:38.471
some engineering effort to port this to[br]non-kASan-compiled kernels. But we think
0:46:38.471,0:46:44.740
it is very feasible. In the interest of[br]time, we focused on kASan-enabled kernels,
0:46:44.740,0:46:50.960
so that some form of ASan is already[br]enabled. But yeah, this is additional
0:46:50.960,0:46:55.660
engineering effort. But there is also a[br]community out there that can help us with
0:46:55.660,0:47:00.960
these kind of changes. So kRetroWrite and[br]RetroWrite themselves are the binary
0:47:00.960,0:47:07.060
rewriting platform that allow you to turn[br]a binary into an assembly file that you
0:47:07.060,0:47:11.619
can then instrument and run different[br]passes on top of it. So another pass would
0:47:11.619,0:47:16.399
be a full ASan pass or kASan pass that[br]somebody could add and then contribute
0:47:16.399,0:47:19.100
back to the community.[br]Q: Yeah, it would be really useful.
0:47:19.100,0:47:20.186
Thanks.[br]gannimo: Cool.
0:47:20.186,0:47:24.260
Angel: Next question from the Internet.[br]Q: Yes, there is a question regarding the
0:47:24.260,0:47:30.890
slide on the SPEC CPU benchmark. The[br]second or third graph from the right had
0:47:30.890,0:47:36.700
an instrumented version that was faster[br]than the original program. Why is that?
0:47:36.700,0:47:42.299
gannimo: Cache effect. Thank you.[br]Angel: Microphone number one.
0:47:42.299,0:47:47.032
Q: Thank you. Thank you for the[br]presentation. I have a question: how many architectures do
0:47:47.032,0:47:51.210
you support, and do you plan to support[br]more?
0:47:51.210,0:47:56.400
gannimo: x86_64.[br]Q: Okay. So no plans for ARM or MIPS,
0:47:56.400,0:47:58.130
or...?[br]gannimo: Oh, there are plans.
0:47:58.130,0:48:01.390
Q: Okay.[br]Nspace: Right, so—
0:48:01.390,0:48:05.980
gannimo: Right. Again, there's a finite[br]amount of time. We focused on the
0:48:05.980,0:48:11.778
technology. ARM is high up on the list. If[br]somebody is interested in working on it
0:48:11.778,0:48:17.670
and contributing, we're happy to hear from[br]it. Our list of targets is ARM first and
0:48:17.670,0:48:22.915
then maybe something else. But I think[br]with x86_64 and ARM we've covered a
0:48:22.915,0:48:33.420
majority of the interesting platforms.[br]Q: And second question, did you try to
0:48:33.420,0:48:37.970
fuzz any real closed-source program?[br]Because as I understand from the presentation,
0:48:37.970,0:48:44.710
you fuzzed, like, just file systems, which we[br]could compile and fuzz with syzkaller
0:48:44.710,0:48:48.570
in the past.[br]Nspace: So for the evaluation, we wanted
0:48:48.570,0:48:52.130
to be able to compare between the source-[br]based instrumentation and the binary-based
0:48:52.130,0:48:57.460
instrumentation, so we focused mostly on[br]open-source filesystem and drivers because
0:48:57.460,0:49:02.058
then we could instrument them with a[br]compiler. We haven't yet tried, but this
0:49:02.058,0:49:05.740
is, like, also pretty high up on the list.[br]We wanted to try to find some closed-
0:49:05.740,0:49:10.609
source drivers—there's lots of them, like[br]for GPUs or anything—and we'll give it a
0:49:10.609,0:49:15.460
try and find some 0days, perhaps.[br]Q: Yes, but with syzkaller, you still have
0:49:15.460,0:49:22.582
a problem. You have to write rules, like,[br]dictionaries. I mean, you have to
0:49:22.582,0:49:24.599
understand the format, have to communicate[br]with the driver.
0:49:24.599,0:49:28.550
Nspace: Yeah, right. But there's, for[br]example, closed-source file systems that
0:49:28.550,0:49:33.270
we are looking at.[br]Q: Okay. Thinking.
0:49:33.270,0:49:38.657
Herald: Number two.[br]Q: Hi. Thank you for your talk. So I don't
0:49:38.657,0:49:45.070
know if there are any kCov- or kASan-[br]equivalent solution to Windows, but I was
0:49:45.070,0:49:49.933
wondering if you tried, or are you[br]planning to do it on Windows, the
0:49:49.933,0:49:52.540
framework? Because I know it might be[br]challenging because of the driver
0:49:52.540,0:49:56.849
signature enforcement and PatchGuard, but[br]I wondered if you tried or thought about
0:49:56.849,0:49:59.290
it.[br]gannimo: Yes, we thought about it and we
0:49:59.290,0:50:06.383
decided against it. Windows is incredibly[br]hard and we are academics. The research I
0:50:06.383,0:50:11.800
do in my lab, or we do in my research lab,[br]focuses on predominantly open-source
0:50:11.800,0:50:17.060
software and empowers open-source[br]software. Doing full support for Microsoft
0:50:17.060,0:50:20.780
Windows is somewhat out of scope. If[br]somebody wants to port these tools, we are
0:50:20.780,0:50:24.190
happy to hear it and work with these[br]people. But it's a lot of additional
0:50:24.190,0:50:28.530
engineering effort, versus very[br]low additional research
0:50:28.530,0:50:33.060
value, so we'll have to find some form of[br]compromise. And, like, if you would be
0:50:33.060,0:50:38.650
willing to fund us, we would go ahead. But[br]it's—yeah, it's a cost question.
0:50:38.650,0:50:42.089
Q: And you're referring both to kernel and[br]user space, right?
0:50:42.089,0:50:45.089
gannimo: Yeah.[br]Q: Okay. Thank you.
0:50:45.089,0:50:48.105
Herald: Number five.[br]Q: Hi, thanks for the talk. This seems
0:50:48.105,0:50:52.400
most interesting if you're looking for[br]vulnerabilities in closed source kernel
0:50:52.400,0:50:58.359
modules, but not giving it too much[br]thought, it seems it's really trivial to
0:50:58.359,0:51:01.920
prevent this if you're writing a closed[br]source module.
0:51:01.920,0:51:07.130
gannimo: Well, how would you prevent this?[br]Q: Well, for starters, you would just take
0:51:07.130,0:51:11.492
a difference between the address of two[br]functions. That's not gonna be IP
0:51:11.492,0:51:15.860
relative, so...[br]Nspace: Right. So we explicitly—like, even
0:51:15.860,0:51:21.589
in the original RetroWrite paper—we[br]explicitly decided to not try to deal with
0:51:21.589,0:51:25.777
obfuscated code, or code that is[br]purposefully trying to defeat this kind of
0:51:25.777,0:51:30.510
rewriting. Because, like, the assumption[br]is that first of all, there are techniques
0:51:30.510,0:51:34.099
to, like, deobfuscate code or remove[br]these, like, checks in some way, but this
0:51:34.099,0:51:39.510
is, like, sort of orthogonal work. And at[br]the same time, I guess most drivers are
0:51:39.510,0:51:43.980
not really compiled with the sort of[br]obfuscation; they're just, like, you know,
0:51:43.980,0:51:47.657
they're compiled with a regular compiler.[br]But yeah, of course, this is, like, a
0:51:47.657,0:51:50.070
limitation.[br]gannimo: They're likely stripped, but not
0:51:50.070,0:51:54.281
necessarily obfuscated. At least from what[br]we've seen when we looked at binary-only
0:51:54.281,0:51:58.980
drivers.[br]Herald: Microphone number two.
0:51:58.980,0:52:04.350
Q: How do you decide where to place the[br]red zones? From what I heard, you talked
0:52:04.350,0:52:10.030
about instrumenting the allocators, but,[br]well, there are a lot of variables on the
0:52:10.030,0:52:13.270
stack, so how do you deal with those?[br]gannimo: Oh, yeah, that's actually super
0:52:13.270,0:52:20.159
cool. I refer to some extent to the paper[br]that is on the GitHub repo as well. If you
0:52:20.159,0:52:26.778
think about it, modern compilers use[br]canaries for buffers. Are you aware of
0:52:26.778,0:52:31.150
stack canaries—how stack canaries work?[br]So, stack canaries—like, if the compiler
0:52:31.150,0:52:34.440
sees there's a buffer that may be[br]overflown, it places a stack canary
0:52:34.440,0:52:39.740
between the buffer and any other data.[br]What we use is we—as part of our analysis
0:52:39.740,0:52:44.750
tool, we find these stack canaries, remove[br]the code that does the stack canary, and
0:52:44.750,0:52:49.420
use this space to place our red zones. So[br]we actually hack the stack canaries,
0:52:49.420,0:52:54.569
remove that code, and add ASan red zones[br]into the empty stack canaries that are now
0:52:54.569,0:52:58.599
there. It's actually a super cool[br]optimization because we piggyback on what
0:52:58.599,0:53:02.630
kind of work the compiler already did for[br]us before, and we can then leverage that
0:53:02.630,0:53:06.780
to gain additional benefits and protect[br]the stack as well.
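The canary-reuse trick described above can be sketched roughly as follows. This is a hypothetical Python fragment of my own, not RetroWrite's actual code: it scans Intel-syntax disassembly text for the canary load GCC emits on x86-64 (a read from `%fs:0x28`) followed by a spill to an `%rbp`-relative slot, which is the 8-byte slot a rewriter could then reuse as an ASan red zone.

```python
# Hypothetical sketch (not RetroWrite's actual code): locate the stack-canary
# store that GCC emits on x86-64 so a rewriter can reuse that 8-byte slot
# as an ASan red zone instead of a canary check.
CANARY_LOAD = "mov rax, qword ptr fs:[0x28]"

def find_canary_spill(insns):
    """insns: disassembled instructions as Intel-syntax strings.
    Returns the index of the instruction that spills the canary to the
    stack, or None if the function has no canary."""
    for i in range(len(insns) - 1):
        if insns[i] == CANARY_LOAD and insns[i + 1].startswith("mov qword ptr [rbp"):
            return i + 1  # this stack slot can become a red zone
    return None

# Toy prologue of a canary-protected function.
prologue = [
    "push rbp",
    "mov rbp, rsp",
    "sub rsp, 0x20",
    "mov rax, qword ptr fs:[0x28]",
    "mov qword ptr [rbp - 8], rax",
]
print(find_canary_spill(prologue))  # index of the spill instruction
```

A real rewriter works on structured instruction objects rather than strings, but the pattern match is the same idea: the compiler has already told you which stack slot guards the buffer.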
0:53:06.780,0:53:11.120
Q: Thanks.[br]Angel: Another question from the Internet.
0:53:16.039,0:53:20.920
Q: Yes. Did you consider lifting the[br]binary code to LLVM IR instead of
0:53:20.920,0:53:28.370
generating assembler source?[br]gannimo: Yes. laughter But, so—a little
0:53:28.370,0:53:32.060
bit longer answer. Yes, we did consider[br]that. Yes, it would be super nice to lift
0:53:32.060,0:53:38.710
to LLVM IR. We've actually looked into[br]this. It's incredibly hard. It's
0:53:38.710,0:53:42.270
incredibly complex. There's no direct[br]mapping between the machine code
0:53:42.270,0:53:48.490
equivalent and the LLVM IR. You would[br]still need to recover all the types. So
0:53:48.490,0:53:51.800
it's like this magic dream that you[br]recover full LLVM IR, then do heavyweight
0:53:51.800,0:53:57.470
transformations on top of it. But this is[br]incredibly hard because if you compile
0:53:57.470,0:54:03.570
down from LLVM IR to machine code, you[br]lose a massive amount of information. You
0:54:03.570,0:54:07.150
would have to find a way to recover all of[br]that information, which is pretty much
0:54:07.150,0:54:14.990
impossible and undecidable for many cases.[br]So for example, just as a note, we only
0:54:14.990,0:54:19.420
recover control flow and we only[br]symbolize control flow. For data
0:54:19.420,0:54:23.030
references—we don't support[br]instrumentation of data references yet
0:54:23.030,0:54:28.839
because there's still an undecidable[br]problem that we are facing. I can
0:54:28.839,0:54:32.859
talk more about this offline, or there is[br]a note in the paper as well. So this is
0:54:32.859,0:54:37.270
just a small problem. Only if you're[br]lifting to assembly files. If you're
0:54:37.270,0:54:41.700
lifting to LLVM IR, you would have to do[br]full end-to-end type recovery, which is
0:54:41.700,0:54:46.400
massively more complicated. Yes, it would[br]be super nice. Unfortunately, it is
0:54:46.400,0:54:50.530
undecidable and really, really hard. So[br]you can come up with some heuristics, but
0:54:50.530,0:54:55.270
there is no solution that will do this[br]in—that will be correct 100 percent of the
0:54:55.270,0:54:57.490
time.[br]Angel: We'll take one more question from
0:54:57.490,0:55:02.609
microphone number six.[br]Q: Thank you for your talk. What kind of
0:55:02.609,0:55:07.299
disassemblers did you use for RetroWrite,[br]and did you have problems with the wrong
0:55:07.299,0:55:12.880
disassembly? And if so, how did you handle[br]it?
0:55:12.880,0:55:18.790
Nspace: So, RetroWrite—so we used[br]Capstone for the disassembly.
0:55:18.790,0:55:24.150
gannimo: An amazing tool, by the way.[br]Nspace: Yeah. So the idea is that, like,
0:55:24.150,0:55:30.240
we need some kind of—some information[br]about where the functions are. So for the
0:55:30.240,0:55:33.549
kernel modules, this is actually fine[br]because kernel modules come with this sort
0:55:33.549,0:55:37.730
of information because the kernel needs[br]it, to build stack traces, for example.
0:55:37.730,0:55:41.869
For userspace binaries, this is somewhat[br]less common, but you can use another tool
0:55:41.869,0:55:46.170
to try to do function identification. And[br]we do, like—sort of, like, disassemble the
0:55:46.170,0:55:54.500
entire function. So we have run into some[br]issues with, like—in AT&T syntax, because
0:55:54.500,0:55:59.650
like we wanted to use gas, GNU's[br]assembler, for, for...
0:55:59.650,0:56:04.240
gannimo: Reassembling.[br]Nspace: Reassembly, yeah. And some
0:56:04.240,0:56:09.819
instructions are a lot—you can express the[br]same, like, two different instructions,
0:56:09.819,0:56:15.670
like five-byte NOP and six-byte NOP, using[br]the same string of, like, text—a mnemonic,
0:56:15.670,0:56:19.970
an operand string. But the problem is[br]that, like, the kernel doesn't like it and
0:56:19.970,0:56:21.970
crashes. This took me like two days to[br]debug.
0:56:21.970,0:56:27.640
gannimo: So the kernel uses dynamic binary[br]patching when it runs, at runtime, and it
0:56:27.640,0:56:32.980
uses fixed offsets, so if you replace a[br]five-byte NOP with a six-byte NOP or vice
0:56:32.980,0:56:37.830
versa, your offsets change and your kernel[br]just blows up in your face.
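The NOP pitfall gannimo describes can be sidestepped by never round-tripping NOPs through mnemonic text at all. A minimal sketch (my own illustration, not kRetroWrite's code): keep the raw encodings of the multi-byte NOPs, which are fixed in the Intel SDM, and emit them as `.byte` directives so the assembler has no freedom to pick a different-length encoding and shift the kernel's patch offsets.

```python
# Illustration only: the 5- and 6-byte NOPs from the transcript. Their
# textual forms are nearly identical, but the encodings differ by a
# single 0x66 operand-size prefix.
MULTI_BYTE_NOPS = {
    5: bytes.fromhex("0f1f440000"),    # nopl 0x0(%rax,%rax,1)
    6: bytes.fromhex("660f1f440000"),  # nopw 0x0(%rax,%rax,1)
}

def emit_nop(length):
    """Emit an n-byte NOP as raw .byte directives so reassembly cannot
    change its length (which would break the kernel's fixed-offset
    runtime patching)."""
    encoding = MULTI_BYTE_NOPS[length]
    return ".byte " + ", ".join(f"0x{b:02x}" for b in encoding)

print(emit_nop(5))  # .byte 0x0f, 0x1f, 0x44, 0x00, 0x00
```

Emitting bytes instead of mnemonics trades readability of the generated assembly for the guarantee that instruction sizes, and therefore all kernel patch offsets, survive the reassembly round trip.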
0:56:37.830,0:56:43.099
Q: So it was kind of a case-on-case basis[br]where you saw the errors coming out of the
0:56:43.099,0:56:47.920
disassembly and you had to fix it?[br]Nspace: So sorry, can you repeat the
0:56:47.920,0:56:51.030
question?[br]Q: Like, for example, if you—if some
0:56:51.030,0:56:54.910
instruction is not supported by the[br]disassembler, so you saw that it crashed,
0:56:54.910,0:56:58.000
that there's something wrong, and then you[br]fix it by hand?
0:56:58.000,0:57:02.940
Nspace: Yeah, well, if we saw that there[br]was a problem with it, this—like, I don't
0:57:02.940,0:57:06.960
recall having any unknown instructions in[br]the disassembler. I don't think I've ever
0:57:06.960,0:57:11.290
had a problem with that. But yeah, this[br]was a lot of, like, you know, engineering
0:57:11.290,0:57:14.290
work.[br]gannimo: So let me repeat. The problem was
0:57:14.290,0:57:19.220
not a bug in the disassembler, but an[br]issue with the instruction format—that the
0:57:19.220,0:57:24.530
same mnemonic can be translated into two[br]different instructions, one of which was
0:57:24.530,0:57:29.089
five bytes long, the other one was six[br]bytes long. Both used the exact same
0:57:29.089,0:57:32.880
mnemonic. Right, so this was an issue with[br]assembly encoding.
0:57:32.880,0:57:38.290
Q: But you had no problems with[br]unsupported instructions which couldn't be
0:57:38.290,0:57:41.339
disassembled?[br]Nspace: No, no. Not as far as I know, at
0:57:41.339,0:57:43.339
least.[br]Angel: We have one more minute, so a very
0:57:43.339,0:57:52.069
short question from microphone number two.[br]Q: Does it work? Ah. Is your binary
0:57:52.069,0:58:02.020
instrumentation as powerful as kernel[br]address space... I mean, kASan? So, does
0:58:02.020,0:58:06.349
it detect all the memory corruptions on[br]stack, heap and globals?
0:58:06.349,0:58:13.050
gannimo: No globals. But heap—it does all[br]of them on the heap. There's some slight
0:58:13.050,0:58:20.150
variation on the stack because we have to[br]piggyback on the canary stuff. As I
0:58:20.150,0:58:23.880
mentioned quickly before, there is no[br]reflowing and full recovery of data
0:58:23.880,0:58:28.990
layouts. So to get anything on the stack,[br]we have to piggyback on existing compiler
0:58:28.990,0:58:36.650
extensions like stack canaries. But—so we[br]don't support intra-object overflows on
0:58:36.650,0:58:40.631
the stack. But we do leverage the stack[br]canaries to get some stack benefits, which
0:58:40.631,0:58:45.490
is, I don't know, 90, 95 percent there[br]because the stack canaries are pretty
0:58:45.490,0:58:51.319
good. For heap, we get the same precision.[br]For globals, we have very limited support.
0:58:51.319,0:58:54.290
Q: Thanks.[br]Angel: So that's all the time we have for
0:58:54.290,0:58:57.600
this talk. You can find the speakers, I[br]think, afterwards offline. Please give
0:58:57.600,0:58:59.820
them a big round of applause for an[br]interesting talk.
0:58:59.820,0:59:03.050
applause
0:59:03.050,0:59:07.360
36c3 postroll music
0:59:07.360,0:59:29.000
Subtitles created by c3subtitles.de[br]in the year 2021. Join, and help us!