0:00:00.000,0:00:14.825
34c3 intro
0:00:14.825,0:00:18.970
Herald: This is Audrey from California and
0:00:18.970,0:00:26.820
she's from the University of California[br]Santa Barbara, security lab, if I'm
0:00:26.820,0:00:38.109
informed correctly, and it is about[br]automated discovery of vulnerabilities in
0:00:38.109,0:00:50.999
the Android bootloader. Wow, not really my[br]problem but definitely Audrey's. So here
0:00:50.999,0:00:57.492
we go. Please let's have a big hand for[br]Audrey Dutcher. Thank you.
0:00:57.492,0:01:07.040
applause
0:01:07.040,0:01:11.050
Audrey: Good evening everybody. Today
0:01:11.050,0:01:17.330
we're talking about Android bootloaders.[br]As a brief aside, I didn't actually work on
0:01:17.330,0:01:21.790
this project; I just sit across from the[br]people who did the work, and I was
0:01:21.790,0:01:26.659
the only one who could make it to Germany.[br]I do work on some of the stuff that
0:01:26.659,0:01:32.189
it depends on, so this is my field, but[br]this is not my project. Just a brief
0:01:32.189,0:01:41.010
disclaimer, thanks. So today we're talking[br]about Android bootloaders. Phones are
0:01:41.010,0:01:45.929
complicated, bootloaders are complicated,[br]processors are complicated and trying to
0:01:45.929,0:01:50.740
get to the bottom of them is a very[br]difficult subject; if you've ever done any
0:01:50.740,0:01:55.299
homebrew kernel dev or homebrew retro-[br]gaming, you know that interacting with
0:01:55.299,0:01:59.530
hardware is really complicated and trying[br]to do this in a phone, a system connected
0:01:59.530,0:02:03.749
to a touchscreen and a modem and lots of[br]complicated, money-sensitive things, it's
0:02:03.749,0:02:10.580
really complicated, and[br]every single one of you
0:02:10.580,0:02:15.840
probably has a phone in your pocket and[br]all of these are immensely valuable
0:02:15.840,0:02:21.500
targets for attacks, so we want to be able[br]to detect bugs in them
0:02:21.500,0:02:27.220
automatically, that's the name of the[br]game. So the bootloader in a device: it
0:02:27.220,0:02:31.810
has the job of "oh,[br]we've powered on, we need to get
0:02:31.810,0:02:37.890
everything initialized", so we initialize[br]the device and the peripherals and then
0:02:37.890,0:02:43.680
the final gasp of the[br]bootloader is to take the kernel and
0:02:43.680,0:02:49.120
execute it. And the kernel obviously needs[br]to be loaded from storage somewhere. For
0:02:49.120,0:02:53.830
Android specifically, this is what we[br]worked on; most Android devices are ARM,
0:02:53.830,0:02:56.830
there's no particular standard for what an[br]ARM bootloader should look like but the
0:02:56.830,0:02:59.910
ARM people do give you some guidelines.[br]There's an open-source implementation of
0:02:59.910,0:03:04.040
what a secure bootloader should look[br]like. There are in fact several boot
0:03:04.040,0:03:08.660
loaders on ARM; we'll go over this later.[br]But it's a complicated
0:03:08.660,0:03:14.380
affair that needs to preserve several[br]security properties along the way. And
0:03:14.380,0:03:19.010
above all, the whole goal of this process[br]is to make sure that things are secure and
0:03:19.010,0:03:26.050
to make sure the user data is protected.[br]That's what we're trying to do. Like we
0:03:26.050,0:03:31.170
said the phones in your pockets are[br]valuable targets. If you can attack the
0:03:31.170,0:03:34.870
bootloader you can get a[br]rootkit on the device, which is even more
0:03:34.870,0:03:40.451
powerful than just getting root on it. If[br]an attacker were to compromise your phone
0:03:40.451,0:03:45.720
they could brick your device or establish a[br]rootkit, as I talked about already, but
0:03:45.720,0:03:50.890
additionally you might want to circumvent[br]the security properties of your phone's
0:03:50.890,0:03:56.930
bootloader in order to customize it:[br]rooting, jailbreaking. "Unlocking" is the
0:03:56.930,0:04:04.530
key word in this situation.[br]The Android bootloader establishes
0:04:04.530,0:04:10.650
cryptographic integrity over basically[br]what's happening at all times, so on your
0:04:10.650,0:04:17.450
phone there is a master key, and the hardware[br]knows that it should only
0:04:17.450,0:04:21.399
run code that has been[br]signed with the key associated with the
0:04:21.399,0:04:24.940
hardware, and then the next stage of the[br]bootloader has a key that it will use to verify
0:04:24.940,0:04:28.500
that the next stage of the bootloader[br]hasn't been tampered with. And this is
0:04:28.500,0:04:34.160
where we get the term "chain of trust",[br]where each part establishes "oh, I'm very
0:04:34.160,0:04:38.720
very sure, cryptographically sure,[br]assuming RSA hasn't been broken yet, that
0:04:38.720,0:04:44.570
the next bit of code is going to be doing[br]something that I authorized."
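The chain of trust described here can be sketched as a toy model. This is an illustrative sketch only: real bootloaders verify an RSA signature over the next stage's image, not a bare hash, and the stage names and images below are made up.

```python
import hashlib

def digest(image: bytes) -> str:
    """Stand-in for signature verification: a SHA-256 digest."""
    return hashlib.sha256(image).hexdigest()

def boot_chain(stages):
    """stages: list of (image_bytes, expected_digest_of_next_stage or None).

    Each stage checks the next stage's image before handing off control,
    which is the 'chain of trust' structure from the talk.
    """
    for i, (image, expected_next) in enumerate(stages):
        if expected_next is None:
            continue  # last stage: nothing left to verify
        next_image = stages[i + 1][0]
        if digest(next_image) != expected_next:
            return f"halt: stage {i + 1} failed verification"
    return "booted"

bl2 = b"second-stage bootloader"
bl31 = b"secure monitor"
chain = [(b"BL1 (in ROM)", digest(bl2)), (bl2, digest(bl31)), (bl31, None)]
print(boot_chain(chain))                        # chain intact
chain[1] = (b"tampered stage", digest(bl31))    # attacker swaps a stage
print(boot_chain(chain))                        # detected by the stage before it
```

The key property is that tampering with any stage is caught by the stage before it, so trust only has to be rooted once, in the ROM.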
0:04:44.570,0:04:48.940
Circumventing this is valuable, as we've[br]talked about, phones have to have a way to
0:04:48.940,0:04:56.470
do that built in, unless you're[br]Apple, but obviously protecting this
0:04:56.470,0:05:02.600
mechanism from attackers is a point of[br]contention, so really you need to make
0:05:02.600,0:05:10.211
sure that only the real owner of the[br]device can actually unlock the phone. So
0:05:10.211,0:05:16.400
this project is about[br]discovering vulnerabilities that let us
0:05:16.400,0:05:20.110
circumvent this process, so the threat[br]model that we use for this project is that
0:05:20.110,0:05:25.320
the phone is rooted and the[br]attacker has root control. This is
0:05:25.320,0:05:32.060
not that out there;[br]root vulnerabilities exist, but it's
0:05:32.060,0:05:35.720
enough to make you scoff "Oh what's the[br]point of this", well, the security
0:05:35.720,0:05:39.920
properties of the phone are supposed to[br]extend above the hypervisor level. It's,
0:05:39.920,0:05:42.430
you're supposed to have these guarantees[br]that things should always work, assuming
0:05:42.430,0:05:47.513
the chain of trust works, regardless of[br]what's happening in the kernel.
0:05:48.940,0:05:54.650
So today we are going to be talking about[br]BootStomp, which is a tool that
0:05:54.650,0:05:59.600
automatically verifies these properties[br]and discovers bugs. I'm going a little
0:05:59.600,0:06:04.310
slow, I'll speed up.[br]So first off, the booting process in Android
0:06:04.310,0:06:09.080
ecosystems is pretty complicated and[br]multi-stage; there's the base
0:06:09.080,0:06:13.250
bootloader BL1, which loads and verifies[br]another bootloader, which loads and
0:06:13.250,0:06:17.260
verifies another bootloader, and this is[br]important, because the first one's in a ROM
0:06:17.260,0:06:22.710
and is very small, and the second one is[br]probably by the
0:06:22.710,0:06:27.260
hardware vendor and the third one is[br]probably by the OS vendor, for example,
0:06:27.260,0:06:33.250
and they all need to do different things.[br]So the important part here is these EL
0:06:33.250,0:06:37.389
things; those are the ARM exception[br]levels, which are basically the global
0:06:37.389,0:06:42.300
permission levels for an ARM[br]processor. EL3 is basically god
0:06:42.300,0:06:45.500
mode. There's EL2 for hypervisors, which[br]isn't in this chart; there's EL1, which is
0:06:45.500,0:06:50.790
the kernel, and EL0, which is user[br]space. So when we boot we're obviously in
0:06:50.790,0:06:53.772
the highest exception level and gradually,[br]as we establish more and more
0:06:53.772,0:06:59.200
initialization of the device, we're going[br]to cede control to less privileged
0:06:59.200,0:07:04.949
components, so the bootloaders operate[br]with very high privilege, and one of the things
0:07:04.949,0:07:10.630
they need to do is set up the[br]ARM TrustZone, the trusted execution
0:07:10.630,0:07:18.610
environment that lets people do really[br]secure things on Android phones.
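The handoff just described can be sketched as a toy model. The stage names follow the ARM Trusted Firmware convention mentioned in the talk; the exact stages and exception levels are assumptions that vary by device, so treat this as an illustration of the "privilege only decreases" invariant, not any real phone's boot flow.

```python
# Toy model of the boot handoff: each stage runs at an ARM exception
# level (EL3 = most privileged, EL0 = user space), and control is only
# ever ceded downward to less privileged components.
BOOT_SEQUENCE = [
    ("BL1 (ROM)", 3),
    ("BL2", 3),           # second-stage loader (level is an assumption here)
    ("BL31", 3),          # secure monitor, sets up TrustZone
    ("BL33 / aboot", 1),  # final bootloader on some phones
    ("kernel", 1),
    ("apps", 0),
]

def check_handoff(sequence):
    """Verify that no handoff *gains* privilege (a higher EL number)."""
    levels = [el for _, el in sequence]
    return all(a >= b for a, b in zip(levels, levels[1:]))

print(check_handoff(BOOT_SEQUENCE))  # privilege only ever drops
```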
0:07:18.610,0:07:25.710
This is something that is set up[br]by the BL31 bootloader, and in the
0:07:25.710,0:07:29.270
secure world you need to do things like[br]initialize hardware and
0:07:29.270,0:07:34.040
peripherals, and in the non-secure world[br]you're running the
0:07:34.040,0:07:38.570
normal kernel and the normal user apps.[br]And on some phones you actually have a
0:07:38.570,0:07:45.699
final bootloader, which runs in EL1, BL33[br]or the aboot executable and that's the
0:07:45.699,0:07:52.910
that's the one that we're generally[br]targeting for this stuff. So this is
0:07:52.910,0:07:55.690
what I was talking about with the chain of[br]trust: each of those arrows represents
0:07:55.690,0:08:00.360
cryptographic integrity, so the next stage[br]only gets loaded if there's a valid
0:08:00.360,0:08:09.420
signature indicating that we really trust[br]what's going on here. And that's the
0:08:09.420,0:08:13.729
unlocking process that we were talking[br]about; if you, the verified, physical
0:08:13.729,0:08:18.210
owner of the device, want to, you can[br]disable that last bit and allow
0:08:18.210,0:08:21.889
untrusted code to run as the kernel.[br]That's totally fine, if you own the
0:08:21.889,0:08:27.770
device.[br]The unlocking process is supposed to
0:08:27.770,0:08:30.850
really specifically verify these two[br]things: that you have physical access to
0:08:30.850,0:08:37.799
the device and that you actually own it,[br]like you know the PIN to it; that's what
0:08:37.799,0:08:46.050
establishes ownership of the device. And so[br]specifically when you go through
0:08:46.050,0:08:51.430
that process it does set some specific[br]flags on your persistent storage, saying
0:08:51.430,0:08:57.569
"this is an unlocked device now, you can do[br]whatever." But making sure that that can
0:08:57.569,0:09:05.009
only happen when it's authorized is the[br]point of contention here.
0:09:05.009,0:09:09.449
typically what happens is this security[br]state is itself cryptographically signed,
0:09:09.449,0:09:15.100
so you can't just set unlocked, you have[br]to set unlocked but signed by the people
0:09:15.100,0:09:22.529
that we really trust. But[br]generally you probably shouldn't be able
0:09:22.529,0:09:30.899
to write to it just from the normal user[br]space. So the question is: we saw that the
0:09:30.899,0:09:34.940
operating system is separate from the[br]bootloader. So what we want to be able to
0:09:34.940,0:09:40.119
do is get from the Android OS to affecting[br]the bootloader. And can this
0:09:40.119,0:09:48.139
happen? Well, of course, that's why we're[br]here. So the, let's see. Oh I didn't
0:09:48.139,0:09:52.470
realize there were animations on the[br]slides, that's unfortunate. So this is
0:09:52.470,0:09:59.420
sort of the normal flow chart of how these[br]things normally come about.
0:09:59.420,0:10:03.589
You've got the bootloader, which has to[br]read from persistent storage in order to
0:10:03.589,0:10:07.459
initialize the operating system. Like, of[br]course you have to read, for example,
0:10:07.459,0:10:11.139
whether or not the device is unlocked, you[br]have to load the kernel itself. There are
0:10:11.139,0:10:17.009
lots of inputs to the bootloader, and the[br]intuition is that these
0:10:17.009,0:10:22.550
just serve as normal inputs to a program,[br]which can be analyzed for vulnerabilities.
0:10:22.550,0:10:30.869
Oh lord, this is a mess. So from the[br]OS, if you have
0:10:30.869,0:10:35.880
root privileges in the operating system[br]you can write to this persistent storage,
0:10:35.880,0:10:46.920
which means that this serves[br]as another input to the bootloader, and
0:10:46.920,0:10:53.180
this can cause bad things to happen. So we[br]need some sort of tool, and that's the point of
0:10:53.180,0:10:58.189
this project, to automatically verify the[br]safety properties of these bootloaders.
0:10:58.189,0:11:04.481
That's BootStomp. Bootloaders are[br]complicated. There's a lot of stuff, which
0:11:04.481,0:11:08.480
means the analysis[br]has to be automated in order to really
0:11:08.480,0:11:11.999
sift through something as big and[br]complicated as a bootloader, with the care
0:11:11.999,0:11:16.860
necessary to actually find bugs that are[br]sitting there.
0:11:16.860,0:11:20.310
But these aren't usually things[br]that you have source code for, so it needs
0:11:20.310,0:11:25.420
to be a binary analysis and furthermore[br]you can't really do a dynamic execution on
0:11:25.420,0:11:29.679
something that needs to run on the highest[br]privilege level of a processor, so you
0:11:29.679,0:11:33.389
have to step back, and it has[br]to be static as well. And furthermore this
0:11:33.389,0:11:37.499
needs to be a fully free-standing analysis[br]that doesn't assume anything other than
0:11:37.499,0:11:42.080
"oh, we're executing code on a system",[br]because there are no known syscalls or APIs
0:11:42.080,0:11:46.959
to check against, or to say "oh, we know[br]what this means, we don't really have to
0:11:46.959,0:11:56.069
analyze it." So it's a tall order but you[br]can do it with enough work. So BootStomp
0:11:56.069,0:12:02.970
specifically is the tool that we built. It[br]will automatically detect these inputs,
0:12:02.970,0:12:09.870
that we talked about, to the bootloader[br]and then it will determine if these inputs
0:12:09.870,0:12:14.050
can be used to compromise various security[br]properties of the device.
0:12:14.050,0:12:20.050
One such example is if you can use this to[br]achieve memory corruption,
0:12:20.050,0:12:27.649
or more abstract forms of vulnerability,[br]such as code flows that will result in
0:12:27.649,0:12:33.319
unwanted data being written by the more[br]privileged bootloader somewhere. And the
0:12:33.319,0:12:39.519
important thing about this analysis is[br]that its results are easily verifiable and
0:12:39.519,0:12:44.970
traceable and it's very easy to like look[br]at the outputs and say "oh, well, this is
0:12:44.970,0:12:48.579
what's happening and this is why I think[br]this happened and therefore I can
0:12:48.579,0:12:59.290
reproduce this, possibly?" This happens[br]through symbolic taint analysis. This is
0:12:59.290,0:13:05.550
the part that I know about, because I work[br]on angr, which is the symbolic execution
0:13:05.550,0:13:11.209
and static analysis tool that[br]BootStomp uses in order to do its taint
0:13:11.209,0:13:18.910
analysis. "Taint analysis" and "symbolic[br]execution" are kind of loaded words, so
0:13:18.910,0:13:24.259
this is what specifically is meant is that[br]when we discover these sources and sinks
0:13:24.259,0:13:28.790
of behavior, through[br]static analysis and some
0:13:28.790,0:13:32.040
heuristics, of course. And then we[br]propagate these taints through symbolic
0:13:32.040,0:13:35.319
execution, while maintaining tractability.[br]And notice wherever
0:13:35.319,0:13:39.850
we can find paths from[br]taint sources to behavior sinks that we
0:13:39.850,0:13:47.420
think are vulnerable. Specifically, we[br]think these behavior sinks are
0:13:47.420,0:13:51.889
vulnerable if you can arbitrarily[br]write to memory, or read from memory.
0:13:51.889,0:13:55.040
Like, really arbitrary, if there's a[br]pointer which is controlled by user input
0:13:55.040,0:13:59.980
- memory corruption stuff. And[br]additionally, if you can control loop
0:13:59.980,0:14:04.449
iterations through your input, that[br]indicates a denial-of-service attack.
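The sink classes just described, tainted pointers and tainted loop bounds, can be illustrated with a toy taint propagation. This is a heavily simplified stand-in for what angr's symbolic taint analysis does, written for illustration only; the addresses and function names are made up.

```python
# Toy taint propagation: values read from attacker-writable persistent
# storage are tainted, taint flows through arithmetic, and we flag the
# two sink classes from the talk: a tainted value reaching a
# memcpy-like sink, and a tainted value controlling loop iterations.
class Val:
    def __init__(self, concrete, tainted=False):
        self.concrete, self.tainted = concrete, tainted

    def __add__(self, other):
        o = other if isinstance(other, Val) else Val(other)
        # Taint is sticky: derived values stay tainted.
        return Val(self.concrete + o.concrete, self.tainted or o.tainted)

alerts = []

def read_storage(offset):          # source: attacker-controlled input
    return Val(0x1000 + offset, tainted=True)

def memcpy(dst, src, n):           # sink: memory corruption
    if any(v.tainted for v in (dst, src, n)):
        alerts.append("tainted argument reaches memcpy")

def loop(bound):                   # sink: denial of service
    if bound.tainted:
        alerts.append("tainted value controls loop iterations")

dst = read_storage(0x40) + 8       # pointer derived from storage data
memcpy(dst, Val(0x8000), Val(64))  # flagged: dst is tainted
loop(read_storage(0x44))           # flagged: bound is tainted
print(alerts)
```

In BootStomp the taints are genuine symbolic values with path constraints attached, so a flagged path comes with the concrete input that triggers it; this sketch only models the propagation half.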
0:14:04.449,0:14:11.820
And finally, the unlocking mechanism, the[br]bootloader unlocking mechanism: if we
0:14:11.820,0:14:18.269
can detect specific code paths[br]which indicate bypasses, those are
0:14:18.269,0:14:25.910
valuable. So, yeah, this is the[br]specific architecture of the tool. There
0:14:25.910,0:14:31.290
are two main modules, one of which is[br]written as an IDA analysis. You know, the
0:14:31.290,0:14:35.209
big tool that everyone probably doesn't[br]pay enough money for. And then there's the
0:14:35.209,0:14:41.829
other component, written in angr, which is[br]the symbolic taint analysis. And this is
0:14:41.829,0:14:51.850
probably the point where I break out of[br]here and actually start the live demo.
0:14:51.850,0:14:58.671
That's big enough.[br]Okay, so we're working on a Huawei boot
0:14:58.671,0:15:07.100
image here, the fastboot image. We're[br]going to load it up in IDA real quick. So
0:15:07.100,0:15:14.689
here, IDA understands, oh, this is what[br]the executable is. So if we just sort of
0:15:14.689,0:15:23.819
run the initial script, find taints, it'll[br]think real hard for a little bit. There's
0:15:23.819,0:15:27.199
no real reason this part[br]couldn't have been done in angr or Binary
0:15:27.199,0:15:33.579
Ninja or r2 or (???), god forbid. But[br]this is a collaborative project; if you saw
0:15:33.579,0:15:36.970
the huge author list, people write[br]stuff in whatever they're comfortable
0:15:36.970,0:15:41.859
with. So it's IDA in this case.[br]Realistically, because this is just a
0:15:41.859,0:15:46.839
binary blob when you load it in IDA it[br]doesn't immediately know where everything
0:15:46.839,0:15:54.469
is, so you have to sort of nudge it into...[br]oh, here's where all the functions are.
0:15:54.469,0:16:05.399
Okay, we finished, and what it's done is:[br]we've got this taint_source_sink.txt,
0:16:05.399,0:16:12.229
which shows us, "oh, here are all the sources[br]of tainted information, and here's a few of
0:16:12.229,0:16:16.459
the sinks that we established." Obviously[br]you don't need a sink analysis to
0:16:16.459,0:16:20.119
determine if you've got memory corruption[br]or not but we like knowing where the
0:16:20.119,0:16:23.809
writes to persistent storage are, and where[br]specifically the memcpy functions
0:16:23.809,0:16:35.209
are; those are valuable for the analysis.[br]And then, if we run our taint analysis, bootloader taint,
0:16:35.209,0:16:38.759
on the - oh this configuration file is[br]real simple. It just says, "oh here's what
0:16:38.759,0:16:42.040
we're analyzing: it's a 64-bit[br]architecture, don't bother analyzing thumb
0:16:42.040,0:16:51.989
mode, et cetera, simple stuff." Uh, config;
0:16:51.989,0:16:57.629
and it'll do this for about 20 minutes. I[br]hope it finishes before the demo is over.
0:16:57.629,0:17:05.779
If not, I'll do some magic and we'll use a[br]pre-prepared solution. So, we talked
0:17:05.779,0:17:09.579
about these seeds: the seeds[br]for our taint analysis are reads from
0:17:09.579,0:17:17.500
persistent storage and data used by the[br]unlocking procedure. So the heuristics I
0:17:17.500,0:17:21.369
was talking about - we want to identify[br]the reads from persistent storage through
0:17:21.369,0:17:26.050
log messages, keyword analysis and[br]string distances. So the eMMC is a
0:17:26.050,0:17:29.689
specific memory module used by [br]the bootloader for secure purposes. And
0:17:29.689,0:17:34.809
it's the persistent storage device,[br]basically. And you can identify these log
0:17:34.809,0:17:39.500
messages, and then we just do a def-use[br]analysis back from the guard condition on
0:17:39.500,0:17:44.330
that block to its source and you say, "oh[br]that function must be the read." It's
0:17:44.330,0:17:52.149
pretty simple. It works surprisingly[br]often. Of course, if this isn't enough you
0:17:52.149,0:17:56.320
can just manually analyze the firmware and[br]provide, "oh here's where we read from
0:17:56.320,0:17:58.171
persistent storage, here's what you[br]should taint."
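The log-message heuristic can be sketched roughly like this: scan each function's referenced strings for storage-related keywords and propose the matches as reads from persistent storage. This is an illustration only; the keyword list and the function/string pairs below are invented, and BootStomp's real heuristic operates on the binary through IDA, not on a dict.

```python
# Toy version of the seed-finding heuristic: functions whose log
# strings mention storage-related keywords are proposed as reads from
# persistent storage. Keywords and data are illustrative assumptions.
KEYWORDS = ("mmc", "emmc", "oeminfo", "read", "partition")

def find_storage_reads(functions):
    """functions: dict of function name -> list of log strings it references."""
    hits = []
    for name, strings in functions.items():
        if any(k in s.lower() for s in strings for k in KEYWORDS):
            hits.append(name)
    return hits

firmware = {
    "sub_4a20": ["failed to read oeminfo partition!"],
    "sub_4b10": ["usb enumeration done"],
    "sub_4c00": ["mmc_read: bad block %d"],
}
print(find_storage_reads(firmware))  # ['sub_4a20', 'sub_4c00']
```

As the talk says, a heuristic like this works surprisingly often, and when it doesn't, you fall back to marking the read functions by hand.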
0:18:07.900,0:18:11.600
Cool. So the taint
0:18:11.600,0:18:15.200
analysis: our taints are specifically[br]symbolic; this is specifically symbolic taint
0:18:15.200,0:18:19.809
analysis so it's not just like what Triton[br]does where you've got a concrete value
0:18:19.809,0:18:25.160
that has metadata attached. This is a real[br]symbol being used for symbolic execution.
0:18:25.160,0:18:29.799
If you're not familiar with symbolic[br]execution, it's a form of static
0:18:29.799,0:18:38.330
analysis in which you emulate the code,[br]but instead of having concrete values for some
0:18:38.330,0:18:41.169
things you just have symbols. And[br]then when you perform operations on those
0:18:41.169,0:18:46.450
symbols you construct an abstract syntax[br]tree of the behavior. And then when you
0:18:46.450,0:18:52.250
run into branch conditions based on those[br]things you can say, "oh, well, in order to
0:18:52.250,0:19:00.419
get from point A to point B this[br]constraint must be satisfied." And of
0:19:00.419,0:19:05.470
course, now you can just add z3 and stir,[br]and you have the inputs to generate
0:19:05.470,0:19:12.230
paths to the program. So for the sinks of[br]the taint analysis, we want we wants to
0:19:12.230,0:19:18.960
say, "oh, if tainted data comes in[br]and is the argument to memcpy, then
0:19:18.960,0:19:23.220
that's a vulnerability." I don't mean[br]the
0:19:23.220,0:19:28.059
tainted data is the subject of memcpy; I mean[br]it's one of the values passed to memcpy.
0:19:28.059,0:19:33.549
That's a memory corruption vulnerability[br]generally. Yeah, we talked about memory
0:19:33.549,0:19:36.409
corruption, and we talked about loop[br]conditions, and we talked about writes to
0:19:36.409,0:19:41.220
persistent storage with the unlocking[br]stuff. Cool. For taint checking
0:19:41.220,0:19:46.429
specifically -- oh, this is exactly what I[br]just said. Yeah, and what I was
0:19:46.429,0:19:50.480
talking about with the symbolic predicates[br]and trace analysis means
0:19:50.480,0:19:54.130
that when you see something, [br]you automatically have
0:19:54.130,0:20:00.350
the input that will generate that[br]behavior. So the output is inherently
0:20:00.350,0:20:06.169
traceable. Unfortunately, symbolic[br]execution has some issues. I was actually
0:20:06.169,0:20:12.380
at CCC two years ago talking about the[br]exact same problem. You have this problem
0:20:12.380,0:20:18.759
where, oh, you generate paths[br]between different states and
0:20:18.759,0:20:24.210
there can be too many of them. It[br]overwhelms your analysis. So you can use
0:20:24.210,0:20:29.269
some heuristics. "Because it's a static
0:20:29.269,0:20:34.050
analysis we have a more powerful step over[br]than what a debugger can do." We don't
0:20:34.050,0:20:37.340
have to actually analyze the function, we[br]can just take the instruction pointer and
0:20:37.340,0:20:43.590
move it over there. And, this does cause[br]some unsoundness, but it's not a problem
0:20:43.590,0:20:49.039
if you like make sure that the arguments[br]aren't tainted, for example. Or sometimes
0:20:49.039,0:20:53.909
you just accept the unsoundness as part of[br]the tractability of the problem. Limit
0:20:53.909,0:20:58.519
loop iterations: that's a classic technique[br]from static analysis. And the timeout, of
0:20:58.519,0:21:05.610
course. So, what are the bugs we found? We[br]evaluated this on four bootloaders and we
0:21:05.610,0:21:15.090
found several bugs, six of which were zero-[br]days. So that's pretty good. It's like,
0:21:15.090,0:21:18.980
okay, so you found some bugs, but it could[br]just be, "oh, there are some errors in
0:21:18.980,0:21:22.880
initialization that don't really[br]matter." But on the other hand you can
0:21:22.880,0:21:31.669
crash it: 41 41 41. That's pretty serious.[br]So as we saw, some of the bootloaders
0:21:31.669,0:21:35.710
do work in ARM EL3, so this is pretty[br]significant. You can do whatever you want
0:21:35.710,0:21:39.919
in the device if you actually have[br]sufficient control over it. This is
0:21:39.919,0:21:47.669
rootkit territory. You could break[br]anything you wanted. Then there's another
0:21:47.669,0:21:52.120
component in the analysis that says, "can[br]we find bypasses to the unlocking
0:21:52.120,0:21:57.860
procedure?" For example, this is[br]basically one of the ones that we found:
0:21:57.860,0:22:03.890
so it says BootStomp detected this[br]flow from data that was read from the
0:22:03.890,0:22:09.440
device to data that was written to the[br]device, and what this code is supposed to
0:22:09.440,0:22:12.880
do -- do I have animations? yes --it's[br]supposed to,
0:22:12.880,0:22:15.460
like, take some input[br]and verify that it hashes
0:22:15.460,0:22:20.130
to a certain value. And if so, hash[br]that value and write it back to disk, and
0:22:20.130,0:22:27.110
that constitutes the cryptographically[br]secure unlocking thing. However, the thing
0:22:27.110,0:22:32.740
that we write is[br]identical to the thing that was read from
0:22:32.740,0:22:39.789
the disk. So the thing that[br]BootStomp reported was the code flow from
0:22:39.789,0:22:43.990
the disk back to the disk, indicating that if[br]you can read from the disk, you know how
0:22:43.990,0:22:53.190
to produce the thing that will unlock the[br]phone. So this isn't secure. Mitigations.
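The flawed flow described above can be sketched as follows. This is a reconstruction for illustration, not the vendor's actual code: the point is that the value the unlock routine writes back is derivable from data already readable on disk, so anyone who can read the disk can forge the unlocked state without knowing the owner's secret.

```python
import hashlib

sha = lambda b: hashlib.sha256(b).digest()

# Illustrative reconstruction of the flawed unlock logic: the disk
# stores the hash of the expected unlock token, and on success the
# routine writes back a value derived purely from what it read.
disk = {"unlock_record": sha(b"owner-secret-token")}

def try_unlock(token: bytes) -> bool:
    if sha(token) != disk["unlock_record"]:
        return False
    # Bug: the written value is a function of disk contents only.
    disk["unlocked"] = sha(disk["unlock_record"])
    return True

# An attacker with root can read the disk and compute the same value
# directly, without ever knowing the owner's token:
disk["unlocked"] = sha(disk["unlock_record"])
print("unlocked" in disk)  # the secret was never needed
```

A sound design would make the written state depend on a secret the attacker cannot read, or, as discussed next, keep it out of attacker-writable storage entirely.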
0:22:53.190,0:22:59.440
So, the thing that Google does in order to[br]prevent attacks of this class is that the
0:22:59.440,0:23:04.419
key, the secure encryption key that[br]decrypts the userland
0:23:04.419,0:23:13.460
data, has embedded in it the unlock[br]state. So clearly, if you change the
0:23:13.460,0:23:17.460
unlock state you brick the entire phone.[br]Well, not brick, but you have to reset it and have
0:23:17.460,0:23:27.440
to lose all your data. That's still not[br]really good enough but realistically we
0:23:27.440,0:23:32.950
should probably be using a more trusted[br]form of storage that's not just the normal
0:23:32.950,0:23:36.809
normal partitions in the SD card in order[br]to just sort of store this state. It
0:23:36.809,0:23:41.490
should probably be part of the eMMC, or[br]specifically the replay protected memory
0:23:41.490,0:23:46.789
block which uses cryptographic mechanisms[br]to synchronize the, what's it called,
0:23:46.789,0:23:55.269
synchronize the writes to the memory with[br]an authenticated process. And so that
0:23:55.269,0:23:58.270
would make sure that[br]only the bootloader could unlock it. But
0:23:58.270,0:24:01.789
of course that wouldn't protect against[br]memory corruption vulnerabilities and
0:24:01.789,0:24:04.780
there's nothing really to be said about[br]that other than, "hey, this is a serious
0:24:04.780,0:24:11.970
problem." In conclusion, all these bugs[br]have been reported, most of them have been
0:24:11.970,0:24:17.990
fixed. As far as I'm aware this is the[br]first study to really explore and
0:24:17.990,0:24:22.309
develop analyses for Android bootloaders,[br]and in it we developed an automated
0:24:22.309,0:24:26.751
technique to analyze bootloaders with[br]tractable alerts. We found six 0-days in
0:24:26.751,0:24:32.409
various boot loaders and our[br]implementation is open source. I will be
0:24:32.409,0:24:34.960
taking questions, thank you for listening.
0:24:34.960,0:24:43.950
applause
0:24:43.950,0:24:46.110
Herald: That was quite amazing.
0:24:49.770,0:24:53.120
Okay we'll be taking some [br]questions from people
0:24:53.120,0:24:56.890
that understood exactly[br]what it was all about. Yes
0:24:56.890,0:24:59.110
I see somebody walking up to microphone[br]one.
0:24:59.110,0:25:02.390
Mic 1: Thank you very much for talk--[br]
0:25:02.390,0:25:04.810
Herald: Talk into the mic, otherwise[br]we can't record it.
0:25:04.810,0:25:06.959
Mic 1: Okay, thank you very much for that
0:25:06.959,0:25:11.659
work, that was really cool. Your[br]mitigations didn't include designing the
0:25:11.659,0:25:16.289
code better. Do you think it's possible to[br]write the code so that your tools can
0:25:16.289,0:25:21.350
analyze it and maybe it would be secure?[br]Or not yet?
0:25:21.350,0:25:28.240
Audrey: Well, there's certainly things to[br]be said for having things in open source,
0:25:28.240,0:25:33.360
because necessarily doing analysis on[br]source code is a much better
0:25:33.360,0:25:41.250
defined field than doing analysis[br]on binary code. Additionally, you can
0:25:41.250,0:25:48.010
write your stuff in languages that are[br]safer than C. I don't
0:25:41.250,0:25:48.010
know if it's safe to talk about[br]Rust yet, but Rust is cool. Yeah, there's
0:25:55.519,0:26:00.759
lots of things that you can do. I just[br]realized I didn't show off; I didn't show
0:26:00.759,0:26:05.820
off the still-running analysis: the[br]automated results. It did not finish in
0:26:05.820,0:26:12.820
time so I will run some magic, and now we[br]have some results. Which..
0:26:12.820,0:26:19.429
applause
0:26:19.429,0:26:21.859
So, here's one of the analysis
0:26:21.859,0:26:27.519
results. We found at this location in the[br]program a tainted variable, specifically
0:26:27.519,0:26:33.049
the one at offset 261 into the tainted[br]buffer. This variable was used as a
0:26:33.049,0:26:41.019
pointer. And that involved following the[br]path along this way. So there
0:26:41.019,0:26:46.320
is a vulnerability that it discovered for you.[br]So we can go on with questions; sorry, that
0:26:46.320,0:26:47.160
was a bit.
0:26:47.160,0:26:48.619
Herald: Any more questions from the
0:26:48.619,0:26:57.780
audience? There is no question from[br]the internet. Okay, one question, go
0:26:57.780,0:27:00.029
ahead: talk into the mic please.
0:27:00.029,0:27:03.029
Question: You said that the bugs you found
0:27:03.029,0:27:07.789
were responsibly disclosed and fixed.[br]Were they actually fixed in real existing
0:27:07.789,0:27:11.860
devices or did the vendors just say, "oh,[br]we'll fix it in future devices."
0:27:11.860,0:27:16.759
Audrey: I wish I knew the answer to that[br]question. I wasn't on the team that did this.
0:27:16.759,0:27:23.100
Yeah, I can't speak to that. That was just[br]a slide in the deck that I
0:27:23.100,0:27:29.780
was given. I sure hope they were really[br]responsibly disclosed. It's real hard to
0:27:29.780,0:27:36.400
push updates to the bootloader![br]
0:27:36.400,0:27:39.960
Herald: Okay, any more questions? okay so
0:27:39.960,0:27:43.909
let's conclude this talk. People, when you[br]leave the hall, please take all your stuff
0:27:43.909,0:27:49.554
with you. Your bottles, your cups. Don't[br]forget anything, have a last check. Thank
0:27:49.556,0:27:55.053
you very much let's have one final hand[br]for Audrey Dutcher, from California!
0:27:55.053,0:28:00.749
applause
0:28:00.749,0:28:06.145
34c3 outro
0:28:06.145,0:28:23.000
subtitles created by c3subtitles.de[br]in the year 2018. Join, and help us!