1
00:00:00,000 --> 00:00:14,825
34c3 intro
2
00:00:14,825 --> 00:00:18,970
Herald: This is Audrey from California and
3
00:00:18,970 --> 00:00:26,820
she's from the University of California
Santa Barbara, security lab, if I'm
4
00:00:26,820 --> 00:00:38,109
informed correctly, and it is about
automated discovery of vulnerabilities in
5
00:00:38,109 --> 00:00:50,999
the Android bootloader. Wow, not really my
problem but definitely Audrey's. So here
6
00:00:50,999 --> 00:00:57,492
we go. Please let's have a big hand for
Audrey Dutcher. Thank you.
7
00:00:57,492 --> 00:01:07,040
applause
8
00:01:07,040 --> 00:01:11,050
Audrey: Good evening everybody. Today
9
00:01:11,050 --> 00:01:17,330
we're talking about Android boot loaders.
As a brief aside I didn't actually work on
10
00:01:17,330 --> 00:01:21,790
this work I just sit across from the
people who worked on this work and I was
11
00:01:21,790 --> 00:01:26,659
the only one who could make it to Germany.
I do work on some of the stuff that
12
00:01:26,659 --> 00:01:32,189
it depends on, so this is my field but
this is not my project. Just a brief
13
00:01:32,189 --> 00:01:41,010
disclaimer, thanks. So today we're talking
about Android boot loaders. Phones are
14
00:01:41,010 --> 00:01:45,929
complicated, bootloaders are complicated,
processors are complicated and trying to
15
00:01:45,929 --> 00:01:50,740
get to the bottom of these is a very
difficult subject; if you've ever done any
16
00:01:50,740 --> 00:01:55,299
homebrew kernel dev or homebrew retro-
gaming, you know that interacting with
17
00:01:55,299 --> 00:01:59,530
hardware is really complicated and trying
to do this in a phone, a system connected
18
00:01:59,530 --> 00:02:03,749
to a touchscreen and a modem and lots of
complicated, money-sensitive things, it's
19
00:02:03,749 --> 00:02:10,580
really not, it's really complicated. But
every single one of you
20
00:02:10,580 --> 00:02:15,840
probably has a phone in your pocket and
all of these are immensely valuable
21
00:02:15,840 --> 00:02:21,500
targets for attacks, so we want to be able
to inhales detect bugs in them
22
00:02:21,500 --> 00:02:27,220
automatically, that's the name of the
game. So the bootloader in a device: it
23
00:02:27,220 --> 00:02:31,810
takes - it's the job of "oh,
we've powered on, we need to get
24
00:02:31,810 --> 00:02:37,890
everything initialized", so we initialize
the device and the peripherals and then
25
00:02:37,890 --> 00:02:43,680
the final gasp of breath of the
bootloader is to take the kernel and
26
00:02:43,680 --> 00:02:49,120
execute it. And the kernel obviously needs
to be loaded from storage somewhere. For
27
00:02:49,120 --> 00:02:53,830
Android specifically, this is what we
worked on. Most Android devices are ARM;
28
00:02:53,830 --> 00:02:56,830
there's no particular standard for what an
ARM bootloader should look like but the
29
00:02:56,830 --> 00:02:59,910
ARM people do give you some guidelines.
There's an open-source implementation of
30
00:02:59,910 --> 00:03:04,040
what a secure bootloader should look
like. There are in fact several boot
31
00:03:04,040 --> 00:03:08,660
loaders on ARM, we'll go over this later.
But it's a complicated
32
00:03:08,660 --> 00:03:14,380
affair that needs to preserve several
security properties along the way. And
33
00:03:14,380 --> 00:03:19,010
above all, the whole goal of this process
is to make sure that things are secure and
34
00:03:19,010 --> 00:03:26,050
to make sure the user data is protected.
That's what we're trying to do. Like we
35
00:03:26,050 --> 00:03:31,170
said the phones in your pockets are
valuable targets. If you can attack the
36
00:03:31,170 --> 00:03:34,870
bootloader you can get a
rootkit on the device, which is even more
37
00:03:34,870 --> 00:03:40,451
powerful than just getting root on it. If
an attacker were to compromise your phone
38
00:03:40,451 --> 00:03:45,720
they could brick your device or - I
talked about rootkits already - but
39
00:03:45,720 --> 00:03:50,890
additionally you might want to circumvent
the security properties of your phone's
40
00:03:50,890 --> 00:03:56,930
bootloader in order to customize it:
rooting, jailbreaking. "Unlocking" is the
41
00:03:56,930 --> 00:04:04,530
key word in this situation.
The Android bootloader establishes
42
00:04:04,530 --> 00:04:10,650
cryptographic integrity over basically
what's happening at all times, so on your
43
00:04:10,650 --> 00:04:17,450
phone there is a master key, and that key
knows that it should only
44
00:04:17,450 --> 00:04:21,399
run some code that has been
signed with the key associated with the
45
00:04:21,399 --> 00:04:24,940
hardware and then the next stage of
bootloader has a key that it will verify
46
00:04:24,940 --> 00:04:28,500
that the next stage of the bootloader
hasn't been tampered with. And this is
47
00:04:28,500 --> 00:04:34,160
where we get the term "chain of trust",
where each part establishes "oh, I'm very
48
00:04:34,160 --> 00:04:38,720
very sure, cryptographically sure,
assuming RSA hasn't been broken yet, that
49
00:04:38,720 --> 00:04:44,570
the next bit of code is going to be doing
something that I authorized."
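The chain-of-trust handoff described here can be sketched in a few lines of Python. This is a toy illustration only, not any real bootloader's code: a bare SHA-256 digest stands in for the RSA signature check, and all stage names and data are made up.

```python
import hashlib

# Toy chain of trust: each stage carries the expected digest of the next
# stage's image and refuses to hand off execution if it was tampered with.
# Real bootloaders verify RSA signatures; a hash stands in for that here.

def digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def verify_and_boot(stages):
    """stages: list of (name, image, expected_digest_of_next).
    Returns the names of the stages that were allowed to run."""
    booted = []
    for i, (name, image, expected_next) in enumerate(stages):
        booted.append(name)
        if expected_next is None:  # last stage: nothing left to verify
            break
        if digest(stages[i + 1][1]) != expected_next:
            raise RuntimeError(name + ": next stage failed verification")
    return booted

bl2 = b"second-stage bootloader"
kernel = b"kernel image"
chain = [
    ("BL1", b"rom code", digest(bl2)),
    ("BL2", bl2, digest(kernel)),
    ("kernel", kernel, None),
]
print(verify_and_boot(chain))   # ['BL1', 'BL2', 'kernel']

# Tampering with the kernel image breaks the chain at BL2:
chain[2] = ("kernel", b"evil kernel", None)
try:
    verify_and_boot(chain)
except RuntimeError as e:
    print(e)                    # BL2: next stage failed verification
```

Each arrow in the chain-of-trust diagram corresponds to one of these digest checks.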
50
00:04:44,570 --> 00:04:48,940
Circumventing this is valuable, as we've
talked about, phones have to have a way to
51
00:04:48,940 --> 00:04:56,470
to do that built-in, unless you're
Apple, but obviously protecting this
52
00:04:56,470 --> 00:05:02,600
mechanism from attackers is a point of
contention, so really you need to make
53
00:05:02,600 --> 00:05:10,211
sure that only the real owner of the
device can actually unlock the phone. So
54
00:05:10,211 --> 00:05:16,400
this project is about
discovering vulnerabilities that let us
55
00:05:16,400 --> 00:05:20,110
circumvent this process, so the threat
model that we use for this project is that
56
00:05:20,110 --> 00:05:25,320
the phone is rooted and the
attacker has this root control. This is
57
00:05:25,320 --> 00:05:32,060
pretty out there - no, not that out there,
root vulnerabilities exist - but it's
58
00:05:32,060 --> 00:05:35,720
enough to make you scoff "Oh what's the
point of this", well, the security
59
00:05:35,720 --> 00:05:39,920
properties of the phone are supposed to
extend above the hypervisor level.
60
00:05:39,920 --> 00:05:42,430
you're supposed to have these guarantees
that things should always work, assuming
61
00:05:42,430 --> 00:05:47,513
the chain of trust works, regardless of
what's happening in the kernel.
62
00:05:48,940 --> 00:05:54,650
So today we are going to be talking about
BootStomp, which is a tool that
63
00:05:54,650 --> 00:05:59,600
automatically verifies these properties
and discovers bugs. I'm going a little
64
00:05:59,600 --> 00:06:04,310
slow, I'll speed up.
So first off, the booting process in Android
65
00:06:04,310 --> 00:06:09,080
ecosystems is pretty complicated and
multi-stage; there's the base
66
00:06:09,080 --> 00:06:13,250
bootloader BL1, which loads and verifies
another bootloader, which loads and
67
00:06:13,250 --> 00:06:17,260
verifies another bootloader and this is
important, because the first one's in a ROM
68
00:06:17,260 --> 00:06:22,710
and is very small and the second one is
probably by the
69
00:06:22,710 --> 00:06:27,260
hardware vendor and the third one is
probably by the OS vendor, for example,
70
00:06:27,260 --> 00:06:33,250
and they all need to do different things.
So the important part here is these EL
71
00:06:33,250 --> 00:06:37,389
things; those are the ARM exception
levels, which are basically the global
72
00:06:37,389 --> 00:06:42,300
permission levels for an Android
processor. The EL3 is basically the god
73
00:06:42,300 --> 00:06:45,500
mode. There's EL2 for hypervisors, it
isn't in this chart, there's EL1, which is
74
00:06:45,500 --> 00:06:50,790
the kernel and the EL0, which is user
space. So when we boot you're obviously in
75
00:06:50,790 --> 00:06:53,772
the highest execution level and gradually,
as we establish more and more
76
00:06:53,772 --> 00:06:59,200
initialization of the device, we're going
to cede control to less privileged
77
00:06:59,200 --> 00:07:04,949
components, so the bootloaders operate
with high privilege, and one of the things
78
00:07:04,949 --> 00:07:10,630
they need to do is establish the
ARM TrustZone, the trusted execution
79
00:07:10,630 --> 00:07:18,610
environment that lets people do really
secure things on Android phones.
80
00:07:18,610 --> 00:07:25,710
This is something that is set up
by the BL31 bootloader and in
81
00:07:25,710 --> 00:07:29,270
secure world you need to do things like
initialize hardware and
82
00:07:29,270 --> 00:07:34,040
peripherals and in the non-secure world
you're running the
83
00:07:34,040 --> 00:07:38,570
normal kernel and the normal user apps.
And on some phones you actually have a
84
00:07:38,570 --> 00:07:45,699
final bootloader, which runs in EL1, BL33
or the aboot executable, and that's
85
00:07:45,699 --> 00:07:52,910
the one that we're generally
targeting for this stuff. So this is
86
00:07:52,910 --> 00:07:55,690
what I was talking about with the chain of
trust: each of those arrows represents
87
00:07:55,690 --> 00:08:00,360
cryptographic integrity, so the next stage
only gets loaded if there's a valid
88
00:08:00,360 --> 00:08:09,420
signature indicating that we really trust
what's going on here. And that's the
89
00:08:09,420 --> 00:08:13,729
unlocking process that we were talking
about; if you, the verified, physical
90
00:08:13,729 --> 00:08:18,210
owner of the device, want to, you can
disable that last bit and allow
91
00:08:18,210 --> 00:08:21,889
untrusted code to run as the kernel.
That's totally fine, if you own the
92
00:08:21,889 --> 00:08:27,770
device.
The unlocking process is supposed to
93
00:08:27,770 --> 00:08:30,850
really specifically verify these two
things: that you have physical access to
94
00:08:30,850 --> 00:08:37,799
the device and that you actually own it,
like you know the pin to it, that's what
95
00:08:37,799 --> 00:08:46,050
establishes ownership of your device. And
so specifically when you go through
96
00:08:46,050 --> 00:08:51,430
that process it does set some specific
flags on your persistent storage, saying
97
00:08:51,430 --> 00:08:57,569
this is an unlocked device now, you can do
whatever. But making sure that that can
98
00:08:57,569 --> 00:09:05,009
only happen when it's authorized is the
point of contention here.
99
00:09:05,009 --> 00:09:09,449
Typically what happens is this security
state is itself cryptographically signed,
100
00:09:09,449 --> 00:09:15,100
so you can't just set unlocked, you have
to set unlocked but signed by the people
101
00:09:15,100 --> 00:09:22,529
that we really trust. But
generally you probably shouldn't be able
102
00:09:22,529 --> 00:09:30,899
to write to it just from the normal user
space. So the question is: we saw that the
103
00:09:30,899 --> 00:09:34,940
operating system is separate from the
bootloader. So what we want to be able to
104
00:09:34,940 --> 00:09:40,119
do is get from the Android OS to
affecting the bootloader. And can this
105
00:09:40,119 --> 00:09:48,139
happen? Well, of course, that's why we're
here. So, let's see. Oh, I didn't
106
00:09:48,139 --> 00:09:52,470
realize there were animations on the
slides, that's unfortunate. So this is
107
00:09:52,470 --> 00:09:59,420
sort of the normal flow chart of how these
things normally come about.
108
00:09:59,420 --> 00:10:03,589
You've got the bootloader, which has to
read from persistent storage in order to
109
00:10:03,589 --> 00:10:07,459
initialize the operating system. Like, of
course you have to read, for example,
110
00:10:07,459 --> 00:10:11,139
whether or not the device is unlocked, you
have to load the kernel itself. There are
111
00:10:11,139 --> 00:10:17,009
lots of inputs to the bootloader and
the intuition is that these
112
00:10:17,009 --> 00:10:22,550
just serve as normal inputs to a program,
which can be analyzed for vulnerabilities.
113
00:10:22,550 --> 00:10:30,869
Oh lord, this is a mess. So from the
OS, if you have
114
00:10:30,869 --> 00:10:35,880
root privileges in the operating system
you can write to this persistent storage,
115
00:10:35,880 --> 00:10:46,920
which means that this serves
as another input to the bootloader and
116
00:10:46,920 --> 00:10:53,180
this can cause bad things to happen. So we
need some sort of tool - that's the point of
117
00:10:53,180 --> 00:10:58,189
this project, to automatically verify the
safety properties of these boot loaders.
118
00:10:58,189 --> 00:11:04,481
That's BootStomp. Bootloaders are
complicated. There's a lot of stuff, which
119
00:11:04,481 --> 00:11:08,480
means the analysis
has to be automated in order to really
120
00:11:08,480 --> 00:11:11,999
sift through something as big and
complicated as a bootloader, with the care
121
00:11:11,999 --> 00:11:16,860
necessary to actually find bugs that are
sitting there.
122
00:11:16,860 --> 00:11:20,310
But these things aren't usually things
that you have source code for, so it needs
123
00:11:20,310 --> 00:11:25,420
to be a binary analysis and furthermore
you can't really do a dynamic execution on
124
00:11:25,420 --> 00:11:29,679
something that needs to run on the highest
privilege level of a processor, so you
125
00:11:29,679 --> 00:11:33,389
have to step back - and it has
to be static as well. And furthermore this
126
00:11:33,389 --> 00:11:37,499
needs to be a fully free-standing analysis
that doesn't assume anything other than
127
00:11:37,499 --> 00:11:42,080
"oh, we're executing code on a system",
because there's no known syscalls or API's
128
00:11:42,080 --> 00:11:46,959
to checkpoint the process or say "oh, we know
what this means, we don't really have to
129
00:11:46,959 --> 00:11:56,069
analyze it." So it's a tall order but you
can do it with enough work. So BootStomp
130
00:11:56,069 --> 00:12:02,970
specifically is the tool that we built. It
will automatically detect these inputs,
131
00:12:02,970 --> 00:12:09,870
that we talked about, to the bootloader
and then it will determine if these inputs
132
00:12:09,870 --> 00:12:14,050
can be used to compromise various security
properties of the device.
133
00:12:14,050 --> 00:12:20,050
One such example is if you can use this to
just achieve memory corruption for example
134
00:12:20,050 --> 00:12:27,649
or more abstract forms of vulnerability,
such as code flows that will result in
135
00:12:27,649 --> 00:12:33,319
unwanted data being written by the more
privileged bootloader somewhere. And the
136
00:12:33,319 --> 00:12:39,519
important thing about this analysis is
that its results are easily verifiable and
137
00:12:39,519 --> 00:12:44,970
traceable and it's very easy to like look
at the outputs and say "oh, well, this is
138
00:12:44,970 --> 00:12:48,579
what's happening and this is why I think
this happened and therefore I can
139
00:12:48,579 --> 00:12:59,290
reproduce this, possibly?" This happens
through symbolic taint analysis. This is
140
00:12:59,290 --> 00:13:05,550
the part that I know about, because I work
on angr, which is the symbolic execution
141
00:13:05,550 --> 00:13:11,209
static analysis tool that
BootStomp uses in order to do its taint
142
00:13:11,209 --> 00:13:18,910
analysis. "Taint analysis" and "symbolic
execution" are kind of loaded words, so
143
00:13:18,910 --> 00:13:24,259
what is specifically meant is that
we discover these sources and sinks
144
00:13:24,259 --> 00:13:28,790
of behavior, through particular
static analysis and some
145
00:13:28,790 --> 00:13:32,040
heuristics, of course. And then we
propagate these taints through symbolic
146
00:13:32,040 --> 00:13:35,319
execution, while maintaining tractability.
And notice wherever
147
00:13:35,319 --> 00:13:39,850
we can find paths from
taint sources to behavior sinks that we
148
00:13:39,850 --> 00:13:47,420
think are vulnerable. Specifically, we
think these behavior sinks are
149
00:13:47,420 --> 00:13:51,889
vulnerable behavior if you can arbitrarily
write to memory, or read from memory.
150
00:13:51,889 --> 00:13:55,040
Like, really arbitrary, if there's a
pointer which is controlled by user input
151
00:13:55,040 --> 00:13:59,980
- memory corruption stuff. And
additionally, if you can control loop
152
00:13:59,980 --> 00:14:04,449
iterations through your input, that
indicates a denial-of-service attack.
153
00:14:04,449 --> 00:14:11,820
And finally, the unlocking mechanism, the
bootloader unlocking mechanism: if
154
00:14:11,820 --> 00:14:18,269
we can detect specific code paths
which indicate bypasses - those are
155
00:14:18,269 --> 00:14:25,910
valuable. So, yeah, so this is the
specific architecture of the tool. There
156
00:14:25,910 --> 00:14:31,290
are two main modules, one of which is
written as an IDA analysis. You know, the
157
00:14:31,290 --> 00:14:35,209
big tool that everyone probably doesn't
pay enough money for. And then there's the
158
00:14:35,209 --> 00:14:41,829
other component, written in angr, which is
the symbolic taint analysis. And this is
159
00:14:41,829 --> 00:14:51,850
probably the point where I break out of
here and actually start the live demo.
160
00:14:51,850 --> 00:14:58,671
That's big enough.
Okay, so we're working on a Huawei boot
161
00:14:58,671 --> 00:15:07,100
image here, the fastboot image. We're
going to load it up in IDA real quick. So
162
00:15:07,100 --> 00:15:14,689
here, IDA understands, oh this is what
the executable is. So if we just sort of
163
00:15:14,689 --> 00:15:23,819
run the initial script, find taints, it'll
think real hard for a little bit. There's
164
00:15:23,819 --> 00:15:27,199
no real reason this part
couldn't have been done in angr or binary
165
00:15:27,199 --> 00:15:33,579
ninja or r2 or (???), god forbid. But,
this is a collaborative project if you saw
166
00:15:33,579 --> 00:15:36,970
the huge author list and people write
stuff and whatever they're comfortable
167
00:15:36,970 --> 00:15:41,859
with. So it's IDA in this case.
Realistically, because this is just a
168
00:15:41,859 --> 00:15:46,839
binary blob when you load it in IDA it
doesn't immediately know where everything
169
00:15:46,839 --> 00:15:54,469
is, so you have to sort of nudge it into...
"oh, here's where all the functions are."
170
00:15:54,469 --> 00:16:05,399
Okay, we finished, and what it's done is:
we've got this taint source sink dot txt
171
00:16:05,399 --> 00:16:12,229
which shows us, "oh, here are all the sources
of tainted information, and here's a few of
172
00:16:12,229 --> 00:16:16,459
the sinks that we established." Obviously
you don't need a sink analysis to
173
00:16:16,459 --> 00:16:20,119
determine if you've got memory corruption
or not but we like knowing where the
174
00:16:20,119 --> 00:16:23,809
writes to persistent storage are and where
specifically the memcpy functions
175
00:16:23,809 --> 00:16:35,209
are valuable for analysis. And then, if we
run our taint analysis, bootloader taint,
176
00:16:35,209 --> 00:16:38,759
on the - oh this configuration file is
real simple. It just says, "oh here's what
177
00:16:38,759 --> 00:16:42,040
we're analyzing: it's a 64-bit
architecture, don't bother analyzing thumb
178
00:16:42,040 --> 00:16:51,989
mode, et cetera, simple stuff." And it'll
do this for about 20 minutes. Uh, config;
179
00:16:51,989 --> 00:16:57,629
and it'll do this for about 20 minutes. I
hope it finishes before the demo is over.
180
00:16:57,629 --> 00:17:05,779
If not, I'll do some magic and we'll use a
pre-prepared solution. But, so, we talked
181
00:17:05,779 --> 00:17:09,579
about these seeds that are used - the seeds
for our taint analysis are the reads from
182
00:17:09,579 --> 00:17:17,500
persistent storage and the data used by the
unlocking procedure. So the heuristics I
183
00:17:17,500 --> 00:17:21,369
was talking about - we want to identify
the reads from persistent storage through
184
00:17:21,369 --> 00:17:26,050
log messages, keyword analysis and
log distances. So the eMMC is a
185
00:17:26,050 --> 00:17:29,689
specific memory module used by
the bootloader for secure purposes. And
186
00:17:29,689 --> 00:17:34,809
just it's the persistent storage device
basically. And you can identify these log
187
00:17:34,809 --> 00:17:39,500
messages and then we just do a def-use
analysis back from the guard condition on
188
00:17:39,500 --> 00:17:44,330
that block to its source and you say, "oh
that function must be the read." It's
189
00:17:44,330 --> 00:17:52,149
pretty simple. It works surprisingly
often. Of course, if this isn't enough you
190
00:17:52,149 --> 00:17:56,320
can just manually analyze the firmware and
provide, "oh here's where we read from
191
00:17:56,320 --> 00:17:58,171
persistent storage, here's what you
should taint."
192
00:18:07,900 --> 00:18:11,600
Cool. So the taint
193
00:18:11,600 --> 00:18:15,200
analysis: our taints are specifically
sy-- this is specifically symbolic taint
194
00:18:15,200 --> 00:18:19,809
analysis so it's not just like what Triton
does where you've got a concrete value
195
00:18:19,809 --> 00:18:25,160
that has metadata attached. This is a real
symbol being used for symbolic execution.
196
00:18:25,160 --> 00:18:29,799
If you're not familiar with symbolic
execution, it's a form of static
197
00:18:29,799 --> 00:18:38,330
analysis in which you emulate the code,
but instead of having the values for some
198
00:18:38,330 --> 00:18:41,169
of the things you can just have symbols. And
then when you perform operations on those
199
00:18:41,169 --> 00:18:46,450
symbols you construct an abstract syntax
tree of the behavior. And then when you
200
00:18:46,450 --> 00:18:52,250
run into branch conditions based on those
things you can say, "oh, well, in order to
201
00:18:52,250 --> 00:19:00,419
get from point A to point B this
constraint must be satisfied." And of
202
00:19:00,419 --> 00:19:05,470
course, now you can just add z3 and stir
and you have the inputs to generate
203
00:19:05,470 --> 00:19:12,230
paths through the program. So for the sinks
of the taint analysis, we want to
204
00:19:12,230 --> 00:19:18,960
say, "oh, if tainted data comes in
and is the argument to memcpy, then
205
00:19:18,960 --> 00:19:23,220
that's a vulnerability." I don't mean
that the
206
00:19:23,220 --> 00:19:28,059
tainted data is the subject of the memcpy -
I mean it's one of the values passed to memcpy.
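The source-to-sink taint idea she describes can be sketched in a few lines of Python. This is a toy illustration only, not BootStomp's actual angr-based implementation: there is no symbolic execution or constraint solving here, and all names are made up.

```python
# Toy taint propagation: values read from "persistent storage" are tainted,
# taint survives arithmetic, and any tainted value reaching a memcpy-like
# sink is reported as a finding. The real tool additionally tracks symbolic
# expressions and the path constraints needed to reach the sink.

class Tainted:
    def __init__(self, value, tainted):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        # taint propagates through operations on tainted values
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value, self.tainted or other.tainted)
        return Tainted(self.value + other, self.tainted)

findings = []

def read_persistent_storage():
    # taint source: an attacker with root can write this partition
    return Tainted(0x100, tainted=True)

def memcpy_sink(dst, src, length):
    # sink check: a tainted pointer or length is a memory-corruption candidate
    for name, arg in (("dst", dst), ("src", src), ("len", length)):
        if isinstance(arg, Tainted) and arg.tainted:
            findings.append(f"tainted {name} reaches memcpy")

header_len = read_persistent_storage()          # source
total = header_len + 16                         # taint propagates
memcpy_sink(Tainted(0x8000, False), Tainted(0x9000, False), total)
print(findings)  # ['tainted len reaches memcpy']
```

In the real analysis the "taint" is a genuine symbol, so each finding comes with a solvable path constraint rather than just a flag.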
207
00:19:28,059 --> 00:19:33,549
That's a memory corruption vulnerability
generally. Yeah, we talked about memory
208
00:19:33,549 --> 00:19:36,409
corruption, and we talked about loop
conditions, and we talked about writes to
209
00:19:36,409 --> 00:19:41,220
persistent storage with the unlocking
stuff. Cool. For taint checking
210
00:19:41,220 --> 00:19:46,429
specifically -- oh this is exactly what I
just said. Yeah, and what I was
211
00:19:46,429 --> 00:19:50,480
talking about with the symbolic predicates
and trace analysis means
212
00:19:50,480 --> 00:19:54,130
that when you see something,
you automatically have
213
00:19:54,130 --> 00:20:00,350
the input that will generate that
behavior. So the output is inherently
214
00:20:00,350 --> 00:20:06,169
traceable. Unfortunately, symbolic
execution has some issues. I was actually
215
00:20:06,169 --> 00:20:12,380
at CCC two years ago talking about the
exact same problem. You have this problem
216
00:20:12,380 --> 00:20:18,759
where, oh, you generate paths between
different states and
217
00:20:18,759 --> 00:20:24,210
there can be too many of them. It
overwhelms your analysis. So you can use
218
00:20:24,210 --> 00:20:29,269
some heuristics to say, "oh - because
it's static
219
00:20:29,269 --> 00:20:34,050
analysis we have a more powerful step over
than what a debugger can do." We don't
220
00:20:34,050 --> 00:20:37,340
have to actually analyze the function, we
can just take the instruction pointer and
221
00:20:37,340 --> 00:20:43,590
move it over there. And, this does cause
some unsoundness, but it's not a problem
222
00:20:43,590 --> 00:20:49,039
if you like make sure that the arguments
aren't tainted, for example. Or sometimes
223
00:20:49,039 --> 00:20:53,909
you just accept the unsoundness as part of
the tractability of the problem. Limit
224
00:20:53,909 --> 00:20:58,519
loop iterations: that's a classic technique
from static analysis. And the timeout, of
225
00:20:58,519 --> 00:21:05,610
course. So, what are the bugs we found? We
evaluated this on four boot loaders and we
226
00:21:05,610 --> 00:21:15,090
found several bugs, six of which were
zero-days. So that's pretty good. It's like,
227
00:21:15,090 --> 00:21:18,980
okay, so you found some bugs but it could
just be, you know, "oh, there are some errors
228
00:21:18,980 --> 00:21:22,880
in initialization that don't really
matter. But on the other hand you can
229
00:21:22,880 --> 00:21:31,669
crash it with 41 41 41. That's pretty serious.
So as we saw, some of the bootloaders
230
00:21:31,669 --> 00:21:35,710
do work in ARM EL3, so this is pretty
significant. You can do whatever you want
231
00:21:35,710 --> 00:21:39,919
in the device if you actually have
sufficient control over it. This is
232
00:21:39,919 --> 00:21:47,669
rootkit territory. You could break
anything you wanted. Then there's another
233
00:21:47,669 --> 00:21:52,120
component in the analysis that says, "can
we find bypasses to the unlocking
234
00:21:52,120 --> 00:21:57,860
procedure." For example, this is
basically one of the ones that we found:
235
00:21:57,860 --> 00:22:03,890
so it says BootStomp detected this
flow from data that was read from the
236
00:22:03,890 --> 00:22:09,440
device to data that was written to the
device, and what this code is supposed to
237
00:22:09,440 --> 00:22:12,880
do - do I have animations? yes - it's
supposed to,
238
00:22:12,880 --> 00:22:15,460
like, take some input
and verify that it hashes
239
00:22:15,460 --> 00:22:20,130
to a certain value. And if so, hash
that value and write it back to disk, and
240
00:22:20,130 --> 00:22:27,110
that constitutes the cryptographically
secure unlocking thing. However, the thing
241
00:22:27,110 --> 00:22:32,740
that we write is
identical to the thing that was read from
242
00:22:32,740 --> 00:22:39,789
the disk. So you can just - the thing that
BootStomp reported was the code flow from
243
00:22:39,789 --> 00:22:43,990
the disk back to the disk, indicating that if
you can read from the disk, you know how
244
00:22:43,990 --> 00:22:53,190
to produce the thing that will unlock the
phone. So this isn't secure. Mitigations.
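The flawed unlock flow described above can be sketched like this. This is an illustrative Python toy, not the vendor's actual code: the field names and the "owner-secret" token are invented, and SHA-256 stands in for whatever hash the real bootloader used.

```python
import hashlib

# Toy version of the flawed unlock flow BootStomp flagged: the value written
# to mark the device "unlocked" is derived only from data that is itself
# readable from disk, so read access alone lets an attacker forge the state.

disk = {
    "unlock_check": hashlib.sha256(b"owner-secret").digest(),
    "unlock_state": b"locked",
}

def unlock(token: bytes) -> bool:
    """Intended flow: verify the owner's token, then persist 'unlocked'."""
    if hashlib.sha256(token).digest() == disk["unlock_check"]:
        # the persisted value depends only on on-disk data, not on the token
        disk["unlock_state"] = hashlib.sha256(disk["unlock_check"]).digest()
        return True
    return False

# Attacker path: no token needed - just read the disk and compute the
# same value the bootloader would have written.
forged = hashlib.sha256(disk["unlock_check"]).digest()
disk["unlock_state"] = forged
assert disk["unlock_state"] == hashlib.sha256(disk["unlock_check"]).digest()
print("device reads as unlocked without knowing the secret")
```

The fix is to make the persisted unlock state depend on something the attacker cannot read or forge, which is what the mitigations discussed next aim at.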
245
00:22:53,190 --> 00:22:59,440
So, the thing that Google does in order to
prevent attacks of this class is that the
246
00:22:59,440 --> 00:23:04,419
key - the secure encryption key that
decrypts the userland
247
00:23:04,419 --> 00:23:13,460
data - has embedded in it the unlock
state. So clearly, if you change the
248
00:23:13,460 --> 00:23:17,460
unlock state you brick the entire phone.
Well, not brick, but you have to reset it
249
00:23:17,460 --> 00:23:27,440
and lose all your data. That's still not
really good enough but realistically we
250
00:23:27,440 --> 00:23:32,950
should probably be using a more trusted
form of storage that's not just the normal
251
00:23:32,950 --> 00:23:36,809
partitions on the SD card in order
to just sort of store this state. It
252
00:23:36,809 --> 00:23:41,490
should probably be part of the eMMC, or
specifically the replay protected memory
253
00:23:41,490 --> 00:23:46,789
block which uses cryptographic mechanisms
to synchronize the, what's it called,
254
00:23:46,789 --> 00:23:55,269
the writes to the memory with
the authenticated process. And so that
255
00:23:55,269 --> 00:23:58,270
would make sure that
only the bootloader could unlock it. But
256
00:23:58,270 --> 00:24:01,789
of course that wouldn't protect against
memory corruption vulnerabilities and
257
00:24:01,789 --> 00:24:04,780
there's nothing really to be said about
that other than, "hey, this is a serious
258
00:24:04,780 --> 00:24:11,970
problem." In conclusion, all these bugs
have been reported, most of them have been
259
00:24:11,970 --> 00:24:17,990
fixed. As far as I'm aware this is the
first study to really explore and
260
00:24:17,990 --> 00:24:22,309
develop analyses for Android boot loaders
and in it we developed an automated
261
00:24:22,309 --> 00:24:26,751
technique to analyze boot loaders with
tractable alerts. It found six 0-days in
262
00:24:26,751 --> 00:24:32,409
various boot loaders and our
implementation is open source. I will be
263
00:24:32,409 --> 00:24:34,960
taking questions, thank you for listening.
264
00:24:34,960 --> 00:24:43,950
applause
265
00:24:43,950 --> 00:24:46,110
Herald: That was quite amazing.
266
00:24:49,770 --> 00:24:53,120
Okay we'll be taking some
questions from people
267
00:24:53,120 --> 00:24:56,890
that understood exactly
what it was all about. Yes
268
00:24:56,890 --> 00:24:59,110
I see somebody walking up to microphone
one.
269
00:24:59,110 --> 00:25:02,390
Mic 1: Thank you very much for talk--
270
00:25:02,390 --> 00:25:04,810
Herald: Are you talking into the mic? Otherwise
we can't record it.
271
00:25:04,810 --> 00:25:06,959
Mic 1: Okay, thank you very much for that
272
00:25:06,959 --> 00:25:11,659
work, that was really cool. Your mitigation
investigations didn't include devising the
273
00:25:11,659 --> 00:25:16,289
code better. Do you think it's possible to
write the code so that your tools can
274
00:25:16,289 --> 00:25:21,350
analyze it and maybe it would be secure?
Or not yet?
275
00:25:21,350 --> 00:25:28,240
Audrey: Well, there's certainly things to
be said for having things in open source,
276
00:25:28,240 --> 00:25:33,360
because necessarily doing analysis on
source code is a much better
277
00:25:33,360 --> 00:25:41,250
defined field than doing analysis
on binary code. Additionally, you can
278
00:25:41,250 --> 00:25:48,010
write your stuff in languages that are
safer than C. I don't know if it's, I
279
00:25:48,010 --> 00:25:55,519
didn't know if it's safe to talk about
Rust yet, but Rust is cool. Yeah, there's
280
00:25:55,519 --> 00:26:00,759
lots of things that you can do. I just
realized I didn't show off; I didn't show
281
00:26:00,759 --> 00:26:05,820
off the still-running analysis: the
automated results. It did not finish in
282
00:26:05,820 --> 00:26:12,820
time so I will run some magic, and now we
have some results. Which..
283
00:26:12,820 --> 00:26:19,429
applause
284
00:26:19,429 --> 00:26:21,859
So, here's one of the analysis
285
00:26:21,859 --> 00:26:27,519
results. We found at this location in the
program a tainted variable, specifically
286
00:26:27,519 --> 00:26:33,049
the one at offset 261 into the tainted
buffer. This variable was used as a
287
00:26:33,049 --> 00:26:41,019
pointer. And that involved following the
path along this way. So there
288
00:26:41,019 --> 00:26:46,320
is a vulnerability that I discovered for you.
So we can go on with questions - sorry, that
289
00:26:46,320 --> 00:26:47,160
was a bit.
290
00:26:47,160 --> 00:26:48,619
Herald: Any more questions from the
291
00:26:48,619 --> 00:26:57,780
audience? There is no question from
the internet. Okay, one question, go
292
00:26:57,780 --> 00:27:00,029
ahead: talk into the mic please.
293
00:27:00,029 --> 00:27:03,029
Question: You said that the bugs you found
294
00:27:03,029 --> 00:27:07,789
were responsibly disclosed and fixed.
Were they actually fixed in real existing
295
00:27:07,789 --> 00:27:11,860
devices or did the vendors just say, "oh,
we'll fix it in future devices."
296
00:27:11,860 --> 00:27:16,759
Audrey: I wish I knew the answer to that
question. I wasn't on the team that did this.
297
00:27:16,759 --> 00:27:23,100
Yeah, I can't speak to that. That was
just a slide on the slides that I
298
00:27:23,100 --> 00:27:29,780
was given. I sure hope they were really
responsibly disclosed. It's real hard to
299
00:27:29,780 --> 00:27:36,400
push updates to the bootloader!
300
00:27:36,400 --> 00:27:39,960
Herald: Okay, any more questions? Okay, so
301
00:27:39,960 --> 00:27:43,909
let's conclude this talk. People, when you
leave the hall, please take all your stuff
302
00:27:43,909 --> 00:27:49,554
with you. Your bottles, your cups. Don't
forget anything, have a last check. Thank
303
00:27:49,556 --> 00:27:55,053
you very much, let's have one final hand
for Audrey Dutcher, from California!
304
00:27:55,053 --> 00:28:00,749
applause
305
00:28:00,749 --> 00:28:06,145
34c3 outro
306
00:28:06,145 --> 00:28:23,000
subtitles created by c3subtitles.de
in the year 2018. Join, and help us!