0:00:20.400,0:00:21.600
36C3 preroll music
0:00:21.600,0:00:24.840
Herald Angel: OK. Welcome to our next[br]talk. It's called flipping bits from
0:00:24.840,0:00:30.090
software without Rowhammer. A small[br]reminder: Rowhammer was, and still is, a
0:00:30.090,0:00:34.020
software-based fault attack. It was[br]published in 2015. There were
0:00:34.020,0:00:39.660
countermeasures developed and we are still[br]in the process of deploying these
0:00:39.660,0:00:45.690
everywhere. And now our two speakers are[br]going to talk about a new software based
0:00:45.690,0:00:56.250
fault attack to execute commands inside[br]the SGX environment. Our speakers,
0:00:56.250,0:01:05.000
Professor Daniel Gruss from the University[br]of Graz and Kit Murdoch researching at the
0:01:05.000,0:01:10.750
University of Birmingham. The content of[br]this talk is actually in her first
0:01:10.750,0:01:17.030
published paper, accepted at IEEE[br]Security and Privacy next
0:01:17.030,0:01:21.210
year. In case you do not come from the[br]academic world: this is
0:01:21.210,0:01:22.980
always a big deal. If it is your first[br]paper, even more so. Please welcome
0:01:22.980,0:01:28.000
them; give them both a round of applause[br]and enjoy the talk.
0:01:28.000,0:01:31.190
Applause
0:01:31.190,0:01:38.030
Kit Murdoch: Thank you. Hello. Let's get[br]started. This is my favorite recent
0:01:38.030,0:01:45.270
attack. It's called CLKSCREW. And the[br]reason that it's my favorite is that it created
0:01:45.270,0:01:50.140
a new class of fault attacks. Daniel[br]Gruss: Fault attacks. I, I know that.
0:01:50.140,0:01:53.670
Fault attacks, you take these[br]oscilloscopes and check the voltage line
0:01:53.670,0:01:58.340
and then you drop the voltage for a f....[br]Kit: No, you see, this is why this one is
0:01:58.340,0:02:04.810
cool because you don't need any equipment[br]at all. Adrian Tang created this
0:02:04.810,0:02:09.700
wonderful attack that uses DVFS. What is[br]that?
0:02:09.700,0:02:13.400
Daniel: DVFS? I don't know... don't[br]violate format specifications?
0:02:13.400,0:02:19.230
Kit: I asked my boyfriend this morning[br]what he thought DVFS stood for and he said
0:02:19.230,0:02:22.230
Darth Vader fights Skywalker.[br]Laughter
0:02:22.230,0:02:26.290
Kit: I'm also wearing his t-shirt[br]specially for him as well.
0:02:26.290,0:02:30.340
Daniel: Maybe, maybe this is more[br]technical, maybe dazzling volt for
0:02:30.340,0:02:34.590
security like SGX.[br]Kit: No, it's not that either. Mine was,
0:02:34.590,0:02:39.650
the one I came up this morning was: Drink[br]vodka feel silly.
0:02:39.650,0:02:42.930
Laughter[br]Kit: It's not that either. It stands for
0:02:42.930,0:02:48.590
dynamic voltage and frequency scaling. And[br]what that means really simply is changing
0:02:48.590,0:02:53.081
the voltage and changing the frequency of[br]your CPU. Why do you want to do this? Why
0:02:53.081,0:02:58.269
would anyone want to do this? Well, gamers[br]want fast computers. I am sure there are a
0:02:58.269,0:03:02.860
few people out here who will want a really[br]fast computer. Cloud servers want high
0:03:02.860,0:03:07.750
assurance and low running costs. And what[br]do you do if your hardware gets hot?
0:03:07.750,0:03:13.040
You're going to need to modify them. And[br]actually finding a voltage and frequency
0:03:13.040,0:03:17.810
that work together is pretty difficult.[br]And so what the manufacturers have done to
0:03:17.810,0:03:23.230
make this easier, is they've created a way[br]to do this from software. They created
0:03:23.230,0:03:29.409
memory-mapped registers. You modify these[br]from software and it has an impact on the
0:03:29.409,0:03:35.069
hardware. And that's what this wonderful[br]CLKSCREW attack did. But they found
0:03:35.069,0:03:41.939
something else out. You may have[br]heard of TrustZone. TrustZone is an
0:03:41.939,0:03:47.850
enclave technology in ARM chips that should[br]be able to protect your data. But if you can
0:03:47.850,0:03:52.360
modify the frequency and voltage of the[br]whole core, then you can modify it for
0:03:52.360,0:03:59.219
both TrustZone and normal code. And this[br]is their attack. In software they modified
0:03:59.219,0:04:05.290
the frequency to make it outside of the[br]normal operating range. And they induced
0:04:05.290,0:04:12.459
faults. And so in an ARM chip running on a[br]mobile phone, they managed to get out an
0:04:12.459,0:04:17.511
AES key from within TrustZone. They[br]should not be able to do that. They were
0:04:17.511,0:04:22.710
able to trick trust zone into loading a[br]self-signed app. You should not be able to
0:04:22.710,0:04:31.900
do that. That made this ARM attack really[br]interesting. This year another attack came
0:04:31.900,0:04:39.879
out called VoltJockey. This also attacked[br]ARM chips. But instead of looking at
0:04:39.879,0:04:49.460
frequency on ARM chips, they were looking[br]at voltage. So we were thinking,
0:04:49.460,0:04:57.270
what about Intel?[br]Daniel: OK, so Intel. Actually, I know
0:04:57.270,0:05:02.060
something about Intel because I had this[br]nice laptop from HP. I really liked it,
0:05:02.060,0:05:06.520
but it had this problem that it was getting[br]too hot all the time, and I couldn't even
0:05:06.520,0:05:12.909
work without it shutting down all the time[br]because of the heat problem. So what I did
0:05:12.909,0:05:17.639
was I undervolted the CPU and actually[br]this worked for me for several years. I
0:05:17.639,0:05:21.530
used it undervolted for several years.[br]You can see this here - I just took this
0:05:21.530,0:05:27.020
from somewhere on the Internet and they[br]compared with undervolting and without
0:05:27.020,0:05:31.930
undervolting. And you can see that the[br]benchmark score improves by undervolting
0:05:31.930,0:05:38.879
because you don't run into the thermal[br]throttling that often. So there are
0:05:38.879,0:05:43.840
different tools to do that. On Windows you[br]could use RMClock, there's also
0:05:43.840,0:05:47.789
ThrottleStop. On Linux there's the[br]linux-intel-undervolt GitHub repository.
0:05:47.789,0:05:52.960
Kit: And there's one more, actually.[br]Adrian Tang - I don't know if you can tell,
0:05:52.960,0:05:58.889
I'm a bit of a fan - was the lead author on[br]CLKSCREW. He wrote his PhD thesis, and
0:05:58.889,0:06:03.210
in the appendix he talked about[br]undervolting on Intel machines and how you
0:06:03.210,0:06:07.550
do it. And I wish I'd read that before I[br]started the paper. That would have saved
0:06:07.550,0:06:12.409
an awful lot of time. But thank you to the[br]people on the Internet for making my life
0:06:12.409,0:06:17.980
a lot easier, because what we discovered[br]was there is this magic model-specific
0:06:17.980,0:06:26.880
register, called 0x150. And this[br]enables you to change the voltage. The
0:06:26.880,0:06:31.229
people on the Internet did the work for[br]me. So I know how it works. You first of
0:06:31.229,0:06:37.039
all tell it the plane, in RDX - what it is[br]you want to do: raise the voltage or lower the
0:06:37.039,0:06:43.099
voltage. We discovered that the core and[br]the cache are on the same plane. So you
0:06:43.099,0:06:46.509
have to modify them both - modifying just[br]one has no effect, they're together. I guess in the
0:06:46.509,0:06:50.750
future they'll be separate. Then you[br]modify the offset to say, I want to raise
0:06:50.750,0:06:57.080
it by this much or lower it by this much.[br]So I thought, let's have a go. Let's write
0:06:57.080,0:07:05.599
a little bit of code. Here is the code.[br]The smart people amongst you may have
0:07:05.599,0:07:15.539
noticed something. I suspect that even with[br]my appalling C, even I would recognize that
0:07:15.539,0:07:20.810
that loop should never exit. I'm just[br]multiplying the same thing again and again
0:07:20.810,0:07:25.499
and again and again and again and[br]expecting it to exit. That shouldn't
0:07:25.499,0:07:32.439
happen. But let's look at what happened.[br]So I'm gonna show you what I did. Oh..
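The fault-checking loop Kit describes can be sketched as follows. This is a pure-Python stand-in for the C shown on stage, with a made-up second operand; since it does not undervolt anything (and Python integers cannot miscompute), it never observes a mismatch here:

```python
# Sketch of the fault-detection loop from the talk: repeat the same
# multiplication and compare against the known-correct product. On an
# undervolted CPU, the C version of this loop eventually produces a
# wrong product; run normally, it never exits early.
MAGIC = 0xDEADBEEF
BIG = 0x1122334455667788  # hypothetical second operand

expected = MAGIC * BIG
flipped_bits = 0
for _ in range(100_000):
    result = MAGIC * BIG
    if result != expected:                # a faulty multiplication
        flipped_bits = result ^ expected  # which bits differ
        break

print(hex(flipped_bits))  # 0x0 on a correctly working machine
```

XOR-ing the faulty and correct products, as above, is what lets Kit see that the flipped bits sit next to one another.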
0:07:32.439,0:07:41.620
There we go. So the first thing I'm gonna[br]do is I'm going to set the frequency to be
0:07:41.620,0:07:45.749
one thing because I'm gonna play with[br]voltage and if I'm gonna play with
0:07:45.749,0:07:51.210
voltage, I want the frequency to be[br]set. It's quite easy using cpupower:
0:07:51.210,0:07:56.530
you set the maximum and the minimum to be[br]1 gigahertz. And now my machine is running
0:07:56.530,0:08:01.169
at exactly 1 gigahertz. Now we'll look at[br]the bit of code that you need to
0:08:01.169,0:08:05.091
undervolt, again I didn't do the work,[br]thank you to the people on the internet
0:08:05.091,0:08:12.199
for doing this. You load the msr module[br]into the kernel, and let's have a look at the code.
0:08:12.199,0:08:21.030
Does that look right? Oh, it does, looks[br]much better up there. Yes, it's that one
0:08:21.030,0:08:27.061
line of code. That is the one line of code[br]you need to open and then we're going to
0:08:27.061,0:08:33.140
write to it. And again, oh why is it doing[br]that? We have a touch sensitive screen
0:08:33.140,0:08:52.670
here. Might touch it again. That's the[br]line of code that's gonna open it and
0:08:52.670,0:08:55.970
that's how you write to it. And again, the[br]people on the Internet did the work for me
0:08:55.970,0:08:59.030
and told me how I had to write that. So[br]what I can do here is I'm just going to
0:08:59.030,0:09:04.250
undervolt and I'm gonna undervolt,[br]multiplying deadbeef by this really big
0:09:04.250,0:09:08.660
number. I'm starting at minus two hundred[br]and fifty two millivolts. And we're just
0:09:08.660,0:09:11.140
going to see if I ever get out of this[br]loop.
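The interface Kit describes can be sketched in Python rather than the C shown on stage. The bit layout below follows what the linux-intel-undervolt project documents for MSR 0x150 and is an assumption, not an official Intel specification; `msr150_value` and `undervolt` are names made up for this sketch:

```python
import struct

# Assumed layout of a write to MSR 0x150 (per linux-intel-undervolt):
#   bit 63      - constant 1
#   bits 40-42  - voltage plane (0 = core, 2 = cache, ...)
#   bit 36      - constant 1; bit 32 - 1 for write, 0 for read
#   bits 21-31  - signed 11-bit offset in units of 1/1.024 mV
def msr150_value(plane: int, offset_mv: float) -> int:
    offset = round(offset_mv * 1.024) & 0x7FF  # 11-bit two's complement
    return (1 << 63) | (plane << 40) | (0x11 << 32) | (offset << 21)

def undervolt(offset_mv: float) -> None:
    # Write the same offset to the core (0) and cache (2) planes, since
    # the talk notes they sit on the same voltage plane and must match.
    # Requires root and the msr kernel module (modprobe msr).
    with open("/dev/cpu/0/msr", "wb", buffering=0) as msr:
        for plane in (0, 2):
            msr.seek(0x150)  # the MSR index is the file offset
            msr.write(struct.pack("<Q", msr150_value(plane, offset_mv)))
```

Something like `undervolt(-252)` would then correspond to the minus-252-millivolt starting point in the demo.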
0:09:11.140,0:09:14.020
Daniel: But surely the system would just[br]crash, right?
0:09:14.020,0:09:21.880
Kit: You'd hope so, wouldn't you? Let's[br]see, there we go! We got a fault. I was a
0:09:21.880,0:09:25.070
bit gobsmacked when that happened because[br]the system didn't crash.
0:09:25.070,0:09:29.790
Daniel: So that doesn't look too good. So[br]the question now is, what is the... So you
0:09:29.790,0:09:33.050
show some voltage here, some undervolting.[br]Kit: Yeah
0:09:33.050,0:09:36.690
Daniel: What undervolting is actually[br]required to get a bit flip?
0:09:36.690,0:09:40.760
Kit: We did a lot of tests. We didn't just[br]multiply by deadbeef. We also multiplied
0:09:40.760,0:09:44.860
by random numbers. So here I'm going to[br]just generate two random numbers. One is
0:09:44.860,0:09:50.210
going up to 0xffffff, one is going up to[br]0xff. I'm just going to try different, again
0:09:50.210,0:09:57.450
I'm going to try undervolting to see if I[br]get different bit flips. And again, I got
0:09:57.450,0:10:03.620
the same bit flipped, so I'm getting the[br]same one single bit flip there. Okay, so
0:10:03.620,0:10:08.000
maybe it's only ever going to be one bit[br]flip. Ah, I got a different bit flip and
0:10:08.000,0:10:12.210
again a different bit flip and it's,[br]you'll notice they always appear to be
0:10:12.210,0:10:17.060
bits together, next to one another. So to[br]answer Daniel's question: I pushed my
0:10:17.060,0:10:22.980
machine a lot in the process of doing[br]this, but I wanted to know what were good
0:10:22.980,0:10:29.330
values to undervolt at. And here they are.[br]We tried for all the frequencies. We tried
0:10:29.330,0:10:33.290
what was the base voltage, and then what[br]was the point at which we got the first
0:10:33.290,0:10:37.530
fault? And once we'd done that, it made[br]everything really easy. We just made sure
0:10:37.530,0:10:41.430
we didn't go under that and end up with[br]a kernel panic or the machine crashing.
0:10:41.430,0:10:47.160
Daniel: So this is already great. I think[br]this looks like it is exploitable and the
0:10:47.160,0:10:53.910
first thing that you need when you are[br]working on a vulnerability is the name and
0:10:53.910,0:11:00.821
the logo and maybe a website, everything[br]like that. And real people on the Internet
0:11:00.821,0:11:05.690
agree with me. Like this tweet.[br]Laughter
0:11:05.690,0:11:12.160
Daniel: Yes. So we need a name and a logo.[br]Kit: No, no, we don't need it. Come on.
0:11:12.160,0:11:15.121
then. Go on then. What is your idea?[br]Daniel: So I thought this is like, it's
0:11:15.121,0:11:20.920
like Rowhammer. We are flipping bits, but[br]with voltage. So I called it Volthammer
0:11:20.920,0:11:25.370
and I already have a logo for it.[br]Kit: We're not, we're not giving it a
0:11:25.370,0:11:27.580
logo.[br]Daniel: No, I think we need a logo because
0:11:27.580,0:11:34.880
people can relate more to the images[br]there, to the logo that we have. Reading a
0:11:34.880,0:11:39.140
word is much more complicated than seeing[br]a logo somewhere. It's better for
0:11:39.140,0:11:45.480
communication. You make it easier to talk[br]about your vulnerability. Yeah? And the
0:11:45.480,0:11:50.070
name, same thing. What, what would you[br]call it? Like undervolting on Intel to
0:11:50.070,0:11:54.350
induce flips in multiplications to then[br]run an exploit? No, that's not a good
0:11:54.350,0:12:02.250
vulnerability name. And speaking of the[br]name, if we choose a fancy name, we might
0:12:02.250,0:12:05.550
even make it into TV shows, like[br]Rowhammer did.
0:12:05.550,0:12:11.740
Video Clip 1A: The hacker used a DRAM[br]Rowhammer exploit to gain kernel privileges.
0:12:11.740,0:12:15.050
Video Clip 1B: HQ, yeah we've got[br]something.
0:12:15.050,0:12:20.690
Daniel: So this was in Designated Survivor[br]in March 2018 and this guy just got shot.
0:12:20.690,0:12:25.601
So hopefully we won't get shot. But[br]actually, we have also been working on this. My
0:12:25.601,0:12:32.830
group has been working on Rowhammer and[br]presented this in 2015 here at CCC, in
0:12:32.830,0:12:37.500
Hamburg back then. It was Rowhammer.js,[br]and we called it root privileges for web
0:12:37.500,0:12:40.661
apps because we showed that you can do[br]this from JavaScript in a browser. Looks
0:12:40.661,0:12:44.170
pretty much like this: we hammer the[br]memory a bit and then we see bit flips
0:12:44.170,0:12:49.690
in the memory. So how does this work?[br]Rowhammer is the only other
0:12:49.690,0:12:52.800
software-based fault attack that we[br]know. The DVFS attacks are related to
0:12:52.800,0:12:59.370
each other, but this is a different[br]effect. So what do we
0:12:59.370,0:13:03.870
do here is we look at the DRAM and the[br]DRAM is organized in multiple rows and we
0:13:03.870,0:13:10.050
will access these rows. These rows consist[br]of so-called cells, which are capacitors
0:13:10.050,0:13:14.450
and transistors each. And they store one[br]bit of information each. And the row
0:13:14.450,0:13:18.320
buffer has the row size, usually something[br]like eight kilobytes. And then when you
0:13:18.320,0:13:21.970
read something, you copy it to the row[br]buffer. So it works pretty much like this:
0:13:21.970,0:13:25.820
You read from a row, you copy it to the[br]row buffer. The problem now is, these
0:13:25.820,0:13:31.000
capacitors leak over time so you need to[br]refresh them frequently. And they have
0:13:31.000,0:13:37.660
also a maximum refresh interval defined in[br]a standard to guarantee data integrity.
0:13:37.660,0:13:43.150
Now the problem is that cells leak fast[br]upon proximate accesses, and that means if
0:13:43.150,0:13:49.450
you access two locations in proximity to a[br]third location, then the third location
0:13:49.450,0:13:54.110
might flip a bit without you accessing it.[br]And this has been exploited in different
0:13:54.110,0:13:58.710
exploits - maybe we can use some of[br]their strategies. So
0:13:58.710,0:14:03.370
the usual strategies here are searching[br]for a page with a bit flip. So you search
0:14:03.370,0:14:08.230
for it and then you find some. Ah, there[br]is a flip here. Then you release the page
0:14:08.230,0:14:13.180
with the flip in the next step. Now this[br]memory is free and now you allocate a lot
0:14:13.180,0:14:17.710
of target pages, for instance, page[br]tables, and then you hope that the target
0:14:17.710,0:14:22.460
page is placed there. If it's a page[br]table, for instance, like this and you
0:14:22.460,0:14:26.650
induce a bit flip. So before, it was[br]pointing to a user page; then it was
0:14:26.650,0:14:32.540
pointing to no page at all because we[br]maybe unmapped it. And the page that we
0:14:32.540,0:14:37.850
use the bit flip on now is actually the[br]one storing all of the PTEs here. So the one
0:14:37.850,0:14:42.990
in the middle is stored down there. And[br]this one now has a bit flip and then our
0:14:42.990,0:14:49.650
pointer to our own user page changes due[br]to the bit flip and points to hopefully
0:14:49.650,0:14:54.990
another page table because we filled that[br]memory with page tables. Another direction
0:14:54.990,0:15:01.840
that we could go here is flipping bits in[br]code. For instance, if you think about a
0:15:01.840,0:15:07.370
password comparison, you might have a jump[br]equal check here and the jump equal check
0:15:07.370,0:15:13.190
if you flip one bit, it transforms into a[br]different instruction. And fortunately, oh
0:15:13.190,0:15:18.290
this already looks interesting. Ah,[br]perfect. Changing the password check into a
0:15:18.290,0:15:25.670
password incorrect check. I will always be[br]root. And yeah, that's basically it. So
0:15:25.670,0:15:30.700
these are two directions that we might[br]look at for Row hammer. That's also maybe
0:15:30.700,0:15:35.030
a question for Row hammer, why would we[br]even care about other fault attacks?
0:15:35.030,0:15:39.820
Because Rowhammer works on DDR3, it[br]works on DDR4, it works on ECC memory.
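The instruction-flip idea from the password-check example can be made concrete: on x86, the conditional jumps JE and JNE differ in a single opcode bit, so one flipped bit inverts the branch condition. A minimal sketch:

```python
# x86 short conditional jumps: JE/JZ rel8 is opcode 0x74,
# JNE/JNZ rel8 is opcode 0x75 - they differ only in bit 0.
JE, JNE = 0x74, 0x75

flipped = JE ^ 0b1   # flip the lowest bit of the opcode
assert flipped == JNE  # "jump if password correct" becomes "jump if incorrect"

# Exactly one bit differs between the two instructions:
assert bin(JE ^ JNE).count("1") == 1
```

This is why a single well-placed bit flip in code is enough to turn a password check into its opposite.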
0:15:39.820,0:15:47.840
Kit: Does it, how does it deal with SGX?[br]Daniel: Ahh yeah, yeah SGX. Ehh, yes. So
0:15:47.840,0:15:51.420
maybe we should first explain what SGX is.[br]Kit: Yeah, go for it.
0:15:51.420,0:15:56.530
Daniel: SGX is a so-called TEE, a trusted[br]execution environment, on Intel processors,
0:15:56.530,0:16:01.660
and Intel designed it so that you[br]have an untrusted part, and this runs on
0:16:01.660,0:16:05.880
top of an operating system, inside an[br]application. And inside the application
0:16:05.880,0:16:10.660
you can now create an enclave and the[br]enclave runs in a trusted part, which is
0:16:10.660,0:16:16.790
supported by the hardware. The hardware is[br]the trust anchor for this trusted enclave
0:16:16.790,0:16:20.040
and now, from the[br]untrusted part, you can call into the
0:16:20.040,0:16:24.910
enclave via a call gate, pretty much like a[br]system call. And in there you execute a
0:16:24.910,0:16:31.670
trusted function. Then you return to this[br]untrusted part and then you can continue
0:16:31.670,0:16:35.330
doing other stuff. And the operating[br]system has no direct access to this
0:16:35.330,0:16:40.020
trusted part. This is also protected[br]against all kinds of other attacks. For
0:16:40.020,0:16:44.290
instance, physical attacks. If you look at[br]the memory that it uses, maybe I have 16
0:16:44.290,0:16:50.100
gigabytes of RAM. Then there is a small[br]region for the EPC, the enclave page
0:16:50.100,0:16:55.040
cache, the memory that enclaves use and[br]it's encrypted and integrity protected and
0:16:55.040,0:16:59.500
I can't tamper with it. So for instance,[br]if I want to mount a cold boot attack,
0:16:59.500,0:17:04.350
pull out the DRAM, put it in another[br]machine and read out what content it has.
0:17:04.350,0:17:07.970
I can't do that because it's encrypted.[br]And I don't have the key. The key is in
0:17:07.970,0:17:14.939
the processor. Quite bad. So, what happens[br]if we have bit flips in the EPC? Good
0:17:14.939,0:17:21.839
question. We tried that. The integrity[br]check fails. It locks up the memory
0:17:21.839,0:17:27.280
controller, which means no further memory[br]accesses whatsoever run through this
0:17:27.280,0:17:33.990
system. Everything stays where it is and[br]the system halts basically. It's no
0:17:33.990,0:17:41.420
exploit, it's just denial of service.[br]Kit: Huh. So maybe SGX can save us. So
0:17:41.420,0:17:47.360
what I want to know is: Rowhammer clearly[br]failed because of the integrity check. So is
0:17:47.360,0:17:51.830
my attack, where I can flip bits,[br]gonna work inside SGX?
0:17:51.830,0:17:55.040
Daniel: I don't think so because they[br]have integrity protection, right?
0:17:55.040,0:17:59.540
Kit: So what I'm gonna do is run the same[br]thing. On the right-hand side is user
0:17:59.540,0:18:03.750
space. On the left-hand side is the[br]enclave. As you can see, I'm running at
0:18:03.750,0:18:12.280
minus 261 millivolts. No error. Minus 262.[br]No error. Minus 2... fingers crossed we
0:18:12.280,0:18:20.920
don't get a kernel panic. Do you see that[br]thing at the bottom? That's a bit flip
0:18:20.920,0:18:24.760
inside the enclave. Oh, yeah.[br]Daniel: That's bad.
0:18:24.760,0:18:29.910
Applause[br]Kit: Thank you. Yeah and it's the same
0:18:29.910,0:18:33.920
bit flip that I was getting in user space,[br]which is also really interesting.
0:18:33.920,0:18:38.251
Daniel: I have an idea. So, it's[br]surprising that it works right. But I have
0:18:38.251,0:18:45.080
an idea. This is basically doing the same[br]thing as CLKSCREW, but on SGX, right?
0:18:45.080,0:18:47.320
Kit: Yeah.[br]Daniel: And I thought maybe you didn't
0:18:47.320,0:18:51.570
like the previous logo, maybe it was just[br]too much. So I came up with something more
0:18:51.570,0:18:52.800
simple...[br]Kit: You've come up with a new... He's
0:18:52.800,0:18:55.790
come up with a new name.[br]Daniel: Yes, SGX Screw. How do you like
0:18:55.790,0:18:59.001
it?[br]Kit: No, we don't even have an attack. We
0:18:59.001,0:19:02.150
can't have a logo before we have an[br]attack.
0:19:02.150,0:19:07.350
Daniel: The logo is important, right? I[br]mean, how would you present this on a
0:19:07.350,0:19:08.670
website[br]without a logo?
0:19:08.670,0:19:11.770
Kit: Well, first of all, I need an attack.[br]What am I going to attack with this?
0:19:11.770,0:19:15.060
Daniel: I have an idea what we could[br]attack. So, for instance, we could attack
0:19:15.060,0:19:22.300
crypto, RSA. RSA is a crypto algorithm.[br]It's a public key crypto algorithm. And
0:19:22.300,0:19:28.280
you can encrypt or sign messages. You can[br]send this over an untrusted channel. And
0:19:28.280,0:19:35.560
then you can also verify. There is[br]actually a typo on the slide: it
0:19:28.280,0:19:35.560
should be encrypt or verify messages with a[br]public key, and decrypt or sign messages with a
0:19:43.230,0:19:53.590
private key. So how does this work? Yeah,[br]basically it's based on exponentiation modulo a
0:19:53.590,0:20:01.270
number and this number is computed from[br]two prime numbers. So you, for the
0:20:01.270,0:20:09.360
signature part, which is similar to the[br]decryption basically, you take the hash of
0:20:09.360,0:20:17.760
the message and then take it to the power[br]of d modulo n, the public modulus, and
0:20:17.760,0:20:26.390
then you have the signature, and everyone[br]can later on verify that this is actually
0:20:17.760,0:20:26.390
the signature, because the exponent e is[br]public. n is also public, so we can
0:20:34.430,0:20:39.880
later on do this. Now there is one[br]optimization which is quite nice, which is
0:20:39.880,0:20:44.541
Chinese remainder theorem. And this part[br]is really expensive. It takes a long time.
0:20:44.541,0:20:51.000
So it's a lot faster, if you split this in[br]multiple parts. For instance, if you split
0:20:51.000,0:20:56.320
it in two parts, you do two of those[br]exponentiations, but with different
0:20:56.320,0:21:02.100
numbers, with smaller numbers and then it's[br]cheaper. It takes fewer rounds. And if you
0:21:02.100,0:21:06.880
do that, you of course have to adapt the[br]formula up here to compute the signature
0:21:06.880,0:21:12.510
because, you now put it together out of[br]the two pieces of the signature that you
0:21:12.510,0:21:19.390
compute. OK, so this looks quite[br]complicated, but the point is we want to
0:21:19.390,0:21:26.690
mount a fault attack on this. So what[br]happens if we fault this? Let's assume we
0:21:26.690,0:21:36.130
have two signatures which are not[br]identical. Right, S and S', and we
0:21:36.130,0:21:41.120
basically only need to know that in one of[br]them, a fault occurred. So the first is
0:21:41.120,0:21:45.140
something, the other is something else. We[br]don't care. But what you see here is that
0:21:45.140,0:21:51.510
both are something multiplied by q, plus[br]s2. And if you subtract one from the other, what do
0:21:51.510,0:21:56.970
you get? You get something multiplied with[br]Q. There is something else that is
0:21:56.970,0:22:03.480
multiplied with q: n, which is p times q[br]and is public. So what we can do now is we can
0:22:03.480,0:22:09.640
compute the greatest common divisor of[br]this and n and get q.
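The fault attack Daniel sketches can be demonstrated end to end with toy numbers. This is a minimal sketch with tiny, insecure primes; the real attack works the same way on full-size keys, using one correct and one faulty CRT signature:

```python
import math

# Toy RSA key (insecure sizes, for illustration only)
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

# CRT parameters used to speed up signing
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def sign_crt(m: int, fault: int = 0) -> int:
    """CRT signature; `fault` models a bit flip in the p-half."""
    sp = (pow(m, dp, p) + fault) % p
    sq = pow(m, dq, q)
    h = (q_inv * (sp - sq)) % p  # Garner recombination
    return sq + h * q

m = 42
s_good = sign_crt(m)           # correct signature
s_bad = sign_crt(m, fault=1)   # faulty signature, still correct mod q

# Both signatures agree mod q, so their difference is a multiple of q;
# n = p*q is another multiple of q, and the gcd recovers the factor.
recovered = math.gcd(abs(s_good - s_bad), n)
print(recovered == q)  # True
```

Once q is recovered, p = n // q follows immediately, and with p and q the whole private key can be reconstructed, which is exactly what happens on stage.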
0:22:09.640,0:22:14.730
Kit: Okay. So I'm interested to see if...[br]I didn't understand a word of that, but
0:22:14.730,0:22:19.890
I'm interested to see if I can use this to[br]mount an attack. So how am I going to do
0:22:19.890,0:22:25.690
this? Well, I'll write a little RSA[br]decrypt program and what I'll do is I use
0:22:25.690,0:22:32.330
the same bit of multiplication that I've[br]been using before. And when I get a bit
0:22:32.330,0:22:39.280
flip, then I'll do the decryption. All[br]this is happening inside SGX, inside the
0:22:39.280,0:22:44.141
enclave. So let's have a look at this.[br]First of all, I'll show you the code that
0:22:44.141,0:22:51.580
I wrote, again copied from the Internet.[br]Thank you. So there it is, I'm going to
0:22:51.580,0:22:56.380
trigger the fault. I'm going to wait for[br]the fault to trigger, then I'm going to do
0:22:56.380,0:23:00.870
a decryption. Well, let's have a quick[br]look at the code, which should be exactly
0:23:00.870,0:23:04.970
the same as it was right at the very[br]beginning when we started this. Yeah.
0:23:04.970,0:23:10.240
There's my deadbeef written slightly[br]differently. But there is my deadbeef. So,
0:23:10.240,0:23:13.730
now this is ever so slightly messy on the[br]screen, but I hope you're going to see
0:23:13.730,0:23:22.850
this. So minus 239. Fine. Still fine.[br]Still fine. I'll just pause there. You can
0:23:22.850,0:23:27.360
see at the bottom I've written "meh - all[br]fine", if you're wondering. So what we're
0:23:27.360,0:23:33.059
looking at here is a correct decryption[br]and you can see inside the enclave, I'm
0:23:33.059,0:23:38.340
initializing p and I'm initializing q. And[br]those are part of the private key. I
0:23:38.340,0:23:43.960
shouldn't be able to get those. So 239[br]isn't really working. Let's try going up
0:23:43.960,0:23:49.309
to minus 240. Oh oh oh oh! RSA error, RSA[br]error. Exciting!
0:23:49.309,0:23:51.680
Daniel: Okay, So this should work for the[br]attack then.
0:23:51.680,0:23:57.370
Kit: So let's have a look, again. I copied[br]somebody's attack from the Internet, which
0:23:57.370,0:24:04.210
they very kindly shared. It's called the[br]Lenstra attack. And again, I got an output.
0:24:04.210,0:24:08.150
I don't know what it is because I didn't[br]understand any of that crypto stuff.
0:24:08.150,0:24:09.620
Daniel: Me neither.[br]Kit: But let me have a look at the source
0:24:09.620,0:24:15.690
code and see if that exists anywhere in[br]the source code inside the enclave. It
0:24:15.690,0:24:22.180
does. I found p. And if I found p, I can[br]find q. So just to summarise what I've
0:24:22.180,0:24:31.830
done, from a bit flip I have got the[br]private key out of the SGX enclave and I
0:24:31.830,0:24:36.130
shouldn't be able to do that.[br]Daniel: Yes, yes and I think I have an
0:24:36.130,0:24:39.760
idea. So you didn't like the previous...[br]Kit: Ohh, I know where this is going. Yes.
0:24:39.760,0:24:45.980
Daniel: ...didn't like the previous name.[br]So I came up with something more cute and
0:24:45.980,0:24:52.740
relatable, maybe. So I thought, this is an[br]attack on RSA. So I called it Mufarsa.
0:24:52.740,0:24:57.520
Laughter[br]Daniel: My Undervolting Fault Attack On
0:24:57.520,0:24:59.700
RSA.[br]Kit: That's not even a logo. That's just a
0:24:59.700,0:25:02.260
picture of a lion.[br]Daniel: Yeah, yeah it's, it's sort of...
0:25:02.260,0:25:04.660
Kit: Disney are not going to let us use[br]that.
0:25:04.660,0:25:07.429
Laughter[br]Kit: Well it's not, is it Star Wars? No,
0:25:07.429,0:25:10.690
I don't know. OK. OK, so Daniel, I really[br]enjoyed it.
0:25:10.690,0:25:13.670
Daniel: I don't think you will like any of[br]the names I suggest.
0:25:13.670,0:25:17.940
Kit: Probably not. But I really enjoyed[br]breaking RSA. So what I want to know is
0:25:17.940,0:25:19.110
what else can I break?[br]Daniel: Well...
0:25:19.110,0:25:22.750
Kit: Give me something else I can break.[br]Daniel: If you don't like the RSA part, we
0:25:22.750,0:25:28.300
can also take other crypto. I mean there[br]is AES for instance, AES is a symmetric
0:25:28.300,0:25:33.540
key crypto algorithm. Again, you encrypt[br]messages, you transfer them over a public
0:25:33.540,0:25:40.000
channel, this time with both sides having[br]the key. You can also use that for
0:25:40.000,0:25:47.830
storage. AES internally uses a 4x4 state[br]matrix of 16 bytes, and it runs through
0:25:47.830,0:25:54.390
ten rounds, each consisting of an S-box[br]step, which basically replaces a byte by another byte,
0:25:54.390,0:25:59.030
some shifting of rows in this matrix, some[br]mixing of the columns, and then the round
0:25:59.030,0:26:03.150
key is added, which is computed from the[br]AES key that you provided to the
0:26:03.150,0:26:08.680
algorithm. We look at the last[br]rounds because we want to, again,
0:26:08.680,0:26:12.090
mount a fault attack, and there are[br]different differential fault attacks on
0:26:12.090,0:26:18.410
AES. The way this algorithm works is that[br]it propagates differences
0:26:18.410,0:26:22.870
through the remaining rounds. If you look at
0:26:22.870,0:26:28.300
the state matrix, which only has a[br]difference in the top left corner, then
0:26:28.300,0:26:33.830
this is how the state will propagate[br]through the 9th and 10th round. And you
0:26:33.830,0:26:42.470
can put up formulas to compute possible[br]values for the state up there. If you have
0:26:42.470,0:26:47.760
two encryptions which[br]only have a difference in exactly
0:26:47.760,0:26:57.350
that single state byte. Now, how does this[br]work in practice? Well, today everyone is
0:26:57.350,0:27:02.200
using AES-NI because that's super fast.[br]That's, again, an instruction set
0:27:02.200,0:27:07.510
extension by Intel and it's super fast.[br]Kit: Oh okay, I want to have a go. Right,
0:27:07.510,0:27:11.970
so let me have a look if I can break some[br]of these AES-NI instructions. So I'm going to
0:27:11.970,0:27:16.040
come at this slightly differently. Last[br]time I waited for a multiplication fault,
0:27:16.040,0:27:19.710
I'm going to do something slightly[br]different. What I'm going to do is put in
0:27:19.710,0:27:26.680
a loop two AES encryptions. And I wrote[br]this - I should say we
0:27:26.680,0:27:32.760
wrote this - using Intel's example[br]code. This should never fault. And we know
0:27:32.760,0:27:36.580
what we're looking for. What we're looking[br]for is a fault in the eighth round. So
0:27:36.580,0:27:42.370
let's see if we get faults with this. So[br]the first thing is I'm going to start at
0:27:42.370,0:27:47.510
minus 262 millivolt. What's interesting is[br]that you have to undervolt more when it's
0:27:47.510,0:27:57.350
cold so you can tell at what time of day I[br]ran these. Oh I got a fault, I got a fault.
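Why the round of the fault matters can be sketched by tracking which of the 16 state bytes a difference can reach. This simplified model only follows byte positions through ShiftRows and MixColumns (SubBytes and AddRoundKey change values, not positions, so they are omitted); it is a sketch of the diffusion idea, not real AES:

```python
def shift_rows(diff):
    # ShiftRows moves the byte in row r left by r positions.
    return {(r, (c - r) % 4) for (r, c) in diff}

def mix_columns(diff):
    # MixColumns spreads any difference to its entire 4-byte column.
    cols = {c for (_, c) in diff}
    return {(r, c) for r in range(4) for c in cols}

def spread(start_round):
    """Ciphertext bytes reachable by a 1-byte fault entering start_round."""
    diff = {(0, 0)}  # one faulty byte, top-left corner of the state
    for rnd in range(start_round, 11):
        diff = shift_rows(diff)
        if rnd != 10:  # the last round has no MixColumns
            diff = mix_columns(diff)
    return diff

print(len(spread(9)), len(spread(8)), len(spread(5)))  # 4 16 16
```

A fault entering round 9 (i.e., caused during round 8's computation) touches only four ciphertext bytes, the exploitable pattern the differential fault analysis needs; a fault from round 5 already floods the whole state, which is why those faults are useless here.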
0:27:57.350,0:28:01.950
Well, unfortunately... where did that[br]happen? That's actually in the fourth round.
0:28:01.950,0:28:04.480
I'm obviously, eh, fifth round, okay.[br]Daniel: You can't do anything with that.
0:28:04.480,0:28:09.530
Kit: You can't do anything, again in the[br]fifth round. Can't do anything with that,
0:28:09.530,0:28:14.800
fifth round again. Oh! Oh we got one. We[br]got one in the eighth round. And so it
0:28:14.800,0:28:20.710
means I can take these two ciphertexts and[br]I can use the differential fault attack. I
0:28:20.710,0:28:26.620
actually ran this twice in order to get[br]two pairs of faulty output because it made
0:28:26.620,0:28:30.650
it so much easier. And again, thank you to[br]somebody on the Internet for having
0:28:30.650,0:28:34.750
written a differential fault analysis[br]attack for me. You don't, you don't need
0:28:34.750,0:28:39.470
two, but it just makes it easy for the[br]presentation. So I'm now going to compare.
0:28:39.470,0:28:44.690
Let me just pause that a second, I used[br]somebody else's differential fault attack
0:28:44.690,0:28:49.600
and it gave me in one, for the first pair[br]it gave me 500 possible keys and for the
0:28:49.600,0:28:54.470
second it gave me 200 possible keys. I'm[br]overlapping them. And there was only one
0:28:54.470,0:28:59.860
key that matched both. And that's the key[br]that came out. And let's just again check
0:28:59.860,0:29:05.970
inside the source code, does that key[br]exist? What is the key? And yeah, that is
0:29:05.970,0:29:09.590
the key. So, again what I've...[br]Daniel: That is not a very good key,
0:29:09.590,0:29:14.210
though.[br]Kit: No, Ehhh... I think, if you think
0:29:14.210,0:29:17.640
about randomness, it's as good as any[br]other. Anyway, ehhh...
0:29:17.640,0:29:21.470
Laughter[br]Kit: What have I done? I have flipped a
0:29:21.470,0:29:29.370
bit inside SGX to create a fault in AES[br]New Instruction set that has enabled me to
0:29:29.370,0:29:33.870
get the AES key out of SGX. You shouldn't[br]be able to do that.
0:29:33.870,0:29:40.070
Daniel: So. So now that we have multiple[br]attacks, we should think about a logo and
0:29:40.070,0:29:43.280
a name, right?[br]Kit: This one better be good because the
0:29:43.280,0:29:46.960
other one wasn't very good.[br]Daniel: No, seriously, we are already
0:29:46.960,0:29:47.960
soon...[br]Kit: Okay.
0:29:47.960,0:29:51.430
Daniel: We are, we will write this out.[br]Send this to a conference. People will
0:29:51.430,0:29:56.510
like it, right. And I already have[br]a name and a logo for it. Kit: Come on
0:29:56.510,0:29:59.350
then.[br]Daniel: Crypto Vault Screw Hammer.
0:29:59.350,0:30:02.540
Laughter[br]Daniel: It's like, we attack crypto in a
0:30:02.540,0:30:07.299
vault, SGX, and it's like a, like the[br]Clockscrew and like Row hammer. And
0:30:07.299,0:30:11.610
like...[br]Kit: I don't think that's very catchy. But
0:30:11.610,0:30:19.840
let me tell you, it's not just crypto. So[br]we're faulting multiplication. So surely
0:30:19.840,0:30:23.780
there's another use for this other than[br]crypto. And this is where something really
0:30:23.780,0:30:27.890
interesting happens. For those of you who[br]are really good at C you can come and
0:30:27.890,0:30:33.870
explain this to me later. This is a really[br]simple bit of C. All I'm doing is getting
0:30:33.870,0:30:39.280
an offset of an array and taking the[br]address of that and putting it into a
0:30:39.280,0:30:43.929
pointer. Why is this interesting? Hmmm,[br]it's interesting because I want to know
0:30:43.929,0:30:47.800
what the compiler does with that. So I am[br]going to wave my magic wand and what the
0:30:47.800,0:30:53.030
compiler is going to do is it's going to[br]make this. Why is that interesting?
0:30:53.030,0:30:58.160
Daniel: Simple pointer arithmetic?[br]Kit: Hmmm. Well, we know that we can fault
0:30:58.160,0:31:02.290
multiplications. So we're no longer[br]looking at crypto. We're now looking at
0:31:02.290,0:31:08.860
just memory. So let's see if I can use[br]this as an attack. So let me try and
0:31:08.860,0:31:12.580
explain what's going on here. On the right[br]hand side, you can see the undervolting.
0:31:12.580,0:31:16.240
I'm going to create an enclave and I've[br]put it in debug mode so that I can see
0:31:16.240,0:31:20.360
what's going on. You can see the size of[br]the enclave because we've got the base and
0:31:20.360,0:31:28.750
the limit of it. And if we look at that in[br]a diagram, what that's saying is here. If
0:31:28.750,0:31:34.780
I can write anything at the top above[br]that, that will no longer be encrypted,
0:31:34.780,0:31:41.720
that will be unencrypted. Okay, let's[br]carry on with that. So, let's just write
0:31:41.720,0:31:46.450
that one statement again and again, that[br]pointer arithmetic again and again and
0:31:46.450,0:31:53.059
again whilst I'm undervolting and see what[br]happens. Oh, suddenly it changed and if
0:31:53.059,0:31:57.560
you look at where it's mapped it to, it[br]has mapped that pointer to memory that is
0:31:57.560,0:32:05.560
no longer inside SGX, it has put it into[br]untrusted memory. So we're just doing the
0:32:05.560,0:32:10.420
same statement again and again whilst[br]undervolting. Bosh, we've written
0:32:10.420,0:32:14.630
something that was in the enclave out of[br]the enclave. And I'm just going to display
0:32:14.630,0:32:19.350
the page of memory that we've got there to[br]show you what it was. And there's the one
0:32:19.350,0:32:24.580
line, it's deadbeef. And again, I'm just[br]going to look in my source code to see
0:32:24.580,0:32:30.030
what it was. Yeah, it's, you know you[br]know, endianness blah, blah, blah. I have
0:32:30.030,0:32:36.270
now not even used crypto. I have purely[br]used pointer arithmetic to take something
0:32:36.270,0:32:43.140
that was stored inside Intel's SGX and[br]moved it into user space where anyone can
0:32:43.140,0:32:46.380
read it.[br]Daniel: So, yes, I get your point. It's
0:32:46.380,0:32:48.750
more than just crypto, right?[br]Kit: Yeah.
0:32:48.750,0:32:57.490
Daniel: It's way beyond that. So we, we[br]leaked RSA keys. We leaked AES keys.
0:32:57.490,0:33:01.260
Kit: Go on... Yeah, we did not just that[br]though we did memory corruption.
0:33:01.260,0:33:06.340
Daniel: Okay, so. Yeah. Okay. Crypto Vault[br]Screw Hammer, point taken, is not the
0:33:06.340,0:33:10.980
ideal name, but maybe you could come up[br]with something. We need a name and a logo.
0:33:10.980,0:33:14.250
Kit: So pressure's on me then. Right, here[br]we go. So it's got to be due to
0:33:14.250,0:33:20.710
undervolting because we're undervolting.[br]Maybe we can get a pun on vault and volt
0:33:20.710,0:33:26.370
in there somewhere. We're stealing[br]something, aren't we? We're corrupting
0:33:26.370,0:33:30.590
something. Maybe. Maybe we're plundering[br]something.
0:33:30.590,0:33:31.880
Daniel: Yeah?[br]Kit: I know.
0:33:31.880,0:33:32.880
[br]Daniel: No?
0:33:32.880,0:33:37.250
Kit: Let's call it Plundervolt.[br]Daniel: Oh, no, no, no. That's not it.
0:33:37.250,0:33:38.309
That's not a good name.[br]Kit: What?
0:33:38.309,0:33:42.710
Daniel: That, no. We need something...[br]That's really not a good name. People will
0:33:42.710,0:33:51.080
hate this name.[br]Kit: Wait, wait, wait, wait, wait.
0:33:51.080,0:33:53.870
Daniel: No...[br]Laughter
0:33:53.870,0:33:57.049
Kit: You can read this if you like,[br]Daniel.
0:33:57.049,0:34:01.410
Daniel: Okay. I, I think I get it. I, I[br]think I get it.
0:34:01.410,0:34:16.730
Kit: No, no, I haven't finished.[br]Laughter
0:34:16.730,0:34:35.329
Daniel: Okay. Yeah, this is really also a[br]very nice comment. Yes. The quality of the
0:34:35.329,0:34:37.659
videos, I think you did a very good job[br]there.
0:34:37.659,0:34:40.879
Kit: Thank you.[br]Daniel: Also, the website really good job
0:34:40.879,0:34:42.619
there.[br]Kit: So, just to summarize, what we've
0:34:42.619,0:34:52.539
done with Plundervolt is: it's a new type[br]of attack, it breaks the integrity of SGX.
0:34:52.539,0:34:57.059
It's within SGX. We're doing stuff we[br]shouldn't be able to.
0:34:57.059,0:35:01.050
Daniel: Like AES keys, we leak AES keys,[br]yeah.
0:35:01.050,0:35:06.319
Kit: And we are retrieving the RSA[br]signature key.
0:35:06.319,0:35:11.109
Daniel: Yeah. And yes, we induced memory[br]corruption in bug free code.
0:35:11.109,0:35:20.019
Kit: And we made the enclave write secrets[br]to untrusted memory. This is the paper
0:35:20.019,0:35:27.609
that's been accepted next year. It is my[br]first paper, so thank you very much. Kit,
0:35:27.609,0:35:29.930
that's me.[br]Applause
0:35:29.930,0:35:38.950
Kit: Thank you. David Oswald, Flavio[br]Garcia, Jo Van Bulck and of course, the
0:35:38.950,0:35:46.411
infamous Frank Piessens. So all that[br]really remains for me to do is to say,
0:35:46.411,0:35:49.499
thank you very much for coming...[br]Daniel: Wait a second, wait a second.
0:35:49.499,0:35:53.440
There's one more thing, I think you[br]overlooked one of the tweets. I added it
0:35:53.440,0:35:56.509
here. You didn't see this slide yet?[br]Kit: I haven't seen this one.
0:35:56.509,0:36:00.900
Daniel: This one, I really like it.[br]Kit: It's a slightly ponderous pun on
0:36:00.900,0:36:06.329
Thunderbolt... pirate themed logo.[br]Daniel: A pirate themed logo. I really
0:36:06.329,0:36:13.079
like it. And if it's a pirate themed logo,[br]don't you think there should be a pirate
0:36:13.079,0:36:16.210
themed song?[br]Laughter
0:36:16.210,0:36:25.349
Kit: Daniel, have you written a pirate[br]theme song? Go on then, play it. Let's,
0:36:25.349,0:36:37.220
let's hear the pirate theme song.[br]music -- see screen --
0:36:37.220,0:37:09.229
Music: ...Volt down me enclaves yo ho. Aye[br]but it's fixed with a microcode patch.
0:37:09.229,0:37:30.369
Volt down me enclaves yo ho.[br]Daniel: Thanks to...
0:37:30.369,0:37:43.869
Applause[br]Daniel: Thanks to Manuel Weber and also to
0:37:43.869,0:37:47.480
my group at TU Graz for volunteering for[br]the choir.
0:37:47.480,0:37:51.980
Laughter[br]Daniel: And then, I mean, this is now the
0:37:51.980,0:37:58.727
last slide. Thank you for your attention.[br]Thank you for being here. And we would
0:37:58.727,0:38:02.369
like to answer questions in the Q&A.
0:38:02.369,0:38:07.079
Applause
0:38:07.079,0:38:13.789
Herald: Thank you for your great talk. And[br]thank you some more for the song. If you
0:38:13.789,0:38:18.720
have questions, please line up on the[br]microphones in the room. First question
0:38:18.720,0:38:22.640
goes to the signal angel, any question[br]from the Internet?
0:38:22.640,0:38:26.979
Signal-Angel: Not as of now, no.[br]Herald: All right. Then, microphone number
0:38:26.979,0:38:29.800
4, your question please.[br]Microphone 4: Hi. Thanks for the great
0:38:29.800,0:38:34.809
talk. So, why does this happen now? I[br]mean, thanks for the explanation of the wrong
0:38:34.809,0:38:38.440
number, but it wasn't clear. What's going[br]on there?
0:38:38.440,0:38:46.890
Daniel: So, if you look at circuits,[br]for the signal to be ready at the output,
0:38:46.890,0:38:53.729
electrons have to travel a bit.[br]If you increase the voltage, things will
0:38:53.729,0:39:00.430
go faster. So you will have the[br]output signal ready at an earlier point in
0:39:00.430,0:39:05.089
time. Now the frequency that you choose[br]for your processor should be related to
0:39:05.089,0:39:08.599
that. So if you choose the frequency too[br]high, the outputs will not be ready yet at
0:39:08.599,0:39:13.319
this circuit. And this is exactly what[br]happens, if you reduce the voltage the
0:39:13.319,0:39:17.489
outputs are not ready yet for the next[br]clock cycle.
0:39:17.489,0:39:22.720
Kit: And interestingly, we couldn't fault[br]really short instructions. So anything
0:39:22.720,0:39:26.400
like an add or an xor, it was basically[br]impossible to fault. So they had to be
0:39:26.400,0:39:30.859
complex instructions that probably weren't[br]finishing by the time the next clock tick
0:39:30.859,0:39:31.950
arrived.[br]Daniel: Yeah.
0:39:31.950,0:39:35.580
Microphone 4: Thank you.[br]Herald: Thanks for your answer. Microphone
0:39:35.580,0:39:38.960
number 4 again.[br]Microphone 4: Hello. It's a very
0:39:38.960,0:39:45.160
interesting theoretical approach I think.[br]But you were capable to break these crypto
0:39:45.160,0:39:53.049
mechanisms, for example, because you could[br]do zillions of iterations and you are sure
0:39:53.049,0:39:57.930
to trigger the fault. But in practice,[br]say, as someone is having a secure
0:39:57.930,0:40:03.859
conversation, is it practical, even close[br]to possible, to break it with that?
0:40:03.859,0:40:08.210
Daniel: It totally depends on your threat[br]model. So what can you do with the
0:40:08.210,0:40:12.789
enclave? If you, we are assuming that we[br]are running with root privileges here and
0:40:12.789,0:40:17.461
a root privileged attacker can certainly[br]run the enclave with certain inputs, again
0:40:17.461,0:40:21.970
and again. If the enclave doesn't have any[br]protection against replay, then certainly
0:40:21.970,0:40:25.759
we can mount an attack like that. Yes.[br]Microphone 4: Thank you.
0:40:25.759,0:40:30.640
Herald: Signal-Angel your question.[br]Signal: Somebody asked if the attack only
0:40:30.640,0:40:33.980
applies to Intel or to AMD or other[br]architectures as well.
0:40:33.980,0:40:37.900
Kit: Oh, good question, I suspect right[br]now there are people trying this attack on
0:40:37.900,0:40:41.599
AMD in the same way that when Clockscrew[br]came out, there were an awful lot of
0:40:41.599,0:40:46.759
people starting to do stuff on Intel as[br]well. We saw the Clockscrew attack on ARM
0:40:46.759,0:40:52.460
with frequency. Then we saw ARM with[br]voltage. Now we've seen Intel with
0:40:52.460,0:40:57.369
voltage. And someone else, VoltPwn, has[br]done something very similar
0:40:57.369,0:41:01.799
to us. And I suspect AMD is the next one.[br]I guess, because it's not out there as
0:41:01.799,0:41:06.789
much. We've tried to do them in the order[br]of, you know, scaring people.
0:41:06.789,0:41:10.130
Laughter[br]Kit: Scaring as many people as possible as
0:41:10.130,0:41:13.789
quickly as possible.[br]Herald: Thank you for the explanation.
0:41:13.789,0:41:18.319
Microphone number 4.[br]Microphone 4: Hi. Hey, great. Thanks for
0:41:18.319,0:41:25.339
the presentation. Can you get similar[br]results by hardware? I mean by tweaking
0:41:25.339,0:41:28.309
the voltage that you provide to the CPU[br]or...
0:41:28.309,0:41:32.680
Kit: Well, I refer you to my earlier[br]answer. I know for a fact that there are
0:41:32.680,0:41:37.099
people doing this right now with physical[br]hardware, seeing what they can do. Yes,
0:41:37.099,0:41:40.569
and I think it will not be long before[br]that paper comes out.
0:41:40.569,0:41:46.519
Microphone 4: Thank you.[br]Herald: Thanks. Microphone number one.
0:41:46.519,0:41:51.150
Your question. Sorry, microphone 4 again,[br]sorry.
0:41:51.150,0:41:57.920
Microphone 4: Hey, thanks for the talk.[br]Two small questions. One, why doesn't
0:41:57.920,0:42:07.789
anything break inside SGX when you do[br]these tricks? And second one, why when you
0:42:07.789,0:42:14.539
write outside the enclave's memory, the[br]value is not encrypted?
0:42:14.539,0:42:21.839
Kit: So the enclave is an encrypted area[br]of memory. So when it points to an
0:42:21.839,0:42:24.260
unencrypted area, it's just[br]going to write it to the unencrypted
0:42:24.260,0:42:28.650
memory. Does that make sense?[br]Daniel: From the enclaves perspective,
0:42:28.650,0:42:33.079
none of the memory is encrypted. This is[br]just transparent to the enclave. So if the
0:42:33.079,0:42:36.680
enclave writes to another memory[br]location, yes, it just won't be encrypted.
0:42:36.680,0:42:40.609
Kit: Yeah. And what's happening is we're[br]getting flips in the registers, which is
0:42:40.609,0:42:44.079
why I think we're not getting an integrity[br]check because the enclave is completely
0:42:44.079,0:42:48.150
unaware that anything's even gone wrong.[br]It's got a value in its memory and it's
0:42:48.150,0:42:51.230
gonna use it.[br]Daniel: Yeah. The integrity check is only
0:42:51.230,0:42:55.210
on the memory that you load from[br]RAM. Yeah.
0:42:55.210,0:43:02.589
Herald: Okay, microphone number 7.[br]Microphone 7: Yeah. Thank you. Interesting
0:43:02.589,0:43:11.950
work. I was wondering, you showed us the[br]example of the code that wrote outside the
0:43:11.950,0:43:17.229
enclave memory using simple pointer[br]arithmetic. Have you been able to talk to
0:43:17.229,0:43:23.559
Intel why this memory access actually[br]happens? I mean, you showed us the output
0:43:23.559,0:43:28.569
of the program. It crashes, but[br]nevertheless, it writes the result to the
0:43:28.569,0:43:34.469
resulting memory address. So there must be[br]something wrong, like the attack that
0:43:34.469,0:43:39.979
happened two years ago at the Congress[br]about, you know, all that stuff.
0:43:39.979,0:43:46.030
Daniel: So generally enclaves can read and[br]write any memory location in their host
0:43:46.030,0:43:52.819
application. We have also published papers[br]that basically argued that this might not
0:43:52.819,0:44:00.140
be a good design decision. But[br]that's the current design. And the reason
0:44:00.140,0:44:04.849
is that this makes interaction with the[br]enclave very easy. You can just place your
0:44:04.849,0:44:09.279
payload somewhere in the memory. Hand the[br]pointer to the enclave and the enclave can
0:44:09.279,0:44:13.810
use the data from there, maybe copy it[br]into the enclave memory if necessary, or
0:44:13.810,0:44:19.579
directly work on the data. So that's why[br]this memory access to the normal memory
0:44:19.579,0:44:24.500
region is not illegal.[br]Kit: And if you want to know more, you can
0:44:24.500,0:44:29.450
come and find Daniel afterwards.[br]Herald: Okay. Thanks for the answer.
0:44:29.450,0:44:32.730
Signal-Angel, the questions from the[br]Internet.
0:44:32.730,0:44:39.140
Signal-Angel: Yes. The question came up how[br]stable the system you're attacking with
0:44:39.140,0:44:42.150
the hammering[br]is while you're performing the attack.
0:44:42.150,0:44:46.180
Kit: It's really stable. Once I've been[br]through three months of crashing the
0:44:46.180,0:44:49.720
computer. I got to a point where I had a[br]really, really good frequency voltage
0:44:49.720,0:44:55.520
combination. And we did discover on all[br]Intel chips, it was different. So even, on
0:44:55.520,0:44:59.280
what looked like an[br]identical little NUC, we bought one with
0:44:59.280,0:45:05.670
exactly the same spec and it had a[br]different sort of frequency voltage model.
0:45:05.670,0:45:09.719
But once we'd done this sort of[br]benchmarking, you could pretty much do any
0:45:09.719,0:45:14.509
attack without it crashing at all.[br]Daniel: But without this benchmarking,
0:45:14.509,0:45:17.729
it's true. We would often reboot.[br]Kit: That was a nightmare, yeah. I wish I'd
0:45:17.729,0:45:20.440
done that at the beginning. It would've saved[br]me so much time.
0:45:20.440,0:45:25.019
Herald: Thanks again for answering.[br]Microphone number 4 your question.
0:45:25.019,0:45:29.260
Microphone 4: Can Intel fix this with a[br]microcode update?
0:45:29.260,0:45:36.549
Daniel: So, there are different approaches[br]to this. Of course, the quick fix is to
0:45:36.549,0:45:41.690
remove the access to the MSR, which is of[br]course inconvenient because you can't
0:45:41.690,0:45:45.240
undervolt your system anymore. So maybe[br]you want to choose whether you want to use
0:45:45.240,0:45:50.660
SGX or want to have a gaming computer[br]where you undervolt the system or control
0:45:50.660,0:45:56.219
the voltage from software. But is this a[br]real fix? I don't know. I think there are
0:45:56.219,0:45:58.729
more vectors, right?[br]Kit: Yeah. But, well, I'll be interested to
0:45:58.729,0:46:01.210
see what they're going to do with the next[br]generation of chips.
0:46:01.210,0:46:04.609
Daniel: Yeah.[br]Herald: All right. Microphone number 7,
0:46:04.609,0:46:08.859
what's your question?[br]Microphone 7: Yes, similarly to the other
0:46:08.859,0:46:14.170
question, is there a way you can prevent[br]such attacks when writing code that runs
0:46:14.170,0:46:17.820
in the secure enclave?[br]Kit: Well, no. That's the interesting
0:46:17.820,0:46:22.739
thing, it's really hard to do. Because we[br]weren't writing code with bugs, we were
0:46:22.739,0:46:26.999
just writing normal pointer arithmetic.[br]Normal crypto. If anywhere in your code,
0:46:26.999,0:46:29.549
you're using a multiplication, it can be[br]attacked.
0:46:29.549,0:46:34.750
Daniel: But of course, you could use fault[br]resistant implementations inside the
0:46:34.750,0:46:39.160
enclave. Whether that is a practical[br]solution is yet to be determined.
0:46:39.160,0:46:41.859
Kit: Oh yes, yeah, right, you could write[br]duplicate code and do comparison things
0:46:41.859,0:46:46.829
like that. But if, yeah.[br]Herald: Okay. Microphone number 3. What's
0:46:46.829,0:46:47.829
your question?
0:46:47.829,0:46:53.390
Microphone 3: Hi. I can't imagine Intel[br]being very happy about this and recently
0:46:53.390,0:46:57.450
they were under fire for how they were[br]handling a coordinated disclosure. So can
0:46:57.450,0:47:01.299
you summarize your experience?[br]Kit: They were... They were really nice.
0:47:01.299,0:47:06.380
They were really nice. We disclosed really[br]early, like before we had all of the
0:47:06.380,0:47:08.960
attacks.[br]Daniel: We just had a POC at that point.
0:47:08.960,0:47:11.239
Kit: Yeah.[br]Daniel: Yeah, simply a POC. Very simple.
0:47:11.239,0:47:14.890
Kit: They've been really nice. They wanted[br]to know what we were doing. They wanted to
0:47:14.890,0:47:18.660
see all our attacks. I found them lovely.[br]Daniel: Yes.
0:47:18.660,0:47:21.880
Kit: Am I allowed to say that?[br]Laughter
0:47:21.880,0:47:24.859
Daniel: I mean, they also have interest[br]in...
0:47:24.859,0:47:26.950
Kit: Yeah.[br]Daniel ...making these processes smooth.
0:47:26.950,0:47:30.279
So that vulnerability researchers also[br]report to them.
0:47:30.279,0:47:32.039
Kit: Yeah.[br]Daniel: Because if everyone says, oh this
0:47:32.039,0:47:37.700
was awful, then they will also not get a[br]lot of reports. But if they do their job
0:47:37.700,0:47:39.849
well and they did in our case.[br]Kit: Yeah.
0:47:39.849,0:47:44.450
Daniel: Then of course, it's nice.[br]Herald: Okay. Microphone number 4...
0:47:44.450,0:47:48.499
Daniel: We even got a bug bounty.[br]Kit: We did get a bug bounty. I didn't
0:47:48.499,0:47:51.499
want to mention that because I haven't[br]told my university yet.
0:47:51.499,0:47:55.430
Laughter[br]Microphone 4: Thank you. Thank you for the
0:47:55.430,0:48:01.799
funny talk. If I understood you right,[br]it means to really be able to exploit
0:48:01.799,0:48:07.249
this. You need to do some benchmarking on[br]the machine that you want to exploit. Do
0:48:07.249,0:48:15.239
you see any way to convert this to a[br]remote exploit? I mean, that to me, it
0:48:15.239,0:48:19.650
seems you need physical access right now[br]because you need to reboot the machine.
0:48:19.650,0:48:23.859
Kit: If you've done benchmarking on an[br]identical machine, I don't think you would
0:48:23.859,0:48:27.039
have to have physical access.[br]Daniel: But you would have to make sure
0:48:27.039,0:48:29.549
that it's really an identical machine.[br]Kit: Yeah.
0:48:29.549,0:48:33.499
Daniel: But in the cloud you will find a[br]lot of identical machines.
0:48:33.499,0:48:41.119
Laughter[br]Herald: Okay, microphone number 4 again.
0:48:41.119,0:48:46.059
Daniel: Also, as we said, like the[br]temperature plays an important role.
0:48:46.059,0:48:47.650
Kit: Yeah.[br]Daniel: You will also in the cloud find a
0:48:47.650,0:48:52.390
lot of machines at similar temperatures.[br]Kit: And there is obviously
0:48:52.390,0:48:55.569
stuff that we didn't show you. We did[br]start measuring the total amount of clock
0:48:55.569,0:49:00.259
ticks it took to do maybe 10 RSA[br]encryptions. And then we did start doing
0:49:00.259,0:49:03.820
very specific timing attacks. But[br]obviously it's much easier to just do
0:49:03.820,0:49:10.452
10000 of them and hope that one faults.[br]Herald: All right. Seems there are no
0:49:10.452,0:49:13.940
further questions. Thank you very much for[br]your talk. For your research and for
0:49:13.940,0:49:15.140
answering all the questions.[br]Applause
0:49:15.140,0:49:18.529
Kit: Thank you.[br]Daniel: Thank you.
0:49:18.529,0:49:22.479
postroll music
0:49:22.479,0:49:48.000
subtitles created by c3subtitles.de[br]in the year 20??. Join, and help us!