36C3 - TrustZone-M(eh): Breaking ARMv8-M's security

  • 0:00 - 0:15
    36C3 preroll music
  • 0:15 - 0:26
    Herald: Our next speaker's way is paved
    with broken trust zones. He's no stranger
  • 0:26 - 0:32
    to breaking ARM equipment, or crypto
    wallets or basically anything he touches.
  • 0:32 - 0:40
    It just dissolves in his fingers. He's one
    of Forbes' 30 Under 30 in tech. And
  • 0:40 - 0:43
    please give a warm round of applause to
    Thomas Roth.
  • 0:43 - 0:48
    Applause.
  • 0:48 - 0:55
    Thomas: Test, okay. Wonderful. Yeah.
    Welcome to my talk. TrustZone-M: Hardware
  • 0:55 - 1:01
    attacks on ARMv8-M security features. My
    name is Thomas Roth. You can find me on
  • 1:01 - 1:06
    Twitter. I'm @stacksmashing and I'm a
    security researcher, consultant and
  • 1:06 - 1:11
    trainer affiliated with a couple of
    companies. And yeah, before we can start,
  • 1:11 - 1:16
    I need to thank some people. So first
    off, Josh Datko and Dimitri Nedospasov
  • 1:16 - 1:20
    who've been super helpful any time I was
    stuck somewhere, or just wanted some
  • 1:20 - 1:26
    feedback. They immediately helped me. And
    also Colin O'Flynn, who gave me constant
  • 1:26 - 1:31
    feedback and helped me with some troubles,
    gave me tips and so on. And so without
  • 1:31 - 1:37
    these people and many more who paved the
    way towards this research, I wouldn't be
  • 1:37 - 1:42
    here. Also, thanks to NXP and Microchip
    who I had to work with as part of this
  • 1:42 - 1:48
    talk. And it was awesome. I had a lot of
    very bad vendor experiences, but these two
  • 1:48 - 1:55
    were really nice to work with. Also some
    prior work. So Colin O'Flynn and Alex
  • 1:55 - 1:59
    Dewar released a paper, I guess last year
    or this year "On-Device Power Analysis
  • 1:59 - 2:04
    Across Hardware Security Domains". And
    they basically looked at TrustZone from a
  • 2:04 - 2:10
    differential power analysis viewpoint and
    otherwise TrustZone-M is pretty new, but
  • 2:10 - 2:16
    lots of work has been done on the big or
    real TrustZone and also lots and lots of
  • 2:16 - 2:21
    work on fault injection - it would be far too
    much to list here. So just google fault
  • 2:21 - 2:26
    injection and you'll see what I mean.
    Before we start, what is TrustZone-M? So
  • 2:26 - 2:32
    TrustZone-M is the small TrustZone. It's
    basically a simplified version of the big
  • 2:32 - 2:36
    TrustZone that you find on Cortex-A
    processors. So basically if you have an
  • 2:36 - 2:40
    Android phone, chances are very high that
    your phone actually runs TrustZone and
  • 2:40 - 2:45
    that, for example, your key store of
    Android is backed by TrustZone. And
  • 2:45 - 2:51
    TrustZone basically splits the CPU into a
    secure and a non-secure world. And so, for
  • 2:51 - 2:54
    example, you can say that a certain
    peripheral should only be available to the
  • 2:54 - 2:58
    secure world. So, for example, if you have
    a crypto accelerator, you might only want
  • 2:58 - 3:04
    to use it in the secure world. It also, if
    you're wondering what's the difference to
  • 3:04 - 3:11
    an MMU - it also comes with two MPUs.
    Sorry, not MMUs, MPUs. And so last year we
  • 3:11 - 3:15
    gave a talk on bitcoin wallets. And so
    let's take those as an example on a
  • 3:15 - 3:20
    bitcoin wallet you often have different
    apps, for example, for Bitcoin, Dogecoin
  • 3:20 - 3:25
    or Monero, and then underneath you have an
    operating system. The problem is kind of
  • 3:25 - 3:29
    this operating system is very complex
    because it has to handle graphics
  • 3:29 - 3:33
    rendering and so on and so forth. And
    chances are high that it gets compromised.
  • 3:33 - 3:38
    And if it gets compromised, all your funds
    are gone. And so with TrustZone, you could
  • 3:38 - 3:43
    basically have a second operating system
    separated from your normal one that
  • 3:43 - 3:47
    handles all the important stuff like
    firmware updates, key store, attestation and
  • 3:47 - 3:53
    so on and reduces your attack surface. And
    the reason I actually looked at
  • 3:53 - 3:57
    TrustZone-M is we got a lot of requests
    for consulting on TrustZone-M. So
  • 3:57 - 4:03
    basically, after our talk last year, a lot
    of companies reached out to us and said,
  • 4:03 - 4:07
    okay, we want to do this, but more
    securely. And a lot of them try to use
  • 4:07 - 4:12
    TrustZone-M for this. And so far there's
    been, as far as I know, little public
  • 4:12 - 4:17
    research into TrustZone-M and whether it's
    protected against certain types of
  • 4:17 - 4:21
    attacks. And we also have companies that
    start using them as secure chips. So, for
  • 4:21 - 4:25
    example, in the automotive industry, I
    know somebody who was thinking about
  • 4:25 - 4:29
    putting them into car keys. I know about
    some people in the payment industry
  • 4:29 - 4:35
    evaluating this. And as said, hardware
    wallets. And one of the terms that come up
  • 4:35 - 4:40
    again and again is: "this is a secure chip".
    But I mean, what is a secure chip
  • 4:40 - 4:45
    without a threat model? There's no such
    thing as a secure chip because there are
  • 4:45 - 4:49
    so many attacks and you need to have a
    threat model to understand what are you
  • 4:49 - 4:53
    actually protecting against. So, for
    example, a chip might have software
  • 4:53 - 4:59
    features or hardware features that make
    the software more secure, such as NX bit
  • 4:59 - 5:03
    and so on and so forth. And on the other
    hand, you have hardware attacks, for
  • 5:03 - 5:08
    example, debug ports, side-channel attacks
    and fault injection. And often the
  • 5:08 - 5:14
    description of a chip doesn't really tell
    you what it's protecting you against. And
  • 5:14 - 5:19
    often I would even say it's misleading in
    some cases. And so you will see, oh, this
  • 5:19 - 5:23
    is a secure chip and you ask marketing and
    they say, yeah, it has the most modern
  • 5:23 - 5:28
    security features. But it doesn't really
    specify whether they are, for example,
  • 5:28 - 5:32
    protecting against fault injection attacks
    or whether they consider this out of
  • 5:32 - 5:38
    scope. In this talk, we will exclusively
    look at hardware attacks and more
  • 5:38 - 5:42
    specifically, we will look at fault
    injection attacks on TrustZone-M. And so
  • 5:42 - 5:47
    all of the attacks we're going to see are
    local to the device only - you need to have
  • 5:47 - 5:52
    it in your hands. And there's no chance,
    normally, of remotely exploiting them.
  • 5:52 - 5:59
    Yeah. So this will be our agenda. We will
    start with a short introduction of
  • 5:59 - 6:02
    TrustZone-M, which will have a lot of
    theory on like memory layouts and so on.
  • 6:02 - 6:06
    We will talk a bit about the fault-injection
    setup and then we will start
  • 6:06 - 6:13
    attacking real chips. These 3, as you will
    see. So on a Cortex-M processor you have a
  • 6:13 - 6:17
    flat memory map. You don't have a memory
    management unit and all your peripherals,
  • 6:17 - 6:22
    your flash, your RAM, it's all mapped to a
    certain address in memory and TrustZone-M
  • 6:22 - 6:28
    allows you to partition your flash or your
    ram into secure and non secure parts. And
  • 6:28 - 6:31
    so, for example, you could have a tiny
    secure area because your secret code is
  • 6:31 - 6:37
    very small and a big non secure area. The
    same is true for RAM and also for the
  • 6:37 - 6:43
    peripherals. So for example, if you have a
    display and a crypto engine and so on. You
  • 6:43 - 6:49
    can decide whether these peripherals
    should be secure or non secure. And so
  • 6:49 - 6:53
    let's talk about these two security
    states: secure and non secure. Well, if
  • 6:53 - 6:58
    you have code running in secure flash or
    you have secure code running, it can call
  • 6:58 - 7:03
    anywhere into the non secure world. It's
    basically the highest privilege level you
  • 7:03 - 7:08
    can have. And so there's no protection
    there. However, the opposite, if we tried
  • 7:08 - 7:12
    to go from the non secure world and to the
    secure world would be insecure because,
  • 7:12 - 7:15
    for example, you could jump to the parts
    of the code that are behind certain
  • 7:15 - 7:20
    protections and so on. And so that's why,
    if you try to jump from non secure
  • 7:20 - 7:27
    code into secure code, it will cause an
    exception. And to handle that, there's a
  • 7:27 - 7:32
    third memory state which is called non
    secure callable. And as the name implies,
  • 7:32 - 7:38
    basically your non secure code can call
    into the non secure callable code. More
  • 7:38 - 7:43
    specifically, it can only call to non
    secure callable code addresses where
  • 7:43 - 7:50
    there's an SG instruction which stands for
    Secure Gateway. And the idea behind the
  • 7:50 - 7:54
    secure gateway is that if you have a non
    secure kernel running, you probably also
  • 7:54 - 7:58
    have a secure kernel running. And
    somehow this secure kernel will expose
  • 7:58 - 8:03
    certain system calls, for example. And so
    we want to somehow call from the non
  • 8:03 - 8:09
    secure kernel into these system calls, but
    as I've just mentioned, we can't do that
  • 8:09 - 8:15
    because this will unfortunately cause an
    exception. And so the way this is handled
  • 8:15 - 8:20
    on TrustZone-M is that you create so-
    called secure gateway veneer functions.
  • 8:20 - 8:25
    These are very short functions in the non
    secure callable area. And so if we want,
  • 8:25 - 8:30
    for example, to call the load key system
    call, we would call the load key veneer
  • 8:30 - 8:35
    function, which in turn would call the
    real load key function. And these veneer
  • 8:35 - 8:40
    functions are super short. So if you look
    at the disassembly of them, it's like two
  • 8:40 - 8:44
    instructions. It's the secure gateway
    instruction and then a branch instruction
  • 8:44 - 8:52
    to your real function. And so if we
    combine this, we end up with this diagram
  • 8:52 - 8:57
    secure can call into non secure, non
    secure, can call into NSC and NSC can call
  • 8:57 - 9:04
    into your secure world.
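    For reference, this is what such a secure entry function looks like in C
    with a CMSE-aware toolchain (GCC or armclang with -mcmse); a minimal
    sketch, with made-up function names:
    ```c
    #include <arm_cmse.h>   /* ARMv8-M Security Extensions (CMSE) */

    static int do_load_key(int key_id)        /* placeholder secure logic */
    {
        return key_id;
    }

    /* The attribute makes the toolchain emit the two-instruction veneer
     * described above (SG, then B.W to the real function) in the
     * non-secure-callable section - the only place non-secure code is
     * allowed to branch into. */
    int __attribute__((cmse_nonsecure_entry)) load_key(int key_id)
    {
        return do_load_key(key_id);           /* runs in the secure world */
    }
    ```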
  • 9:04 - 9:09
    But how do we manage these memory states? How
    do we know what security state an address has?
    And so for this in TrustZone-M, we use
  • 9:09 - 9:14
    something called attribution units and
    by default there are two
  • 9:14 - 9:19
    attribution units available. The first one
    is the SAU the Security Attribution Unit,
  • 9:19 - 9:24
    which is standard across chips. It's
    basically defined by ARM how you use this.
  • 9:24 - 9:29
    And then there's the IDAU. The
    Implementation Defined Attribution Unit,
  • 9:29 - 9:34
    which is basically custom to the silicon
    vendor, but can also be the same across
  • 9:34 - 9:41
    several chips. And to get the security
    state of an address, the security
  • 9:41 - 9:47
    attribution of both the SAU and the IDAU
    are combined and whichever one has the
  • 9:47 - 9:53
    higher privilege level will basically win.
    And so let's say our SAU says this address
  • 9:53 - 9:59
    is secure and our IDAU says this address
    is non secure, the SAU wins because it's
  • 9:59 - 10:06
    the higher privilege level. And basically
    our address would be considered secure.
  • 10:06 - 10:12
    This is a short table. If both the SAU and
    the IDAU say non secure, we will be non secure. If
  • 10:12 - 10:17
    both say, hey, this is secure, it will be
    secure. However, if they disagree and the
  • 10:17 - 10:23
    SAU says, hey, this address is secure, and the
    IDAU says it's non secure, it will still
  • 10:23 - 10:27
    be secure, because secure is the higher privilege
    level. The same holds the other way around. And
  • 10:27 - 10:34
    even with non secure callable: secure
    is more privileged than NSC. And so secure
  • 10:34 - 10:41
    will win. But if we mix NS and NSC, we get
    non secure callable.
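    Put differently, the more privileged attribution always wins. A tiny
    sketch of the combination rule (illustrative only, not from any SDK):
    ```c
    /* Security states, least to most privileged: NS < NSC < S. */
    typedef enum { NS = 0, NSC = 1, S = 2 } sec_state_t;

    /* Combined attribution per the table above: S+NS -> S, NS+NSC -> NSC,
     * and two equal states stay as they are. */
    static sec_state_t combined_state(sec_state_t sau, sec_state_t idau)
    {
        return (sau > idau) ? sau : idau;
    }
    ```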
  • 10:41 - 10:46
    Okay. My initial hypothesis when I read all of this was:
    if we break or disable the attribution units,
  • 10:46 - 10:52
    we probably break the security. And so to
    break these, we have to understand them.
  • 10:52 - 10:58
    And so let's look at the SAU the security
    attribution unit. It's standardized by
  • 10:58 - 11:02
    ARM. It's not available on all chips. And
    it basically allows you to create memory
  • 11:02 - 11:09
    regions with different security states.
    So, for example, if the SAU is turned off,
  • 11:09 - 11:13
    everything will be considered secure. And
    if we turn it on, but no regions are
  • 11:13 - 11:17
    configured, still, everything will be
    secure. We can then go and add, for
  • 11:17 - 11:24
    example, address ranges and make them NSC
    or non secure and so on. And this is done
  • 11:24 - 11:29
    very, very easily. You basically have
    these five registers. You have the SAU
  • 11:29 - 11:35
    control register where you basically can
    turn it on or off. You have the SAU type,
  • 11:35 - 11:38
    which gives you the number of supported
    regions on your platform because this can
  • 11:38 - 11:43
    be different across different chips. And
    then we have the region number register,
  • 11:43 - 11:46
    which you use to select the region you
    want to configure and then you set the
  • 11:46 - 11:50
    base address and the limit address. And
    that's basically it. So, for example, if
  • 11:50 - 11:57
    we want to set region zero, we simply set
    the RNR register to zero. Then we set the
  • 11:57 - 12:06
    base address to 0x1000. We set the limit
    address to 0x1FE0, which is identical to
  • 12:06 - 12:09
    1FFF because there are some other bits
    behind there that we don't care about
  • 12:09 - 12:15
    right now. And then we turn on the
    security attribution unit and now our
  • 12:15 - 12:19
    memory range is marked as non secure. If we
    want to create a second region, we simply
  • 12:19 - 12:26
    change RNR to, for example, 1, again insert
    some nice addresses. Turn on the SAU and
  • 12:26 - 12:34
    we have a second region, this time from
    0x4000 to 0x5FFF.
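    A minimal C sketch of exactly this sequence, using the architecturally
    fixed ARMv8-M SAU register addresses (CMSIS-style names; not the talk's
    code):
    ```c
    #include <stdint.h>

    /* ARMv8-M SAU registers (architecturally fixed addresses). */
    #define SAU_CTRL (*(volatile uint32_t *)0xE000EDD0u)
    #define SAU_RNR  (*(volatile uint32_t *)0xE000EDD8u)
    #define SAU_RBAR (*(volatile uint32_t *)0xE000EDCCu + 4) /* see below */

    /* Clearer: use the real offsets directly. */
    #undef  SAU_RBAR
    #define SAU_RBAR (*(volatile uint32_t *)0xE000EDDCu)
    #define SAU_RLAR (*(volatile uint32_t *)0xE000EDE0u)

    static void sau_setup(void)
    {
        SAU_RNR  = 0;                /* select region 0                     */
        SAU_RBAR = 0x00001000u;      /* base address                        */
        SAU_RLAR = 0x00001FE0u | 1u; /* limit 0x1FFF (low bits are flags);
                                        bit 0 enables the region, bit 1
                                        would make it NSC instead of NS     */

        SAU_RNR  = 1;                /* region 1: 0x4000 - 0x5FFF           */
        SAU_RBAR = 0x00004000u;
        SAU_RLAR = 0x00005FE0u | 1u;

        SAU_CTRL = 1u;               /* ENABLE - everything outside the
                                        regions stays secure                */
    }
    ```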
  • 12:34 - 12:40
    So to summarize, we have three memory security states.
    We have S, secure, and we have NSC, non secure callable,
  • 12:40 - 12:46
    and we have NS non secure. We also have
    the two attribution units, the SAU
  • 12:46 - 12:53
    standardized by ARM, and the IDAU, which is
    potentially custom. We will use SAU and
  • 12:53 - 13:00
    IDAU a lot. So this was very important.
    Cool. Let's talk about fault injection. So
  • 13:00 - 13:06
    as I've mentioned, we want to use fault
    injection to compromise TrustZone. And the
  • 13:06 - 13:11
    idea behind fault injection or as it's
    also called glitching is to introduce
  • 13:11 - 13:15
    faults into a chip. So, for example, you
    cut the power for a very short amount of
  • 13:15 - 13:19
    time, or you change the period of the
    clock signal or even you could go and
  • 13:19 - 13:24
    inject electromagnetic shocks in your
    chip. You can also shoot at it with a
  • 13:24 - 13:29
    laser and so on and so forth. Lots of ways
    to do this. And the goal of this is
  • 13:29 - 13:34
    to cause undefined behavior. And in this
    talk, we will specifically look at
  • 13:34 - 13:40
    something called voltage glitching. And so
    the idea behind voltage glitching is that
  • 13:40 - 13:45
    we cut the power to the chip for very,
    very short amount of time at a very
  • 13:45 - 13:49
    precisely timed moment. And this will
    cause some interesting behavior. So
  • 13:49 - 13:57
    basically, if you looked at this on an
    oscilloscope, you would see a
  • 13:57 - 14:03
    stable voltage, stable voltage, stable
    voltage, and then suddenly it drops and
  • 14:03 - 14:08
    immediately returns. And this drop will
    only be a couple of nanoseconds long. And
  • 14:08 - 14:13
    so, for example, you can have glitches
    that are 10 nanoseconds long or 15
  • 14:13 - 14:19
    nanoseconds long and so on. Depends on
    your chip. And yeah. And this allows you
  • 14:19 - 14:24
    to do different things. So, for example, a
    glitch can allow you to skip instructions.
  • 14:24 - 14:29
    It can corrupt flash reads or flash
    writes. It can corrupt memory or
  • 14:29 - 14:35
    register reads and writes. And skipping
    instructions for me is always the most
  • 14:35 - 14:40
    interesting one, because it allows you to
    directly go from disassembly to
  • 14:40 - 14:45
    understanding what you can potentially
    jump over. So, for example, if we have
  • 14:45 - 14:51
    some code, this would be a basic firmware
    boot up code. We have an initialized
  • 14:51 - 14:55
    device function. Then we have a function
    that basically verifies the firmware
  • 14:55 - 15:00
    that's in flash and then we have this
    boolean check whether our firmware was
  • 15:00 - 15:05
    valid. And now if we glitch at just the
    right time, we might be able to glitch
  • 15:05 - 15:13
    over this check and boot our potentially
    compromised firmware, which is super nice.
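    Reconstructed in C, the code on the slide looks roughly like this (the
    function names are placeholders):
    ```c
    #include <stdbool.h>

    void init_device(void);       /* placeholders for the slide's functions */
    bool verify_firmware(void);
    void enter_recovery(void);
    void boot_firmware(void);

    void boot(void)
    {
        init_device();

        bool valid = verify_firmware();  /* check the image in flash */

        if (!valid) {                    /* <-- a well-timed glitch can skip */
            enter_recovery();            /*     this branch entirely         */
        }

        boot_firmware();                 /* then a bad image boots anyway    */
    }
    ```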
  • 15:13 - 15:19
    So how does this relate to TrustZone?
    Well, if we manage to glitch over enable
  • 15:19 - 15:26
    TrustZone, we might be able to break
    TrustZone. So how do you actually do this?
  • 15:26 - 15:31
    Well, we need something to wait for a
    certain delay and generate a pulse at just
  • 15:31 - 15:36
    the right time with very high precision.
    We are talking about nanoseconds here,
  • 15:36 - 15:40
    and we also need something to drop the
    power to the target. And so if you need
  • 15:40 - 15:46
    precise timing and so on, what works very
    well is an FPGA. And so, for example, the
  • 15:46 - 15:52
    code that was released as part of this all
    runs on the Lattice iCEstick, which is
  • 15:52 - 15:57
    roughly 30 bucks and you need a cheap
    MOSFET, and so together this is like
  • 15:57 - 16:02
    thirty-one dollars of equipment. And on a setup
    side, this looks something like this. You
  • 16:02 - 16:07
    would have your FPGA, which has a trigger
    input. And so, for example, if you want to
  • 16:07 - 16:10
    glitch something during the boot-up of a
    chip, you could connect this to the reset
  • 16:10 - 16:15
    line of the chip. And then we have an
    output for the glitch pulse. And then if
  • 16:15 - 16:21
    we hook this all up, we basically have our
    power supply to the chip run over a
  • 16:21 - 16:27
    MOSFET. And then if the glitch pulse goes
    high, we drop the power to ground and the
  • 16:27 - 16:33
    chip doesn't get power for a couple of
    nanoseconds. Let's talk about this power
  • 16:33 - 16:39
    supply, because a chip has a lot of
    different things inside of it. So, for
  • 16:39 - 16:45
    example, a microcontroller has a CPU core.
    We have a Wi-Fi peripheral. We have GPIO.
  • 16:45 - 16:51
    We might have Bluetooth and so on. And
    often these peripherals run at different
  • 16:51 - 16:57
    voltages. And so while our microcontroller
    might just have a 3.3 volt input,
  • 16:57 - 17:00
    internally there are a lot of different
    voltages at play. And the way these
  • 17:00 - 17:05
    voltages are generated often is using
    in-chip regulators. And basically these
  • 17:05 - 17:11
    regulators take the 3.3 volts in
    and then generate the different voltages
  • 17:11 - 17:17
    for the CPU core and so on. But what's
    nice is that on a lot of chips there are
  • 17:17 - 17:22
    behind the core regulator so-called
    bypass capacitors, and these external
  • 17:22 - 17:26
    capacitors are basically there to
    stabilize the voltage because regulators
  • 17:26 - 17:32
    tend to have a very noisy output and you
    use the capacitor to make it more smooth.
  • 17:32 - 17:37
    But if you look at this, this also gives
    us direct access to the CPU core power
  • 17:37 - 17:42
    supply. And so if we just take a heat gun
    and remove the capacitor, we actually kind
  • 17:42 - 17:47
    of change the pinout of the processor,
    because now we have a 3.3 volt input, we
  • 17:47 - 17:53
    have a point to input the core voltage and
    we have ground. So we basically gained
  • 17:53 - 18:00
    direct access to the internal CPU core
    voltage rails. The only problem is these
  • 18:00 - 18:05
    capacitors are there for a reason. And so if we
    remove them, then your chip might stop
  • 18:05 - 18:10
    working. But very easy solution. You just
    hook up a power supply to it, set it to
  • 18:10 - 18:15
    1.2 volts or whatever, and then suddenly
    it works. And this also allows you to
  • 18:15 - 18:23
    glitch very easily. You just glitch on
    your power rail towards the chip. And so
  • 18:23 - 18:27
    this is our current setup. So we have the
    Lattice iCEstick. We also use a
  • 18:27 - 18:31
    multiplexer as an analog switch to cut the
    power to the entire device. If we want to
  • 18:31 - 18:37
    reboot everything, we have the MOSFET and
    we have a power supply. Now hooking this
  • 18:37 - 18:42
    all up on a breadboard is fun the first
    time, it's okay the second time. But the
  • 18:42 - 18:47
    third time it begins to really, really
    suck. And as soon as something breaks with
  • 18:47 - 18:52
    like 100 jumper wires on your desk, the
    only way to debug is to start over. And so
  • 18:52 - 18:57
    that's why I decided to design a small
    hardware platform that combines all of
  • 18:57 - 19:03
    these things. So it has an FPGA on it. It
    has analog input and it has a lot of
  • 19:03 - 19:08
    glitch circuitry and it's called the Mark
    Eleven. If you've read William Gibson, you
  • 19:08 - 19:13
    might know where this is from. And it
    contains a Lattice iCE40, which has a
  • 19:13 - 19:18
    fully open source toolchain, thanks to
    Clifford Wolf and others. And this allows us
  • 19:18 - 19:23
    to very, very quickly develop new
    triggers, develop new glitch code and so
  • 19:23 - 19:27
    on. And it makes compilation and
    everything really really fast. It also
  • 19:27 - 19:32
    comes with three integrated power
    supplies. So we have a 1.2 volt power
  • 19:32 - 19:38
    supply, 3.3, 5 volts and so on, and you
    can use it for DPA. And this is based
  • 19:38 - 19:43
    around some existing devices. So, for
    example, the FPGA part is based on the
  • 19:43 - 19:49
    1BitSquared iCEBreaker. The analog front
    end, thanks to Colin O'Flynn, is based on
  • 19:49 - 19:54
    the ChipWhisperer Nano. And then the
    glitch circuit is basically what we've
  • 19:54 - 19:59
    been using on breadboards for quite a
    while. Just combined on a single device.
  • 19:59 - 20:03
    And so unfortunately, as always with
    hardware production takes longer than you
  • 20:03 - 20:07
    might assume. But if you drop me a message
    on Twitter, I'm happy to send you a PCB as
  • 20:07 - 20:13
    soon as they work well. And the BOM is
    around 50 bucks. Cool. So now that we are
  • 20:13 - 20:20
    ready to actually attack chips,
    let's look at an example. So the very
  • 20:20 - 20:25
    first chip that I encountered that used
    TrustZone-M was the Microchip SAM L11. And
  • 20:25 - 20:32
    so this chip was released in June 2018.
    And it's kind of a small, slow chip.
  • 20:32 - 20:38
    It runs at 32 megahertz. It has up to 64
    kilobytes of flash and 16 kilobytes of
  • 20:38 - 20:44
    SRAM, but it's super cheap. It's like one
    dollar eighty at quantity one. And so it's
  • 20:44 - 20:50
    really nice, really affordable. And we had
    people come up to us and suggest, hey, I
  • 20:50 - 20:55
    want to build a TPM on top of this or I
    want to build a hardware wallet on top of
  • 20:55 - 21:01
    this. And so on and so forth. And if we
    look at the website of this chip. It has a
  • 21:01 - 21:07
    lot of security in it - so it's the "Best
    Contribution to IoT Security" winner of
  • 21:07 - 21:15
    2018. And if you just type secure into the
    word search, you get like 57 hits. So this
  • 21:15 - 21:24
    chip is 57 secure. laughter And even on
    the website itself, you have like chip
  • 21:24 - 21:29
    level security. And then if you look at
    the first of the descriptions, you have a
  • 21:29 - 21:34
    robust chip-level security, including chip-level
    tamper resistance; an active shield
  • 21:34 - 21:38
    protects against physical attacks and
    resists microprobing attacks. And even in
  • 21:38 - 21:42
    the datasheet, where I got really worried
    because, as I said, I do a lot with the core
  • 21:42 - 21:48
    voltage - it says it has a brown-out detector that
    has been calibrated in production and must
  • 21:48 - 21:54
    not be changed and so on. Yeah. To be
    fair, when I talked to Microchip, they
  • 21:54 - 21:58
    mentioned that they absolutely want to
    communicate that this chip is not hardened
  • 21:58 - 22:04
    against hardware attacks, but I can see
    how somebody who looks at this would get
  • 22:04 - 22:11
    the wrong impression given all the terms
    and so on. Anyway, so let's talk about the
  • 22:11 - 22:17
    TrustZone in this chip. So the SAM L11
    does not have a security attribution unit.
  • 22:17 - 22:21
    Instead, it only has the implementation
    defined attribution unit. And the
  • 22:21 - 22:26
    configuration for this implementation
    defined attribution unit is stored in the
  • 22:26 - 22:30
    user row, which is basically the
    configuration flash. It's also called
  • 22:30 - 22:34
    fuses in the data sheet sometimes, but
    it's really, I think, flash based. I
  • 22:34 - 22:37
    haven't checked, but I am pretty sure it
    is because you can read it, write it,
  • 22:37 - 22:42
    change it and so on. And then the IDAU,
    once you've set it up, will be
  • 22:42 - 22:49
    configured by the boot ROM during the
    start of the chip. And the idea behind the
  • 22:49 - 22:54
    IDAU is that all your flash is partitioned
    into two parts, the bootloader part and
  • 22:54 - 23:00
    the application part, and both of these
    can be split into secure, non secure
  • 23:00 - 23:05
    callable and non secure. So you can have a
    bootloader, a secure and a non secure one,
  • 23:05 - 23:10
    and you can have an application, a secure
    and a non secure one. And the size of
  • 23:10 - 23:14
    these regions is controlled by these five
    registers. And for example, if we want to
  • 23:14 - 23:19
    change our non secure application to be
    bigger and make our secure application a
  • 23:19 - 23:24
    bit smaller, we just fiddle with these
    registers and the sizes will adjust and
  • 23:24 - 23:31
    the same with the bootloader. So this is
    pretty simple.
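    As an illustrative model, the five fields look something like this (field
    names as in the SAM L11 datasheet; the widths and layout here are
    placeholders, not the real bit-level encoding):
    ```c
    #include <stdint.h>

    /* The user row carves flash and SRAM into secure / NSC / non-secure
     * parts via five size fields. */
    struct saml11_userrow_sizes {
        uint8_t BS;    /* bootloader: secure size               */
        uint8_t BNSC;  /* bootloader: non-secure-callable size  */
        uint8_t AS;    /* application: secure size              */
        uint8_t ANSC;  /* application: non-secure-callable size */
        uint8_t RS;    /* SRAM: secure size                     */
    };
    ```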
  • 23:31 - 23:37
    How do we attack it? My goal initially was: I want to
    somehow read data from the secure world while running
  • 23:37 - 23:42
    code in the non secure world - so, jump the
    security gap. My code in non secure should
  • 23:42 - 23:47
    be able to, for example, extract keys from
    the secure world and my attack path for
  • 23:47 - 23:53
    that was: well, I glitch the boot ROM
    code that loads the IDAU
  • 23:53 - 23:57
    configuration. But before we can actually
    do this, we need to understand, is this
  • 23:57 - 24:02
    chip actually glitchable? Is it
    susceptible to glitches or do we
  • 24:02 - 24:07
    immediately get thrown out? And so I
    used a very simple setup where I just had a
  • 24:07 - 24:13
    firmware and tried to glitch out of a
    loop and enable an LED.
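    The test firmware amounts to this (a minimal sketch; gpio_led_on stands
    in for the board-specific GPIO code):
    ```c
    #include <stdint.h>

    void gpio_led_on(void);           /* board-specific, assumed elsewhere */

    int main(void)
    {
        volatile uint32_t stuck = 1;  /* volatile, so the compiler cannot
                                         optimize the loop away            */
        while (stuck) {
            /* spins forever unless a glitch corrupts the check */
        }

        gpio_led_on();                /* reached only after a successful glitch */
        for (;;) { }
    }
    ```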
  • 24:13 - 24:19
    And I had success in less than five minutes, and super
    stable glitches almost immediately. Like when I
  • 24:19 - 24:23
    saw this, I was 100 percent sure that I
    messed up my setup or that the compiler
  • 24:23 - 24:29
    optimized out my loop or that I did
    something wrong, because I never glitched a
  • 24:29 - 24:34
    chip in five minutes. And so this was
    pretty awesome, but I also spent another
  • 24:34 - 24:42
    two hours verifying my setup. So, OK.
    Cool, we know the chip is glitchable, so
  • 24:42 - 24:47
    let's glitch it. What do we glitch? Well,
    if we think about it somewhere during the
  • 24:47 - 24:53
    boot ROM, these registers are read from
    flash and then some hardware is somehow
  • 24:53 - 24:58
    configured. We don't know how because we
    can't dump the boot ROM, we don't know
  • 24:58 - 25:02
    what's going on in the chip. And the
    datasheet has a lot of pages. And I'm a
  • 25:02 - 25:09
    millennial. So, yeah, I read what I have
    to read and that's it. But my basic idea
  • 25:09 - 25:14
    is if we somehow manage to glitch the
    point where it tries to read the value of
  • 25:14 - 25:19
    the AS Register, we might be able to set
    it to zero because most chip peripherals
  • 25:19 - 25:25
    will initialize to zero. And if we glitch
    the instruction that reads AS, maybe
  • 25:25 - 25:30
    we can make our non secure application
    bigger, so that actually we can
  • 25:30 - 25:39
    read the secure application data because
    now it's considered non secure. But.
  • 25:39 - 25:44
    Problem 1 The boot ROM is not dumpable. So
    we cannot just disassemble it and figure
  • 25:44 - 25:51
    out when does it roughly do this. And the
    problem 2 is that we don't know when
  • 25:51 - 25:55
    exactly this read occurs and our glitch
    needs to be instruction precise. We need
  • 25:55 - 26:01
    to hit just the right instruction to make
    this work. And the solution is brute
  • 26:01 - 26:08
    force. But I mean like nobody has time for
    that. Right? So if the chip boots for 2
  • 26:08 - 26:13
    milliseconds, that's a long range we have
    to search for glitches. And so very easy
  • 26:13 - 26:17
    solution: power analysis. And it turns out
    that, for example, Riscure has done this
  • 26:17 - 26:23
    before where basically they tried to
    figure out where in time a JTAG lock is
  • 26:23 - 26:30
    set by comparing the power consumption.
    And so the idea is, we basically write
  • 26:30 - 26:36
    different values to the AS register, then
    we collect a lot of power traces and then
  • 26:36 - 26:41
    we look for the differences. And this is
    relatively simple to do. If you have a
  • 26:41 - 26:46
    ChipWhisperer. So, this was my rough
    setup. So we just have the ChipWhisperer-Lite.
  • 26:46 - 26:52
    We have a breakout with the chip we
    want to attack and a programmer. And then
  • 26:52 - 26:57
    we basically collect a couple of traces.
    And in my case, even just 20 traces are
  • 26:57 - 27:02
    enough, which takes, I don't know, like
    half a second to run. And if you have 20
  • 27:02 - 27:07
    traces in non secure mode, 20 traces in
    secure mode and you compare them, you can
  • 27:07 - 27:11
    see that there are clear differences in
    the power consumption starting at a
  • 27:11 - 27:15
    certain point.
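    The actual analysis ran on a ChipWhisperer with a Python script; the core
    idea, sketched in C with made-up trace dimensions:
    ```c
    #include <math.h>

    #define TRACES  20
    #define SAMPLES 5000   /* samples per power trace; setup-dependent */

    /* Pointwise difference of the mean traces for two AS-register values:
     * a sustained deviation marks where the boot ROM consumes the fuses. */
    static void diff_of_means(const float a[TRACES][SAMPLES],
                              const float b[TRACES][SAMPLES],
                              float diff[SAMPLES])
    {
        for (int s = 0; s < SAMPLES; s++) {
            float sum_a = 0.0f, sum_b = 0.0f;
            for (int t = 0; t < TRACES; t++) {
                sum_a += a[t][s];
                sum_b += b[t][s];
            }
            diff[s] = fabsf(sum_a - sum_b) / TRACES;
        }
    }
    ```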
  • 27:15 - 27:21
    And so I wrote a script that does some more statistics on it
    and so on, and that basically told me the best
    glitch candidate starts at 2.18
  • 27:21 - 27:25
    milliseconds. And this needs to be so
    precise because, as I said, we're in the milli-
  • 27:25 - 27:31
    and nanoseconds range. And so we want
    to make sure that we glitch at the right point in
  • 27:31 - 27:38
    time. Now, how do you actually configure?
    How do you build the setup where you
  • 27:38 - 27:44
    basically get a success indication
    once you broke this? For this, I needed to
  • 27:44 - 27:50
    write a firmware that basically attempts
    to read secure data. And then if it's
  • 27:50 - 27:54
    successful, enables a GPIO. And if it
    fails, it does nothing. And I just reset
  • 27:54 - 27:59
    and try again. And so I knew my
    rough delay and I was triggering off the
  • 27:59 - 28:05
    reset of the chip, so I just tried any
    delay after it and tried different glitch
  • 28:05 - 28:11
    pulse lengths and so on. And eventually I
    had a success. And these glitchers, as you will
  • 28:11 - 28:16
    see, with the glitcher which we released a
    while back, are super easy to write because
  • 28:16 - 28:22
    all you have is like 20 lines of Python.
    You basically set up a loop over a range of delays,
  • 28:22 - 28:28
    you set up the pulse length, you
    iterate over a range of pulses. And then
  • 28:28 - 28:34
    in this case you just check whether your
    GPIO is high or low. That's all it takes.
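    The released tool is Python; the same search loop, sketched here in C
    against a hypothetical glitcher API (none of these function names are
    from the real code):
    ```c
    #include <stdio.h>

    /* Hypothetical host-side glitcher API, assumed for illustration. */
    void glitcher_set_delay(unsigned cycles);
    void glitcher_set_pulse(unsigned cycles);
    void glitcher_reset_and_glitch(void);
    int  gpio_success_pin_high(void);

    void search(unsigned d0, unsigned d1, unsigned p0, unsigned p1)
    {
        for (unsigned delay = d0; delay < d1; delay++) {
            for (unsigned pulse = p0; pulse < p1; pulse++) {
                glitcher_set_delay(delay);    /* time after reset trigger */
                glitcher_set_pulse(pulse);    /* glitch pulse length      */
                glitcher_reset_and_glitch();  /* reboot target, fire once */

                if (gpio_success_pin_high())  /* firmware raised its GPIO */
                    printf("success: delay=%u pulse=%u\n", delay, pulse);
            }
        }
    }
    ```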
  • 28:34 - 28:38
    And then once you have this running in a
    stable fashion, it's amazing how fast it
  • 28:38 - 28:43
    works. So this is now a recorded video of
    a live glitch, of a real glitch,
  • 28:43 - 28:50
    basically. And you can see we have like 20
    attempts per second. And after a couple of
  • 28:50 - 28:57
    seconds, we actually get a success
    indication we just broke a chip. Sweet.
  • 28:57 - 29:02
    But one thing: I moved to a part of Germany,
    in the very south, called the
  • 29:02 - 29:10
    Schwabenland. And I mean, 60 bucks - we are
    known to be very cheap, and 60 bucks
  • 29:10 - 29:15
    translates to like six beers at
    Oktoberfest. Just to convert this to the
  • 29:15 - 29:24
    local currency, that's like 60 Club Mate.
    Unacceptable. We need to go cheaper, much
  • 29:24 - 29:34
    cheaper, and so.
    laughter and applause
  • 29:34 - 29:40
    What if we take a chip that is 57 secure
    and we try to break it with the smallest
  • 29:40 - 29:47
    chip. And so this is an ATTiny which
    costs, I don't know, a euro or two.
  • 29:47 - 29:53
    We combine it with a MOSFET - to keep the
    comparison, that's roughly 3 Club Mate - and
  • 29:53 - 29:58
    we hook it all up on a jumper board and
    turns out: this works. Like, you can have a
  • 29:58 - 30:03
    relatively stable glitch, a glitcher with
    like 120 lines of assembly running on the
  • 30:03 - 30:07
    ATTiny and this will glitch your chip
    successfully and can break TrustZone on
  • 30:07 - 30:14
    the SAM L11. The problem is chips are very
    complex and it's always very hard to do an
  • 30:14 - 30:18
    attack on a chip that you configured
    yourself because as you will see, chances
  • 30:18 - 30:21
    are very high that you messed up the
    configuration and for example, missed a
  • 30:21 - 30:26
    security bit, forgot to set something and
    so on and so forth. But luckily, in the
  • 30:26 - 30:32
    case of the SAM L11, there's a version of
    this chip which is already configured and
  • 30:32 - 30:40
    only ships in non secure mode. And so this
    is called the SAM L11 KPH. And so it comes
  • 30:40 - 30:44
    pre provisioned with a key and it comes
    pre provisioned with a trusted execution
  • 30:44 - 30:50
    environment already loaded into the secure
    part of the chip, and it ships completely
  • 30:50 - 30:55
    secured and the customer can write and
    debug non secure code only. And also you
  • 30:55 - 31:00
    can download the SDK for it and write your
    own trustlets and so on. But I couldn't
  • 31:00 - 31:04
    because it requires you to agree to their
    terms and conditions, which exclude
  • 31:04 - 31:09
    reverse engineering. So no chance,
    unfortunately. But anyway, this is the
  • 31:09 - 31:15
    perfect example to test our attack. You
    can buy these chips on DigiKey and then
  • 31:15 - 31:19
    try to break into the secure world because
    these chips are hopefully decently secured
  • 31:19 - 31:25
    and have everything set up and so on. And
    yeah. So this was the setup. We designed
  • 31:25 - 31:30
    our own breakout board for the SAM L11,
    which makes it a bit more accessible, has
  • 31:30 - 31:35
    JTAG and has no capacitors in the way. So
    you get access to all the core voltages
  • 31:35 - 31:42
    and so on and you have the FPGA on the top
    left, the super cheap 20 bucks power supply
  • 31:42 - 31:47
    and the programmer. And then we just
    implemented a simple function that uses
  • 31:47 - 31:53
    OpenOCD to try to read an address that we
    normally can't read. So basically, we
  • 31:53 - 31:59
    glitch, then we start OpenOCD, which uses
    the JTAG adapter to try to read secure memory.
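    A sketch of that success oracle (the config file name and target address
    are illustrative; mdw is OpenOCD's memory-display-word command):
    ```c
    #include <stdlib.h>

    /* After each glitch attempt, ask OpenOCD (via the debug adapter) to
     * read one word from an address that an intact chip refuses to expose. */
    static int secure_read_succeeded(void)
    {
        int rc = system("openocd -f saml11.cfg "
                        "-c 'init; mdw 0x00008000; shutdown' > out.txt 2>&1");
        return rc == 0;   /* a real setup would also parse out.txt */
    }
    ```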
  • 31:59 - 32:10
    And so I hooked it all up, wrote a
    nice script and let it rip. And so after a
  • 32:10 - 32:17
    while - well, a couple of seconds,
    immediately, again - I got a
  • 32:17 - 32:20
    successful attack on the chip and more and
    more. And you can see just how stable you
  • 32:20 - 32:27
    can get these glitches and how well you
    can attack this. Yeah. So - sweet, hacked! We
  • 32:27 - 32:31
    can compromise the root of trust and the
    trusted execution environment. And this is
  • 32:31 - 32:36
    perfect for supply chain attacks. Right.
    Because if you can compromise a part of
  • 32:36 - 32:42
    the chip that the customer will not be
    able to access, they will never find you.
  • 32:42 - 32:46
    But the problem with supply chain attacks
    is, they're pretty hard to scale and they
  • 32:46 - 32:51
    are only for sophisticated actors normally
    and far too expensive is what most people
  • 32:51 - 32:59
    will tell you, except if you hack the
    distributor. And so, I guess last year
  • 32:59 - 33:04
    or this year, I don't know, I actually
    found a vulnerability in DigiKey, which
  • 33:04 - 33:09
    allowed me to access any invoice on
    DigiKey, including the credentials you
  • 33:09 - 33:17
    need to actually change the invoice. And
    so basically the bug is that, when you
  • 33:17 - 33:21
    requested an invoice, they did not check whether you
  • 33:21 - 33:26
    actually had permission to access it. And
    you have the web access ID on top and the
  • 33:26 - 33:30
    invoice number. And that's all you need to
    call DigiKey and change the delivery,
  • 33:30 - 33:37
    basically. And so this also is all data
    that you need to reroute the shipment. I
  • 33:37 - 33:41
    disclosed this. It's fixed. It's been
    fixed again afterwards. And now hopefully
  • 33:41 - 33:46
    this should be fine. So I feel good to
    talk about it. And so let's walk through
  • 33:46 - 33:52
    the scenarios. We have Eve and we have
    DigiKey and Eve builds this new super
  • 33:52 - 33:58
    sophisticated IoT toilet and she needs a
    secure chip. So she goes to DigiKey and
  • 33:58 - 34:07
    orders some SAM L11 KPHs. And Mallory -
    Mallory scans all new invoices on DigiKey.
  • 34:07 - 34:13
    And as soon as somebody orders a SAM L11,
    they talk to DigiKey via the API or via a
  • 34:13 - 34:18
    phone call to change the delivery address.
    And because you know who the chips are
  • 34:18 - 34:23
    going to, you can actually target this
    very, very well. So now the chips get
  • 34:23 - 34:30
    delivered to Mallory. Mallory backdoors the
    chips. And then sends the backdoored chips
  • 34:30 - 34:34
    to Eve who is none the wiser, because it's
    the same carrier, it's the same, it looks
  • 34:34 - 34:38
    the same. You have to be very, very
    mindful of these types of attack to
  • 34:38 - 34:43
    actually recognize them. And even if they
    open the chips - say they open the
  • 34:43 - 34:49
    package and they try the chip, they scan
    everything they can scan - the backdoor will
  • 34:49 - 34:54
    be in the part of the chip that they
    cannot access. And so we just supply chain
  • 34:54 - 35:02
    attacked whoever, using a UPS envelope,
    basically. So, yeah. Interesting attack
  • 35:02 - 35:07
    vector. So I talked to Microchip and it's
    been great. They've been super nice. It
  • 35:07 - 35:13
    was really a pleasure. I also talked to
    Trustonic, who were very open to this and
  • 35:13 - 35:20
    wanted to understand it. And so it was
    great. And they explicitly state that this
  • 35:20 - 35:24
    chip only protects against software
    attacks while it has some hardware
  • 35:24 - 35:30
    features like tamper-resistant RAM, it is
    not built to withstand fault injection
  • 35:30 - 35:34
    attacks. And if you now compare different
    revisions of the datasheet, you
  • 35:34 - 35:39
    can see that some datasheets, the older
    ones, mention some fault injection
  • 35:39 - 35:43
    resistance and it's now gone from the data
    sheet. And they are also asking for
  • 35:43 - 35:47
    feedback on making it more clear what this
    chip protects against, which I think is a
  • 35:47 - 35:53
    noble goal because we all know marketing
    versus technicians is always an
  • 35:53 - 36:01
    interesting fight, let's say. Cool, first
    chip broken - time for the next one, right?
  • 36:01 - 36:07
    So the next chip I looked at was the
    Nuvoton NuMicro M2351 - rolls off the
  • 36:07 - 36:14
    tongue. It's a Cortex-M23 processor. It
    has TrustZone-M. And I was super excited
  • 36:14 - 36:20
    because this finally has an SAU, a
    security attribution unit and an IDAU and
  • 36:20 - 36:23
    also I talked to the marketing. It
    explicitly protects against fault
  • 36:23 - 36:32
    injection. So that's awesome. I was
    excited. Let's see how that turns out.
  • 36:32 - 36:37
    Let's briefly talk about the TrustZone in
    the Nuvoton chip. So as I've mentioned
  • 36:37 - 36:45
    before, the SAU, if it's turned off or
    turned on without regions, will default to fully
  • 36:45 - 36:50
    secure. And no matter what the IDAU is,
    the most privileged level always wins. And
  • 36:50 - 36:55
    so if our entire security attribution unit
    is secure, our final security state will
  • 36:55 - 37:01
    also be secure. And so if we now add some
    small regions, the final state will also
  • 37:01 - 37:08
    have the small, non secure regions. I
    mean, I saw this and looked at how this
  • 37:08 - 37:15
    code works. And you can see that at the
    very bottom, SAU control is set to 1. Simple,
  • 37:15 - 37:19
    right? We glitch over the SAU enabling and
    all our code will be secure and we'll just
  • 37:19 - 37:26
    run our code in secure mode, no problem -
    is what I thought. And so basically the
  • 37:26 - 37:31
    secure bootloader starts execution of non
    secure code. We disable the SAU by
  • 37:31 - 37:36
    glitching over the instruction and now
    everything is secure. So our code runs in
  • 37:36 - 37:44
    a secure world. It's easy except read the
    fucking manual. So, it turns out these
  • 37:44 - 37:50
    thousands of pages of documentation
    actually contain useful information and
  • 37:50 - 37:55
    you need a special instruction to
    transition from secure to non secure state
  • 37:55 - 38:02
    which is called BLXNS, which stands for
    branch with link and exchange to
  • 38:02 - 38:08
    non secure. This is exactly made to
    prevent this. It prevents accidentally
  • 38:08 - 38:13
    jumping into non secure code. It will
    cause a secure fault if you try to do it.
  • 38:13 - 38:19
    And what's interesting is that even if you
    use this instruction, it will not always
  • 38:19 - 38:25
    transition the state. Whether it transitions
    depends on the last bit in the destination address.
  • 38:25 - 38:30
    And the way the
    bootloader will actually get these
  • 38:30 - 38:34
    addresses it jumps to is from what's
    called the vector table, which is basically
  • 38:34 - 38:39
    where your reset handlers are, where your
    stack pointer, your initial stack pointer
  • 38:39 - 38:44
    is and so on. And you will notice that the
    last bit is always set. And if the last
  • 38:44 - 38:50
    bit is set, it will jump to secure code.
    So somehow they have to manage to branch to this
  • 38:50 - 38:57
    address and run it as non secure. So how
    do they do this? They use an explicit bit
  • 38:57 - 39:03
    clear instruction. What do we know about
    instructions? We can glitch over them.
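    What the secure bootloader does, sketched in C (compiled with -mcmse; a
    minimal sketch, not Nuvoton's code). Glitching over the marked bit clear
    leaves the LSB set, so BLXNS never switches state:
    ```c
    #include <stdint.h>

    /* A call through this pointer type compiles to the BLXNS sequence. */
    typedef void __attribute__((cmse_nonsecure_call)) ns_entry_t(void);

    void launch_nonsecure(const uint32_t *ns_vector_table)
    {
        uint32_t reset = ns_vector_table[1];             /* reset handler;
                                                            its LSB is set  */
        ns_entry_t *entry = (ns_entry_t *)(reset & ~1u); /* <-- the explicit
                                                            bit clear       */
        entry();  /* BLXNS: transitions only if the LSB was cleared */
    }
    ```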
  • 39:03 - 39:09
    And so basically, with two glitches, we
    can glitch over the SAU control enable - now
  • 39:09 - 39:16
    our entire memory is secure and then we
    glitch over the bit clear instruction and
  • 39:16 - 39:24
    then BLXNS - again, rolls off the
    tongue - will run the code as secure.
  • 39:24 - 39:29
    And now our normal world code is running
    in secure mode. The problem is it works,
  • 39:29 - 39:34
    but it's very hard to get stable. So, I
    mean, this was - I somehow got it working,
  • 39:34 - 39:41
    but it was not very stable and it was a
    big pain to actually make use of. So I
  • 39:41 - 39:45
    wanted a different vulnerability. And I
    read up on the implementation defined
  • 39:45 - 39:52
    attribution unit of the M2351. And it
    turns out that each flash, RAM, peripheral
  • 39:52 - 40:00
    and so on is mapped twice into memory. And
    so basically once as secure at the address
  • 40:00 - 40:09
    0x2000_0000 and once as non secure at the
    address 0x3000_0000. And so you have the flash
  • 40:09 - 40:15
    twice and you have the RAM twice. This
    is super important: it is the same memory.
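    In pointer terms, both of these reach the same SRAM cell; only the
    security attribution of the access differs (addresses per the M2351
    memory map; a sketch, not vendor code):
    ```c
    #include <stdint.h>

    /* M2351 IDAU: address bit 28 selects the view of the same memory. */
    volatile uint32_t *const secure_view    = (uint32_t *)0x20000000u;
    volatile uint32_t *const nonsecure_view = (uint32_t *)0x30000000u;
    ```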
  • 40:15 - 40:22
    And so I came up with an attack
    that I called CrowRBAR, because a
  • 40:22 - 40:28
    vulnerability basically doesn't exist if
    it doesn't have a fancy name. And the
  • 40:28 - 40:32
    basic point of this is that the security
    of the system relies on the region
  • 40:32 - 40:37
    configuration of the SAU. What if we
    glitch this initialization combined with
  • 40:37 - 40:43
    this IDAU layout? Again, the IDAU
    mirrors the memory: has it once as secure
  • 40:43 - 40:48
    and once as non secure. Now let's say,
    at the very bottom of our flash, we
  • 40:48 - 40:55
    have a secret which is in the secure area.
    It will also be in the mirror of this
  • 40:55 - 41:01
    memory. But again, because our SAU
    configuration is fine, it will not be
  • 41:01 - 41:06
    accessible by the non secure region.
    However, the start of this non secure area
  • 41:06 - 41:14
    is configured by the RBAR register. And so
    maybe if we glitch this RBAR being set, we
  • 41:14 - 41:18
    can increase the size of the non secure
    area. And if you check the ARM
  • 41:18 - 41:23
    documentation on the RBAR register, the
    reset value of this register is stated as
  • 41:23 - 41:28
    unknown. So unfortunately it doesn't just
    say zero, but I tried this on all chips I
  • 41:28 - 41:34
    had access to and it is zero on all chips
    I tested. And so now what we can do is we
  • 41:34 - 41:39
    glitch over this RBAR write and now our non
    secure region will be bigger, and our
  • 41:39 - 41:43
    secure code is still running in the bottom
    half. But then the jump into non secure
  • 41:43 - 41:51
    will also give us access to the secret and
    it works. We get a fully stable glitch,
  • 41:51 - 41:57
    takes roughly 30 seconds to bypass it. I
    should mention that this is what I think
  • 41:57 - 42:00
    happens. All I know is that I inject a
    glitch and I can read the secret. I cannot
  • 42:00 - 42:05
    tell you exactly what happens, but this is
    the best interpretation I have so far. So
  • 42:05 - 42:11
    woohoo, we have an attack with a cool name!
    And so I looked at another chip called the
  • 42:11 - 42:19
    NXP LPC55S69, and this one has 2
    Cortex-M33 cores, one of which has
  • 42:19 - 42:27
    TrustZone-M. The IDAU and the overall
    TrustZone layout seem to be very similar
  • 42:27 - 42:32
    to the NuMicro. And I got the dual glitch
    attack working and also the CrowRBAR
  • 42:32 - 42:39
    attack working. And the vendor response
    was amazing. Like holy crap, they called
  • 42:39 - 42:42
    me and wanted to fully understand it. They
    reproduced that. They got me on the phone
  • 42:42 - 42:48
    with an expert and the expert was super
    nice. But what he said came down to
  • 42:48 - 42:55
    RTFM. But again, this is a long document,
    but it turns out that the example code did
  • 42:55 - 43:02
    not enable a certain security feature. And
    this security feature is helpfully named
  • 43:02 - 43:11
    Miscellaneous Control Register, basically,
    laughter which stands for Secure Control
  • 43:11 - 43:21
    Register, laughter obviously. And this
    register has a bit. If you set it, it
  • 43:21 - 43:27
    enables secure checking. And if I had read
    just a couple of sentences further,
  • 43:27 - 43:31
    when I read about the TrustZone on the
    chip, I would have actually seen this. But
  • 43:31 - 43:38
    Millennial - sorry. Yeah. And so what this
    enables is called the memory protection
  • 43:38 - 43:41
    checkers and this is an additional memory
    security check that gives you finer
  • 43:41 - 43:46
    control over the memory layout. And so it
    basically checks if the attribution unit
  • 43:46 - 43:52
    security state is identical with the
    memory protection checker security state.
  • 43:52 - 43:58
    And so, for example, if our attack code
    tries to access memory, the MPC will check
  • 43:58 - 44:04
    whether this was really a valid request,
    so to say, and stop you - if you are unlucky,
  • 44:04 - 44:10
    as I was. But it turns out it's glitchable,
    but it's much, much harder to glitch and
  • 44:10 - 44:16
    you need multiple glitches. And the vendor
    response was awesome. They also say
  • 44:16 - 44:22
    they're working on improving the
    documentation for this. So yeah, super
  • 44:22 - 44:27
    cool. But still, it's not a full
    protection against glitching, but it gives
  • 44:27 - 44:33
    you certain security. And I think that's
    pretty awesome. Before we finish. Is
  • 44:33 - 44:38
    everything broken? No. These chips are not
    insecure; they are just not protected against a
  • 44:38 - 44:44
    very specific attack scenario. Align
    the chips that you want to use with your
  • 44:44 - 44:48
    threat model - if fault injection is part
    of your threat model. So, for example, if
  • 44:48 - 44:52
    you're building a car key, maybe you should
    protect against glitching. If you're
  • 44:52 - 44:56
    building a hardware wallet, definitely you
    should protect against glitching. Thank
  • 44:56 - 45:01
    you. Also, by the way, if you want to play
    with some awesome fault injection
  • 45:01 - 45:06
    equipment, I have an EMFI glitcher with me
    and so on. So just hit me up on Twitter and
  • 45:06 - 45:10
    I'm happy to show it to you. So thanks a
    lot.
  • 45:10 - 45:18
    applause
  • 45:18 - 45:25
    Herald: Thank you very much, Thomas. We do
    have an awesome 15 minutes for Q and A. So
  • 45:25 - 45:30
    if you line up, we have three microphones.
    Microphone number 3 actually has an
  • 45:30 - 45:34
    induction loop. So if you're hearing
    impaired and have a suitable device, you
  • 45:34 - 45:39
    can go to microphone 3 and actually hear
    the answer. And we're starting off with
  • 45:39 - 45:42
    our signal angel with questions from the
    Internet.
  • 45:42 - 45:48
    Thomas: Hello, Internet.
    Signal Angel: Hello. Are you aware of the
  • 45:48 - 45:54
    ST Cortex-M4 firewall? And can your
    research be somehow related to it? Or
  • 45:54 - 45:57
    maybe do you have plans to explore it in
    the future?
  • 45:57 - 46:02
    Thomas: So, yes, I'm very aware of the
    ST M3 and M4. If you watch our talk last
  • 46:02 - 46:07
    year at CCC called Wallet.fail, we
    actually exploited the sister chip, the
  • 46:07 - 46:13
    STM32 F2. The F4 has this strange firewall
    thing which feels very similar to
  • 46:13 - 46:19
    TrustZone-M. However, I cannot yet share
    any research related to that chip,
  • 46:19 - 46:22
    unfortunately. Sorry.
    Signal Angel: Thank you.
  • 46:22 - 46:29
    Herald: Microphone number 1, please.
    Mic 1: Hello. I'm just wondering, have you
  • 46:29 - 46:34
    tried to replicate this attack on
    multicore CPUs with higher frequencies, such
  • 46:34 - 46:39
    as 2 GHz and others - how would you go
    about that?
  • 46:39 - 46:44
    Thomas: So I have not, because there
    are no TrustZone-M chips with this
  • 46:44 - 46:48
    frequency. However, people have done it on
    mobile phones and other equipment. So, for
  • 46:48 - 46:55
    example, yeah, there's a lot of materials
    on glitching higher frequency stuff. But
  • 46:55 - 46:59
    yeah, it will get expensive really quickly
    because just the scope with which you can even
  • 46:59 - 47:04
    see a two gigahertz clock - that's a
    nice-car-priced oscilloscope.
  • 47:04 - 47:09
    Herald: Microphone number 2, please.
    Mic 2: Thank you for your talk. Is there
  • 47:09 - 47:16
    more functionality to go from the non-secure
    to the secure area? Are there standard-
  • 47:16 - 47:20
    defined functionalities, or are they proprietary
    libraries from NXP?
  • 47:20 - 47:25
    Thomas: So the veneer stuff is
    standard and you will find ARM documents
  • 47:25 - 47:29
    basically recommending you to do this. But
    all the toolchains, for example, the one
  • 47:29 - 47:35
    for the SAM L11 will generate the veneers
    for you. And so I have to be honest, I
  • 47:35 - 47:38
    have not looked at how exactly they are
    generated.
  • 47:38 - 47:42
    However, I did some Rust stuff to play
    around with it. And yeah, it's relatively
  • 47:42 - 47:45
    simple for the toolchain and it's
    standard. So
  • 47:45 - 47:52
    Herald: The Signal Angel is signaling.
    Signal Angel: Yeah. That's not another
  • 47:52 - 47:56
    question from the internet but from me and
    I wanted to know how important is the
  • 47:56 - 48:01
    hardware security in comparison to the
    software security because you cannot hack
  • 48:01 - 48:06
    these devices without having physical
    access to them except of this supply chain
  • 48:06 - 48:09
    attack.
    Thomas: Exactly. And that depends on your
  • 48:09 - 48:14
    threat model. So that's basically if you
    build a door, if you build a hardware
  • 48:14 - 48:18
    wallet, you want to have hardware
    protection because somebody can steal it
  • 48:18 - 48:22
    potentially very easily and then... And if
    you, for example, look at your phone, you
  • 48:22 - 48:28
    probably maybe don't want to have anyone
    at customs be able to immediately break
  • 48:28 - 48:31
    into your phone. And that's another point
    where hardware security is very important.
  • 48:31 - 48:36
    And there with a car key, it's the same.
    If you rent a car, you hopefully the car
  • 48:36 - 48:42
    rental company doesn't want you to copy
    the key. And interestingly, the more
  • 48:42 - 48:46
    probably one of the most protected things
    in your home is your printer cartridge,
  • 48:46 - 48:50
    because I can tell you that the vendor
    invests a lot of money into you not being
  • 48:50 - 48:54
    able to clone the printer cartridge. And
    so there are a lot of cases where it's
  • 48:54 - 48:58
    maybe not the user who wants to protect
    against hardware attacks, but the vendor
  • 48:58 - 49:02
    who wants to protect against it.
    Herald: Microphone number 1, please.
  • 49:02 - 49:05
    Mic 1: So thank you again for the amazing
    talk.
  • 49:05 - 49:08
    Thomas: Thank you.
    Mic 1: You mentioned higher order attacks,
  • 49:08 - 49:12
    I think twice. And for the second chip,
    you actually said you broke it with
  • 49:12 - 49:15
    two glitches, two exploitable glitches.
    Thomas: Yes.
  • 49:15 - 49:19
    Mic 1: So what did you do to reduce the
    search space or did you just search over
  • 49:19 - 49:22
    the entire space?
    Thomas: So the nice thing about these
  • 49:22 - 49:28
    chips is that, if you have a Security
    Attribution Unit, you can decide when
  • 49:28 - 49:34
    you turn it on. So I just had a GPIO go
    up, and then I
  • 49:34 - 49:40
    enabled the SAU. And then my search
    space was very small, because I knew it
  • 49:40 - 49:45
    would be just after I pulled up the GPIO.
    And so I was able to very precisely time
  • 49:45 - 49:50
    where I glitch. And because I basically
    wrote the code that does it, I could
  • 49:50 - 49:53
    almost count on the oscilloscope which
    instruction I was hitting.
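    As a rough sketch of that trigger setup: raise a GPIO right before
    the SAU enable, so the glitcher arms on the pin edge and the search
    window shrinks to a few instructions. The SAU registers and masks
    are standard CMSIS names for ARMv8-M; gpio_set_high() and the region
    addresses are made-up placeholders, not from the talk:

        #include <stdint.h>
        #include "core_cm33.h"   /* CMSIS core header, normally pulled
                                    in via the vendor's device header */

        extern void gpio_set_high(void);  /* hypothetical trigger pin */

        void enable_sau_with_trigger(void)
        {
            /* Configure one example region as non-secure. */
            SAU->RNR  = 0;
            SAU->RBAR = 0x00200000u & SAU_RBAR_BADDR_Msk;
            SAU->RLAR = (0x003FFFE0u & SAU_RLAR_LADDR_Msk)
                      | SAU_RLAR_ENABLE_Msk;

            gpio_set_high();                 /* trigger goes up here... */
            SAU->CTRL = SAU_CTRL_ENABLE_Msk; /* ...so the glitch can be
                                                aimed at this write */
            __DSB();                         /* make the change visible */
            __ISB();
        }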
  • 49:53 - 49:57
    Mic 1: Thank you.
    Herald: Next question from microphone
  • 49:57 - 50:00
    number 2, please.
    Mic 2: Thank you for the talk. I was just
  • 50:00 - 50:05
    wondering, if the vendor were to include a
    capacitor directly on the die, how fixed
  • 50:05 - 50:11
    would you consider it to be?
    Thomas: So against voltage glitching? It
  • 50:11 - 50:15
    might help. It depends. But for example,
    on a recent chip, we just used a
  • 50:15 - 50:19
    negative voltage to suck out the power
    from the capacitor. And also, you will
  • 50:19 - 50:24
    have EMFI glitching as a possibility and
    EMFI glitching is awesome because you
  • 50:24 - 50:28
    don't even have to solder. You just
    basically put a small coil on top of your
  • 50:28 - 50:33
    chip and inject the voltage directly into
    it behind any of the capacitors. And so
  • 50:33 - 50:40
    on. So it helps, but it's not a complete
    fix. Often it's not done for security
  • 50:40 - 50:43
    reasons anyway.
    Herald: Next question again from our
  • 50:43 - 50:46
    Signal Angel.
    Signal Angel: Did you get to use your own
  • 50:46 - 50:56
    custom hardware to help you?
    Thomas: Partially - the part that worked
  • 50:56 - 50:59
    is in the summary.
    Herald: Microphone number 1, please.
  • 50:59 - 51:05
    Mic 1: Hi. Thanks for the interesting
    talk. All these vendors pretty much said
  • 51:05 - 51:08
    this sort of attack is sort of not really
    in scope for what they're doing.
  • 51:08 - 51:11
    Thomas: Yes.
    Mic 1: Are you aware of anyone like in
  • 51:11 - 51:15
    this sort of category of chip actually
    doing anything against glitching attacks?
  • 51:15 - 51:20
    Thomas: Not in this category, but there
    are secure elements that explicitly
  • 51:20 - 51:26
    protect against it. A big problem with
    researching those is that it's also to a
  • 51:26 - 51:30
    large degree security by NDA, at least for
    me, because I have no idea what's going
  • 51:30 - 51:35
    on. I can't buy one to play around with
    it. And so I can't tell you how good these
  • 51:35 - 51:39
    are. But I know from some friends that
    there are some chips that are very good at
  • 51:39 - 51:43
    protecting against glitches. And
    apparently the term you need to look
  • 51:43 - 51:47
    for is "glitch monitor". And if you
    see that in the data sheet, that tells you
  • 51:47 - 51:52
    that they at least thought about it.
    Herald: Microphone number 2, please.
  • 51:52 - 52:00
    Mic 2: So what about brown-out
    detection? Did Microchip say why it didn't
  • 52:00 - 52:03
    catch your glitching attempts?
    Thomas: It's not meant
  • 52:03 - 52:08
    to catch glitching attacks. Basically, a
    brownout detector is mainly there to keep
  • 52:08 - 52:14
    your chip stable. And so, for example, if
    your supply voltage drops, you want to
  • 52:14 - 52:17
    make sure that you notice and don't
    accidentally glitch yourself. So, for
  • 52:17 - 52:21
    example, if it is running on a battery and
    your battery goes empty, you want your
  • 52:21 - 52:25
    chip to run stable, stable, stable, then off.
    And that's the idea behind a brownout
  • 52:25 - 52:31
    detector, as far as I understand. But yeah,
    they are not made to be fast enough to
  • 52:31 - 52:36
    catch glitching attacks.
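    For reference, a brown-out detector is typically configured roughly
    like the sketch below; the HAL calls are hypothetical, since the
    real registers are vendor-specific. The point stands regardless: a
    BOD reacts on a timescale meant for sagging supplies, not for
    glitches lasting nanoseconds.

        #include <stdint.h>

        /* Hypothetical vendor-HAL API, for illustration only. */
        typedef enum { BOD_ACTION_NONE, BOD_ACTION_RESET } bod_action_t;
        extern void bod_set_threshold_mv(uint32_t millivolts);
        extern void bod_set_action(bod_action_t action);
        extern void bod_enable(void);

        void setup_brownout_detector(void)
        {
            /* Reset the chip if VDD sags below ~2.7 V for a sustained
               period - good against an emptying battery, far too slow
               to catch a nanosecond-scale voltage glitch. */
            bod_set_threshold_mv(2700);
            bod_set_action(BOD_ACTION_RESET);
            bod_enable();
        }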
    Herald: Do we have any more questions from
  • 52:36 - 52:39
    the hall?
    Thomas: Yes.
  • 52:39 - 52:45
    Herald: Yes? Where?
    Mic ?: Thank you for your amazing talk.
  • 52:45 - 52:49
    You have shown that it gets very
    complicated if you have two consecutive
  • 52:49 - 52:55
    glitches. So wouldn't it be an easy
    protection to just do the stuff twice or
  • 52:55 - 53:01
    three times and maybe randomize it? Would
    you consider this impossible to
  • 53:01 - 53:04
    glitch then?
    Thomas: So adding randomization to the
  • 53:04 - 53:08
    point in time where you enable it helps,
    but then you can trigger off the power
  • 53:08 - 53:13
    consumption and so on. And I should add, I
    only tried to trigger once and then used
  • 53:13 - 53:17
    just a simple delay. But in theory, if you
    do it twice, you could also glitch on the
  • 53:17 - 53:22
    power consumption signature and so on. So
    it might help. But somebody very motivated
  • 53:22 - 53:28
    will still be able to do it. Probably.
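    To make the countermeasure being discussed concrete, here is a
    minimal sketch of the double-check-with-random-delay pattern;
    rand32() and panic() are hypothetical helpers (a TRNG read and a
    fault handler), not code from the talk. As Thomas notes, a motivated
    attacker can still trigger on the power-consumption signature of
    the first check:

        #include <stdint.h>

        extern uint32_t rand32(void);  /* hypothetical TRNG wrapper */
        extern void     panic(void);   /* hypothetical fault handler */

        #define AUTH_MAGIC 0xA5A5A5A5u /* non-trivial constant: a single
                                          bit flip won't produce it */

        static volatile uint32_t auth_ok; /* set by the real check */

        void enter_protected_operation(void)
        {
            if (auth_ok != AUTH_MAGIC)    /* first check */
                panic();

            /* Random delay, so a glitch timed against the first
               check misses the second one. */
            for (volatile uint32_t i = rand32() & 0xFFu; i != 0u; i--)
                ;

            if (auth_ok != AUTH_MAGIC)    /* redundant second check */
                panic();

            /* ...proceed with the security-critical operation... */
        }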
    Herald: OK. We have another question from
  • 53:28 - 53:31
    the Internet.
    Signal Angel: Is there a mitigation for
  • 53:31 - 53:37
    such an attack that I can do on the PCB level,
    or can it only be addressed on the chip level?
  • 53:37 - 53:40
    Thomas: Only on the chip level, because
    with a heat gun you can just pull the chip
  • 53:40 - 53:46
    off and do it in a socket. Or if you do
    EMFI glitching, you don't even have to
  • 53:46 - 53:50
    touch the chip. You just go over it with
    the coil and inject directly into the
  • 53:50 - 53:55
    chip. So the chip needs to be secured
    against this type of stuff or you can add
  • 53:55 - 54:00
    a tamper protection case around your
    chips. So, yeah.
  • 54:00 - 54:03
    Herald: Another question from microphone
    number 1.
  • 54:03 - 54:08
    Mic 1: So I was wondering if you've heard
    anything or know anything about the STM32
  • 54:08 - 54:11
    L5 series?
    Thomas: I've heard a lot. I've seen
  • 54:11 - 54:17
    nothing. So, yes, I've heard about it. But
    it doesn't ship yet as far as I know. We
  • 54:17 - 54:20
    are all eagerly awaiting it.
    Mic 1: Thank you.
  • 54:20 - 54:24
    Herald: Microphone number 2, please
    Mic 2: Hey, very good talk. Thank you.
  • 54:24 - 54:29
    Will you release all the hardware
    design of the board and those scripts?
  • 54:29 - 54:31
    Thomas: Yes.
    Mic 2: Is there anything already
  • 54:31 - 54:33
    available, even if, as I understood, it's
    not all finished?
  • 54:33 - 54:38
    Thomas: Oh, yes. So, on chip.fail - there
    are .fail domains, it's awesome -
  • 54:38 - 54:44
    Chip.fail has the source code to our
    glitcher. I've also ported it to the
  • 54:44 - 54:49
    Lattice and I need to push that hopefully
    in the next few days. But then all the
  • 54:49 - 54:53
    hardware would be open sourced also
    because it's based on open source hardware
  • 54:53 - 54:59
    and yeah, I'm not planning to make any
    money or anything using it. It's just to
  • 54:59 - 55:03
    make life easier.
    Herald: Microphone number 2, please.
  • 55:03 - 55:07
    Mic 2: So you already said you don't
    really know what happens at the exact
  • 55:07 - 55:15
    moment of the glitch and you were lucky
    that you skipped an instruction,
  • 55:15 - 55:24
    maybe. Do you have a feeling for what is
    happening inside the chip at the moment of
  • 55:24 - 55:29
    the glitch?
    Thomas: So I asked this precise question -
  • 55:29 - 55:37
    what exactly happens? - to multiple people,
    and I got multiple answers. But basically my
  • 55:37 - 55:41
    understanding is that you pull away the
    voltage that it needs to set, for
  • 55:41 - 55:46
    example, a register. But it's
    absolutely out of my domain to give an
  • 55:46 - 55:51
    educated comment on this. I'm a breaker,
    unfortunately, not a maker when it comes
  • 55:51 - 55:54
    to chips.
    Herald: Microphone number 2, please.
  • 55:54 - 56:02
    Mic 2: OK. Thank you. You said a lot about
    chip attacks. Can you tell us something
  • 56:02 - 56:08
    about JTAG attacks? Say I just have a
    connection to JTAG?
  • 56:08 - 56:12
    Thomas: Yeah. So, for example, the attack
    on the KPH version of the chip was
  • 56:12 - 56:17
    basically a JTAG attack. I used JTAG to
    read out the chip, but I did have JTAG in
  • 56:17 - 56:24
    normal world. However, it's possible on
    most - on a lot of chips to re-enable JTAG
  • 56:24 - 56:29
    even if it's locked. And for example,
    again, referencing last year's talk, we
  • 56:29 - 56:34
    were able to re-enable JTAG on the STM32F2,
    and I would assume something similar
  • 56:34 - 56:39
    is possible on this chip as well. But I
    haven't tried.
  • 56:39 - 56:47
    Herald: Are there any more questions? We
    still have a few minutes. I guess not.
  • 56:47 - 56:52
    Well, a big, warm round of applause for
    Thomas Roth.
  • 56:52 - 56:55
    Applause.
  • 56:55 - 56:59
    postroll music
  • 56:59 - 57:06
    Subtitles created by c3subtitles.de
    in the year 2021. Join, and help us!