
36C3 - Open Source is Insufficient to Solve Trust Problems in Hardware

  • 0:00 - 0:18
    36C3 Intro
    ♪ (intro music) ♪
  • 0:18 - 0:23
    Herald: Welcome, everybody, to our very
    first talk on the first day of Congress.
  • 0:23 - 0:27
    The talk is "Open Source is Insufficient
    to Solve Trust Problems in Hardware,"
  • 0:27 - 0:32
    and although there is a lot to be said
    for free and open software, it is
  • 0:32 - 0:38
    unfortunately not always inherently more
    secure than proprietary or closed software,
  • 0:38 - 0:42
    and the same goes for hardware as well.
    And this talk will take us into
  • 0:42 - 0:47
    the nitty gritty bits of how to build
    trustable hardware and how it has to be
  • 0:47 - 0:51
    implemented and brought together
    with the software in order to be secure.
  • 0:51 - 0:55
    We have one speaker here today.
    It's bunnie.
  • 0:55 - 0:58
    He's a hardware and firmware hacker.
    But actually,
  • 0:58 - 1:01
    the talk was worked on by three people,
    so it's not just bunnie, but also
  • 1:01 - 1:05
    Sean "Xobs" Cross and Tom Marble.
    But the other two are not present today.
  • 1:05 - 1:07
    But I would like you to welcome
    our speaker, bunnie,
  • 1:07 - 1:11
    with a big, warm, round of applause,
    and have a lot of fun.
  • 1:11 - 1:17
    Applause
  • 1:17 - 1:20
    bunnie: Good morning, everybody.
    Thanks for braving the crowds
  • 1:20 - 1:24
    and making it in to the Congress.
    And thank you again to the Congress
  • 1:24 - 1:31
    for giving me the privilege
    to address the Congress again this year.
  • 1:31 - 1:34
    Very exciting being the first talk
    of the day. Had font problems,
  • 1:34 - 1:39
    so I'm running from a .pdf backup.
    So we'll see how this all goes.
  • 1:39 - 1:43
    Good thing I make backups. So the
    topic of today's talk is
  • 1:43 - 1:47
    "Open Source is Insufficient
    to Solve Trust Problems in Hardware,"
  • 1:47 - 1:49
    and sort of some things
    we can do about this.
  • 1:49 - 1:53
    So my background is, I'm
    a big proponent of open source hardware. I
  • 1:53 - 1:58
    love it. And I've built a lot of things in
    open source, using open source hardware
  • 1:58 - 2:01
    principles. But there's been sort of a
    nagging question in me about like, you
  • 2:01 - 2:04
    know, some people would say things like,
    oh, well, you know, you build open source
  • 2:04 - 2:07
    hardware because you can trust it more.
    And there's been sort of this gap in my
  • 2:07 - 2:12
    head and this talk tries to distill out
    that gap in my head between trust and open
  • 2:12 - 2:19
    source and hardware. So I'm sure people
    have opinions on which browsers you would
  • 2:19 - 2:23
    think are more secure or trustable than the
    others. But the question is why might you
  • 2:23 - 2:26
    think one is more trustable than the others.
    You have everything on here, from like
  • 2:26 - 2:31
    Firefox and Iceweasel down to like the
    Samsung custom browser or, you know, the
  • 2:31 - 2:35
    Xiaomi custom browser. Which one would
    you rather use for your browsing if you
  • 2:35 - 2:41
    had to trust something? So I'm sure people
    have their biases and they might say that
  • 2:41 - 2:45
    open is more trustable. But why do we say
    open is more trustable? Is it because we
  • 2:45 - 2:49
    actually read the source thoroughly and
    check it every single release for this
  • 2:49 - 2:54
    browser? Is it because we compile our
    source, our browsers from source before we
  • 2:54 - 2:57
    use them? No, actually we don't have the
    time to do that. So let's take a closer
  • 2:57 - 3:02
    look as to why we like to think that open
    source software is more secure. So this is
  • 3:02 - 3:08
    a kind of a diagram of the lifecycle of,
    say, a software project. You have a bunch
  • 3:08 - 3:13
    of developers on the left. They'll commit
    code into some source management program
  • 3:13 - 3:18
    like git. It goes to a build. And then
    ideally, some person who carefully manages
  • 3:13 - 3:18
    the key signs that build. It goes into an
    untrusted cloud, then gets downloaded onto
  • 3:18 - 3:22
    users' disks, pulled into RAM, and run by the
    user at the end of the day. Right? So the
  • 3:26 - 3:32
    reason why actually we find that we might
    be able to trust things more is because in
  • 3:32 - 3:36
    the case of open source, anyone can pull
    down that source code, like someone doing
  • 3:36 - 3:40
    reproducible builds or an audit of some type,
    build it, confirm that the hashes match
  • 3:40 - 3:44
    and that the keys are all set up
    correctly. And then the users also have
  • 3:44 - 3:49
    the ability to know developers and sort of
    enforce community norms and standards upon
  • 3:49 - 3:53
    them to make sure that they're acting in
    sort of in the favor of the community. So
  • 3:53 - 3:56
    in the case that we have bad actors who
    want to go ahead and tamper with builds
  • 3:56 - 4:00
    and clouds and all the things in the
    middle, it's much more difficult. So open
  • 4:00 - 4:06
    is more trustable because we have tools to
    transfer trust in software, things like
  • 4:06 - 4:10
    hashing, things like public keys, things
    like Merkle trees. Right? And also in the
  • 4:10 - 4:14
    case of open versus closed, we have social
    networks that we can use to reinforce our
  • 4:14 - 4:20
    community standards for trust and
    security. Now, it's worth looking a little
  • 4:20 - 4:25
    bit more into the hashing mechanism
    because this is a very important part of
  • 4:25 - 4:29
    our software trust chain. So I'm sure a
    lot of people know what hashing is, for
  • 4:29 - 4:34
    people who don't know. Basically it takes
    a big pile of bits and turns them into a
  • 4:34 - 4:38
    short sequence of symbols so that a tiny
    change in the big pile of bits makes a big
  • 4:38 - 4:42
    change in the output symbols. And also
    knowing those symbols doesn't reveal
  • 4:42 - 4:48
    anything about the original file. So in
    this case here, the file on the left is
  • 4:48 - 4:55
    hashed to sort of cat, mouse, panda, bear
    and the file on the right hashes to, you
  • 4:55 - 5:01
    know, peach, snake, pizza, cookie. And the
    thing is, you may not even have noticed,
  • 5:01 - 5:05
    necessarily, that there was that one bit
    changed up there, but it's very easy to
  • 5:05 - 5:07
    see that the short string of symbols has
    changed. So you don't actually have to go
  • 5:07 - 5:11
    through that whole file and look for that
    needle in the haystack. You have this hash
  • 5:11 - 5:16
    function that tells you something has
    changed very quickly. Then once you've
  • 5:16 - 5:20
    computed the hashes, we have a process
    called signing, where a secret key is used
  • 5:20 - 5:24
    to encrypt the hash, users decrypt that
    using the public key to compare against a
  • 5:24 - 5:27
    locally computed hash. You know, so we're
    not trusting the server to compute the
  • 5:27 - 5:32
    hash. We reproduce it on our side and then
    we can say that it is now difficult to
  • 5:32 - 5:36
    modify that file or the signature without
    detection. Now the problem is that there
  • 5:36 - 5:41
    is a time of check, time of use issue with
    the system, even though we have this
  • 5:41 - 5:45
    mechanism, if we decouple the point of
    check from the point of use, it creates a
  • 5:45 - 5:50
    man in the middle opportunity or a person
    in the middle, if you want. The thing is that,
  • 5:50 - 5:56
    you know, it's a class of attacks that
    allows someone to tamper with data as it
  • 5:56 - 6:00
    is in transit. And I'm kind of symbolizing
    this evil guy, I guess, because hackers
  • 6:00 - 6:06
    all wear hoodies and, you know, they also
    keep us warm as well in very cold places.
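    A minimal sketch of the hashing behavior described a moment ago,
    assuming Python 3 and an illustrative input standing in for a big
    pile of bits:

        # Flipping a single bit of the input yields a completely
        # different digest, so comparing two short digests catches
        # the change without scanning the whole file by eye.
        import hashlib

        original = b"A stand-in for a large firmware image."
        tampered = bytes([original[0] ^ 0x01]) + original[1:]  # flip one bit

        print(hashlib.sha256(original).hexdigest())
        print(hashlib.sha256(tampered).hexdigest())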
  • 6:06 - 6:12
    So now an example of a time of check, time
    of use issue is that if, say, a user
  • 6:12 - 6:16
    downloads a copy of the program onto their
    disk and they just check it after the
  • 6:16 - 6:20
    download to the disc. And they say, okay,
    great, that's fine. Later on, an adversary
  • 6:20 - 6:24
    can then modify the file on the disk just
    before it's copied to RAM. And now
  • 6:24 - 6:27
    actually the user, even though they
    download the correct version of the file,
  • 6:27 - 6:32
    they're getting the wrong version into the
    RAM. So the key point is: the reason why in
  • 6:32 - 6:37
    software we feel it's more trustworthy is that we
    have a tool to transfer trust and ideally,
  • 6:37 - 6:42
    we place that point of check as close to
    the users as possible. So the idea is that we're
  • 6:42 - 6:46
    sort of putting keys into the CPU or some
    secure enclave that, you know, just before
  • 6:46 - 6:50
    you run it, you've checked it, that
    software is perfect and has not been
  • 6:50 - 6:55
    modified, right?
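    A minimal sketch of placing the point of check right next to the
    point of use, assuming Python 3, the third-party 'cryptography'
    package, and illustrative file and key names: the exact in-memory
    copy that gets verified is the copy that gets used, instead of
    re-reading the file from disk after an earlier check.

        from cryptography.hazmat.primitives.asymmetric import ed25519
        from cryptography.exceptions import InvalidSignature

        def load_verified(path, sig_path, public_key):
            with open(path, "rb") as f:
                blob = f.read()               # the copy we will actually use
            with open(sig_path, "rb") as f:
                sig = f.read()
            try:
                public_key.verify(sig, blob)  # raises if the signature doesn't match
            except InvalidSignature:
                raise SystemExit("verification failed, refusing to use this file")
            return blob                       # hand back the verified buffer itself

        # 'pubkey_bytes' is an illustrative 32-byte Ed25519 public key
        # obtained out of band, e.g. pinned close to the CPU or enclave:
        # pub = ed25519.Ed25519PublicKey.from_public_bytes(pubkey_bytes)
        # firmware = load_verified("app.bin", "app.bin.sig", pub)

    Now, an important
    clarification is that it's actually more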
  • 6:55 - 6:59
    about the place of check versus the place
    of use. Whether you checked one second
  • 6:59 - 7:03
    prior or a minute prior doesn't actually
    matter. It's more about checking the copy
  • 7:03 - 7:08
    that's closest to the thing that's running
    it, right? We don't call it PoCPoU because
  • 7:08 - 7:13
    it just doesn't have quite the same ring
    to it. But now this is important. The
  • 7:13 - 7:16
    reason why I emphasize place of check
    versus place of use is: this is why
  • 7:16 - 7:21
    hardware is not the same as software in
    terms of trust. The place of check is not
  • 7:21 - 7:25
    the place of use or in other words, trust
    in hardware is a ToCToU problem all the
  • 7:25 - 7:30
    way down the supply chain. Right? So the
    hard problem is how do you trust your
  • 7:30 - 7:33
    computers? Right? So we have problems
    where we have firmware, pervasive hidden
  • 7:33 - 7:38
    bits of code that are inside every single
    part of your system that can break
  • 7:38 - 7:42
    abstractions. And there's also the issue
    of hardware implants. So it's tampering or
  • 7:42 - 7:45
    adding components that can bypass security
    in ways that are not according to the
  • 7:45 - 7:51
    specification that you're building
    around. So from the firmware standpoint,
  • 7:51 - 7:55
    it's mostly here to acknowledge that it is an issue.
    The problem is this is actually a software
  • 7:55 - 7:59
    problem. The good news is we have things
    like openness and runtime verification,
  • 7:59 - 8:02
    they're going to frame these questions. If
    you're, you know, a big enough player or
  • 8:02 - 8:05
    you have enough influence or something,
    you can coax out all the firmware blobs
  • 8:05 - 8:10
    and eventually sort of solve that problem.
    The bad news is that you're still relying
  • 8:10 - 8:15
    on the hardware to obediently run the
    verification. So if your hardware isn't
  • 8:15 - 8:17
    running the verification correctly, it
    doesn't matter that you have all the
  • 8:17 - 8:22
    source code for the firmware. Which brings
    us to the world of hardware implants. So
  • 8:22 - 8:25
    very briefly, it's worth thinking about,
    you know, how bad can this get? What are
  • 8:25 - 8:30
    we worried about? What is the field? If we
    really want to be worried about trust and
  • 8:30 - 8:34
    security, how bad can it be? So I've spent
    many years trying to deal with supply
  • 8:34 - 8:37
    chains. They're not friendly territory.
    There's a lot of reasons people want to
  • 8:37 - 8:44
    screw with the chips in the supply chain.
    For example, here this is a small ST
  • 8:44 - 8:48
    microcontroller, claims to be a secure
    microcontroller. Someone was like: "Ah,
  • 8:48 - 8:51
    this is not secure, you know, it's not
    behaving correctly." We digested off the top
  • 8:51 - 8:55
    of it. On the inside, it's an LCX244
    buffer. Right. So like, you know, this was
  • 8:55 - 8:59
    not done because someone wanted to tamper
    with the secure microcontroller. It's
  • 8:59 - 9:02
    because someone wants to make a quick
    buck. Right. But the point is that that
  • 9:02 - 9:06
    marking on the outside is convincing.
    Right. You could've been any chip on the
  • 9:06 - 9:11
    inside in that situation. Another problem
    that I've had personally as I was building
  • 9:11 - 9:16
    a robot controller board that had an FPGA
    on the inside. We manufactured a thousand
  • 9:16 - 9:21
    of these and about 3% of them weren't
    passing tests, set them aside. Later on, I
  • 9:21 - 9:23
    pulled these units that weren't passing
    tests and looked at them very carefully.
  • 9:23 - 9:28
    And I noticed that all of the units, the
    FPGA units that weren't passing test had
  • 9:28 - 9:35
    that white rectangle on them, which is
    shown in a big more zoomed in version. It
  • 9:35 - 9:38
    turned out that underneath that white
    rectangle where the letters ES for
  • 9:38 - 9:43
    engineering sample, so someone had gone in
    and Laser blasted off the letters which
  • 9:43 - 9:46
    say that's an engineering sample, which
    means they're not qualified for regular
  • 9:46 - 9:50
    production, blending them into the supply
    chain at a 3% rate and managed to
  • 9:50 - 9:53
    essentially double their profits at the
    end of the day. The reason why this works
  • 9:53 - 9:56
    is because distributors make a small
    amount of money. So even a few percent
  • 9:56 - 10:00
    actually makes them a lot more profit at
    the end of the day. But the key takeaway is
  • 10:00 - 10:04
    that just because 97% of your
    hardware is okay, it does not mean that
  • 10:04 - 10:10
    you're safe. Right? So it doesn't help to
    take one sample out of your entire set of
  • 10:10 - 10:13
    hardware and say all this is good. This is
    constructed correctly, right, therefore all
  • 10:13 - 10:18
    of them should be good. That's a ToCToU
    problem, right? 100% hardware verification
  • 10:18 - 10:23
    is mandatory, if you're worried about
    trust and verification. So let's go a bit
  • 10:23 - 10:27
    further down the rabbit hole. This is a
    diagram, sort of an ontology of supply
  • 10:27 - 10:32
    chain attacks. And I've kind of divided it
    into two axes. On the vertical axis is
  • 10:32 - 10:36
    how easy is it to detect or how hard.
    Right? So in the bottom you might need a
  • 10:36 - 10:40
    SEM, a scanning electron microscope to do
    it, in the middle is an x-ray, a little
  • 10:40 - 10:44
    specialized and at the top is just visual
    or JTAG like anyone can do it at home.
  • 10:44 - 10:48
    Right? And then from left to right is
    execution difficulty. Right? Things that are
  • 10:48 - 10:51
    going to take millions of dollars and months,
    things that are going to take 10 dollars and weeks, or a
  • 10:51 - 10:57
    dollar and seconds. Right? There's sort of
    several broad classes I've kind of
  • 10:57 - 11:00
    outlined here. Adding components is very
    easy. Substituting components is very
  • 11:00 - 11:04
    easy. We don't have enough time to really
    go into those. But instead, we're gonna
  • 11:04 - 11:08
    talk about kind of the two more scary
    ones, which are sort of adding a chip
  • 11:08 - 11:12
    inside a package and IC modifications. So
    let's talk about adding a chip in a
  • 11:12 - 11:16
    package. This one has sort of grabbed a
    bunch of headlines, so this sort of these
  • 11:16 - 11:21
    in the Snowden files, we've found these
    like NSA implants where they had put chips
  • 11:21 - 11:27
    literally inside of connectors and other
    chips to modify the computer's behavior.
  • 11:27 - 11:32
    Now, it turns out that actually adding a
    chip in a package is quite easy. It
  • 11:32 - 11:35
    happens every day. This is a routine
    thing, right? If you open up any SD
  • 11:35 - 11:39
    card, micro-SD card that you have, you're
    going to find that it has two chips on the
  • 11:39 - 11:42
    inside at the very least. One is a
    controller chip, one is memory chip. In
  • 11:42 - 11:48
    fact, they can stick 16, 17 chips inside
    of these packages today very handily.
  • 11:48 - 11:52
    Right? And so if you want to go ahead and
    find these chips, is the solution to go
  • 11:52 - 11:55
    ahead and X-ray all the things? You just
    take every single circuit board and throw it
  • 11:55 - 11:58
    inside an X-ray machine? Well, this is
    what a circuit board looks like, in the
  • 11:58 - 12:03
    x-ray machine. Some things are very
    obvious. So on the left, we have our
  • 12:03 - 12:06
    Ethernet magnetic jacks and there's a
    bunch of stuff on the inside. Turns out
  • 12:06 - 12:09
    those are all OK right there. Don't worry
    about those. And on the right, we have our
  • 12:09 - 12:14
    chips. And this one here, you may be sort
    of tempted to look and say, oh, I see this
  • 12:14 - 12:18
    big sort of square thing on the bottom
    there. That must be the chip. Actually,
  • 12:18 - 12:22
    turns out that's not the chip at all.
    That's the solder pad that holds the chip
  • 12:22 - 12:26
    in place. You can't actually see the chip
    as the solder is masking it inside the
  • 12:26 - 12:30
    x-ray. So when we're looking at a chip
    inside of an X-ray, I've kind of given
  • 12:30 - 12:35
    you a look right here: on the left is what
    it looks like sort of in 3-D. And the
  • 12:35 - 12:37
    right is what it looks like in an X-ray, sort of
    looking from the top down. You're looking
  • 12:37 - 12:41
    at ghostly outlines with very thin spidery
    wires coming out of it. So if you were to
  • 12:41 - 12:46
    look at a chip-on-chip in an x-ray, this
    is actually an image of a chip. So in the
  • 12:46 - 12:50
    cross-section, you can see the several
    pieces of silicon that are stacked on top
  • 12:50 - 12:53
    of each other. And if you could actually
    do an edge on x-ray of it, this is what
  • 12:53 - 12:57
    you would see. Unfortunately, you'd have
    to take the chip off the board to do the
  • 12:57 - 13:00
    edge on x-ray. So what you do is you have
    to look at it from the top down and we
  • 13:00 - 13:04
    look at it from the top down, all you see
    are basically some straight wires. Like,
  • 13:04 - 13:09
    it's not obvious from that top-down
    x-ray, whether you're looking at multiple
  • 13:09 - 13:12
    chips, eight chips, one chip, how many
    chips are on the inside? Because the
  • 13:12 - 13:16
    wire bonds are all stitched perfectly and
    overlap over the chip. So, you know, this
  • 13:16 - 13:20
    is what the chip-on-chip scenario might
    look like. You have a chip that's sitting
  • 13:20 - 13:24
    on top of a chip and wire bonds just sort
    of going a little bit further on from the
  • 13:24 - 13:28
    edge. And so in the X-ray, the only kind
    of difference you see is a slightly longer
  • 13:28 - 13:33
    wire bond in some cases. So it's actually,
    it's not not, you can find these, but it's
  • 13:33 - 13:38
    not like, you know, obvious that you've
    found an implant or not. So looking for
  • 13:38 - 13:43
    silicon is hard. Silicon is relatively
    transparent to X-rays. A lot of things
  • 13:43 - 13:48
    mask it. Copper traces, Solder masks the
    presence of silicon. This is like another
  • 13:48 - 13:54
    example of a, you know, a wire bonded chip
    under an X-ray. There's some mitigations.
  • 13:54 - 13:57
    If you have a lot of money, you can do
    computerized tomography that'll build a
  • 13:57 - 14:03
    3D image of the chip. You can do X-ray
    diffraction spectroscopy, but it's not a
  • 14:03 - 14:07
    foolproof method. And so basically the
    threat of wirebonded package is actually
  • 14:07 - 14:12
    very well understood commodity technology.
    It's actually quite cheap. This is a I was
  • 14:12 - 14:16
    actually doing some wire bonding in China
    the other day. This is the wirebonding
  • 14:16 - 14:20
    machine. I looked up the price, it's 7000
    dollars for a used one. And you
  • 14:20 - 14:23
    basically just walk into the guy with a
    picture where you want the bonds to go. He
  • 14:23 - 14:27
    sort of picks them out, programs the
    machines motion once and he just plays
  • 14:27 - 14:30
    back over and over again. So if you want
    to go ahead and modify a chip and add a
  • 14:30 - 14:35
    wirebond, it's not as crazy as it sounds.
    The mitigation is that this is a bit
  • 14:35 - 14:39
    detectable inside X-rays. So let's go down
    the rabbit hole a little further. So
  • 14:39 - 14:42
    there's nother concept of threat use
    called the Through-Silicon Via. So this
  • 14:42 - 14:47
    here is a cross-section of a chip. On the
    bottom is the base chip and the top is a
  • 14:47 - 14:51
    chip that's only 0.1 to 0.2 millimeters
    thick, almost the width of a human hair.
  • 14:51 - 14:55
    And they actually have drilled Vias
    through the chip. So you have circuits on
  • 14:55 - 15:00
    the top and circuits on the bottom. So
    this is kind of used to sort of, you know,
  • 15:00 - 15:04
    put an interposer in between different
    chips, also used to stack DRAM and HBM. So
  • 15:04 - 15:08
    this is a commodity process available
    today. It's not science fiction. And the
  • 15:08 - 15:11
    second concept I want to throw at you is a
    thing called a Wafer Level Chip Scale
  • 15:11 - 15:15
    Package, WLCSP. This is actually a very
    common method for packaging chips today.
  • 15:15 - 15:19
    Basically it's solder balls directly on
    top of chips. They're everywhere. If you
  • 15:19 - 15:24
    look inside of like an iPhone, basically
    almost all the chips are WLCSP package
  • 15:24 - 15:28
    types. Now, if I were to take that Wafer
    Level Chip Scale Package and cross-section it
  • 15:28 - 15:32
    and look at it, it looks like a circuit
    board with some solder-balls and the
  • 15:32 - 15:36
    silicon itself with some backside
    passivation. If you go ahead and combine
  • 15:36 - 15:41
    this with a Through-Silicon Via implant, a
    man in the middle attack using Through-
  • 15:41 - 15:44
    Silicon Vias, this is what it looks like
    at the end of the day, you basically have
  • 15:44 - 15:47
    a piece of silicon the size of the original
    silicon, sitting on the original pads, in
  • 15:47 - 15:50
    basically all the right places with the
    solder-balls masking the presence of that
  • 15:50 - 15:54
    chip. So it's actually basically a nearly
    undetectable implant if you want to
  • 15:54 - 15:58
    execute it, if you go ahead and look at
    the edge of the chip. They already have
  • 15:58 - 16:01
    seams on the sides. You can't even just
    look at the side and say, oh, I see a seam
  • 16:01 - 16:04
    on my chip. Therefore, it's a problem. The
    seam on the edge oftentimes is because of
    a different coating on the back, or
    a different coding as the back or
    passivations, these types of things. So if
  • 16:08 - 16:13
    you really wanted to sort of say, OK, how
    well can we hide an implant, this is probably
  • 16:13 - 16:16
    the way I would do it. It's logistically
    actually easier than a wire-bonded
  • 16:16 - 16:20
    implant because you don't have to get the
    chips in wire-bondable format, you
  • 16:20 - 16:23
    literally just buy them off the Internet.
    You can just clean off the solder-balls
  • 16:23 - 16:27
    with a hot air gun and then the hard part
    is building it so it can be a template for
  • 16:27 - 16:32
    doing the attack, which will take some
    hundreds of thousands of dollars to do and
  • 16:32 - 16:37
    probably a mid-end fab. But if you have
    almost no budget constraint and you have a
  • 16:37 - 16:40
    set of chips that are common and you want
    to build a template for, this could be a
  • 16:40 - 16:46
    pretty good way to hide an implant inside
    of a system. So that's sort of adding
  • 16:46 - 16:52
    chips inside packages. Let's talk a bit
    about chip modification itself. So how
  • 16:52 - 16:56
    hard is it to modify the chip itself?
    Let's say we've managed to eliminate the
  • 16:56 - 17:00
    possibility of someone adding a chip, but
    what about the chip itself? So this sort
  • 17:00 - 17:03
    of goes: a lot of people have said, hey,
    bunnie, why don't you spin an open source
  • 17:03 - 17:06
    silicon processor, this will make it
    trustable, right? This is not a problem.
  • 17:06 - 17:12
    Well, let's think about the attack surface
    of IC fabrication processes. So on the
  • 17:12 - 17:16
    left hand side here I've got kind of a
    flowchart of what IC fabrication looks
  • 17:16 - 17:23
    like. You start with a high level chip
    design, it's RTL, like Verilog or VHDL
  • 17:23 - 17:27
    these days, or Python. You go into some
    backend and then you have a decision to
  • 17:27 - 17:31
    make: Do you own your backend tooling or
    not? And so I will go into this a little
  • 17:31 - 17:34
    more. If you don't, you trust the fab to
    compile it and assemble it. If you do, you
  • 17:34 - 17:38
    assemble the chip with some blanks for
    what's called "hard IP", we'll get into
  • 17:38 - 17:42
    this. And then you trust the fab to
    assemble that, make masks and go to mass
  • 17:42 - 17:47
    production. So there's three areas that I
    think are kind of ripe for tampering now,
  • 17:47 - 17:50
    "Netlist tampering", "hard IP tampering"
    and "mask tampering". We'll go into each
  • 17:50 - 17:55
    of those. So "Netlist tampering", a lot of
    people think that, of course, if you wrote
  • 17:55 - 17:59
    the RTL, you're going to make the chip. It
    turns out that's actually kind of a
  • 17:59 - 18:03
    minority case. We hear about that. That's
    on the right hand side called customer
  • 18:03 - 18:07
    owned tooling. That's when the customer
    does a full flow, down to the mask set.
  • 18:07 - 18:12
    The problem is it costs several million
    dollars and a lot of extra headcount of
  • 18:12 - 18:15
    very talented people to produce these and
    you usually only do it for flagship
  • 18:15 - 18:20
    products like CPUs, and GPUs or high-end
    routers, these sorts of things. I would
  • 18:20 - 18:25
    say most chips tend to go more towards
    what's called an ASIC side, "Application
  • 18:25 - 18:29
    Specific Integrated Circuit". What happens
    is that the customer will do some RTL and
  • 18:29 - 18:33
    maybe a high level floorplan and then the
    silicon foundry or service will go ahead
  • 18:33 - 18:36
    and do the place/route, the IP
    integration, the pad ring. This is quite
  • 18:36 - 18:40
    popular for cheap support chips, like your
    baseboard management controller inside
  • 18:40 - 18:44
    your server probably went through this
    flow, disk controllers probably got this
  • 18:44 - 18:48
    flow, mid-to-low I/O controllers. So all
    those peripheral chips that we don't like
  • 18:48 - 18:52
    to think about, that we know that can
    handle our data probably go through a flow
  • 18:52 - 18:58
    like this. And, to give you an idea of how
    common it is, but how little you've heard
  • 18:58 - 19:01
    of it, there's a company called SOCIONEXT.
    They're a billion-dollar company,
  • 19:01 - 19:04
    actually, you've probably never heard of
    them, and they offer services. You
  • 19:04 - 19:07
    basically just throw a spec over the wall
    and they'll build a chip for you all the
  • 19:07 - 19:10
    way to the point where you've done logic
    synthesis and physical design, and then
  • 19:10 - 19:15
    they'll go ahead and do the manufacturing
    and test and sample shipment for it. So
  • 19:15 - 19:19
    then, OK, fine, now, obviously, if you
    care about trust, you don't do an ASIC
  • 19:19 - 19:24
    flow, you pony up the millions of dollars
    and you do a COT flow, right? Well, there
  • 19:24 - 19:29
    is a weakness in COT flows. And this is
    what's called the "Hard IP problem". So this
  • 19:29 - 19:33
    here on the right hand side is an amoeba
    plot of the standard cells alongside a
  • 19:33 - 19:39
    piece of SRAM, highlighted here. The
    image wasn't great for presentation, but
  • 19:39 - 19:45
    this region here is the SRAM-block. And
    all those little colorful blocks are
  • 19:45 - 19:50
    standard cells, representing your AND-
    gates and NAND-gates and that sort of
  • 19:50 - 19:55
    stuff. What happens is that the foundry
    will actually ask you, just leave an open
  • 19:55 - 20:00
    spot on your mask-design and they'll go
    ahead and merge in the RAM into that spot
  • 20:00 - 20:05
    just before production. The reason why
    they do this is because stuff like RAM is
  • 20:05 - 20:08
    a carefully guarded trade secret. If you
    can increase the RAM density of your
  • 20:08 - 20:13
    foundry process, you can get a lot more
    customers. There's a lot of knowhow in it,
  • 20:13 - 20:17
    and so foundries tend not to want to share
    the RAM. You can compile your own RAM,
  • 20:17 - 20:20
    there are open RAM projects, but their
    performance or their density is not as
  • 20:20 - 20:25
    good as the foundry specific ones. So in
    terms of Hard IP, what are the blocks that
  • 20:25 - 20:30
    tend to be Hard IP? Stuff like RF and
    analog, phase-locked-loops, ADCs, DACs,
  • 20:30 - 20:34
    bandgaps. RAM tends to be Hard IP, ROM
    tends to be Hard IP, the eFuse that stores
  • 20:34 - 20:38
    your keys is going to be given to you as
    an opaque block, the pad ring around your
  • 20:38 - 20:42
    chip, the thing that protects your chip
    from ESD, that's going to be an opaque
  • 20:42 - 20:46
    block. Basically all the points you need
    to backdoor your RTL are going to be
  • 20:46 - 20:52
    entrusted to the foundry in a modern
    process. So OK, let's say, fine, we're
  • 20:52 - 20:56
    going ahead and build all of our own IP
    blocks as well. We're gonna compile our
  • 20:56 - 21:00
    RAMs, do our own I/O, everything, right?
    So we're safe, right? Well, turns out that
  • 21:00 - 21:04
    masks can be tampered with post-
    processing. So if you're going to do
  • 21:04 - 21:08
    anything in a modern process, the mask
    designs change quite dramatically from
  • 21:08 - 21:11
    what you drew them to what actually ends
    up in the line: They get fractured into
  • 21:11 - 21:15
    multiple masks, they have resolution
    correction techniques applied to them and
  • 21:15 - 21:21
    then they always go through an editing
    phase. So masks are not born perfect. Masks
  • 21:21 - 21:24
    have defects on the inside. And so you can
    look up papers about how they go and they
  • 21:24 - 21:28
    inspect the mask, every single line on the
    inside; when they find an error, they'll
  • 21:28 - 21:32
    patch over it, they'll go ahead and add
    bits of metal and then take away bits of
  • 21:32 - 21:36
    glass to go ahead and make that mask
    perfect or, better in some way, if you
  • 21:36 - 21:40
    have access to the editing capability. So
    what can you do with mask-editing? Well,
  • 21:40 - 21:45
    there's been a lot of papers written on
    this. You can look up ones on, for
  • 21:45 - 21:49
    example, "Dopant tampering". This one
    actually has no morphological change. You
  • 21:49 - 21:52
    can't look at it under a microscope and
    detect Dopant tampering. You
  • 21:52 - 21:57
    either have to do some
    wet chemistry or some X-ray spectroscopy
  • 21:57 - 22:04
    to figure it out. This allows for circuit
    level change without a gross morphological
  • 22:04 - 22:08
    change of the circuit. And so this can
    allow for tampering with things like RNGs
  • 22:08 - 22:16
    or some logic paths. There are oftentimes
    spare cells inside of your ASIC, since
  • 22:16 - 22:18
    everyone makes mistakes, including chip
    designers and so you want a patch over
  • 22:18 - 22:22
    that. It can be done at the mask level, by
    signal bypassing, these types of things.
  • 22:22 - 22:29
    So some certain attacks can still happen
    at the mask level. So that's a very quick
  • 22:29 - 22:34
    sort of idea of how bad it can get when
    you talk about the time of check, time of
  • 22:34 - 22:40
    use trust problem inside the supply chain.
    The short summary of implants is that
  • 22:40 - 22:44
    there's a lot of places to hide them. Not
    all of them are expensive or hard. I
  • 22:44 - 22:48
    talked about some of the more expensive or
    hard ones. But remember, wire bonding is
  • 22:48 - 22:53
    actually a pretty easy process. It's not
    hard to do and it's hard to detect. And
  • 22:53 - 22:56
    there's really no actual essential
    correlation between detection difficulty
  • 22:56 - 23:02
    and difficulty of the attack, if you're
    very careful in planning the attack. So,
  • 23:02 - 23:06
    okay, implants are possible. It's just
    this. Let's agree on that maybe. So now
  • 23:06 - 23:09
    the solution is we should just have
    trustable factories. Let's go ahead and
  • 23:09 - 23:12
    bring the fabs to the EU. Let's have a fab
    in my backyard or whatever it is, these
  • 23:12 - 23:18
    types of things. Let's make sure all
    the workers are logged and registered,
  • 23:18 - 23:22
    that sort of thing. Let's talk about that.
    So if you think about hardware, there's
  • 23:22 - 23:26
    you, right? And then we can talk about
    evil maids. But let's not actually talk
  • 23:26 - 23:30
    about those, because that's actually kind
    of a minority case to worry about. But
  • 23:30 - 23:36
    let's think about how stuff gets to you.
    There's a distributor, who goes through a
  • 23:36 - 23:39
    courier, who gets to you. All right. So
    we've gone and done all this stuff for the
  • 23:39 - 23:44
    trustable factory. But it's actually
    documented that couriers have been
  • 23:44 - 23:50
    intercepted and implants loaded. You know,
    by, for example, the NSA on Cisco products.
  • 23:50 - 23:55
    Now, you don't even have to have access to
    couriers now. Thanks to the way modern
  • 23:55 - 24:01
    commerce works, other customers can go
    ahead and just buy a product, tamper with
  • 24:01 - 24:05
    it, seal it back in the box, send it back
    to your distributor. And then maybe you
  • 24:05 - 24:08
    get one, right? That can be good enough.
    Particularly if you know a corporation is
  • 24:08 - 24:11
    in a particular area and you're targeting them: you
    buy a bunch of hard drives in the area,
  • 24:11 - 24:13
    seal them up, send them back and
    eventually one of them ends up in the
  • 24:13 - 24:17
    right place and you've got your implant,
    right? So there's a great talk last year
  • 24:17 - 24:20
    at 35C3. I recommend you check it out.
    That talks a little bit more about the
  • 24:20 - 24:25
    scenario, sort of removing tamper stickers
    and you know, the possibility that some
  • 24:25 - 24:29
    crypto wallets were sent back into the
    supply chain and then tampered with. OK,
  • 24:29 - 24:32
    and then let's take that back. We
    have to now worry about the wonderful
  • 24:32 - 24:36
    people in customs. We have to worry about
    the wonderful people in the factory who
  • 24:36 - 24:40
    have access to your hardware. And so if
    you cut to the chase, it's a huge attack
  • 24:40 - 24:44
    surface in terms of the supply chain,
    right? From you to the courier to the
  • 24:44 - 24:49
    distributor, customs, box build, the box
    build factory itself. Oftentimes they'll use
  • 24:49 - 24:53
    gray market resources to help make
    themselves more profitable, right? You
  • 24:53 - 24:57
    have distributors who go to them. You
    don't even know who those guys are. PCB
  • 24:57 - 25:01
    assembly, components, boards, chip fab,
    packaging, the whole thing, right? Every
  • 25:01 - 25:04
    single point is a place where someone can
    go ahead and touch a piece of hardware
  • 25:04 - 25:09
    along the chain. So can open source save
    us in this scenario? Does open hardware
  • 25:09 - 25:12
    solve this problem? Right. Let's think
    about it. Let's go ahead and throw some
  • 25:12 - 25:16
    developers with git on the left hand side.
    How far does it get, right? Well, we can
  • 25:16 - 25:19
    have some continuous integration checks
    that make sure that you know the hardware
  • 25:19 - 25:23
    is correct. We can have some open PCB
    designs. We have some open PDK, but then
  • 25:23 - 25:27
    from that point, it goes into a rather
    opaque machine and then, OK, maybe we can
  • 25:27 - 25:31
    put some tests at the very edge before exiting
    the factory, to try and catch some
  • 25:31 - 25:36
    potential issues, right? But you can see
    all the area, other places, where a time
  • 25:36 - 25:41
    of check, time of use problem can happen.
    And this is why, you know, I'm saying that
  • 25:41 - 25:46
    open hardware on its own is not sufficient
    to solve this trust problem. Right? And
  • 25:46 - 25:50
    the big problem at the end of the day is
    that you can't hash hardware. Right? There
  • 25:50 - 25:54
    is no hash function for hardware. That's
    why I went through that earlier today.
  • 25:54 - 25:57
    There's no convenient, easy way to
    basically confirm the correctness of
  • 25:57 - 26:01
    your hardware before you use it. Some
    people say: well, bunnie, you said once, there
  • 26:01 - 26:05
    is always a bigger microscope, right? You
    know, I do some, security reverse
  • 26:05 - 26:08
    engineering stuff. This is true, right? So
    there's a wonderful technique called
  • 26:08 - 26:12
    ptychographic X-ray imaging, there is a
    great paper in Nature about it, where they
  • 26:12 - 26:17
    take like a modern i7 CPU and they get
    down to the gate level nondestructively
  • 26:17 - 26:21
    with it, right? It's great for reverse
    engineering or for design verification.
  • 26:21 - 26:24
    The problem number one is it literally
    needs a building-sized microscope. It was
  • 26:24 - 26:29
    done at the Swiss Light Source; that donut-
    shaped thing is the size of the light
  • 26:29 - 26:33
    source for doing that type of
    verification, right? So you're not going
  • 26:33 - 26:37
    to have one at your point of use, right?
    You're going to check it there and then
  • 26:37 - 26:41
    probably courier it to yourself again.
    Time of check is not time of use. Problem
  • 26:41 - 26:46
    number two, it's expensive to do so.
    Verifying one chip only verifies one chip
  • 26:46 - 26:50
    and as I said earlier, just because 99.9%
    of your hardware is OK, doesn't mean
  • 26:50 - 26:54
    you're safe. Sometimes all it takes is one
    server out of a thousand, to break some
  • 26:54 - 26:59
    fundamental assumptions that you have
    about your cloud. And random sampling just
  • 26:59 - 27:02
    isn't good enough, right? I mean, would
    you random sample signature checks on
  • 27:02 - 27:06
    software that you install? Download? No.
    You insist 100% check and everything. If
  • 27:06 - 27:08
    you want that same standard of
    reliability, you have to do that for
  • 27:08 - 27:13
    hardware. So then, is there any role for
    open source in trustable hardware?
  • 27:13 - 27:17
    Absolutely, yes. Some of you guys may be
    familiar with that little guy on the
  • 27:17 - 27:23
    right, the SPECTRE logo. So correctness is
    very, very hard. Peer review can help fix
  • 27:23 - 27:27
    correctness bugs. Micro architectural
    transparency can able the fixes in SPECTRE
  • 27:27 - 27:30
    like situations. So, you know, for
    example, you would love to be able to say
  • 27:30 - 27:34
    we're entering a critical region. Let's
    turn off all the micro architectural
  • 27:34 - 27:38
    optimizations, sacrifice performance and
    then run the code securely and then go
  • 27:38 - 27:41
    back into who-cares-what mode, and just
    get it done fast, right? That would be a
  • 27:41 - 27:45
    switch I would love to have. But without
    that sort of transparency or without the
  • 27:45 - 27:48
    ability to review it, we can't do that. Also,
    you know, community driven features and
  • 27:48 - 27:51
    community-owned designs are very empowering
    and make sure that we're sort of building
  • 27:51 - 27:57
    the right hardware for the job and that
    it's upholding our standards. So there is
  • 27:57 - 28:02
    a role. It's necessary, but it's not
    sufficient for trustable hardware. Now the
  • 28:02 - 28:06
    question is, OK, can we solve the point of
    use hardware verification problem? Is it
  • 28:06 - 28:10
    all gloom and doom from here on? Well, I
    didn't bring us here to tell you it's just
  • 28:10 - 28:15
    gloom and doom. I've thought about this
    and I've kind of boiled it into three
  • 28:15 - 28:19
    principles for building verifiable
    hardware. The three principles are: 1)
  • 28:19 - 28:23
    Complexity is the enemy of verification.
    2) We should verify entire systems, not
  • 28:23 - 28:26
    just components. 3) And we need to empower
    end-users to verify and seal their
  • 28:26 - 28:32
    hardware. We'll go into this in the
    remainder of the talk. The first one is
  • 28:32 - 28:37
    that complexity is complicated. Right?
    Without a hashing function, verification
  • 28:37 - 28:44
    rolls back to bit-by-bit or atom-by-atom
    verification. Modern phones just have so
  • 28:44 - 28:49
    many components. Even if I gave you the
    full source code for the SOC inside of a
  • 28:49 - 28:52
    phone down to the mask level, what are you
    going to do with it? How are you going to
  • 28:52 - 28:57
    know that this mask actually matches the
    chip and those two haven't been modified?
  • 28:57 - 29:01
    So more complexity is more difficult. The
    solution is: Let's go to simplicity,
  • 29:01 - 29:04
    right? Let's just build things from
    discrete transistors. Someone's done this.
  • 29:04 - 29:08
    The Monster 6502 is great. I love the
    project. Very easy to verify. Runs at 50
  • 29:08 - 29:13
    kHz. So you're not going to do a lot
    with that. Well, let's build processors at
  • 29:13 - 29:16
    a visually inspectable process node. Go to
    500 nanometers. You can see that with
  • 29:16 - 29:21
    light. Well, you know, 100 megahertz clock
    rate and a very high power consumption and
  • 29:21 - 29:25
    you know, a couple kilobytes RAM probably
    is not going to really do it either.
  • 29:25 - 29:30
    Right? So the point of use verification is
    a tradeoff between ease of verification
  • 29:30 - 29:34
    and features and usability. Right? So
    these two products up here largely do the
  • 29:34 - 29:39
    same thing. AirPods. Right? And
    headphones on your head. Right? AirPods
  • 29:39 - 29:44
    have something on the order of tens of
    millions of transistors for you to verify.
  • 29:44 - 29:48
    The headphone that goes on your head. Like
    I can actually go to Maxwell's equations
  • 29:48 - 29:51
    and actually tell you how the magnets work
    from very first principles. And there's
  • 29:51 - 29:54
    probably one transistor on the inside of
    the microphone to go ahead and amplify the
  • 29:54 - 30:00
    membrane. And that's it. Right? So this
    one, you do sacrifice some features and
  • 30:00 - 30:03
    usability, when you go to a headset. Like
    you can't say, hey, Siri, and they will
  • 30:03 - 30:08
    listen to you and know what you're doing,
    but it's very easy to verify and know
  • 30:08 - 30:13
    what's going on. So in order to start a
    dialog on user verification, we have to
  • 30:13 - 30:17
    sort of set a context. So I started a
    project called 'Betrusted' because the
  • 30:17 - 30:22
    right answer depends on the context. I
    want to establish what might be a minimum
  • 30:22 - 30:27
    viable, verifiable product. And it's sort
    of like meant to be user verifiable by
  • 30:27 - 30:30
    design. And we think of it as a
    hardware/software distro. So it's meant to
  • 30:30 - 30:34
    be modified and changed and customized
    based upon the right context at the end of
  • 30:34 - 30:40
    the day. This is a picture of what it looks
    like. I actually have a little prototype
  • 30:40 - 30:44
    here. Very, very, very early product here
    at the Congress. If you wanna look at it.
  • 30:44 - 30:49
    It's a mobile device that is meant for
    sort of communication, sort of text based
  • 30:49 - 30:53
    communication and maybe voice
    authentication. So authenticator tokens,
  • 30:53 - 30:56
    or like a crypto wallet if you want. And
    the people we're thinking about who might
  • 30:56 - 31:01
    be users are either high value targets
    politically or financially. So you don't
  • 31:01 - 31:04
    have to have a lot of money to be a high
    value target. You could also be in a very
  • 31:04 - 31:09
    politically risky situation for some people. And
    also, of course, looking at developers and
  • 31:09 - 31:12
    enthusiasts and ideally we're thinking
    about a global demographic, not just
  • 31:12 - 31:16
    English speaking users, which is sort of a
    thing when you think about the complexity
  • 31:16 - 31:19
    standpoint, this is where we really have
    to sort of champ at the bit and figure out
  • 31:19 - 31:24
    how to solve a lot of hard problems like
    getting Unicode and, you know, right to
  • 31:24 - 31:28
    left rendering and pictographic fonts to
    work inside a very small attack surface
  • 31:28 - 31:34
    device. So this leads me to the second
    point. In which we verify entire systems,
  • 31:34 - 31:38
    not just components. We all say, well, why
    not just build a chip? Why not? You know,
  • 31:38 - 31:42
    why are you thinking about a whole device?
    Right. The problem is that private keys
  • 31:42 - 31:46
    are not your private matters. Screens can
    be scraped and keyboards can be logged. So
  • 31:46 - 31:50
    there's some efforts now to build
    wonderful security enclaves like Keystone
  • 31:50 - 31:55
    and Open Titan, which will build, you
    know, wonderful secure chips. The problem
  • 31:55 - 31:58
    is, that even if you manage to keep your
    key secret, you still have to get that
  • 31:58 - 32:03
    information through an insecure CPU from
    the screen to the keyboard and so forth.
  • 32:03 - 32:06
    Right? And so, you know, people who have
    used these, you know, on screen touch
  • 32:06 - 32:09
    keyboards have probably seen something of
    a message like this saying that, by the
  • 32:09 - 32:12
    way, this keyboard can see everything
    you're typing, including your passwords.
  • 32:12 - 32:15
    Right? And people probably click and say,
    oh, yeah, sure, whatever. I trust that.
  • 32:15 - 32:19
    Right? OK, well, this answer, this little
    enclave on the side here isn't really
  • 32:19 - 32:22
    doing a lot of good. When you go ahead and
    you say, sure, I'll run this input
  • 32:22 - 32:29
    method, they can go ahead and modify all
    my data and intercept all my data. So in
  • 32:29 - 32:33
    terms of making a device verifiable, let's
    talk about how this works in practice.
  • 32:33 - 32:36
    How do I take these three principles and
    turn them into something? So this is you
  • 32:36 - 32:40
    know, this is the idea of taking these
    three requirements and turning them into a
  • 32:40 - 32:45
    set of five features, a physical keyboard,
    a black and white LCD, a FPGA-based RISC-V
  • 32:45 - 32:49
    SoC, user-sealable keys and so on. It's
    easy to verify and physically protect. So
  • 32:49 - 32:53
    let's talk about these features one by
    one. First one is a physical keyboard. Why
  • 32:53 - 32:56
    am I using a physical keyboard and not a
    virtual keyboard? People love the virtual
  • 32:56 - 33:00
    keyboard. The problem is that captouch
    screens, which are necessary to do a good
  • 33:00 - 33:05
    virtual keyboard, have a firmware blob.
    They have a microcontroller to do the
  • 33:05 - 33:08
    touch screens, actually. It's actually
    really hard to build these things the way we want.
  • 33:08 - 33:11
    If you can do a good job with it and build
    an awesome open source one, that'll be
  • 33:11 - 33:15
    great, but that's a project in and of
    itself. So in order to sort of get an easy
  • 33:15 - 33:18
    win here where we can, let's just go with
    the physical keyboard. So this is what the
  • 33:18 - 33:22
    device looks like with this cover off. We
    have a physical keyboard PCB with a
  • 33:22 - 33:25
    little overlay, you know, so we
    can do multilingual inserts and you can go
  • 33:25 - 33:29
    ahead and change that out. And it's like it's
    just a two layer daughter card. Right.
  • 33:29 - 33:33
    Just hold it up to a light: you know, OK,
    switches, wires. Right? Not a lot of
  • 33:33 - 33:36
    places to hide things. So I'll take that
    as an easy win for an input surface,
  • 33:36 - 33:40
    that's verifiable. Right? The output
    surface is a little more subtle. So we're
  • 33:40 - 33:44
    doing a black and white LCD. If you say,
    OK, why not use a color LCD? If you ever
  • 33:44 - 33:52
    take apart a liquid crystal display, look
    for a tiny little thin rectangle sort of
  • 33:52 - 33:57
    located near the display area. That's
    actually a silicon chip that's bonded to
  • 33:57 - 34:01
    the glass. That's what it looks like at
    the end of the day. That contains a frame
  • 34:01 - 34:05
    buffer and a command interface. It has
    millions of transistors on the inside and
  • 34:05 - 34:09
    you don't know what it does. So if you're
    ever assuming your adversary may be
  • 34:09 - 34:14
    tampering with your CPU, this is also a
    viable place you have to worry about. So I
  • 34:14 - 34:19
    found a screen. It's called a memory LCD
    by Sharp Electronics. It turns out they do
  • 34:19 - 34:23
    all the drive electronics on glass. So this
    is a picture of the driver electronics on
  • 34:23 - 34:27
    the screen through like a 50x microscope
    with a bright light behind it. Right? You
  • 34:27 - 34:34
    can actually see the transistors that are
    used to drive everything on the display;
  • 34:34 - 34:38
    it's a nondestructive method of
    verification. But actually more important
  • 34:38 - 34:42
    to the point is that there's so few places
    to hide things, you probably don't need to
  • 34:42 - 34:45
    check it, right? There's not - If you want
    to add an implant to this, you would need
  • 34:45 - 34:50
    to grow the glass area substantially or
    add a silicon chip, which is a thing that
  • 34:50 - 34:55
    you'll notice, right. So at the end of the
    day, the less places to hide things is
  • 34:55 - 34:59
    less need to check things. And so I can
    feel like this is a screen where I can
  • 34:59 - 35:03
    write data to, and it'll show what I want
    to show. The good news is that display has
  • 35:03 - 35:07
    a 200 ppi pixel density. So it's not -
    even though it's black and white - it's
  • 35:07 - 35:12
    kind of closer to e-paper (EPD) in terms of
    resolution. So now we come to the hard
  • 35:12 - 35:17
    part, right, the CPU. The silicon problem,
    right? Any chip built in the last two
  • 35:17 - 35:21
    decades is not going to be inspectable,
    fully inspectable with an optical microscope,
  • 35:21 - 35:24
    right? Thorough analysis requires removing
    layers and layers of metal and dielectric.
  • 35:24 - 35:29
    This is sort of a cross section of a
    modernish chip and you can see the sort of
  • 35:29 - 35:35
    the huge stack of things to look at on
    this. This process is destructive and you
  • 35:35 - 35:38
    can think of it as hashing, but it's a
    little bit too literal, right? We want
  • 35:38 - 35:41
    something where we can check the thing
    that we're going to use and then not
  • 35:41 - 35:47
    destroy it. So I've spent quite a bit of
    time thinking about options for
  • 35:47 - 35:50
    nondestructive silicon verification. The
    best I could come up with maybe was using
  • 35:50 - 35:54
    optical fault induction somehow combined
    with some chip design techniques to go
  • 35:54 - 35:58
    ahead and like scan a laser across and
    look at fault syndromes and figure out,
  • 35:58 - 36:02
    you know, does the thing... do the gates
    that we put down correspond to the thing
  • 36:02 - 36:07
    that I built. The problem is, I couldn't
    think of a strategy to do it that wouldn't
  • 36:07 - 36:10
    take years and tens of millions of dollars
    to develop, which puts it a little bit far
  • 36:10 - 36:14
    out there and probably in the realm of
    like sort of venture funded activities,
  • 36:14 - 36:18
    which is not really going to be very
    empowering of everyday people. So let's
  • 36:18 - 36:22
    say I want something a little more short
    term than that, than that sort of,
  • 36:22 - 36:27
    you know, platonic
    ideal of verifiability. So the compromise
  • 36:27 - 36:32
    I sort of arrived at is the FPGA. So field
    programmable gate arrays, that's what FPGA
  • 36:32 - 36:37
    stands for, are large arrays of logic and
    wires that are user configured to
  • 36:37 - 36:42
    implement hardware designs. So this here
    is an image inside an FPGA design tool. On
  • 36:42 - 36:47
    the top right is an example of one sort of
    logic sub cell. It's got a few flip flops
  • 36:47 - 36:52
    and lookup tables in it. It's embedded in
    this huge mass of wires that allow you to
  • 36:52 - 36:56
    wire it up at runtime to figure out what's
    going on. And one thing that this diagram
  • 36:56 - 37:00
    here shows is I'm able to sort of
    correlate the design. I can see "Okay. The
  • 37:00 - 37:04
    decode_to_execute_INSTRUCTION_reg bit 26
    corresponds to this net." So now we're
  • 37:04 - 37:09
    sort of like bring that Time Of Check a
    little bit closer to Time Of Use. And so
  • 37:09 - 37:13
    the idea is to narrow that ToCToU gap by
    compiling your own CPU. We can basically
  • 37:13 - 37:17
    give you the CPU from source. You can
    compile it yourself. You can confirm the
  • 37:17 - 37:21
    bit stream.
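    A minimal sketch of what "confirm the bit stream" could look like,
    assuming Python 3 and illustrative file names for your own build
    and for the image you are about to load:

        # Rebuild the SoC from source, then check that the bitstream
        # you are about to load matches the digest of your own build.
        import hashlib

        def sha256_file(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        my_build = sha256_file("build/gateware/soc.bit")   # compiled from source
        to_load  = sha256_file("download/soc.bit")         # candidate image
        assert my_build == to_load, "bitstream mismatch, do not load it"

    So now we're sort of enabling
    a bit more of that trust transfer like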
  • 37:21 - 37:25
    software, right. But there's a subtlety in
    that the toolchains are not necessarily
  • 37:25 - 37:30
    always open. There's some FOSS flows like
    symbiflow. They have a 100% open flow for
  • 37:30 - 37:35
    ICE40 and ECP5 and there's like 7-series
    where they have a coming-soon status, but
  • 37:35 - 37:42
    they currently require some closed vendor
    tools. So picking FPGA is a difficult
  • 37:42 - 37:45
    choice. There's a usability versus
    verification tradeoff here. The big
  • 37:45 - 37:49
    usability issue is battery life. If we're
    going for a mobile device, you want to use
  • 37:49 - 37:54
    it all day long, and you don't want it to be dead by
    noon. It turns out that the best sort of
  • 37:54 - 37:58
    chip in terms of battery life is a
    Spartan7. It gives you 4x, roughly 3 to
  • 37:58 - 38:05
    4x, in terms of battery life. But the tool
    flow is still semi-closed. But the, you
  • 38:05 - 38:09
    know, I am optimistic that symbiflow will
    get there and we can also fork and make an
  • 38:09 - 38:13
    ECP5 version if that's a problem at the
    end of day. So let's talk a little bit
  • 38:13 - 38:18
    more about sort of FPGA features. So one
    thing I like to say about FPGA is: they
  • 38:18 - 38:22
    offer a sort of ASLR, so address-space
    layout randomization, but for hardware.
  • 38:22 - 38:27
    Essentially, a design has a kind of
    pseudo-random mapping to the device. This
  • 38:27 - 38:31
    is a sort of a screenshot of two
    compilation runs of the same source code
  • 38:31 - 38:35
    with a very small modification to it,
    basically a version number stored in a
  • 38:35 - 38:42
    GPR. And then you can see that the
    locations of a lot of the
  • 38:42 - 38:46
    registers are basically shifted around.
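    (As an aside, here's a minimal sketch of how you could quantify that
    shuffling yourself, assuming your tool flow can dump a textual placement
    report per build; the two-column report format and the file names below
    are my assumptions, not an existing tool's output.)

    # Sketch: compare where the same nets ended up in two builds of one design.
    # Assumes each report is a text file with lines like "net_name SLICE_X12Y34".
    import sys

    def load_placement(path):
        placement = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 2:
                    net, site = parts
                    placement[net] = site
        return placement

    a = load_placement(sys.argv[1])
    b = load_placement(sys.argv[2])
    common = set(a) & set(b)
    moved = [net for net in common if a[net] != b[net]]
    print(f"{len(moved)} of {len(common)} common nets changed location between builds")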
    The reason why this is important is
  • 38:46 - 38:50
    because this hinders a significant class
    of silicon attacks. All those small mask-
  • 38:50 - 38:54
    level changes I talked about, the ones
    where we just say "okay, we're just gonna go ahead
  • 38:54 - 38:58
    and change a few wires or move a couple of
    logic cells around", those become much
  • 38:58 - 39:02
    less likely to capture a critical bit. So
    if you want to go ahead and backdoor a
  • 39:02 - 39:06
    full FPGA, you're going to have to change
    the die size. You have to make it
  • 39:06 - 39:10
    substantially larger to be able to sort of
    like swap out the function in those cases.
  • 39:10 - 39:13
    And so now the verification bar goes from
    looking for a needle in a haystack to
  • 39:13 - 39:17
    measuring the size of the haystack, which
    is a bit easier to do towards the user
  • 39:17 - 39:22
    side of things. And it turns out, at least
    in Xilinx-land, just changing a
  • 39:22 - 39:29
    random parameter does the trick. So some
    potential attack vectors against FPGAs are
  • 39:29 - 39:34
    like "OK, well, it's closed silicon." What
    are the backdoors there? Notably inside a
  • 39:34 - 39:39
    7-series FPGA they actually document
    introspection features. You can pull out
  • 39:39 - 39:43
    anything inside the chip by instantiating
    a certain special block. And then we still
  • 39:43 - 39:46
    also have to worry about the whole class
    of, like, man-in-the-middle I/O and JTAG
  • 39:46 - 39:50
    implants that I talked about earlier. So
    it's easy, really easy, to mitigate the
  • 39:50 - 39:53
    known blocks, basically lock them down,
    tie them down, check them in the bit
  • 39:53 - 39:58
    stream, right? In terms of the I/O-man-in-
    the-middle stuff, this is where we're
  • 39:58 - 40:03
    talking about like someone goes ahead and
    puts a chip in the path of your FPGA.
  • 40:03 - 40:06
    There's a few tricks you can do. We can do
    sort of bus encryption on the RAM and the
  • 40:06 - 40:12
    ROM at the design level that frustrates
    these. At the implementation level, basically,
  • 40:12 - 40:15
    we can use the fact that data pins and
    address pins can be permuted without
  • 40:15 - 40:19
    affecting the device's function. So every
    design can go ahead and permute those data
  • 40:19 - 40:25
    and address pin mappings sort of uniquely.
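    (A rough sketch of that pin-permutation idea, not the actual Betrusted
    implementation: derive the shuffle deterministically from a device-unique
    seed so the constraints file and the scrambling logic in the design always
    agree. The pin names and the seed value below are made up.)

    # Sketch: derive a per-device permutation of external RAM data/address pins.
    import hashlib, random

    def pin_permutation(device_seed: bytes, pins):
        # Seed a PRNG from a hash of a device-unique value, so each device gets
        # its own stable mapping of logical pins to physical pins.
        rng = random.Random(int.from_bytes(hashlib.sha256(device_seed).digest(), "big"))
        shuffled = list(pins)
        rng.shuffle(shuffled)
        return dict(zip(pins, shuffled))

    device_seed = bytes.fromhex("00112233445566778899aabbccddeeff")  # stand-in for a real unique ID
    data_pins = [f"D{i}" for i in range(16)]
    addr_pins = [f"A{i}" for i in range(13)]
    print(pin_permutation(device_seed + b"data", data_pins))
    print(pin_permutation(device_seed + b"addr", addr_pins))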
    So any particular implant that goes in
  • 40:25 - 40:28
    will have to be able to compensate for all
    those combinations, making the implant a
  • 40:28 - 40:32
    little more difficult to do. And of
    course, we can also fall back to sort of
  • 40:32 - 40:38
    careful inspection of the device. In terms
    of the closed source silicon, the thing
  • 40:38 - 40:42
    that we have to worry about is that,
    for example, now that
  • 40:42 - 40:47
    Xilinx knows that we're doing these
    trustable devices using their tool chain,
  • 40:47 - 40:50
    they push a patch that compiles
    backdoors into your bit stream.
  • 40:50 - 40:54
    So not even a silicon-level implant,
    but, you know, maybe
  • 40:54 - 40:58
    the tool chain itself has a backdoor that
  • 40:58 - 41:05
    recognizes that we're doing this. So the
    cool thing is that there's a cool project
  • 41:05 - 41:09
    called "Prjxray",
    project x-ray, it's part of the Symbiflow
  • 41:09 - 41:12
    effort, and they're actually documenting
    the full bit stream of the 7-Series
  • 41:12 - 41:16
    device. It turns out that we don't yet
    know what all the bit functions are, but
  • 41:16 - 41:19
    the bit mappings are deterministic. So if
    someone were to try and activate a
  • 41:19 - 41:23
    backdoor in the bit stream through
    compilation, we can see it in a diff. We'd
  • 41:23 - 41:26
    be like: Wow, we've never seen this bit
    flip before. What is this? We can look
  • 41:26 - 41:30
    into it and figure out if it's malicious
    or not, right? So there's actually sort of
  • 41:30 - 41:34
    a hope that essentially at the end of day
    we can build sort of a bit stream checker.
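    (Just to convey the flavor of such a checker, here's a toy sketch: hash the
    freshly built bit stream against a published reference, and flag any set
    bits that never appeared in a build you already trust. The file names are
    placeholders, and a real tool would diff at the frame/feature level the way
    prjxray does, not raw bytes.)

    # Sketch: a crude first cut at a "bit stream checker".
    import hashlib

    def sha256_file(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def set_bits(path):
        with open(path, "rb") as f:
            data = f.read()
        return {i * 8 + bit for i, byte in enumerate(data)
                for bit in range(8) if byte & (1 << bit)}

    print("sha256 of my build:", sha256_file("my_build.bit"))
    new_only = set_bits("my_build.bit") - set_bits("known_good.bit")
    print(f"{len(new_only)} bit positions set that the trusted reference build never used")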
  • 41:34 - 41:37
    We can build a thing that says: Here's a
    bit stream that came out, does it
  • 41:37 - 41:41
    correlate to the design source, do all the
    bits check out, do they make sense? And so
  • 41:41 - 41:44
    ideally we would come up with like a one
    click tool. And now we're at the point
  • 41:44 - 41:47
    where the point of check is very close to
    the point of use. The users are now
  • 41:47 - 41:51
    confirming that the CPUs are correctly
    constructed and correctly mapped to the
  • 41:51 - 41:56
    FPGA. So the summary of
    FPGA vs. custom silicon is sort of like this:
  • 41:56 - 42:02
    the pros of custom silicon are that you
    have great performance. We can do a true
  • 42:02 - 42:05
    single chip enclave with hundreds of
    megahertz speeds and tiny power
  • 42:05 - 42:10
    consumption. But the con of silicon is
    that it's really hard to verify. So, you
  • 42:10 - 42:14
    know, open source doesn't help that
    verification and Hard IP blocks are tough
  • 42:14 - 42:17
    problems we talked about earlier. So FPGAs
    on the other side, they offer some
  • 42:17 - 42:20
    immediate mitigation paths. We don't have
    to wait until we solve this verification
  • 42:20 - 42:25
    problem. We can inspect the bit streams,
    we can randomize the logic mapping and we
  • 42:25 - 42:30
    can do per device unique pin mapping. It's
    not perfect, but it's better than I think
  • 42:30 - 42:35
    any other solution I can offer right now.
    The con of it is that FPGAs are just
  • 42:35 - 42:38
    barely good enough to do this today. So
    you need a little bit of external RAM
  • 42:38 - 42:42
    which needs to be encrypted, you only get about 100
    megahertz performance, and about five
  • 42:42 - 42:48
    to 10x the power consumption of a custom
    silicon solution, which in a mobile device
  • 42:48 - 42:52
    is a lot. But, you know, actually
    the main thing that drives the
  • 42:52 - 42:56
    thickness in this is the battery, right?
    And most of that battery is for the FPGA.
  • 42:56 - 43:01
    If we didn't have to go with an FPGA it
    could be much, much thinner. So now let's
  • 43:01 - 43:05
    talk a little about the last two points,
    user-sealable keys, and verification and
  • 43:05 - 43:08
    protection. And this is that third point,
    "empowering end users to verify and seal
  • 43:08 - 43:13
    their hardware". So it's great that we can
    verify something but can it keep a secret?
  • 43:13 - 43:16
    No, transparency is good up to a point,
    but you want to be able to keep secrets so
  • 43:16 - 43:20
    that people won't come up and say: oh,
    there's your keys, right? So sealing a key
  • 43:20 - 43:24
    in the FPGA, ideally we want user
    generated keys that are hard to extract,
  • 43:24 - 43:28
    where we don't rely on a central keying
    authority, and where any attack to remove
  • 43:28 - 43:33
    those keys should be noticeable. So even a
    high-level attack, I mean, someone with
  • 43:33 - 43:37
    basically infinite funding, should take
    about a day to extract it and the effort
  • 43:37 - 43:40
    should be trivially evident. The solution
    to that is basically self provisioning and
  • 43:40 - 43:45
    sealing of the cryptographic keys in the
    bit stream and a bit of epoxy. So let's
  • 43:45 - 43:50
    talk a little bit about provisioning those
    keys. If we look at the 7-series FPGA
  • 43:50 - 43:56
    security, they offer bit streams encrypted
    with 256-bit AES and authenticated
  • 43:56 - 44:02
    with an HMAC-SHA-256. There's a paper which
    discloses a known weakness in it, so the attack takes
  • 44:02 - 44:06
    about a day, or 1.6 million chosen-
    ciphertext traces. The reason why it takes a day
  • 44:06 - 44:10
    is because that's how long it takes to
    load in that many chosen ciphertexts
  • 44:10 - 44:14
    through the interfaces. The good news is
    there are some easy mitigations to this. You
  • 44:14 - 44:17
    can just glue shut the JTAG-port or
    improve your power filtering and that
  • 44:17 - 44:22
    should significantly complicate the
    attack. But the point is that it will take
  • 44:22 - 44:24
    a fixed amount of time to do this and you
    have to have direct access to the
  • 44:24 - 44:29
    hardware. It's not the sort of thing that,
    you know, someone at customs or like an
  • 44:29 - 44:33
    "evil maid" could easily pull off. And
    just to put that in perspective, again,
  • 44:33 - 44:38
    even if we dramatically improved the DPA
    resistance of the hardware, if we knew a
  • 44:38 - 44:42
    region of the chip that we want to
    inspect, then with a SEM and a
  • 44:42 - 44:45
    skilled technician, we could probably pull
    it off in a matter of a day or a couple of
  • 44:45 - 44:49
    days. It takes only an hour to decap the
    silicon, you know, an hour for a few
  • 44:49 - 44:53
    layers, a few hours in the FIB to delayer
    a chip, and an afternoon in the SEM
  • 44:53 - 44:58
    and you can find out the keys, right? But
    the key point is that this is kind of the
  • 44:58 - 45:04
    level that we've agreed is OK for a lot of
    the silicon enclaves, and this is not
  • 45:04 - 45:07
    going to happen at a customs checkpoint or
    by an evil maid. So I think I'm okay with
  • 45:07 - 45:11
    that for now. We can do better. But I
    think it's a good starting point,
  • 45:11 - 45:15
    particularly for something that's so cheap
    and accessible. So then how do we get
  • 45:15 - 45:18
    those keys into the FPGA and how do we keep them
    from getting out? So those keys should be
  • 45:18 - 45:21
    user generated, never leave the device, not be
    accessible by the CPU after it's
  • 45:21 - 45:24
    provisioned, be unique per device. And it
    should be easy for the user to get it
  • 45:24 - 45:28
    right. It should be that you don't have to
    know all this stuff and type a bunch of
  • 45:28 - 45:35
    commands to do it, right. So if you look
    inside Betrusted there's two rectangles
  • 45:35 - 45:39
    there, one of them is the ROM that
    contains a bit stream, the other one is
  • 45:39 - 45:43
    the FPGA. So we're going to draw those in
    the schematic form. Inside the ROM, you
  • 45:43 - 45:48
    start the day with an unencrypted bit
    stream in ROM, which loads an FPGA. And
  • 45:48 - 45:51
    then you have this little crypto engine.
    There's no keys on the inside. There's no
  • 45:51 - 45:54
    anywhere. You can check everything. You
    can build your own bitstream, and do what
  • 45:54 - 45:59
    you want to do. The crypto engine then
    generates keys from a TRNG that's located
  • 45:59 - 46:03
    on chip. Probably with the help of some off-
    chip randomness as well, because I don't
  • 46:03 - 46:07
    necessarily trust everything inside the
    FPGA. Then that crypto engine can go ahead
  • 46:07 - 46:12
    and, as it encrypts the external bit
    stream, inject those keys back into the
  • 46:12 - 46:15
    bit stream because we know where that
    block-RAM is. We can go ahead and inject
  • 46:15 - 46:20
    those keys back into that specific RAM
    block as we encrypt it. So now we have a
  • 46:20 - 46:26
    sealed encrypted image on the ROM, which
    can then load the FPGA if it had the key.
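    (A heavily simplified sketch of that provisioning flow, just to pin down the
    data flow. This is not the real implementation: in Betrusted this all happens
    inside the FPGA's crypto engine, the real cipher is AES rather than the toy
    keystream below, and the key-injection offset and file names are placeholders.)

    # Sketch of the self-provisioning data flow described above.
    import hashlib, os

    def derive_key(on_chip_trng: bytes, off_chip_entropy: bytes) -> bytes:
        # Mix on-chip TRNG output with off-chip randomness so neither source
        # has to be trusted on its own.
        return hashlib.sha256(b"key-derivation" + on_chip_trng + off_chip_entropy).digest()

    def keystream(key: bytes, length: int) -> bytes:
        # Toy SHA-256-in-counter-mode keystream, standing in for the real AES.
        out = bytearray()
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:length])

    KEY_BRAM_OFFSET = 0x1000  # hypothetical location of the key block RAM in the image

    image = bytearray(open("unencrypted.bit", "rb").read())
    user_key = derive_key(os.urandom(32), os.urandom(32))        # sealed inside the design
    bitstream_key = derive_key(os.urandom(32), os.urandom(32))   # later burned into the FPGA's key fuses
    image[KEY_BRAM_OFFSET:KEY_BRAM_OFFSET + 32] = user_key       # inject the user key into its BRAM slot
    sealed = bytes(a ^ b for a, b in zip(image, keystream(bitstream_key, len(image))))
    open("sealed.bit", "wb").write(sealed)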
  • 46:26 - 46:29
    So after you've gone ahead and provisioned
    the ROM, hopefully at this point you don't
  • 46:29 - 46:36
    lose power, you go ahead and you burn the
    key into the FPGA's keying engine which
  • 46:36 - 46:41
    sets it to only boot from that encrypted
    bit stream: you blow the readback-
  • 46:41 - 46:45
    disable bit, and the AES-only-boot bit is
    blown. So now at this point in time,
  • 46:45 - 46:49
    basically there's no way to go ahead and
    put in a bit stream that says "tell me your
    keys", or whatever. You have to go and
    keys, whatever it is. You have to go and
    do one of these hard techniques to pull
  • 46:52 - 46:57
    out the key. You can maybe enable a hardware
    upgrade path if you want by having the
  • 46:57 - 47:01
    crypto engine retain a copy
    of the master key and re-encrypt it, but
  • 47:01 - 47:05
    that becomes a vulnerability because the
    user can be coerced to go ahead and load
  • 47:05 - 47:08
    a bit stream that then leaks out
    the keys. So if you're really paranoid at
  • 47:08 - 47:14
    some point in time, you seal this thing
    and it's done. You know, you have to go
  • 47:14 - 47:18
    ahead and do that full key extraction
    routine to go ahead and pull stuff out if
  • 47:18 - 47:22
    you forget your passwords. So that's the
    sort of user-sealable keys. I think we can
  • 47:22 - 47:28
    do that with FPGA. Finally, easy to verify
    and easy to protect. Just very quickly
  • 47:28 - 47:31
    talking about this. So if you want to make
    an inspectable tamper barrier, a lot of
  • 47:31 - 47:35
    people have talked about glitter seals.
    Those are pretty cool, right? The problem
  • 47:35 - 47:39
    is, I find that glitter seals are too hard
    to verify. Right. Like, I have tried
  • 47:39 - 47:43
    glitter-seals before and I stare at the
    thing and I'm like: Damn, I have no idea
  • 47:43 - 47:45
    if this is the seal I put down. And so
    then I say, ok, we'll take a picture or
  • 47:45 - 47:50
    write an app or something. Now I'm relying
    on this untrusted device to go ahead and
  • 47:50 - 47:56
    tell me if the seal is verified or not. So
    I have a suggestion for a DIY watermark
  • 47:56 - 48:00
    that relies not on an app to go and
    verify, but on our very, very well tuned
  • 48:00 - 48:03
    neural networks inside our head to go
    ahead and verify things. So the idea is
  • 48:03 - 48:08
    basically, there's this nice epoxy that I
    found. It comes in these bi-packs of 2-part
  • 48:08 - 48:12
    epoxy; you just put it on the edge of a table
    and you go like this and it goes ahead and
  • 48:12 - 48:17
    mixes the epoxy and you're ready to use.
    It's very easy for users to apply. And
  • 48:17 - 48:21
    then you just draw a watermark on a piece
    of tissue paper. It turns out humans are
  • 48:21 - 48:25
    really good at identifying our own
    handwriting, our own signatures, these
  • 48:25 - 48:28
    types of things. Someone can go ahead and
    try to forge it. There are people who are
  • 48:28 - 48:33
    skilled in doing this, but this is way
    easier than looking at a glitter-seal. You
  • 48:33 - 48:37
    go ahead and put that down on your device.
    You swab on the epoxy and at the end of
  • 48:37 - 48:41
    day, you end up with a sort of tissue
    paper plus a very easily recognizable
  • 48:41 - 48:45
    seal. If someone goes ahead and tries to
    take this off or tamper with it, I can
  • 48:45 - 48:48
    look at it easy and say, yes, this is a
    different thing than what I had yesterday,
  • 48:48 - 48:51
    I don't have to open an app, I don't have
    to look at glitter patterns, I don't have
  • 48:51 - 48:54
    to do these sorts of things. And I can go
    ahead and swab it onto all the I/O-ports that
  • 48:54 - 49:02
    need it. So it's a bit of a hack, but I
    think that it's a little closer towards
  • 49:02 - 49:10
    not having to rely on third party apps to
    verify a tamper evidence seal. So I've
  • 49:10 - 49:16
    talked about sort of this implementation
    and also talked about how it maps to these
  • 49:16 - 49:21
    three principles for building trustable
    hardware. So the idea is to try to build a
  • 49:21 - 49:26
    system that is not too complex so that we
    can verify most of the parts or all of them
  • 49:26 - 49:30
    at the end-user point, look at the
    keyboard, look at the display and we can
  • 49:30 - 49:36
    go ahead and compile the FPGA from source.
    We're focusing on verifying the entire
  • 49:36 - 49:40
    system, the keyboard and the display,
    we're not forgetting the user. The secret
  • 49:40 - 49:43
    starts with the user and ends with the
    user, not with the edge of the silicon.
  • 49:43 - 49:48
    And finally, we're empowering end-users to
    verify and seal their own hardware. You
  • 49:48 - 49:52
    don't have to go through a central keying
    authority to go ahead and make sure
  • 49:52 - 49:57
    secrets are inside your hardware. So
    at the end of the day, the idea behind
  • 49:57 - 50:01
    Betrusted is to close that hardware time
    of check/time of use gap by moving the
  • 50:01 - 50:08
    verification point closer to the point of
    use. So in this huge, complicated
  • 50:08 - 50:12
    landscape of problems that we can have,
    the idea is that we want to, as much as
  • 50:12 - 50:19
    possible, teach users to verify their own
    stuff. So by design, it's meant to be a
  • 50:19 - 50:23
    thing that hopefully anyone can be taught
    to sort of verify and use, and we can
  • 50:23 - 50:28
    provide tools that enable them to do that.
    But if that ends up being too high of a
  • 50:28 - 50:32
    bar, I would like it to be within like one
    or two nodes in your immediate social
  • 50:32 - 50:36
    network, so anyone in the world can find
    someone who can do this. And the reason
  • 50:36 - 50:41
    why I kind of set this bar is, I want to
    sort of define the maximum level of
  • 50:41 - 50:45
    technical competence required to do this,
    because it's really easy, particularly
  • 50:45 - 50:49
    when sitting in an audience of these
    really brilliant technical people to say,
  • 50:49 - 50:52
    yeah, of course everyone can just hash
    things and compile things and look at
  • 50:52 - 50:55
    things in microscopes and solder and then
    you get into life and reality and then be
  • 50:55 - 51:01
    like: oh, wait, I had completely forgotten
    what real people are like. So this tries
  • 51:01 - 51:07
    to get me grounded and make sure that I'm
    not sort of drinking my own Kool-Aid in
  • 51:07 - 51:12
    terms of how useful open hardware is as a
    mechanism to verify anything. Because I
  • 51:12 - 51:14
    hand a bunch of people schematics and say,
    check this and they'll be like: I have no
  • 51:14 - 51:22
    idea. So the current development status is
    that the hardware is kind of at an initial
  • 51:22 - 51:28
    EVT stage, with prototypes subject to significant
    change; part of the reason
  • 51:28 - 51:32
    we're here talking about this is to
    collect more ideas and feedback on this,
  • 51:32 - 51:36
    to make sure we're doing it right. The
    software is just starting. We're writing
  • 51:36 - 51:41
    our own OS called Xous, being done by Sean
    Cross, and we're exploring the UX and
  • 51:41 - 51:44
    applications being done by Tom Marble
    shown here. And I actually want to give a
  • 51:44 - 51:49
    big shout out to NLnet for funding us
    partially. We have a grant, a couple of
  • 51:49 - 51:53
    grants under the privacy and trust
    enhancing technologies. This is really
  • 51:53 - 51:57
    significant because now we can actually
    think about the hard problems, and not
  • 51:57 - 52:00
    have to be like, oh, when do we go
    crowdfunded, when do we go fundraising.
  • 52:00 - 52:04
    Like a lot of time, people are like: This
    looks like a product, can we sell this
  • 52:04 - 52:11
    now? It's not ready yet. And I want to be
    able to take the time to talk about it,
  • 52:11 - 52:16
    listen to people, incorporate changes and
    make sure we're doing the right thing. So
  • 52:16 - 52:19
    with that, I'd like to open up the floor
    for Q&A. Thanks to everyone, for coming to
  • 52:19 - 52:20
    my talk.
  • 52:20 - 52:29
    Applause
  • 52:29 - 52:32
    Herald: Thank you so much, bunnie, for the
    great talk. We have about five minutes
  • 52:32 - 52:36
    left for Q&A. For those who are leaving
    earlier, you're only supposed to use the
  • 52:36 - 52:40
    two doors on the left, not the one, not
    the tunnel you came in through, but only
  • 52:40 - 52:45
    the doors on the left back, the very left
    door and the door in the middle. Now, Q&A,
  • 52:45 - 52:49
    you can pile up at the microphones. Do we
    have a question from the Internet? No, not
  • 52:49 - 52:54
    yet. If someone wants to ask a question
    but is not present but in the stream, or
  • 52:54 - 52:58
    maybe a person in the room who wants to
    ask a question, you can use the hashtag
  • 52:58 - 53:02
    #Clarke on Twitter. Mastodon and IRC are
    being monitored. So let's start with
  • 53:02 - 53:04
    microphone number one.
    Your question, please.
  • 53:04 - 53:10
    Q: Hey, bunnie. So you mentioned that with
    the foundry process that the Hard IP-
  • 53:10 - 53:17
    blocks, the proprietary IP blocks, are a
    place where attacks could be made. Do you
  • 53:17 - 53:22
    have the same concern about the Hard IP
    blocks in the FPGA, either the embedded
  • 53:22 - 53:28
    block RAM or any of the other special
    features that you might be using?
  • 53:28 - 53:34
    bunnie: Yeah, I think that we do have to
    be concerned about implants that have
  • 53:34 - 53:41
    existed inside the FPGA prior to this
    project. And I think there is a risk, for
  • 53:41 - 53:45
    example, that there's a JTAG-path that we
    didn't know about. But I guess the
  • 53:45 - 53:49
    compensating side is that the military,
    U.S. military does use a lot of these in
  • 53:49 - 53:53
    their devices. So they have a self-
    interest in not having backdoors inside of
  • 53:53 - 54:01
    these things as well. So we'll see. I
    think that the answer is it's possible. I
  • 54:01 - 54:08
    think the upside is that because the FPGA
    is actually a very regular structure,
  • 54:08 - 54:11
    doing like sort of a SEM-level analysis,
    of the initial construction of it at
  • 54:11 - 54:15
    least, is not insane. We can identify
    these blocks and look at them and make
  • 54:15 - 54:19
    sure they have the right number of bits. That
    doesn't mean the one you have today is the
  • 54:19 - 54:23
    same one. But if they were to go ahead and
    modify that block to do sort of the
  • 54:23 - 54:27
    implant, my argument is that because of
    the randomness of the wiring and the
  • 54:27 - 54:30
    number of factors they have to consider,
    they would have to actually grow the
  • 54:30 - 54:35
    silicon area substantially. And that's a
    thing that is a proxy for detection of
  • 54:35 - 54:38
    these types of problems. So that would be
    my kind of half answer to that problem.
  • 54:38 - 54:41
    It's a good question, though. Thank you.
    Herald: Thanks for the question. The next
  • 54:41 - 54:46
    one from microphone number three, please.
    Yeah. Move close to the microphone.
  • 54:46 - 54:51
    Thanks.
    Q: Hello. My question is, in your proposed
  • 54:51 - 54:56
    solution, how do you get around the fact
    that the attacker, whether it's an implant
  • 54:56 - 55:02
    or something else, will just attack it
    before the user self-provisions, so
  • 55:02 - 55:05
    it'll compromise the self-provisioning
    process itself?
  • 55:05 - 55:13
    bunnie: Right. So the idea of the self
    provisioning process is that we send the
  • 55:13 - 55:19
    device to you, you can look at the circuit
    boards and devices and then you compile
  • 55:19 - 55:24
    your own FPGA bit stream, which includes the self-
  • 55:19 - 55:24
    provisioning code, from source, and you can
    confirm, or if you don't want to compile,
    you can confirm that the signatures match
  • 55:26 - 55:30
    with what's on the Internet. And so if
  • 55:26 - 55:30
    someone wanted to go ahead and compromise
    that process and stash away some keys
    in some other place, that modification
  • 55:34 - 55:40
    would either be evident in the bit stream
    or that would be evident as a modification
  • 55:40 - 55:44
    of the hash of the code that's running on
    it at that point in time. So someone would
  • 55:44 - 55:50
    have to then add a hardware implant, for
    example, to the ROM, but that doesn't help
  • 55:50 - 55:52
    because it's already encrypted by the time
    it hits the ROM. So it'd really have to be
  • 55:52 - 55:56
    an implant that's inside the FPGA and then
    Trammell's question just sort of talked
  • 55:56 - 56:02
    about that situation itself. So I think
    the attack surface is limited at least for
  • 56:02 - 56:06
    that.
    Q: So you talked about how the courier
  • 56:06 - 56:12
    might be a hacker, right? So in this case,
    you know, the courier would put a
  • 56:12 - 56:18
    hardware implant, not in the Hard IP, but
    just in the piece of hardware inside the
  • 56:18 - 56:22
    FPGA that provisions the bit stream.
    bunnie: Right. So the idea is that you
  • 56:22 - 56:27
    would get that FPGA and you would blow
    your own FPGA bitstream yourself. You
  • 56:27 - 56:30
    don't trust my factory to give you a bit
    stream. You get the device.
  • 56:30 - 56:34
    Q: How do you trust that the bitstream is
    being blown? You just get an indication, your
  • 56:34 - 56:37
    computer saying this
    bitstream is being blown.
  • 56:37 - 56:40
    bunnie: I see, I see, I see. So how do you
    trust that the ROM actually doesn't have a
  • 56:40 - 56:43
    backdoor in itself that's pulling in the
    secret bit stream that's not related to
  • 56:43 - 56:53
    him. I mean, possible, I guess. I think
    there are things you can do to defeat
  • 56:53 - 56:59
    that. So the way that we do the semi
    randomness in the compilation is that
  • 56:59 - 57:03
    there's a random 64-bit number we
    compile into the bit stream. So we're
  • 57:03 - 57:07
    compiling our own bitstream. You can read
    out that number and see if it matches. At
  • 57:07 - 57:13
    that point, if someone had pre-burned a
    bit stream onto it that is actually loaded
  • 57:13 - 57:16
    instead of your own bit stream, it's not
    going to be able to have that random
  • 57:16 - 57:21
    number, for example, on the inside. So I
    think there are ways to tell if, for
  • 57:21 - 57:25
    example, the ROM has been backdoored and
    it has two copies of the ROM, one the
    evil one and one yours, and then
    evil one and one of yours, and then
    they're going to use the evil one during
  • 57:27 - 57:31
    provisioning, right? I think that's a
    thing that can be mitigated.
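    (A tiny sketch of that nonce check, under the assumption that the design
    exposes the compiled-in value through some register you can read back; how
    it is read back, JTAG, serial console or otherwise, is device-specific, so
    that part is left as a placeholder.)

    # Sketch: confirm the running bit stream is the one you compiled, via a
    # random 64-bit nonce baked in at build time.
    import secrets

    # At build time: generate a nonce, record it, and also compile it into the design.
    nonce = secrets.randbits(64)
    with open("build_nonce.txt", "w") as f:
        f.write(f"{nonce:016x}\n")

    # At verification time: read the nonce back from the running device.
    def read_device_nonce() -> int:
        raise NotImplementedError("read the nonce register over your debug or console link")

    expected = int(open("build_nonce.txt").read(), 16)
    if read_device_nonce() == expected:
        print("the device is running the bit stream we compiled")
    else:
        print("MISMATCH: a different bit stream answered")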
  • 57:31 - 57:34
    Herald: All right. Thank you very much. We
    take the very last question from
  • 57:34 - 57:39
    microphone number five.
    Q: Hi, bunnie. So one of the options you
  • 57:39 - 57:45
    sort of touched on in the talk but then
    didn't pursue was this idea of doing some
  • 57:45 - 57:50
    custom silicon in a sort of very low-res
    process that could be optically inspected
  • 57:50 - 57:52
    directly.
    bunnie: Yes.
  • 57:52 - 57:56
    Q: Is that completely out of the question
    in terms of being a usable route in the
  • 57:56 - 58:00
    future or, you know, did you look into
    that and go into detail at all?
  • 58:00 - 58:05
    bunnie: So I thought about that, and
    there's a couple of issues: 1) if
  • 58:05 - 58:10
    we rely on optical verification, now users
    need optical verification equipment to do it.
  • 58:10 - 58:14
    So we have to somehow move those optical
    verification tools to the edge towards
  • 58:14 - 58:18
    that time of use. Right. So the nice thing
    about the FPGA is everything I talked
  • 58:18 - 58:21
    about building your own bit stream,
    inspecting the bit stream, checking the
  • 58:21 - 58:27
    hashes. Those are things that don't
    require any particular sort of equipment.
  • 58:27 - 58:32
    But yes, if we were to go ahead and
    build like an enclave out of 500-
  • 58:32 - 58:36
    nanometer silicon, it would probably run
    around 100 megahertz, you'd have a few
  • 58:36 - 58:41
    kilobytes of RAM on the inside. Not a lot.
    Right. So you have a limitation in how
  • 58:41 - 58:47
    much capability you have on it, and it would
    consume a lot of power. But then every
  • 58:47 - 58:53
    single one of those chips, right, we put
    them in a black piece of epoxy. So, you
  • 58:53 - 58:55
    know, what keeps someone from
    swapping that out with another chip?
  • 58:55 - 58:58
    Q: Yeah. I mean, I was thinking of
    like old school, transparent top, like on
  • 58:58 - 59:00
    a lark.
    bunnie: So, yeah, you can go ahead and
  • 59:00 - 59:04
    wire bond on the board, put some clear
    epoxy on and then now people have to take
  • 59:04 - 59:11
    a microscope to look at that. That's a
    possibility. But the sort
  • 59:11 - 59:15
    of thing that I am trying to
    imagine is, like, for example, my mom using
  • 59:15 - 59:20
    this, and asking her to do this sort of stuff.
    I just don't envision her knowing anyone
  • 59:20 - 59:23
    who would have an optical microscope who
    could do this for her, except for me. Right.
  • 59:23 - 59:29
    And I don't think that's a fair assessment
    of what is verifiable by the end user at
  • 59:29 - 59:34
    the end of the day. So maybe for some
    scenarios it's OK. But I think that the
  • 59:34 - 59:38
    full optical verification of a chip and
    making that sort of the only thing between
  • 59:38 - 59:43
    you and implant, worries me. That's the
    problem with the hard chip is that
  • 59:43 - 59:47
    basically if someone even if it's full,
    you know, it's just to get a clear thing
  • 59:47 - 59:52
    and someone just swapped out the chip with
    another chip. Right. You still need to
  • 59:52 - 59:56
    know, a piece of equipment to check that.
    Right. Whereas like when I talked about
  • 59:56 - 59:59
    the display and the fact that you can look
    at that, actually the argument for that is
  • 59:59 - 60:02
    not that you have to check the display.
    It's that you don't have to, because
  • 60:02 - 60:05
    it's so simple. You don't need to check
    the display. Right. You don't need the
  • 60:05 - 60:08
    microscope to check it, because there is
    no place to hide anything.
  • 60:08 - 60:11
    Herald: All right, folks, we ran out of
    time. Thank you very much to everyone who
  • 60:11 - 60:14
    asked a question. And please give another
    big round of applause to our great
  • 60:14 - 60:17
    speaker, bunnie. Thank you so much for the
    great talk. Thanks.
  • 60:17 - 60:18
    Applause
  • 60:18 - 60:21
    bunnie: Thanks everyone!
  • 60:21 - 60:24
    Outro
  • 60:24 - 60:46
    Subtitles created by c3subtitles.de
    in the year 2020. Join, and help us!