
PUFs, protection, privacy, PRNGs (33c3)

  • 0:00 - 0:16
    33C3 preroll music
  • 0:16 - 0:23
    Herald: Ok, please welcome Pol van Aubel,
    PhD student at Radboud University in
  • 0:23 - 0:30
    Nijmegen, and he's going to give a talk
    on physically uncloneable functions.
  • 0:30 - 0:32
    A warm round of applause, please.
  • 0:32 - 0:39
    applause
    Thank you.
  • 0:39 - 0:42
    Pol: Thank you, thank you for having me,
    thank you for having me on prime time,
  • 0:42 - 0:45
    when everybody is finally awake,
    but not yet drunk.
  • 0:45 - 0:46
    mild laughter
  • 0:46 - 0:53
    And, thank you for letting me compete with
    the space track. So, well, my Herald just
  • 0:53 - 1:00
    explained who I am, but the work in this
    talk is actually mostly not from me. It's
  • 1:00 - 1:06
    by many many authors, and there will be
    citations on almost every slide. Don't pay
  • 1:06 - 1:11
    attention to those. It was simply too hard
    for me to make two different sets of
  • 1:11 - 1:17
    slides. Download the slides afterwards, if
    something interests you. The entire intent
  • 1:17 - 1:21
    of this talk is to get you interested in
    this material, get you reading the papers,
  • 1:21 - 1:25
    and get you implementing this stuff
    yourself. So, without further ado, and
  • 1:25 - 1:31
    without any further egocentric blathering,
    let's look at the problem we're trying to
  • 1:31 - 1:40
    solve. In computer security, since the
    1980s, we've noticed that we might want
  • 1:40 - 1:46
    unique identification and authentication
    of devices. And then, specifically, integrated
  • 1:46 - 1:52
    circuits. So, we want to distinguish
    chips, uniquely, from the same
  • 1:52 - 2:00
    manufacturing masks, even, and with high
    accuracy, unforgeably. Eh, simple task,
  • 2:00 - 2:07
    right? So, in order to explain how we get
    to physically uncloneable functions, I'm
  • 2:07 - 2:11
    first going to explain some history in
    anti-counterfeiting. And
  • 2:11 - 2:16
    anti-counterfeiting, you can think of
    money, you can think of magstripe cards,
  • 2:16 - 2:22
    you can think of identity documents, and
    nuke counters, or as they are commonly
  • 2:22 - 2:29
    called in literature, "Treaty Limited
    Item" identifiers. So let's start with
  • 2:29 - 2:36
    money. Historically, money has been
    protected with highly intricate imagery.
  • 2:36 - 2:42
    This is an example from right after the US
    Revolution, and I personally really like
  • 2:42 - 2:47
    the, let's see, the "To Counterfeit is
    Death". Because, you know, while it was a
  • 2:47 - 2:53
    crime against the state, you were drawn
    and quartered when you did it. Then we
  • 2:53 - 2:56
    fast forward a few centuries, and I would
    like to know from the audience, who has
  • 2:56 - 3:02
    ever seen this? ... Quite a lot. Can
    anybody tell me what it is?
  • 3:02 - 3:05
    audience comment (inaudible)
  • 3:05 - 3:13
    The EURion constellation. It's
    intended to prevent photocopiers from
  • 3:13 - 3:17
    copying your money. So basically, when the
    photocopier detects this thing, it'll just
  • 3:17 - 3:22
    not... it'll just say, "I don't want to
    copy." You can actually use this on your
  • 3:22 - 3:29
    own stuff if you want. But, we see a
    common theme across those few
  • 3:29 - 3:35
    centuries, namely, you mark all valid
    bills the same, and then you make sure
  • 3:35 - 3:42
    that you can check the marks in order to
    identify that it is legitimate. An
  • 3:42 - 3:48
    alternative to this would be to have
    different marks for each bill, and sign
  • 3:48 - 3:55
    that marking. But, you get into a whole
    bunch of questions, like, how do I then
  • 3:55 - 4:01
    prevent somebody from copying that
    bill-specific valid mark a hundred
  • 4:01 - 4:05
    thousand times, and just, you know,
    copying the signature as well. It's not as
  • 4:05 - 4:17
    though anybody is checking paper money
    online. So, back in 1983, Bader proposed
  • 4:17 - 4:24
    an anti-counterfeiting measure, which
    basically meant you sprinkle random-length
  • 4:24 - 4:31
    cuts of optical fibers into your paper,
    before it becomes paper, the... mull. (?)
  • 4:31 - 4:39
    And then, you make the money, and you use
    basically a light bar scanner, so whatever
  • 4:39 - 4:43
    a photocopier does as well. And then there
    will be a dot pattern that appears around
  • 4:43 - 4:48
    the light bar. And you extract that dot
    pattern, you make that into a series of
  • 4:48 - 4:54
    bits, and you sign that dot pattern. And then
    you print the signature onto the bill. Now
  • 4:54 - 4:57
    there's several problems with this, which
    are all explained in those papers, I don't
  • 4:57 - 5:05
    have time to go into that, but in
    principle, this works. Then, next, cards.
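
As an aside before the cards: a minimal sketch of the digitize-and-sign idea behind the fiber-marking scheme just described. This is not code from the talk or from Bader's paper; it assumes the pyca/cryptography package, and it glosses over the fact that a bill never scans to exactly the same bits twice, so a real scheme needs fuzzy matching rather than an exact hash.

```python
import os
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()   # held by the issuer (central bank)
issuer_pub = issuer_key.public_key()        # distributed to verifiers

def enroll(dot_pattern_bits: bytes) -> bytes:
    """Hash the scanned dot pattern and sign it; the signature is printed on the bill."""
    digest = hashlib.sha256(dot_pattern_bits).digest()
    return issuer_key.sign(digest)

def verify(dot_pattern_bits: bytes, printed_signature: bytes) -> bool:
    """Re-scan the bill, re-hash the pattern, and check the printed signature."""
    digest = hashlib.sha256(dot_pattern_bits).digest()
    try:
        issuer_pub.verify(printed_signature, digest)
        return True
    except InvalidSignature:
        return False

pattern = os.urandom(64)                 # stand-in for the scanned fiber dot pattern
sig = enroll(pattern)
print(verify(pattern, sig))              # True: genuine bill
print(verify(os.urandom(64), sig))       # False: different pattern, signature does not match
```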
  • 5:05 - 5:09
    You know magnetic stripes and PIN, the way
    we used to use them in Europe, I think you
  • 5:09 - 5:16
    still use them in the US, I'm not sure...
    But because nobody knows how to copy
  • 5:16 - 5:25
    magstripes, right? So, you add stuff to
    the card, so that it becomes detectable
  • 5:25 - 5:30
    when somebody has copied the card onto a
    forgery. So, you use holograms. So far as
  • 5:30 - 5:36
    I know, holograms are also copyable, now,
    I don't have a literature reference there,
  • 5:36 - 5:47
    but... stuff can be done. Now somebody in
    1980 already proposed this: You randomly
  • 5:47 - 5:57
    disperse magnetic fibers in a coating, you
    scan those fibers with a, well,
  • 5:57 - 6:04
    electromagnetic sensing device, and turn
    them into pulses, AND the pulses with a clock,
  • 6:04 - 6:09
    etc., turn them into bits again, sign
    that pattern etc. Then there's also
  • 6:09 - 6:12
    this nice proposal where you randomly
    disperse conductive particles in an
  • 6:12 - 6:18
    insulating material, scan it with microwave,
    it's basically the same principle from
  • 6:18 - 6:26
    also the 1980s. Next, identity documents,
    somebody proposed using the translucency
  • 6:26 - 6:34
    of a paper strip in an identity document,
    scan that strip, turn the translucency
  • 6:34 - 6:41
    pattern into a bitmask, sign the bitmask,
    etc. Now, Simmons already said that
  • 6:41 - 6:44
    this was too easily cloneable, because,
    you know, you can just take a photograph
  • 6:44 - 6:51
    of this and reproduce it through
    photographic techniques. So translucency
  • 6:51 - 6:57
    isn't really nice. Now you could also
    potentially use the exact
  • 6:57 - 7:04
    three-dimensional cotton fiber pattern of
    the paper. But that proposal was also in
  • 7:04 - 7:14
    1991, and Simmons also said, this is
    infeasible to do. However, in 1999,
  • 7:14 - 7:19
    somebody came up with something similar,
    they take the texture hash of a postal
  • 7:19 - 7:24
    envelope, so you just print a square on
    the envelope, take a high resolution
  • 7:24 - 7:31
    picture of that, and then hash that with a
    certain hashing code that ensures that all
  • 7:31 - 7:41
    these things collapse into the same bit
    pattern every time. This works. Then
  • 7:41 - 7:47
    finally, those Treaty Limited Items, the
    reflective particle tags, you basically
  • 7:47 - 7:53
    affix such a tag to the surface of a
    treaty-limited item, then you cure them
  • 7:53 - 7:58
    with ultraviolet light, so that you turn
    it into a glass-like substance, which
  • 7:58 - 8:04
    makes it tamper-evident: if I try to take
    it off, the glass breaks, and it also
  • 8:04 - 8:08
    preserves the particle orientation, and
    then you put a laser onto it, you look at the
  • 8:08 - 8:14
    reflective pattern, and you have your
    identifier. So, if you ever have a bunch
  • 8:14 - 8:20
    of nukes to count, that might be
    interesting. The common theme here is that
  • 8:20 - 8:27
    we are using an intrinsic aspect of an
    item that's infeasible to copy, but easily
  • 8:27 - 8:36
    readable. It's unpredictable, and it
    should ideally be unchanging. Which brings
  • 8:36 - 8:46
    us to a proposal in 2001 of physical
    one-way functions. Basically the idea was,
  • 8:46 - 8:55
    you have an epoxy with miniscule glass
    spheres, you cure the epoxy, you make it
  • 8:55 - 8:58
    into a ten by ten by two and a half
    millimeters, don't know the exact
  • 8:58 - 9:06
    dimensions anymore, ... I say "sphere", I
    mean, what's it called, cube... cuboids,
  • 9:06 - 9:11
    something like that... And then you illuminate
    it by a laser, and then you get a speckle
  • 9:11 - 9:15
    pattern out of that, because the laser
    will disperse in a really unpredictable
  • 9:15 - 9:23
    pattern, and you capture that at 320 by
    240 pixels, you turn that into a 2400 bit
  • 9:23 - 9:27
    key with a so-called Gabor transform. I have
    no idea how the math behind that works because
  • 9:27 - 9:34
    that's not my field of expertise. And you
    get interesting properties like drilling a
  • 9:34 - 9:39
    hole here causes half the bits to flip, so
    it’s tamper resistant, and it mirrors the
  • 9:39 - 9:45
    way one-way functions work, like SHA-1 and
    SHA-256: ideally, if you flip one bit in
  • 9:45 - 9:51
    your input, half your output bits are
    flipped. So this paper is really the first
  • 9:51 - 10:01
    paper that proposed this as a connection
    with cryptography. So here, reading the
  • 10:01 - 10:06
    structure is feasible, because, you know,
    you have this glass pattern, you can just
  • 10:06 - 10:10
    – I say just, but you can use
    microscopic techniques to read it out
  • 10:10 - 10:18
    exactly, but good luck with having this
    sub-micron accuracy for all those glass
  • 10:18 - 10:31
    spheres in the epoxy. So, you can, in
    theory, if you know the structure, emulate
  • 10:31 - 10:38
    or simulate how a laser passes through
    this. But it requires a lot of
  • 10:38 - 10:45
    computational power, and in order to...
    you also can't build a database of responses
  • 10:45 - 10:53
    to challenges, because imagine that the
    challenge to the structure is a laser at
  • 10:53 - 10:58
    different orientations, like, I can say a
    laser under an angle of 5 degrees, or 10
  • 10:58 - 11:03
    degrees or 20 degrees, and at different
    locations, and all those responses will be
  • 11:03 - 11:10
    different. So this challenge-response space
    is infeasibly huge. So the protocol here
  • 11:10 - 11:17
    would be, first, you read this thing on a
    trusted terminal, and you create a random
  • 11:17 - 11:22
    collection of challenge-response pairs.
    Your challenges have to be kept secret
  • 11:22 - 11:27
    because, next, you get an authentication
    request from an untrusted terminal, and
  • 11:27 - 11:35
    you challenge that terminal. And the idea
    would be that it is infeasible to send the
  • 11:35 - 11:43
    correct response key if you don't have the
    device containing this PUF, er, this
  • 11:43 - 11:47
    physical one-way function. So you then
    receive the response key, and you reject
  • 11:47 - 11:56
    this if the key differs by too many bits.
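
A minimal sketch of that accept/reject step (not the paper's code): compare the enrolled response to the freshly received one by Hamming distance and accept only if few enough bits differ. The key length matches the 2400-bit example from earlier; the threshold value here is made up purely for illustration.

```python
import os

def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def accept(stored: bytes, received: bytes, max_flipped_bits: int) -> bool:
    """Accept the response if it is close enough to the enrolled one."""
    return hamming(stored, received) <= max_flipped_bits

stored = os.urandom(300)                      # 2400-bit enrolled response
noisy = bytearray(stored)
noisy[0] ^= 0xFF                              # 8 flipped bits: scratches, camera noise, ...
print(accept(stored, bytes(noisy), max_flipped_bits=240))     # True: same device
print(accept(stored, os.urandom(300), max_flipped_bits=240))  # False, overwhelmingly likely
```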
    Because it won't be a perfect match. There
  • 11:56 - 12:01
    might be scratches, there might be slight
    micron differences in the orientations, it
  • 12:01 - 12:09
    might be a bad camera, you get some
    differences, and the way you then do this
  • 12:09 - 12:19
    is you decide what acceptance rate for a
    counterfeit device you are willing to
  • 12:19 - 12:24
    tolerate, and from that you derive the
    threshold in bits. And then you can get a
  • 12:24 - 12:29
    better match rate if you repeat steps
    (4) - (6) a few times, and if you run
  • 12:29 - 12:37
    out of challenge pairs you can just go
    back to (1). That's the general idea. So
  • 12:37 - 12:39
    this is the first paper that made this
    connection with cryptography, it has a
  • 12:39 - 12:45
    defined protocol, but there are several
    not-so-nice things, like, you have special
  • 12:45 - 12:51
    equipment required, and we would really
    like to have the same possibility in
  • 12:51 - 12:56
    silicon and silicon only. Now in this
    paper already, the proposal was that you
  • 12:56 - 13:03
    might be able to have a similar approach,
    if you scatter electrons... uhhh... I
  • 13:03 - 13:07
    don't understand what this says, but I
    know that it is not what we're going to
  • 13:07 - 13:15
    see next. As an aside, if you do this kind
    of thing, then you get to read very old
  • 13:15 - 13:22
    papers. So, wasn't it nice, back when you
    could say this: "In the fuel rod placement
  • 13:22 - 13:27
    monitor … high radiation levels in the "hot"
    cell provided the general tamper resistance."
  • 13:27 - 13:30
    Or: "The seismic sensors … would detect any
    attempt to gain physical access to the
  • 13:30 - 13:33
    package long before the information
    security is in jeopardy." Now I wouldn't
  • 13:33 - 13:40
    actually take that one as a bet because
    I know you guys, but, uhhm, the first one
  • 13:40 - 13:45
    is pretty good. And you get to see things
    like this. This is how RSA was done in
  • 13:45 - 13:54
    1984. This is a – I think – that's an
    ISA, maybe pre-ISA bus, I don't know... So
  • 13:54 - 14:00
    this is how that was done. And the text is
    really beautiful: they scanned an old paper,
  • 14:00 - 14:06
    basically typed on a typewriter.
    This is available online, by the
  • 14:06 - 14:12
    way, if you have university access...
    sorry... Then, there are other solutions
  • 14:12 - 14:14
    to this problem, of course. You have
    hardware security modules, you have
  • 14:14 - 14:19
    smartcards, you have trusted platform
    modules... actually, I found out we only
  • 14:19 - 14:24
    have those since 2006, I felt [like] they
    were older?... But you still have the
  • 14:24 - 14:27
    problem of key management, right? Because
    the key isn't tied to the platform. If I
  • 14:27 - 14:32
    can extract the key, and put it into
    another trusted platform module, or
  • 14:32 - 14:38
    another hardware security module, then
    we're still dead in the water. So the key aspect of
  • 14:38 - 14:43
    these things is that the key never leaves the
    device – ideally – but then how does
  • 14:43 - 14:47
    the key enter the device? You can enter
    new keys, you can enter key-encrypting
  • 14:47 - 14:52
    keys to decrypt keys that you never see,
    that another hardware security module exports,
  • 14:52 - 14:59
    it's all interesting crypto, but you also
    get the problem of what can the key do,
  • 14:59 - 15:05
    are you limited to 1024-bit RSA, and is
    it possible to emulate all this once you
  • 15:05 - 15:13
    have the key, right? We really want to
    have other aspects to our function. Now,
  • 15:13 - 15:21
    this is the first name for PUFs, "silicon
    physical random functions", but they
  • 15:21 - 15:28
    already knew that "PRF" might have some
    three letter acronym that clashes with
  • 15:28 - 15:31
    "pseudo-random function", so they decided
    to go for "physical uncloneable
  • 15:31 - 15:35
    functions". There's an interesting
    discussion going on whether it should be
  • 15:35 - 15:41
    "physical" or "physically"... not going
    into that. The idea is, tamper resistance
  • 15:41 - 15:47
    in general is expensive, is difficult,
    it's just... let's look at a different
  • 15:47 - 15:52
    approach. There is enough process
    variation across identical integrated
  • 15:52 - 15:56
    circuits, where... yeah, so, they're not
    identical because of those process
  • 15:56 - 16:03
    variations. And already in 2000 somebody
    made a... Lofstrom, Daasch and Taylor had
  • 16:03 - 16:14
    a small paper on specific special device
    identification circuits. But, if you want
  • 16:14 - 16:17
    to use those for secure device
    identification and authentication, then
  • 16:17 - 16:24
    just a single such circuit is not enough.
    You need more. So, what do you do? You
  • 16:24 - 16:29
    build this. And I don't think it's
    really feasible, ... basically, this is
  • 16:29 - 16:33
    the entire circuit, you have a delay
    circuit here, ...this is a ring oscillator
  • 16:33 - 16:41
    PUF. So you have a delay circuit here,
    this is a self-oscillating loop,
  • 16:41 - 16:46
    basically, this feeds back into this. And
    the challenge here is a bit for each of
  • 16:46 - 16:54
    these blocks. And what the bit says: if
    it's one then you pass through, if it's
  • 16:54 - 16:58
    zero you pass over. So if you have a
    different challenge, you have a different
  • 16:58 - 17:04
    path through this PUF. So ideally, for
    each challenge, it should be unpredictable
  • 17:04 - 17:12
    whether this final arbiter block here...
    uhh, somewhere over there... gives a one
  • 17:12 - 17:20
    or a zero, and then you count the pulses,
    and you identify your circuits. Now
  • 17:20 - 17:25
    attacks on this were also quite well
    studied in this paper... possible attacks.
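
Before the list of attacks, a toy software model of such a delay-based PUF may help picture what is being attacked. This is a generic arbiter-style sketch, not the exact ring-oscillator circuit on the slide: per-stage delays are fixed at "manufacturing" time, each challenge bit decides whether the two racing paths swap, and the arbiter reports which edge wins.

```python
import random

class ToyArbiterPUF:
    def __init__(self, n_stages=64, seed=None):
        rng = random.Random(seed)            # stands in for manufacturing variation
        # Four propagation delays per stage: straight (top->top, bottom->bottom)
        # and crossed (top->bottom, bottom->top).
        self.stages = [tuple(rng.gauss(10.0, 0.05) for _ in range(4))
                       for _ in range(n_stages)]

    def response(self, challenge):
        top = bottom = 0.0                   # accumulated delay on the two racing paths
        for bit, (tt, bb, tb, bt) in zip(challenge, self.stages):
            if bit == 0:                     # straight: the paths keep their lanes
                top, bottom = top + tt, bottom + bb
            else:                            # crossed: the paths swap lanes
                top, bottom = bottom + bt, top + tb
        return 1 if top < bottom else 0      # arbiter: which edge arrived first?

puf = ToyArbiterPUF(seed=33)
challenge = [random.getrandbits(1) for _ in range(64)]
print(puf.response(challenge))
```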
  • 17:25 - 17:29
    So, you have the duplication attack, which
    is basically cloning, which should be
  • 17:29 - 17:33
    impossible. Right, that's the general
    idea: Cloning should be impossible. There
  • 17:33 - 17:41
    is emulation from measuring, so, you build
    a model from this by measuring the exact
  • 17:41 - 17:47
    distances between logical units inside the
    PUF, or the length of the wires inside the
  • 17:47 - 17:52
    PUF, also deemed infeasible because how
    are you going to measure this without
  • 17:52 - 17:59
    destroying the PUF. This is back in 2001. Then
    there was emulation from modeling, so basically, if
  • 17:59 - 18:02
    you get these challenge-response pairs, if
    you get enough of them, you can apply some
  • 18:02 - 18:08
    nice machine-learning algorithms to that,
    and then you get prediction of responses.
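
A rough sketch of that modeling attack (the cited papers use logistic regression and PAC-learning arguments; here it is just a perceptron): the attacker collects challenge-response pairs from a PUF that follows the standard linear delay model, maps challenges to the model's parity features, and learns weights that predict unseen responses.

```python
import random

N = 32                                               # challenge length in bits
rng = random.Random(1)
true_w = [rng.gauss(0.0, 1.0) for _ in range(N + 1)] # the chip's secret delay parameters

def features(challenge):
    """Parity features of the linear arbiter-PUF model: phi_i = prod_{j>=i} (1 - 2*c_j)."""
    phi, prod = [], 1.0
    for c in reversed(challenge):
        prod *= 1 - 2 * c
        phi.append(prod)
    return list(reversed(phi)) + [1.0]               # plus a constant bias term

def puf(challenge):
    return 1 if sum(w * x for w, x in zip(true_w, features(challenge))) > 0 else 0

def rand_challenge():
    return [rng.getrandbits(1) for _ in range(N)]

crps = [(c, puf(c)) for c in (rand_challenge() for _ in range(3000))]  # "measured" CRPs

w = [0.0] * (N + 1)                                  # perceptron trained on the features
for _ in range(20):
    for c, r in crps:
        x = features(c)
        if (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0) != r:
            s = 1 if r == 1 else -1
            w = [wi + s * xi for wi, xi in zip(w, x)]

test = [rand_challenge() for _ in range(1000)]
hits = sum((1 if sum(wi * xi for wi, xi in zip(w, features(c))) > 0 else 0) == puf(c)
           for c in test)
print(f"predicted {hits / 10:.1f}% of unseen responses correctly")
```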
  • 18:08 - 18:11
    And, finally, you have the control
    algorithm attack, which is
  • 18:11 - 18:16
    attacking the PUF's control algorithm
    without ever getting into the PUF. If you
  • 18:16 - 18:24
    can do that, then your PUF is useless. So,
    they also proposed controlled physically
  • 18:24 - 18:30
    uncloneable functions, which is the same
    but with bells on. So you have an access
  • 18:30 - 18:38
    function for the PUF, which is part of the
    PUF. This is to protect against that final
  • 18:38 - 18:45
    attack. So basically you overlay the logic
    of the access function with the PUF, so
  • 18:45 - 18:50
    that to access the logic of the access
    function, you have to break the PUF. And
  • 18:50 - 18:56
    if you break the PUF, everything breaks,
    no longer works. So this gives additional
  • 18:56 - 19:02
    properties. An uncontrolled PUF can only
    be used for device authentication. This
  • 19:02 - 19:10
    can be used to have nice things like proof
    of execution on a specific device.
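
The controlled-PUF wrapping laid out a few lines further on (a random hash in front, the PUF itself, error correction, and a random hash behind) could be sketched roughly like this. Everything here is a stand-in rather than anything from the paper: raw_puf simulates a noisy intrinsic PUF, and correct_errors stands in for a real fuzzy extractor with helper data.

```python
import hashlib
import os
import random

SRAM_STATE = os.urandom(32)                  # stand-in for the chip's intrinsic randomness

def raw_puf(address: bytes) -> bytes:
    """Simulated noisy PUF read: stable per (chip, address) apart from a flipped bit."""
    stable = hashlib.sha256(SRAM_STATE + address).digest()
    noisy = bytearray(stable)
    noisy[random.randrange(len(noisy))] ^= 1 # one bit of measurement noise
    return bytes(noisy)

def correct_errors(noisy: bytes, address: bytes) -> bytes:
    """Stand-in for error correction with helper data; here we just recompute the stable value."""
    return hashlib.sha256(SRAM_STATE + address).digest()

def controlled_puf(challenge: bytes, chip_id: bytes, personality: bytes) -> bytes:
    pre = hashlib.sha256(challenge + chip_id + personality).digest()  # random hash in front
    corrected = correct_errors(raw_puf(pre), pre)                     # PUF + error correction
    return hashlib.sha256(corrected + challenge).digest()             # random hash behind

# Two applications using different personalities get unlinkable responses from one chip:
c = b"some challenge"
print(controlled_puf(c, b"chip-42", b"app-A").hex())
print(controlled_puf(c, b"chip-42", b"app-B").hex())
```

The personality input is also what makes the privacy point near the end of the talk work: the same chip looks like a different device to every application.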
  • 19:10 - 19:14
    Potentially [for] things that I don't have an
    opinion on, like code that only runs on specific
  • 19:14 - 19:20
    devices, but basically whatever you need a
    secure cryptographic key for, you should
  • 19:20 - 19:24
    really be using a controlled PUF. That's
    the idea. But you can still do device
  • 19:24 - 19:29
    identification. So, how does a controlled
    PUF look? You have a random hash here, you
  • 19:29 - 19:35
    have a potential ID here, you have the PUF
    here, Challenge, ID, Personality into the
  • 19:35 - 19:39
    random hash, you run that through the PUF,
    do some error correction, because PUFs are
  • 19:39 - 19:43
    not ideal, and then the random hash again,
    and then the response. This is to prevent
  • 19:43 - 19:50
    all these attacks. If you're interested in
    this, read the paper. Then, in 2011, a
  • 19:50 - 19:55
    formal model was proposed, what do we
    really need from PUFs? First, we need
  • 19:55 - 20:00
    robustness. Across evaluations, we need
    the same response. We need physical
  • 20:00 - 20:04
    uncloneability, it really shouldn't be
    possible to clone these things, and we
  • 20:04 - 20:12
    need unpredictability. Now, these two are
    potentially a lot, so we'll get into that
  • 20:12 - 20:18
    on the final slide, I think... And since then,
    since 2001 there have been a lot of proposals
  • 20:18 - 20:24
    and attacks on PUFs. So, first, there are
    the Arbiter PUFs, which are all delay
  • 20:24 - 20:31
    based. So, the general idea here is that
    if you run a signal through a chip, it is
  • 20:31 - 20:36
    delayed by a certain amount. But the
    amount is unique per chip. But it turns
  • 20:36 - 20:43
    out that you can pretty easily model this.
    And even the Bistable Ring PUF, which is
  • 20:43 - 20:51
    fairly recent, I think, you can do some
    fancy machine learning... I highly
  • 20:51 - 20:55
    recommend this paper, "Pac learning of
    arbiter PUFs". Basically, the idea is, you
  • 20:55 - 21:00
    have 30000 challenge-response pairs, and
    that is enough to give you 100% accuracy
  • 21:00 - 21:07
    on a 256-bit challenge PUF. That's not
    good. This doesn't really work, if you can
  • 21:07 - 21:17
    model it that way. And you can also use
    optical measuring of signals through
  • 21:17 - 21:22
    devices at six-picosecond accuracy. So
    these things might not be around for much
  • 21:22 - 21:28
    longer. Then there are memory-based PUFs.
    They are based on bistable memory, which
  • 21:28 - 21:36
    basically looks like this, and it's also
    delay-based, but here it's unique to this
  • 21:36 - 21:41
    cell. You have a block of these cells,
    they are all independent, so you get a
  • 21:41 - 21:48
    pattern out of this. These cells go to one
    or zero, and they are fairly stable
  • 21:48 - 21:54
    in doing it. I'll show you a picture later of
    what happens if you have a nice PUF of
  • 21:54 - 22:00
    this type and if you don't have a nice PUF
    of this type. However, if you have an SRAM
  • 22:00 - 22:07
    PUF, for instance, you have fairly limited
    SRAM. So you can just, in principle, read
  • 22:07 - 22:12
    all this out and store all the bits in a
    database. And then you can, er, clone the
  • 22:12 - 22:21
    PUF. Because you can use focused ion beams
    to trim the SRAM of another chip into the
  • 22:21 - 22:26
    correct orientation. And, well, emulation,
    if you have this database, you can just
  • 22:26 - 22:32
    respond from your database. So, this is,
    in some literature, termed a "weak PUF",
  • 22:32 - 22:38
    but it's probably still the most useful
    one we have right now. This is usually
  • 22:38 - 22:42
    also what's in your devices if it's
    claimed to have a physical uncloneable
  • 22:42 - 22:48
    function. But they are of the controlled
    variety most of the time. Then finally,
  • 22:48 - 22:54
    recently somebody proposed, I think that
    was, yeah, Schaller, Xiong, and
  • 22:54 - 23:00
    Anagnosto... can not pronounce it. But the
    decay-based PUFs, the idea is, you have
  • 23:00 - 23:07
    DRAM, take the power off, put the power
    back on, look how it decayed. No attacks
  • 23:07 - 23:16
    on that that I have seen yet. So, the
    final few minutes of this talk will be
  • 23:16 - 23:27
    about your very own memory PUFs. Which is
    trivial. Right? ...No. It's not, actually.
  • 23:27 - 23:32
    And all this time before, you might think,
    why would we even bother with this? It
  • 23:32 - 23:38
    seems to be hopeless for PUFs, there is
    not enough randomness in the silicon, but
  • 23:38 - 23:42
    I disagree. Because for one, some
    protection is better than none, which is
  • 23:42 - 23:49
    what most system-on-chip devices have. And
    two, I do not believe in silver bullets.
  • 23:49 - 23:56
    This should be part of a greater security
    mechanism. So if nothing else, if all you
  • 23:56 - 24:03
    want from this talk is some interesting
    paper to read, just one, read this one.
  • 24:03 - 24:07
    That's on slide 39, it's called
    "Lightweight anti-counterfeiting solution
  • 24:07 - 24:12
    for low-end commodity hardware using
    inherent PUFs." And, preferably, you also
  • 24:12 - 24:17
    read this related one, "PUF based software
    protection for low end embedded devices."
  • 24:17 - 24:22
    Don't be fooled by the terms "IP
    protection" and "license model". This is a
  • 24:22 - 24:26
    Secure Boot environment. You want it, in
    your Raspberry Pi, for instance. I don't
  • 24:26 - 24:31
    know whether Raspberry Pis have it, that's
    for you to find out. So what you'll need
  • 24:31 - 24:39
    is a device with a masked ROM to hold the
    boot loader, like the first stage of code
  • 24:39 - 24:45
    needs to be under your control. You need
    to have that modifiable startup code, you
  • 24:45 - 24:51
    need to be able to modify it, obviously.
    And you need on-board SRAM, to build the
  • 24:51 - 24:57
    PUF from. And then you need some
    non-volatile memory for encrypted firmware
  • 24:57 - 25:06
    and helper data. So, in the PUFFIN
    project, which that earlier picture was a result
  • 25:06 - 25:17
    of... So, there are several results here.
    This is an STM32F100B microcontroller,
  • 25:17 - 25:22
    this is a PandaBoard, which is pretty much like a
    mobile phone actually, so what you want to
  • 25:22 - 25:27
    see is this. White noise. This part is a
    PUF-like memory range, this part is
  • 25:27 - 25:32
    probably spoiled by the bootloader or
    something like that or the wrong code, but
  • 25:32 - 25:40
    this you can use. This looks good. So,
    once you have such a white-noise area, you
  • 25:40 - 25:46
    start measuring a lot of times, and then
    you compute the Hamming distance between
  • 25:46 - 25:50
    lots of measurements from lots of
    different devices (the inter-class distance). And you want it to look
  • 25:50 - 25:55
    like this; you want it to be around half.
    Because that means that every device will
  • 25:55 - 26:03
    look different. By about 50%. You also measure
    the intra-class Hamming distance, which is between
  • 26:03 - 26:09
    measurements from the same PUF, and you
    want that to be below 0.1. You don't want
  • 26:09 - 26:14
    that to be too inaccurate, because then
    your error correction becomes too complex
  • 26:14 - 26:19
    and starts leaking information, and you
    will need error correction, using for
  • 26:19 - 26:29
    example Golay codes. So this first paper I
    mentioned, the... this one... the
  • 26:29 - 26:33
    lightweight anti-counterfeiting one, this
    is also from that paper. Read it, it also
  • 26:33 - 26:36
    explains how this fuzzy extraction works.
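
As a minimal sketch of the two metrics just described (not code from the paper): the fractional Hamming distance between dumps of different devices should sit around 0.5, while repeated dumps of one device should stay well below 0.1. The dumps here are simulated; on real hardware they would be SRAM power-up reads.

```python
import os
import random
from itertools import combinations

def frac_hamming(a: bytes, b: bytes) -> float:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b)) / (8 * len(a))

def inter_class(devices):
    """Mean distance between dumps of different devices: should be around 0.5."""
    d = [frac_hamming(x[0], y[0]) for x, y in combinations(devices, 2)]
    return sum(d) / len(d)

def intra_class(dumps):
    """Mean distance between repeated dumps of one device: should stay below ~0.1."""
    d = [frac_hamming(x, y) for x, y in combinations(dumps, 2)]
    return sum(d) / len(d)

devices = []
for _ in range(5):                              # simulate 5 devices, 10 dumps each
    stable = os.urandom(512)                    # the device's stable power-up pattern
    dumps = []
    for _ in range(10):
        d = bytearray(stable)
        for _ in range(100):                    # a handful of noisy cells per measurement
            d[random.randrange(len(d))] ^= 1 << random.randrange(8)
        dumps.append(bytes(d))
    devices.append(dumps)

print("inter-class:", inter_class(devices))     # close to 0.5
print("intra-class:", intra_class(devices[0]))  # well below 0.1
```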
    If you're interested in this, there's lots
  • 26:36 - 26:43
    of scientific literature out there. And
    then finally, you build this fuzzy
  • 26:43 - 26:49
    extractor, and then you enrol your chip.
    And you generate some helper data for
  • 26:49 - 26:54
    this error correction, and then once you
    challenge the chip you send this
  • 26:54 - 26:59
    error-correcting data with the challenge.
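
A minimal code-offset fuzzy extractor, to make the enrolment and helper-data idea concrete. This sketch uses a trivial 9-fold repetition code with majority decoding; the talk's references use proper codes such as Golay codes, which correct more errors with far less helper data.

```python
import hashlib
import os
import random

REP = 9          # each secret bit is spread over 9 PUF bits

def bits(data: bytes):
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def enroll(puf_reading: bytes):
    """Returns (secret, helper_data). The helper data may be stored publicly."""
    r = bits(puf_reading)
    secret_bits = [random.getrandbits(1) for _ in range(len(r) // REP)]
    codeword = [b for b in secret_bits for _ in range(REP)]      # repetition encoding
    helper = [c ^ p for c, p in zip(codeword, r)]                # code offset
    secret = hashlib.sha256(bytes(secret_bits)).digest()         # privacy amplification
    return secret, helper

def reconstruct(noisy_reading: bytes, helper):
    r = bits(noisy_reading)
    noisy_codeword = [h ^ p for h, p in zip(helper, r)]
    secret_bits = [1 if sum(noisy_codeword[i:i + REP]) > REP // 2 else 0   # majority vote
                   for i in range(0, len(noisy_codeword), REP)]
    return hashlib.sha256(bytes(secret_bits)).digest()

reference = os.urandom(90)                   # enrolment-time SRAM reading (720 bits)
secret, helper = enroll(reference)
noisy = bytearray(reference)
for _ in range(20):                          # a few flipped cells at boot time
    noisy[random.randrange(len(noisy))] ^= 1 << random.randrange(8)
print(reconstruct(bytes(noisy), helper) == secret)   # True with high probability
```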
    And in the end the idea would be that you
  • 26:59 - 27:05
    get a secret S' from every chip. Now how
    can you use this? You have the bootloader
  • 27:05 - 27:09
    in the masked ROM, this is the first-stage
    bootloader, it challenges the PUF, and
  • 27:09 - 27:14
    decrypts the second-stage bootloader,
    which comes from external memory. And then
  • 27:14 - 27:18
    you boot the embedded operating system.
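
A structural sketch of that boot chain (not the PUFFIN code), assuming the pyca/cryptography package for authenticated encryption; the PUF read and fuzzy extraction are stand-ins (see the extractor sketch above). On real hardware the first stage lives in mask ROM and reads the SRAM power-up state before anything else touches it.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SRAM_POWER_UP_STATE = os.urandom(256)     # stand-in for the device's SRAM PUF reading
FIRMWARE = b"second-stage bootloader image"

def fuzzy_extract(reading: bytes) -> bytes:
    # Stand-in: a real implementation corrects measurement noise with helper data first.
    return hashlib.sha256(reading).digest()

# Enrolment (done once, on a trusted machine): derive the device key and store only
# the nonce plus encrypted firmware in external flash.
device_key = fuzzy_extract(SRAM_POWER_UP_STATE)
nonce = os.urandom(12)
flash = (nonce, AESGCM(device_key).encrypt(nonce, FIRMWARE, None))

def first_stage_bootloader() -> bytes:
    key = fuzzy_extract(SRAM_POWER_UP_STATE)   # only this chip can re-derive the key
    n, ciphertext = flash
    # Authenticated decryption: a modified image or a flash cloned onto another chip fails here.
    return AESGCM(key).decrypt(n, ciphertext, None)

print(first_stage_bootloader())                # b'second-stage bootloader image'
```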
    So, this should look familiar to a lot of
  • 27:18 - 27:24
    you, because this is basically also how
    device attestation on x86 works if you're
  • 27:24 - 27:35
    using trusted platform modules. So, in a
    bit more detail, same procedure, query the
  • 27:35 - 27:39
    PUF, decrypt and call, here the key also
    ends up, and you decrypt and call the
  • 27:39 - 27:46
    kernel, and then finally, this is how it
    really looks in real detail. And even if
  • 27:46 - 27:52
    you don't want to build this, you'll still
    have this: So, remember when I showed you
  • 27:52 - 27:58
    the intra-class Hamming distance, the 10% of
    differences between measurements? That's
  • 27:58 - 28:03
    caused by the red dots. Those are the
    unstable SRAM cells. You can use those as
  • 28:03 - 28:08
    seeds for a random function. And
    hopefully, you won't have this. This looks
  • 28:08 - 28:12
    wrong, this is not a PUF, this is too
    predictable. Unfortunately, all this won't
  • 28:12 - 28:16
    be possible on x86, because we looked for
    the PUFs in the CPUs, but Intel and AMD
  • 28:16 - 28:23
    both explicitly zero everything. Finally,
    a word on privacy. I don't have too much
  • 28:23 - 28:28
    time for this, but I really liked the fact
    they mentioned they feel that users... users
  • 28:28 - 28:32
    feel that they can be tracked if you have
    a unique identifier. As though it's not a
  • 28:32 - 28:39
    valid concern. Damn the users, being paranoid.
    Now, back to the controlled PUF. You can
  • 28:39 - 28:43
    add personality IDs as a user. If they
    challenge it, you add a personality, so
  • 28:43 - 28:47
    one application reading the PUF gets a
    different ID from another application,
  • 28:47 - 28:51
    which changes the entire output of the
    hash function, no paranoia required
  • 28:51 - 29:00
    anymore. Hopefully. Finally, the references.
    Google Scholar is your friend. The rest of
  • 29:00 - 29:06
    the slides are... all kinds of
    references... Read it! You've already seen
  • 29:06 - 29:09
    all of those, read it,
    thank you for your attention.
  • 29:09 - 29:18
    applause
  • 29:18 - 29:22
    Herald: Thank you, Pol. We have
    time for maybe two questions.
  • 29:22 - 29:26
    Please come up to the mics... Mic 3!
  • 29:26 - 29:32
    Mic 3: What do you think about MEMS-based
    physically uncloneable functions, where
  • 29:32 - 29:37
    they basically use the accelerometer
    sensors, and the deviations in these
  • 29:37 - 29:41
    sensors by inducing challenges
    as controlled vibration?
  • 29:41 - 29:44
    Pol: Sorry, I missed the
    first word of your question.
  • 29:44 - 29:48
    Mic 3: The MEMS-based… basically the
    technology that is being used to build
  • 29:48 - 29:55
    accelerometers in silicon. So Bosch has
    some PUF chips based on that, where they
  • 29:55 - 29:59
    have arrays of these MEMS-chips, and then
    a controlled vibrator to induce the
  • 29:59 - 30:00
    challenge into that.
  • 30:00 - 30:05
    Pol: I think they're probably more secure
    than silicon-based PUFs, because they are
  • 30:05 - 30:10
    built for randomness, whereas we're here
    trying to extract randomness from an
  • 30:10 - 30:15
    existing circuit. Yeah, they're
    interesting. Use them if you can, but most
  • 30:15 - 30:20
    people don't have the option.
  • 30:20 - 30:21
    Mic 3: Thank you.
  • 30:21 - 30:25
    Herald: More questions?
    Pol: Up there!
  • 30:25 - 30:28
    Herald: Ok, Mic 7!
  • 30:28 - 30:32
    Mic 7: Hi, thanks for your talk, I'd never
    heard of PUFs. I recently went on a
  • 30:32 - 30:37
    quest to find a usable smartcard that met
    all the things I wanted to do, like open
  • 30:37 - 30:46
    source, et cetera. Can you expand a bit on
    how PUFs could be used with an OpenPGP
  • 30:46 - 30:50
    smartcard or similar?
  • 30:50 - 30:54
    Pol: Short answer: no. I have no idea
    whether OpenPGP will ever support anything
  • 30:54 - 31:01
    like this. You have the PKCS protocols,
    I know that in theory this is possible.
  • 31:01 - 31:05
    I don't know whether anything has
    implemented it. There are PUFs on
  • 31:05 - 31:10
    smartcards, but whether.. We haven't looked
    into this, I don't know of anyone who has.
  • 31:10 - 31:11
    Mic 7: Thank you.
  • 31:11 - 31:14
    Pol: But that doesn't mean
    it doesn't exist.
  • 31:14 - 31:17
    Herald: That would be all.
    Please give it up for Pol, one more time.
  • 31:17 - 31:20
    Pol: Thanks!
    applause
  • 31:20 - 31:25
    postroll music
  • 31:25 - 31:44
    subtitles created by c3subtitles.de
    in the year 2017. Join, and help us!