
Joanna Rutkowska: Towards (reasonably) trustworthy x86 laptops

  • 0:00 - 0:12
    preroll music
  • 0:12 - 0:15
    Herald: I’m really glad that
    you’re all here and that today
  • 0:15 - 0:19
    I can introduce Joanna Rutkowska to you.
  • 0:19 - 0:25
    She will be talking about reasonably
    trustworthy x86 systems.
  • 0:25 - 0:30
    She’s the founder and leader
    of the Invisible Things Lab
  • 0:30 - 0:37
    and also – that’s also something you all
    probably use – the Qubes OS project.
  • 0:37 - 0:42
    She presented numerous attacks
    on Intel based systems and also
  • 0:42 - 0:47
    virtualization systems. But today she
    will not only speak about the problems
  • 0:47 - 0:52
    of those machines but will present some
    solutions to make them more secure.
  • 0:52 - 0:55
    Give it up for Joanna Rutkowska!
  • 0:55 - 1:03
    applause
  • 1:03 - 1:09
    Joanna: Okay, so, let’s start
    with stating something obvious.
  • 1:09 - 1:14
    Personal computers have become
    really the extensions of our brains.
  • 1:14 - 1:18
    I think most of you will
    probably agree with that.
  • 1:18 - 1:21
    Yet the problem is that they are insecure
  • 1:21 - 1:26
    and untrustworthy,
  • 1:26 - 1:29
    which personally bothers me a lot.
  • 1:29 - 1:32
    And here I want to make a quick digression
  • 1:32 - 1:38
    for the vocabulary I’m gonna
    be using during this presentation.
  • 1:38 - 1:41
    When we say, well, there are
    three adjectives related to trust
  • 1:41 - 1:45
    and people will often confuse them.
    When we say something is “trusted”,
  • 1:45 - 1:50
    that means by definition that it
    can compromise the security of
  • 1:50 - 1:55
    the whole system, which means
    we don’t like things to be trusted.
  • 1:55 - 2:00
    “Trusted third party”, “Trusted CA”
    we don’t like that.
  • 2:00 - 2:02
    When we say something is...
  • 2:02 - 2:06
    because “trusted” doesn’t necessarily
    mean that it is “secure”.
  • 2:06 - 2:13
    So, what is secure? Secure is
    something that is resistant to attacks.
  • 2:13 - 2:19
    Perhaps this laptop might
    be resistant to attacks.
  • 2:19 - 2:23
    If I open [an] email attachment and the
    email attachment compromises my system
  • 2:23 - 2:29
    or maybe that if I plug
    USB for the slide changer
  • 2:29 - 2:34
    I might be hoping that this
    action will not compromise
  • 2:34 - 2:37
    my whole PC.
  • 2:37 - 2:41
    And yet something can be
    secure but not trustworthy.
  • 2:41 - 2:45
    A good example of this might be the
  • 2:45 - 2:50
    Intel Management Engine (ME), that I’m
    gonna be talking about more later.
  • 2:50 - 2:54
    So it might be very resistant to attacks,
    yet it might be a backdoor.
  • 2:54 - 2:57
    A backdoor that is
    very resistant to attacks,
  • 2:57 - 3:02
    yet it is not acting in
    the interests of the user.
  • 3:02 - 3:09
    So it’s not “good”, whatever
    good means in your assumed
  • 3:09 - 3:13
    moral frame of reference.
  • 3:13 - 3:19
    So, there’s been of course a lot of work
  • 3:19 - 3:24
    in the last 20 years and maybe more
  • 3:24 - 3:28
    to build solutions that provide
  • 3:28 - 3:31
    security and trustworthiness
  • 3:31 - 3:36
    and most of this work has focused on
    the application layer and things like
  • 3:36 - 3:39
    GnuPG and PGP first,
  • 3:39 - 3:45
    TOR, all the secure communication
    protocols and programs.
  • 3:45 - 3:50
    But of course,
  • 3:50 - 3:56
    it is clear that any effort
    on the application level
  • 3:56 - 4:01
    is just meaningless if we can
    not assure, if we can not trust
  • 4:01 - 4:07
    our operating system (OS).
    Because the OS is the trusted part.
  • 4:07 - 4:13
    So if the OS is compromised
    then everything is lost.
  • 4:13 - 4:18
    And there has been some efforts,
    notably the project I started 5 years ago
  • 4:18 - 4:24
    and now this is like more than a dozen
    of people working on it: Qubes OS.
  • 4:24 - 4:28
    It tries to address the problem of the OS
  • 4:28 - 4:32
    being part of the TCB,
    so what we try to do is
  • 4:32 - 4:38
    shrink the amount of trusted
    code to an absolute minimum.
  • 4:38 - 4:42
    There’s been also other
    efforts in this area.
  • 4:42 - 4:47
    But the OS is not something I’m
    gonna be discussing today.
  • 4:47 - 4:54
    What I’m gonna be discussing today
    is the final layer: the hardware.
  • 4:54 - 5:01
    Because what the OS is to applications,
    the hardware is to the OS.
  • 5:01 - 5:05
    Again, most of the effort so far
  • 5:05 - 5:08
    to create secure and trustworthy OS,
  • 5:08 - 5:14
    they have always been assuming
    that the hardware is trusted.
  • 5:14 - 5:19
    That means that... usually
    that means for most OS’s that
  • 5:19 - 5:24
    a single malicious
    peripheral on this laptop,
  • 5:24 - 5:31
    like a malicious Wi-Fi module
    or maybe embedded controller
  • 5:31 - 5:35
    can compromise again my whole PC,
  • 5:35 - 5:39
    my whole digital life.
  • 5:39 - 5:43
    So what to do about it?
  • 5:43 - 5:46
    Before we discuss what to do about it
  • 5:46 - 5:53
    we should quickly
  • 5:53 - 5:59
    recap all the problems
    with present PC platforms
  • 5:59 - 6:03
    and specifically I’m gonna
    be focusing on x86 platform
  • 6:03 - 6:10
    and specifically with Intel x86
    platform, which means: laptops.
  • 6:10 - 6:17
    This picture shows what a
    typical modern laptop looks like.
  • 6:17 - 6:23
    You can see that it consists of
  • 6:23 - 6:28
    a processor in the center,
    and then there is memory,
  • 6:28 - 6:32
    some peripherals, keyboard and monitor.
  • 6:32 - 6:36
    Very simple.
  • 6:36 - 6:41
    It can be very simple, because
  • 6:41 - 6:47
    if we look at present Intel processors
  • 6:47 - 6:53
    they really integrate everything
    and the kitchen sink.
  • 6:53 - 7:00
    Ten years ago we used to have a processor,
    a Northbridge, a Southbridge
  • 7:00 - 7:04
    and perhaps even more discrete
    elements on the motherboard.
  • 7:04 - 7:12
    Today nearly all these elements have been
    integrated into one processor package.
  • 7:12 - 7:14
    This is Broadwell
  • 7:14 - 7:20
    and this long element
    here is the CPU and GPU
  • 7:20 - 7:26
    and the other one
    is said to be the PCH.
  • 7:26 - 7:31
    PCH is the Platform Controller Hub,
  • 7:31 - 7:35
    which is what used to be called
    the Southbridge and Northbridge.
  • 7:35 - 7:40
    The line has somehow
    blurred between these.
  • 7:40 - 7:42
    Of course there is only one
    company making these.
  • 7:42 - 7:46
    It’s an American
    company called Intel.
  • 7:46 - 7:51
    It is a completely opaque construction.
  • 7:51 - 8:00
    We have absolutely no ways
    to examine what’s inside.
  • 8:00 - 8:05
    That obviously...
    The advantage is that
  • 8:05 - 8:09
    it makes construction of computers,
    of laptops very easy now.
  • 8:09 - 8:13
    And lots of vendors can
    produce little sexy laptops,
  • 8:13 - 8:18
    like the one I have here.
  • 8:18 - 8:24
    On this picture we see now some more
    elements that are in this processor.
  • 8:24 - 8:29
    So, when you say processor
    today, it’s no longer just the CPU.
  • 8:29 - 8:34
    The processor is now CPU, GPU,
    memory controller hub, PCIe
  • 8:34 - 8:39
    root, some Southbridge functionality,
  • 8:39 - 8:46
    so e.g. SATA controller and so on,
    as well as something
  • 8:46 - 8:53
    that is called Management Engine (ME),
    which we discuss in a moment.
  • 8:53 - 8:57
    There are a few more elements
    here that are important.
  • 8:57 - 9:02
    The most important for us
    is the SPI flash element.
  • 9:02 - 9:08
    Because what’s interesting is that
    with this whole integration
  • 9:08 - 9:12
    that has happened to the processor
    and the other peripherals,
  • 9:12 - 9:16
    still the storage for the firmware,
  • 9:16 - 9:22
    so the storage where your BIOS as
    well as other firmware is stored,
  • 9:22 - 9:30
    is still a discrete element.
  • 9:30 - 9:34
    We’ll see this element in
    one of the next slides.
  • 9:34 - 9:39
    So let’s now consider first
  • 9:39 - 9:46
    the problem of boot security.
  • 9:46 - 9:50
    Obviously everybody understands
    that boot security
  • 9:50 - 9:54
    - how to start the chain of
    trust for whatever software
  • 9:54 - 10:03
    is gonna be running later -
    is of paramount importance.
  • 10:03 - 10:16
    I think I’m out of range.
  • 10:16 - 10:19
    Connected with boot security
    is the problem of malicious peripherals,
  • 10:19 - 10:23
    that I mentioned shortly before.
  • 10:23 - 10:28
    So we’ll be now thinking: Can we assure
  • 10:28 - 10:31
    only good code is started
  • 10:31 - 10:38
    and how the peripherals
    might interfere here.
  • 10:38 - 10:42
    Again, we will look at this SPI flash.
  • 10:42 - 10:48
    If we're now considering the boot
    security we would like to understand
  • 10:48 - 10:54
    what code is loaded on the platform. And
    if we now think where this code is stored,
  • 10:54 - 10:58
    it seems that the code is stored on
    the SPI Flash and potentially also
  • 10:58 - 11:03
    on some of the discrete elements.
  • 11:03 - 11:08
    Let me state it again that this whole
    integrated processor package
  • 11:08 - 11:13
    has everything and the kitchen
    sink except for the flash,
  • 11:13 - 11:22
    so except for the storage of the firmware.
  • 11:22 - 11:26
    Here we have one of the SPI flash chips.
  • 11:26 - 11:28
    This is from my Laptop actually.
  • 11:28 - 11:32
    It’s a little microcontroller
  • 11:32 - 11:39
    and it typically stores the firmware for
    these things, that are written here.
  • 11:39 - 11:47
    Now the question is, let's say I
    have got this laptop from a store.
  • 11:47 - 11:56
    How can I actually verify what
    firmware is really on this chip?
  • 11:56 - 12:01
    Well I can perhaps boot it into some
    minimal Linux and try to ask it.
  • 12:01 - 12:05
    But of course if there is some malicious
    something on the motherboard,
  • 12:05 - 12:13
    not necessarily this chip,
    I will not get reliable answers.
  • 12:13 - 12:21
    Another question: Let’s say I somehow
    know that there’s something trustworthy
  • 12:21 - 12:27
    on this SPI chip. Can I somehow enforce
    read-only’ness of this program?
  • 12:27 - 12:30
    There have been some efforts to do that.
  • 12:30 - 12:37
    Like some projects by Peter Stuge
    who just took a soldering iron
  • 12:37 - 12:44
    and connected one of the pins - one
    of these 8 pins is called “write protect”.
  • 12:44 - 12:50
    If you ground it, it will be telling the
    chip to discard any Write commands.
  • 12:50 - 12:55
    But again, remember, this chip
    is still a little microcontroller,
  • 12:55 - 12:59
    it’s a little computer. So it might
    ignore whatever you requested it to do.
  • 12:59 - 13:05
    It’s not like you are cutting off
    a signal for Write commands.
  • 13:05 - 13:10
    You are merely asking the
    processor to ignore it.
  • 13:10 - 13:16
    So if you don’t trust the chip in the
    first place, this doesn’t provide you
  • 13:16 - 13:20
    a reliable way to enforce read-only’ness.
  • 13:20 - 13:26
    Finally can I upload my own firmware?
    Can I choose to use whatever BIOS I want?
  • 13:26 - 13:31
    Again, we don’t seem to have luck here.
  • 13:35 - 13:38
    And as I mentioned, this is just
    one of the places on the platform
  • 13:38 - 13:42
    where the state is stored.
    Embedded controller would be
  • 13:42 - 13:49
    a whole other microcontroller
    having its own internal flash.
  • 13:49 - 13:54
    Or if not, using another
    SPI chip to get flash from.
  • 13:54 - 13:59
    A disk would be another
    microcontroller, a small computer,
  • 13:59 - 14:05
    having its own - typically -
    flash for its own firmware.
  • 14:05 - 14:11
    And perhaps the same
    with the Wi-Fi module.
  • 14:11 - 14:18
    Now for many years, myself and
    lots of other people believed that
  • 14:18 - 14:25
    technologies like TPM, trusted execution
    technology... like UEFI Secure Boot
  • 14:25 - 14:29
    I never really liked, but many people
    did - they believed that they could
  • 14:29 - 14:32
    somehow solve the problem of secure boot.
  • 14:32 - 14:38
    But all of these technologies
    have been shown to fail horribly
  • 14:38 - 14:44
    in this... on this premise.
  • 14:44 - 14:47
    And then we have...
    So these were problems,
  • 14:47 - 14:53
    the tip of the iceberg of
    problems of the secure boot.
  • 14:53 - 15:02
    The short story is: Today we can
    not really assure secure boot.
  • 15:02 - 15:08
    Maybe before we move on to
    Intel ME: e.g. Intel TXT:
  • 15:08 - 15:12
    Trusted Execution Technology was
    introduced by Intel in the hope
  • 15:12 - 15:19
    of putting the BIOS outside of the TCB,
    the trusted computing base for the platform.
  • 15:19 - 15:26
    So, the idea was that if you use TXT
    which you can think of as
  • 15:26 - 15:33
    a special instruction of the processor,
    that was the root of trust.
  • 15:33 - 15:37
    So, the promise was that
    when using Intel TXT
  • 15:37 - 15:45
    you can start the chain of trust
  • 15:45 - 15:50
    without trusting the BIOS.
    As well as other peripherals
  • 15:50 - 15:54
    like Wi-Fi card, which might
    be malicious perhaps.
  • 15:54 - 16:01
    And that was just great.
    And I really like the technology.
  • 16:01 - 16:05
    With my team we have done
    lots of research on TXT.
  • 16:05 - 16:11
    But one of the first attacks that we have
    presented, and that was back in like 2009,
  • 16:11 - 16:16
    was that we could bypass TXT
    by having a malicious SMM.
  • 16:16 - 16:20
    SMM was loaded by the BIOS.
  • 16:20 - 16:27
    So apparently it turned out, that the BIOS
    could not be really put outside of the TCB
  • 16:27 - 16:32
    so easily, because if it was really
    malicious it would provide a malicious SMM
  • 16:32 - 16:38
    and then the SMM could bypass TXT.
    So the response from Intel was: “OK,
  • 16:38 - 16:46
    but worry not, we have a technology
    called STM - SMI Transfer Monitor.”
  • 16:46 - 16:55
    That is a little hypervisor to sandbox
    the SMM which might be malicious.
  • 16:55 - 17:02
    So they wanted to boot
    a special dedicated...
  • 17:02 - 17:09
    they built it actually... they built a
    special technology to sandbox this SMM.
  • 17:09 - 17:14
    And then it turned out
    this is not so easy.
  • 17:14 - 17:17
    Because as usual they
    were missing the details.
  • 17:17 - 17:22
    And it is 6 years, 6 years have passed and
  • 17:22 - 17:28
    we still have not seen
    any real STM in the wild.
  • 17:28 - 17:36
    Which is just an example of how
    hopeless this approach of building,
  • 17:36 - 17:44
    in trying to provide secure
    boot is for the x86 platform.
  • 17:44 - 17:47
    Another problem with x86 that
  • 17:47 - 17:52
    has risen to prominence in the recent
    years is the Intel Management Engine.
  • 17:52 - 17:59
    One of these things, that Intel
  • 17:59 - 18:05
    has put into this integrated processor
    is called Management Engine (ME).
  • 18:05 - 18:12
    And this ME is a little microcontroller
    that is inside your processor.
  • 18:12 - 18:19
    It has its own internal RAM,
    it has its own internal peripherals.
  • 18:19 - 18:27
    Like DMA engine, which
    has access to the host RAM.
  • 18:27 - 18:34
    And of course, it loads only
    Intel-signed firmware.
  • 18:34 - 18:41
    And it has also its own private ROM inside
    the processor, that nobody can inspect.
  • 18:41 - 18:45
    And nobody knows what it does.
  • 18:45 - 18:51
    And it runs a whole bunch
    of proprietary programs.
  • 18:51 - 18:58
    And it even runs Intel’s
    own proprietary OS.
  • 18:58 - 19:04
    And this all is happening all the time
    when you have some power connected
  • 19:04 - 19:07
    to your processor.
    Even if it’s in a sleep mode.
  • 19:07 - 19:11
    It’s running all the time
    here on my computer.
  • 19:11 - 19:18
    It can be doing anything it wants.
  • 19:18 - 19:22
    Obviously when I say something
    like that the first thought for,
  • 19:22 - 19:25
    at least for security people
    is: “This is an ideal
  • 19:25 - 19:31
    backdooring or rootkitting
    infrastructure.” Which is true.
  • 19:31 - 19:34
    However there is another
    problem and it’s Zombification.
  • 19:34 - 19:38
    I call it Zombification of personal
    computing that I will discuss in a moment.
  • 19:38 - 19:50
    I’m just stressing these are two somehow
    independent problems with this ME.
  • 19:50 - 19:57
    About 10 or more years ago I used to be
    a very active malware researcher or
  • 19:57 - 20:03
    stealth malware researcher. Especially
    rootkit researcher, and back then,
  • 20:03 - 20:09
    if I was to imagine an ideal
    infrastructure for writing rootkits,
  • 20:09 - 20:17
    I couldn’t possibly imagine
    anything better than ME.
  • 20:17 - 20:23
    Because ME has access to essentially
    everything that is important.
  • 20:23 - 20:27
    As I mentioned it has
    unconstrained access to DRAM,
  • 20:27 - 20:31
    to the actual CPU, to GPU.
  • 20:31 - 20:36
    It can also talk to your networking card,
  • 20:36 - 20:40
    especially to the Ethernet card
  • 20:40 - 20:46
    which controller is also in the
    Southbridge in the processor.
  • 20:46 - 20:51
    It can also talk to the SPI
    flash and access the SPI flash.
  • 20:51 - 20:55
    It has its own dedicated
    partition on the SPI flash,
  • 20:55 - 21:02
    which can be used to store
    whatever ME wants to store there.
  • 21:02 - 21:08
    This is really problematic and
    we don’t know what it runs.
  • 21:11 - 21:15
    But the other problem,
    that is perhaps less obvious,
  • 21:15 - 21:27
    is what I call zombification of
    the General Purpose Computing.
  • 21:27 - 21:33
    About a year ago there
    was a book published by
  • 21:33 - 21:39
    one of the Intel architects, one of the
    architects who designed Intel ME.
  • 21:39 - 21:42
    I highly recommend this book.
  • 21:42 - 21:52
    It’s the only somewhat official source
    of information about Intel ME.
  • 21:52 - 21:58
    And what the book has made clear is that
  • 21:58 - 22:04
    the model of computing that
    Intel envisions in the future,
  • 22:04 - 22:09
    is to take the model, that we have
    today, which looks like this.
  • 22:09 - 22:14
    The size of the boxes somehow
    attempts to present
  • 22:14 - 22:21
    the amount of logic or involvement
    of each of the layers
  • 22:21 - 22:25
    in processing of the user data.
  • 22:25 - 22:30
    Obviously we have most of this
    processing done in the applications.
  • 22:30 - 22:37
    But we also have some involvement from the
    OS and also from the hardware, of course.
  • 22:37 - 22:42
    For example, when we want to
    generate a random number
  • 22:42 - 22:57
    we would usually ask an OS to
  • 22:57 - 22:59
    return us the random number.
  • 22:59 - 23:05
    Because the OS can generate it using
    timings and interrupts, whatever.
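
A minimal sketch (not from the talk; purely illustrative) of what “asking the OS” for a random number looks like from an application, here in Python:

    import os
    import secrets

    # The application keeps the logic, but delegates entropy gathering to the
    # kernel, which can mix interrupt timings and other noise sources for us.
    key_material = os.urandom(32)    # 32 bytes straight from the OS CSPRNG
    token = secrets.token_hex(16)    # convenience wrapper over the same OS source
    print(key_material.hex(), token)
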
  • 23:05 - 23:11
    But again, most of the logic, most of
    the code is in the application’s layer.
  • 23:11 - 23:14
    And this is good, because
  • 23:14 - 23:20
    thanks to computing being
    general purpose computing,
  • 23:20 - 23:23
    every one of us can write applications.
  • 23:23 - 23:29
    We can argue what is the best
    way to implement some crypto.
  • 23:29 - 23:34
    Some people can write it one way, some
    other people can write it another way.
  • 23:34 - 23:36
    And that’s good.
  • 23:36 - 23:42
    Now this is the model
    that Intel wants to go to.
  • 23:42 - 23:48
    It essentially wants to eliminate
    all the logic that touches data
  • 23:48 - 23:55
    from apps and even the OS
    and move it to Intel ME.
  • 23:55 - 24:04
    Because, remember,
    Intel ME is also an OS.
  • 24:04 - 24:11
    It’s a separate OS. Only that this is an
    OS that nobody knows how it works.
  • 24:11 - 24:14
    It’s an OS, that nobody
    has any possibility
  • 24:14 - 24:18
    to look at the source code
    or even reverse engineer.
  • 24:18 - 24:22
    Because we can not even
    really analyse the binaries.
  • 24:22 - 24:30
    It’s the OS that is fully controlled
    by Intel. And not to mention that
  • 24:30 - 24:34
    any functionality it offers is
    also fully controlled by Intel.
  • 24:34 - 24:43
    Without anybody being
    able to verify what they do.
  • 24:43 - 24:45
    That might not even be malicious.
  • 24:45 - 24:48
    They may not even be
    doing malicious things.
  • 24:48 - 24:52
    Perhaps they are just
    implementing something wrong.
  • 24:52 - 24:57
    Bugs. Security bugs, right?
  • 24:57 - 25:01
    But of course Intel believes
    that whatever Intel writes
  • 25:01 - 25:07
    must be secure.
  • 25:07 - 25:12
    For some reason they must have missed
  • 25:12 - 25:17
    a number of papers that my team and others
  • 25:17 - 25:25
    have published in the recent 10 years.
  • 25:25 - 25:31
    The questions are: Can we disable Intel ME
    or can we control what code runs there?
  • 25:31 - 25:34
    Can we see at least what code is there?
  • 25:34 - 25:40
    And as far as I’m aware the
    answer is unfortunately: No.
  • 25:40 - 25:44
    As I mentioned, I have this cool
    laptop; it runs Qubes OS, of course,
  • 25:44 - 25:52
    but still it not only runs Qubes OS.
    It also runs side by side Intel ME.
  • 25:52 - 25:55
    Intel ME proprietary OS.
  • 25:55 - 26:06
    And I can’t do anything about it.
  • 26:06 - 26:14
    About 6 or 7 years ago my team
    has done some work on Intel AMT.
  • 26:14 - 26:18
    I believe this was the first and probably
    the only work where we managed
  • 26:18 - 26:24
    to actually inject code into
    ME. That was back in times
  • 26:24 - 26:29
    when Intel ME was not in the
    processor. It was in the Northbridge.
  • 26:29 - 26:36
    It was in the Q35 or Q45 chipset,
    if I remember correctly.
  • 26:36 - 26:42
    So we demonstrated how we
    can inject a rootkit into ME.
  • 26:42 - 26:46
    Of course Intel then patched it.
  • 26:46 - 26:51
    Now they continue to think that whatever
    they write will be always secure.
  • 26:51 - 26:55
    But, the problem is...
  • 26:55 - 27:01
    For a number of years after that
    presentation I used to believe
  • 27:01 - 27:09
    that we could use VT-d
    - the Intel IOMMU technology - with TXT
  • 27:09 - 27:12
    perhaps to effectively sandbox ME.
  • 27:12 - 27:15
    Because some of the specifications I saw,
  • 27:15 - 27:21
    suggested that VT-d should be able to
  • 27:21 - 27:26
    sandbox ME accesses to host memory.
  • 27:26 - 27:33
    And because we used VT-d heavily
    in Qubes, thanks to Xen using it,
  • 27:33 - 27:43
    I was pretty much not
    that worried about ME.
  • 27:43 - 27:51
    Unfortunately it turned out
    that ME can just bypass VT-d.
  • 27:51 - 27:59
    And this is a feature of this ME.
  • 27:59 - 28:05
    Which brings us to this rather
    sad conclusion that perhaps
  • 28:05 - 28:15
    if we look at Intel x86 platform,
    then the war is lost here.
  • 28:15 - 28:18
    It might be lost even
    if we didn’t have ME.
  • 28:18 - 28:24
    Even if we somehow manage to
    convince Intel to get rid of ME,
  • 28:24 - 28:31
    or at least to offer OEMs, Laptop
    vendors, an option to disable it,
  • 28:31 - 28:35
    by fusing something.
  • 28:39 - 28:45
    The problem with secure boot
    that I mentioned earlier,
  • 28:45 - 28:50
    and that I analysed in more detail
    in a paper I released 2 months ago,
  • 28:50 - 28:53
    is that it really is hopeless,
  • 28:53 - 28:58
    because of the complexity
    of the architecture
  • 28:58 - 29:04
    where we have ring 3, ring 0, okay. Then
    we have SMM, then we have virtualisation,
  • 29:04 - 29:10
    then we have STM to sandbox SMM,
    and the interactions between these.
  • 29:10 - 29:20
    This all doesn’t look really like
    it could be solved effectively,
  • 29:20 - 29:27
    which of course bothers me a lot.
  • 29:27 - 29:29
    At least for purely egoistic reasons,
  • 29:29 - 29:35
    because I spent the last 5 years
    of my life on this Qubes project.
  • 29:35 - 29:42
    And of course with such a state of
    things it makes my whole Qubes project
  • 29:42 - 29:47
    somehow meaningless.
  • 29:47 - 29:52
    If the situation is so bad,
  • 29:52 - 29:58
    perhaps the only way to solve the problem
    is to change the rules of the game.
  • 29:58 - 30:06
    Because you can not really
    win under the old rules.
  • 30:06 - 30:14
    That’s why I wanted to share
    this approach with you today.
  • 30:14 - 30:22
    That starts with recognizing that
  • 30:22 - 30:28
    most of the problems here are
    related to the persistent state,
  • 30:28 - 30:33
    that is stored pretty much
    everywhere on your platform,
  • 30:33 - 30:42
    which usually keeps the
    firmware, but not only.
  • 30:42 - 30:51
    So let’s imagine, that we could do a
    clean separation of state from hardware.
  • 30:51 - 30:59
    So this is the current picture.
    This is your laptop.
  • 30:59 - 31:06
    The reddish boxes are state,
    the persistent state.
  • 31:06 - 31:12
    That means these are places
    where malware can persist.
  • 31:12 - 31:18
    So you reinstall the OS, but
    the malware still can re-infect.
  • 31:18 - 31:23
    There are also places where
    malware can store secrets,
  • 31:23 - 31:30
    once it steals them from you.
    So imagine I can have malware,
  • 31:30 - 31:36
    that might only be stealing
    my disk encryption key.
  • 31:36 - 31:43
    And it can store it somewhere on
    the disk or maybe on SPI flash.
  • 31:43 - 31:49
    Or maybe in the Wi-Fi module firmware, or
    maybe in the embedded controller firmware,
  • 31:49 - 31:57
    somewhere. Somewhere
    there in those red rectangles.
  • 31:57 - 32:00
    Now if the malware does it,
    that is a pretty fatal situation,
  • 32:00 - 32:04
    because if my laptop
    gets stolen or seized,
  • 32:04 - 32:10
    perhaps then the adversary who gets
  • 32:10 - 32:14
    a key to the malware can
    just decrypt the blobs.
  • 32:14 - 32:22
    And the blobs would reveal my disk
    decryption key. And then the game is over.
  • 32:22 - 32:26
    And also another problem with
    this state is that it might be
  • 32:26 - 32:34
    revealing a lot of user and
    personally identifiable information.
  • 32:34 - 32:40
    However you read this PII abbreviation.
  • 32:40 - 32:44
    These are for example MAC addresses.
  • 32:44 - 32:48
    Or maybe processor serial number.
  • 32:48 - 32:51
    Or maybe ME serial number. Whatever!
  • 32:51 - 32:58
    Or maybe the list of SSID networks,
    that ME has seen recently.
  • 32:58 - 33:02
    How do you know it’s not being
    stored somewhere on your SPI flash?
  • 33:02 - 33:07
    You don’t know what is stored there.
    Even though I can take off my SPI flash
  • 33:07 - 33:14
    or just connect a programmer to my
    SPI flash - an EEPROM programmer -
  • 33:14 - 33:27
    I can read the contents of the SPI
    flash, but all of this will be encrypted.
  • 33:27 - 33:31
    Now we recognize, that the
    state might be problematic.
  • 33:31 - 33:37
    And now imagine a picture, that
    we have the laptop, which has
  • 33:37 - 33:44
    no persistent state storage.
    Which is this blue rectangle.
  • 33:44 - 33:48
    Let’s call it stateless laptop.
  • 33:48 - 33:54
    And then we have another element,
    that we’re gonna call trusted stick
  • 33:54 - 33:58
    for lack of any more sexy name for it.
  • 33:58 - 34:05
    That’s gonna be keeping all the firmware,
    all the platform configuration,
  • 34:05 - 34:10
    all the system partitions,
    like boot and root,
  • 34:10 - 34:15
    all the user partitions.
  • 34:15 - 34:19
    Now we see that... and of course the
    firmware and system partitions
  • 34:19 - 34:23
    will be exposed in a read only manner.
  • 34:23 - 34:28
    So even if malware, perhaps a traditional
    malware, that got into my system
  • 34:28 - 34:35
    through a malicious attachment,
    even if it found a weakness in the BIOS,
  • 34:35 - 34:40
    or maybe in the chipset, allowing
    it to re-flash, allowing it
  • 34:40 - 34:51
    to re-flash the BIOS - we have seen plenty of
    such attacks in the recent several years.
  • 34:51 - 34:55
    Now it would not be able
    to succeed, because
  • 34:55 - 35:00
    the trusted stick, which is gonna be a
    simple FPGA-implemented device,
  • 35:00 - 35:08
    will just be exposing
    the read-only storage.
  • 35:08 - 35:13
    You see that firmware injection
    can be prevented this way.
  • 35:13 - 35:17
    Also there are no places
    to store stolen secrets.
  • 35:17 - 35:22
    Again, the same malware running in the ME
  • 35:22 - 35:28
    still can steal my disk encryption
    key or my PGP private key.
  • 35:28 - 35:31
    But it has no place to store it.
  • 35:31 - 35:36
    So if somebody now takes my laptop,
    they will not be able to find it there.
  • 35:36 - 35:40
    You might say, maybe it will be
    able to store it on the stick.
  • 35:40 - 35:45
    But then, again, the stick, the firmware
    and system partitions are read-only.
  • 35:45 - 35:49
    And the user partitions
    are encrypted by the stick.
  • 35:49 - 35:57
    So even if ME can send something to be
    stored there, nobody besides the user
  • 35:57 - 36:03
    can really get hands on this blob.
  • 36:03 - 36:07
    Also we get a reliable way to
    verify what firmware we use.
  • 36:07 - 36:11
    Or ability to choose what
    firmware we want to use.
  • 36:11 - 36:18
    Because we can just take this stick,
    plug into our trustworthy computer,
  • 36:18 - 36:26
    some, I don’t know, Lenovo X60 from 15
    years ago, that we have running Coreboot
  • 36:26 - 36:30
    and we just analysed all
    the elements, whatever.
  • 36:30 - 36:39
    So we finally have a way to
    upload firmware in a reliable way.
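
A rough sketch (not part of the talk; the device path and expected hash are placeholders) of how such a check on a separate, trusted computer could look: read the stick’s read-only firmware partition and compare it against the hash published for the firmware build you chose.

    import hashlib
    import sys

    IMAGE_PATH = "/dev/sdb1"    # assumed: the stick's read-only firmware partition
    EXPECTED_SHA256 = "..."     # the published hash of the firmware build you trust

    def sha256_of(path, chunk_size=1 << 20):
        # Stream the image so large flash dumps do not need to fit in memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    actual = sha256_of(IMAGE_PATH)
    if actual == EXPECTED_SHA256:
        print("firmware image matches the expected build")
    else:
        sys.exit(f"MISMATCH: got {actual}")
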
  • 36:39 - 36:46
    Thanks to the actual laptop having no
    state, we can have something like Tails
  • 36:46 - 36:55
    finally doing what it advertises.
    I can boot Tails or something like that.
  • 36:55 - 37:03
    I can use it, I can shut it down and there
    are no more traces of my activity there.
  • 37:03 - 37:08
    I can give my laptop to somebody else.
    Or I can boot some other environment.
  • 37:08 - 37:20
    Perhaps some, I don’t know,
    Windows to play games, or whatever.
  • 37:20 - 37:27
    So what would it take to have
    such a stateless laptop?
  • 37:27 - 37:33
    This is the simplest version which
    shows that the only modification
  • 37:33 - 37:38
    that has been made here was
    to take the SPI flash chip
  • 37:38 - 37:43
    and essentially put it outside
    the laptop on a trusted stick.
  • 37:43 - 37:50
    And just route the wiring,
    just 4 wires, to the trusted stick.
  • 37:50 - 37:51
    And that’s pretty much it.
  • 37:51 - 37:56
    That’s the simplest version. Oh,
    and I also got rid of the disk.
  • 37:56 - 38:03
    And also I had to ensure, that
    whatever discrete devices,
  • 38:03 - 38:06
    which in that case are the embedded
    controller and the Wi-Fi module,
  • 38:06 - 38:11
    they do not have flash memory
    but use something like OTP memory.
  • 38:11 - 38:14
    We can further get rid
    of the Wi-Fi, and use
  • 38:14 - 38:18
    an external USB connected
    one if that is not possible.
  • 38:18 - 38:23
    And for the embedded controller that
    should be possible, much easier,
  • 38:23 - 38:27
    because embedded controller is always
    something that the OEM chooses.
  • 38:27 - 38:34
    So we can just talk to whatever
    OEM, who would like to implement
  • 38:34 - 38:39
    this stateless laptop, and ask the
    OEM to use an embedded controller
  • 38:39 - 38:45
    with essentially ROM, instead of flash.
  • 38:45 - 38:52
    So that’s the simplest version,
    which is really simple.
  • 38:52 - 38:57
    This is a more complex version
    where we also fit something
  • 38:57 - 39:05
    that I call here SPI Multiplexer.
    Which allows sharing the firmware
  • 39:05 - 39:09
    not just with the processor, but
    also with the embedded controller.
  • 39:09 - 39:12
    And perhaps also with the disk.
  • 39:12 - 39:16
    Because maybe we actually
    would like to have internal disk.
  • 39:16 - 39:23
    Because internal disk will always
    be faster and will always be bigger
  • 39:23 - 39:29
    than whatever disk we will
    put on our trusted stick.
  • 39:29 - 39:36
    You might object, that, come on, disk
    is actually not a stateless thing! Right?
  • 39:36 - 39:44
    Because disk is made especially
    to store state persistently.
  • 39:44 - 39:50
    But it’s a special disk, that I will
    mention in just a few minutes.
  • 39:50 - 39:55
    It’s a special disk running trusted
    firmware and doing read-only
  • 39:55 - 39:59
    and encryption for everything.
  • 39:59 - 40:05
    And now for the trusted stick:
  • 40:05 - 40:11
    As I mentioned, the trusted
    stick is envisioned to have
  • 40:11 - 40:14
    read-only and encrypted partitions.
  • 40:14 - 40:23
    And the read-only partitions are for
    firmware and the system code.
  • 40:23 - 40:29
    So the first block is something that we
    would like to export over SPI, typically,
  • 40:29 - 40:34
    and the system partition is
    something that we make visible
  • 40:34 - 40:43
    to the OS using something like
    pretending to be USB mass storage
  • 40:43 - 40:53
    or actually implementing
    USB mass storage protocol.
  • 40:53 - 40:58
    And the encrypted partition
    - again, the important thing here is
  • 40:58 - 41:06
    that encryption should be
    implemented by the stick itself.
  • 41:06 - 41:11
    So we have some key here,
  • 41:11 - 41:14
    the question is how this key should be...
  • 41:14 - 41:22
    What input should be taken
    to derive this key from.
  • 41:22 - 41:28
    It could be something that
    is persistent to the stick.
  • 41:28 - 41:31
    It could be combined with a
    passphrase, that the user enters
  • 41:31 - 41:38
    using a traditional keyboard,
    plus maybe a secret from the TPM.
  • 41:38 - 41:44
    And when I say TPM I think about the
    firmware TPM inside the processor
  • 41:44 - 41:47
    that is using storage provided by
  • 41:47 - 41:57
    the encrypted firmware partition.
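
One possible construction for that key, sketched here as an assumption (the talk only names the inputs: a stick-resident secret, a user passphrase, and optionally a TPM-held secret), using only Python standard-library primitives:

    import hashlib
    import hmac
    import os

    def derive_partition_key(passphrase: bytes,
                             stick_secret: bytes,
                             tpm_secret: bytes = b"",
                             salt: bytes = b"stateless-laptop-v0") -> bytes:
        # 1. Stretch the user passphrase so it resists brute force.
        stretched = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000)
        # 2. Mix in the secret stored on the stick and (optionally) a TPM-held
        #    secret, so no single component alone can reconstruct the key.
        return hmac.new(stick_secret, stretched + tpm_secret, hashlib.sha256).digest()

    # Example usage: 32 bytes, e.g. for the AES encryption done by the stick itself.
    key = derive_partition_key(b"correct horse battery staple",
                               stick_secret=os.urandom(32))
    print(key.hex())
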
  • 41:57 - 42:02
    The optional internal disk
    that I just mentioned,
  • 42:02 - 42:07
    it should essentially do
    the same as the stick,
  • 42:07 - 42:11
    and because it will be
    running trusted firmware
  • 42:11 - 42:16
    which it will be fetching
    from the trusted stick itself,
  • 42:16 - 42:20
    the disk will not have any flash memory.
  • 42:20 - 42:24
    So because we will trust
    the hardware of the disk
  • 42:24 - 42:28
    and because we will trust the
    firmware, we will trust the firmware
  • 42:28 - 42:33
    to provide read-only and encrypted
    partitions just like those ones
  • 42:33 - 42:39
    I mentioned on the stick, which is nice
    because it relieves the stick from acting
  • 42:39 - 42:44
    as a mass storage device, which has
  • 42:44 - 42:51
    practical consequences which are nice.
  • 42:51 - 42:59
    So there’s a picture with the internal
    trusted disk, which you see just here.
  • 42:59 - 43:04
    As you can see, it takes also the
    firmware from the trusted stick.
  • 43:04 - 43:10
    And there is even an open source
    project, OpenSSD. And it looks like
  • 43:10 - 43:18
    people have already built an open hardware
    open firmware SSD, a very performant disk.
  • 43:18 - 43:29
    So this is not just science
    fiction, even for this SSD.
  • 43:29 - 43:35
    Okay, so that looks all very nice,
    but there is one problem.
  • 43:35 - 43:43
    Even though malware may not have any
    place on the laptop to keep the stolen secrets,
  • 43:43 - 43:47
    it still might try to leak
    them over the network.
  • 43:47 - 43:53
    And let’s differentiate now between
    classic malware and sophisticated malware.
  • 43:53 - 43:59
    Classic malware is something you get with
    an attachment or some drive-by-attack
  • 43:59 - 44:04
    which we’ll discuss in a moment.
    Now, let’s focus on sophisticated malware.
  • 44:04 - 44:17
    So, a hypothetical rootkit in ME.
  • 44:17 - 44:24
    Before we move on, for obvious
    reasons, such a sophisticated malware
  • 44:24 - 44:30
    would not be interested
    in getting caught easily.
  • 44:30 - 44:38
    So, it would not be establishing a
    TCP connection to NSA.gov server
  • 44:38 - 44:45
    or whatever, right?
    That would be plain stupid.
  • 44:45 - 44:50
    Having that in mind, let’s
    consider a few scenarios.
  • 44:50 - 44:55
    Scenario number 0 is
    an air-gapped system.
  • 44:55 - 44:58
    Even though it might
    be an air-gapped system,
  • 44:58 - 45:02
    still remember there is ME running there.
  • 45:02 - 45:12
    If the computer is not
    inside a Faraday cage,
  • 45:12 - 45:19
    there are still plenty of other
    networks or devices around it.
  • 45:19 - 45:26
    Which means that ME can theoretically
    use your Wi-Fi card or even speaker
  • 45:26 - 45:33
    to establish a covert channel with, say,
    your phone, that might be just nearby.
  • 45:33 - 45:39
    So, in order to make such a
    system truly air-gapped,
  • 45:39 - 45:43
    knowing that we can not get rid of the ME,
  • 45:43 - 45:48
    we really need to have kill-switches
    for any transmitting devices,
  • 45:48 - 45:54
    including the speakers, and apparently
    even that might not be enough,
  • 45:54 - 45:58
    because some people showed
    covert channels that used
  • 45:58 - 46:04
    things like power fluctuations
  • 46:04 - 46:12
    or temperature fluctuations but let’s
    leave those exotic examples aside.
  • 46:12 - 46:19
    A more interesting scenario is a
    closed network of trusted nodes.
  • 46:19 - 46:24
    In that scenario we assume that
    all these people are trusted.
  • 46:24 - 46:28
    Again, by definition that means that
    any of these people can compromise
  • 46:28 - 46:33
    the security of anybody else.
    We really don’t like trusted things,
  • 46:33 - 46:38
    but, well, let’s start with something.
  • 46:38 - 46:46
    Now, even though each of these trusted
    peers, which run stateless laptops,
  • 46:46 - 46:53
    even though each of these has
    this malicious ME inside it,
  • 46:53 - 46:58
    because we are gonna fit a small proxy
  • 46:58 - 47:03
    so a modification that we are...
    that we should additionally do,
  • 47:03 - 47:08
    that I have not shown you before,
    is that rather than connecting
  • 47:08 - 47:14
    your Wi-Fi module directly to the
    processor which is not good,
  • 47:14 - 47:19
    because it gives the processor full
    authority over this Wi-Fi module.
  • 47:19 - 47:24
    Instead we would like to
    connect it to some proxy.
  • 47:24 - 47:28
    It would be doing some kind of tunnelling.
  • 47:28 - 47:34
    Something like a VPN or maybe Tor-ifying
    any traffic that is generated there.
  • 47:34 - 47:40
    So even though ME might be willing
    to be sending some traffic
  • 47:40 - 47:43
    maybe not explicit traffic,
  • 47:43 - 47:48
    maybe it will be piggybacking on
    some user-generated traffic,
  • 47:48 - 47:56
    by only modifying, I don’t know, TCP
    initial sequence numbers, or something.
  • 47:56 - 48:01
    It still all will be happening
    inside the tunnel.
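
To make the proxy idea concrete, here is a very reduced sketch (one possible shape, not the actual design; host names and ports are made up): a tiny forwarder that accepts whatever the platform sends and pushes it through a single TLS tunnel to a chosen endpoint, so any piggybacked leakage stays inside that tunnel until the trusted endpoint.

    import socket
    import ssl
    import threading

    TUNNEL_HOST = "tunnel.example.org"   # assumed trusted tunnel endpoint (placeholder)
    TUNNEL_PORT = 443
    LISTEN_PORT = 8080                   # the only egress path the platform is wired to

    def pump(src, dst):
        # Copy bytes one way until either side closes.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    def handle(client):
        # Wrap every outgoing connection in TLS towards the fixed endpoint.
        ctx = ssl.create_default_context()
        raw = socket.create_connection((TUNNEL_HOST, TUNNEL_PORT))
        tunnel = ctx.wrap_socket(raw, server_hostname=TUNNEL_HOST)
        threading.Thread(target=pump, args=(client, tunnel), daemon=True).start()
        pump(tunnel, client)

    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
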
  • 48:01 - 48:06
    Again, some people might be
    saying “Yeah, but still ME
  • 48:06 - 48:10
    might be modulating the timings
    of the generated packets
  • 48:10 - 48:19
    and this way try to convey some
    information using timing.”
  • 48:19 - 48:22
    We can’t truly do much about that
    but on the other hand it would be
  • 48:22 - 48:28
    extremely difficult for ME to
    do that, implementation wise.
  • 48:28 - 48:33
    Finally a scenario where
    we want to use any...
  • 48:33 - 48:38
    when we want to connect with anybody
    not just with a trusted computer.
  • 48:38 - 48:44
    So, say, with some website on the internet
    that might or might not be trusted.
  • 48:44 - 48:52
    Again, by having this proxy which by
    the way might be implemented inside
  • 48:52 - 48:57
    this embedded controller which,
    as we know, if you remember,
  • 48:57 - 49:06
    runs the trusted firmware because it
    fetches firmware from our trusted stick.
  • 49:06 - 49:12
    So the proxy again is tunnelling
    any potential leakage from ME
  • 49:12 - 49:18
    which means that a malicious ISP or
    any part of the infrastructure here
  • 49:18 - 49:24
    still can not really retrieve the
    secrets that ME tries to leak.
  • 49:24 - 49:28
    But of course at some point we need to
    terminate the tunnel, which might be
  • 49:28 - 49:36
    the VPN server or might be a Tor exit
    node or maybe the server itself if it’s,
  • 49:36 - 49:46
    say, maybe an onion server, in
    which case anybody here can still
  • 49:46 - 49:55
    presumably get the secrets
    from ME in this cloud here.
  • 49:55 - 49:58
    Which unfortunately we
    can do nothing about.
  • 49:58 - 50:05
    Except for hoping that at least
    if we used perhaps TOR
  • 50:05 - 50:09
    for this first part of the equation
  • 50:09 - 50:14
    then at least a malicious
    administrator of, say, this server
  • 50:14 - 50:17
    would not be able to
    correlate whose decryption...
  • 50:17 - 50:20
    whose disk decryption key it really is.
  • 50:20 - 50:24
    Although this is very tricky. That would
    only work if this computer was
  • 50:24 - 50:28
    a special purpose computer. If you
    used something like Qubes here
  • 50:28 - 50:32
    and if you wanted to have different
    partitions, one for your personal stuff,
  • 50:32 - 50:38
    and only one of these partitions
    would be a special secret partition.
  • 50:38 - 50:42
    That wouldn’t work, because
    ME still would be able to gather
  • 50:42 - 50:47
    your identification information
    from whatever partitions you have.
  • 50:47 - 50:53
    Because again, it has unconstrained access
    to all the personal, all the host memory.
  • 50:53 - 51:01
    But still this... using this
    proxy pushes the adversary,
  • 51:01 - 51:06
    the hypothetical adversary in the
    ME, into the extreme difficulty
  • 51:06 - 51:12
    of needing to piggy-back on
    some higher-level protocols,
  • 51:12 - 51:18
    establishing exotic covert channels.
    Compared to what they can do today,
  • 51:18 - 51:23
    where they simply steal the key
    and store it on the SPI flash partition.
  • 51:23 - 51:29
    Or maybe on your disk. This is like
    orders of magnitude more difficult
  • 51:29 - 51:35
    for them to do.
  • 51:35 - 51:38
    We mentioned the sophisticated
    malware and I mentioned
  • 51:38 - 51:44
    the classic malware is a different story.
    The classic malware doesn’t need
  • 51:44 - 51:52
    to be shy against leaking something
    through whatever means you can think of.
  • 51:52 - 51:59
    Perhaps by sending email to somebody. But
    obviously the classic malware
  • 51:59 - 52:07
    problem we can address quite
    reasonably well on the OS level.
  • 52:07 - 52:14
    For example using compartmentalization.
    But here comes the problem,
  • 52:14 - 52:21
    which is that a malicious BIOS...
  • 52:21 - 52:25
    Let me get back a little bit. Because
    so far we have been assuming that
  • 52:25 - 52:29
    we don’t really need to trust the BIOS.
    Because having this stateless laptop
  • 52:29 - 52:34
    and trusted stick, even if the BIOS was
    malicious, it still, again, would not
  • 52:34 - 52:40
    be able to change anything in its own
    firmware partition, would not be able to
  • 52:40 - 52:45
    store any stolen secrets
    anywhere. So it’s convenient
  • 52:45 - 52:51
    to assume that the BIOS
    does not need to be trusted.
  • 52:51 - 52:59
    But then, again, a compromised
    BIOS might instead be providing
  • 52:59 - 53:08
    privilege escalation backdoors for
    classic malware that executes on your
  • 53:08 - 53:13
    compartmentalised OS.
    Such as a VM escape.
  • 53:13 - 53:16
    Such things are trivial to implement.
  • 53:16 - 53:21
    And we don’t want classic malware
    which means we want to ensure
  • 53:21 - 53:28
    that the BIOS does not
    provide such backdoors.
  • 53:28 - 53:35
    And to make it short, we need open-source
    BIOS. Something like Coreboot.
  • 53:35 - 53:40
    It’s great that we have Coreboot
    and we could help Coreboot
  • 53:40 - 53:44
    to become such a BIOS
    for this stateless laptop.
  • 53:44 - 53:50
    Even though Coreboot is not fully open-
    source - it relies on so-called Intel FSP,
  • 53:50 - 53:56
    the Firmware Support Package, which
    is an Intel blob that is needed to
  • 53:56 - 54:05
    initialize your DRAM and other silicon
    - still it should be reasonably easy to
  • 54:05 - 54:11
    ensure that FSP does not
    provide SMM backdoors.
  • 54:11 - 54:16
    So this is a solvable problem.
  • 54:16 - 54:25
    Finally there’s this question:
    So let’s say half a year from now
  • 54:25 - 54:30
    or a year from now Purism
    or somebody will tell you
  • 54:30 - 54:38
    here is the stateless laptop.
    You can order it, just 1000 dollars.
  • 54:38 - 54:44
    So you got the laptop. But how do you
    know it really IS a stateless laptop?
  • 54:44 - 54:50
    Maybe it is full of state-carrying
    elements. Maybe it’s full of
  • 54:50 - 54:56
    radio devices that are
    emanating radio signals everywhere.
  • 54:56 - 55:04
    This comes down to the problem of:
    How do we compare 2 different PCBs?
  • 55:04 - 55:06
    Two different Printed Circuit Boards?
  • 55:06 - 55:14
    As far as I’m aware right now our industry
    has no ways to compare two different PCBs
  • 55:14 - 55:18
    and to state: yes they look identical.
  • 55:18 - 55:26
    Because if we had that, then we could
    have the vendor, the laptop vendor
  • 55:26 - 55:33
    which would obviously have to be
    open-hardware, publish the schematics
  • 55:33 - 55:38
    and pictures of the boards and then
    anybody who ordered this laptop
  • 55:38 - 55:44
    would have an opportunity to always,
    say, photograph the board and
  • 55:44 - 55:51
    have a diff tool to compare whether
    it really looks the same.
  • 55:51 - 55:57
    Sure we would not be able to see inside
    the chips but at least the geometry-wise
  • 55:57 - 56:06
    comparison would be a tremendous step
    to making such malicious modifications
  • 56:06 - 56:10
    by vendors very difficult.
  • 56:10 - 56:14
    This is a vision problem, kind of, right?
    You take 2 photos, have 2 photos
  • 56:14 - 56:21
    of 2 PCBs and you have a tool to compare
    it. And I believe Jacob Appelbaum
  • 56:21 - 56:27
    has already mentioned it,
    some... a year ago probably,
  • 56:27 - 56:38
    it’s a great research project for all
    you academic people sitting here.
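
As one possible starting point for that research project, a rough sketch (my illustration, not an existing tool; file names are placeholders) of the vision part: register a photo of the board you received against a published reference photo and flag regions that differ geometrically.

    import cv2
    import numpy as np

    reference = cv2.imread("reference_board.jpg", cv2.IMREAD_GRAYSCALE)
    received = cv2.imread("my_board.jpg", cv2.IMREAD_GRAYSCALE)

    # Find matching keypoints so the two photos can be registered to each other.
    orb = cv2.ORB_create(5000)
    kp1, des1 = orb.detectAndCompute(received, None)
    kp2, des2 = orb.detectAndCompute(reference, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    matches = sorted(matches, key=lambda m: m.distance)[:500]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the received photo onto the reference and look at the difference.
    aligned = cv2.warpPerspective(received, H, reference.shape[::-1])
    diff = cv2.absdiff(reference, aligned)
    _, suspicious = cv2.threshold(diff, 60, 255, cv2.THRESH_BINARY)
    cv2.imwrite("suspicious_regions.png", suspicious)
    print("changed pixels:", int(np.count_nonzero(suspicious)))
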
  • 56:38 - 56:44
    That’s an example of a board that...
    I have no idea, I got this laptop,
  • 56:44 - 56:53
    I opened it, I see this board. Sure, I can
    identify some IC elements
  • 56:53 - 56:56
    like this embedded controller here...
  • 56:56 - 57:00
    But, really, maybe it’s connected
    somehow differently,
  • 57:00 - 57:03
    maybe there is some other flash
    elements there, I don’t know.
  • 57:03 - 57:09
    I would like to have an ability now to
    check this against a golden image
  • 57:09 - 57:18
    that some experts will analyze
    in-depth and say it’s safe to use.
  • 57:18 - 57:26
    Many people say that perhaps we should
    all give up on Intel x86 because of things like ME.
  • 57:26 - 57:34
    applause
  • 57:34 - 57:39
    Yet this is not such a nice idea.
  • 57:39 - 57:48
    Or maybe this is not such a
    silver bullet, I should have said.
  • 57:48 - 57:53
    First, we have ARM. Everybody says
    “Why not ARM? Let’s go to ARM!”
  • 57:53 - 57:58
    First: There’s no such thing as an ARM
    processor. Okay?
  • 57:58 - 58:04
    ARM just sells the specifications, or IP.
  • 58:04 - 58:12
    And then the vendors, like Samsung,
    Texas Instruments etc. who take this IP
  • 58:12 - 58:18
    and design and make their very own SoC.
    This is still a proprietary processor
  • 58:18 - 58:25
    that they can put whatever they want
    inside. E.g. we have TrustZone,
  • 58:25 - 58:31
    which by itself is not as closed as ME.
    But there is nothing that would
  • 58:31 - 58:36
    prevent a vendor from actually taking
    TrustZone and locking it down
  • 58:36 - 58:41
    and ending up with something
    like ME very easily.
  • 58:41 - 58:46
    It’s just a matter of the
    vendor being willing to do that.
  • 58:46 - 58:53
    Also the diversity of the processors
    makes it difficult for OSes like Qubes
  • 58:53 - 58:59
    that would like to use advanced
    technologies like the IOMMU for isolation
  • 58:59 - 59:04
    to actually support all of them because
    different SoCs might be implementing
  • 59:04 - 59:12
    completely different versions or
    even technologies doing that.
  • 59:12 - 59:17
    Another alternative, a much better one
    is to use open-hardware processors.
  • 59:17 - 59:25
    Currently that means FPGA-
    implemented processors.
  • 59:25 - 59:29
    In the future maybe we will have 3D
    printers that will allow everybody
  • 59:29 - 59:34
    to print it. That will be great. But
    that is probably not coming any time,
  • 59:34 - 59:37
    in the coming 10 or 20 years.
  • 59:37 - 59:45
    And meanwhile the performance and
    lack of really any security technologies
  • 59:45 - 59:51
    like IOMMU or virtualization don’t
    make this a viable solution
  • 59:51 - 59:56
    for the coming say 5 years at least.
  • 59:56 - 59:59
    And even then, even if we have
    such an open-source processor
  • 59:59 - 60:06
    this clean separation of state
    still makes lots of sense.
  • 60:06 - 60:12
    Right? Again, because firmware
    infections can be easily prevented
  • 60:12 - 60:17
    because malware, if it gets there
    somehow, still has no places
  • 60:17 - 60:23
    to store stolen secrets, because
    it provides a reliable way to verify
  • 60:23 - 60:29
    or upload firmware. And makes it
    easy to boot multiple environments.
  • 60:29 - 60:32
    And share laptops with others.
  • 60:32 - 60:38
    I know that most of you will now say:
    “Yeah, that may be cool idea but
  • 60:38 - 60:48
    the market will never
    buy into that!” Right?
  • 60:48 - 60:55
    Understanding that PCs are really,
    as I said, extensions of our brains,
  • 60:55 - 61:03
    we should stop thinking about
    market forces as the ultimate force
  • 61:03 - 61:08
    shaping what our personal
    computing looks like.
  • 61:08 - 61:14
    Just like we didn’t leave it
    to market forces
  • 61:14 - 61:20
    to give us human rights. Right?
  • 61:20 - 61:26
    We should not count on the market forces
    to give us trustworthy personal computers.
  • 61:26 - 61:29
    Because that might just not be really...
  • 61:29 - 61:35
    applause
  • 61:35 - 61:40
    That just might not be in the
    interest of the market forces!
  • 61:40 - 61:44
    So, hopefully, some legislation
    could be of help here.
  • 61:44 - 61:49
    Maybe EU could do something here.
  • 61:49 - 61:56
    Because it’s really funny, when I
    often talk with other engineers,
  • 61:56 - 62:03
    and we all know that our world
    now really runs on computers,
  • 62:03 - 62:07
    and yet it apparently...
    Almost every engineer I talked to
  • 62:07 - 62:13
    says something like “Yeah but the
    sales people will never do that,
  • 62:13 - 62:16
    the business will never agree to that.”
  • 62:16 - 62:24
    But if the world runs on computers
    shouldn’t it be us, the engineers,
  • 62:24 - 62:29
    who should actually have the
    final say how this should...
  • 62:29 - 62:33
    how the computer technology
    should look?
  • 62:33 - 62:38
    Yeah, I’ll just leave it here with this.
    Thank you very much!
  • 62:38 - 62:40
    final applause
  • 62:40 - 62:45
    postroll music
  • 62:45 - 62:51
    subtitles created by
    c3subtitles.de in 2016