preroll music
Herald: I’m really glad that
you’re all here and that today
I can introduce Joanna Rutkowska to you.
She will be talking about reasonably
trustworthy x86 systems.
She’s the founder and leader
of the Invisible Things Lab
and also – that’s also something you all
probably use – the Qubes OS project.
She presented numerous attacks
on Intel based systems and also
virtualization systems. But today she
will not only speak about the problems
of those machines but will present some
solutions to make them more secure.
Give it up for Joanna Rutkowska!
applause
Joanna: Okay, so, let’s start
with stating something obvious.
Personal computers have become
really the extensions of our brains.
I think most of you will
probably agree with that.
Yet the problem is that they are insecure
and untrustworthy,
which personally bothers me a lot.
And here I want to make a quick digression
for the vocabulary I’m gonna
be using during this presentation.
Well, there are
three adjectives related to trust,
and people often confuse them.
When we say something is “trusted”,
that means, by definition, that it
can compromise the security of
the whole system, which means
we don’t like things to be trusted.
“Trusted third party”, “trusted CA”:
we don’t like that,
because “trusted” doesn’t necessarily
mean that it is “secure”.
So, what is secure? Secure is
something that is resistant to attacks.
This laptop, for example, might
be resistant to attacks:
if I open an email attachment, I hope
that it will not compromise my system,
or if I plug in the USB receiver
for the slide changer,
I hope that this action will not
compromise my whole PC.
And yet something can be
secure but not trustworthy.
A good example of this might be the
Intel Management Engine (ME), which I’m
gonna be talking about more later.
It might be very resistant to attacks,
and yet it might be a backdoor:
a backdoor that is
very resistant to attacks,
yet is not acting in
the interests of the user.
So it’s not “good”, whatever
“good” means in your assumed
moral frame of reference.
So, there’s been of course a lot of work
in the last 20 years and maybe more
to build solutions that provide
security and trustworthiness,
and most of this work has focused on
the application layer: things like
GnuPG and PGP first,
then Tor, and all the secure communication
protocols and programs.
But of course,
it is clear that any effort
at the application level
is just meaningless if we cannot
trust our operating system (OS),
because the OS is the trusted part.
So if the OS is compromised,
then everything is lost.
And there have been some efforts,
notably the project I started 5 years ago,
which now has more than a dozen
people working on it: Qubes OS.
It tries to address the problem of the OS
being part of the TCB,
so what we try to do is
shrink the amount of trusted
code to an absolute minimum.
There have also been other
efforts in this area.
But operating systems are not
something I’m gonna be discussing today.
What I’m gonna be discussing today
is the final layer, is the hardware.
Because what the OS is to applications,
the hardware is to the OS.
Again, most of the efforts so far
to create a secure and trustworthy OS
have always assumed
that the hardware is trusted.
For most OSes that means that
a single malicious
peripheral on this laptop,
like a malicious Wi-Fi module
or maybe the embedded controller,
can again compromise my whole PC,
my whole digital life.
So what to do about it?
Before we discuss what to do about it
we should quickly
recap all the problems
with present PC platforms
and specifically I’m gonna
be focusing on the x86 platform,
and specifically on the Intel x86
platform, which means: laptops.
This picture shows what a
typical modern laptop looks like.
You can see that it consists of
a processor in the center,
and then there is memory,
some peripherals, keyboard and monitor.
Very simple.
It can be very simple, because
if we look at present Intel processors
they really integrate everything
and the kitchen sink.
Ten years ago we used to have a processor,
a Northbridge, a Southbridge
and perhaps even more discrete
elements on the motherboard.
Today nearly all these elements have been
integrated into one processor package.
This is Broadwell,
and this long element
here is the CPU and GPU,
and the other one
is the PCH,
the Platform Controller Hub,
which is what used to be called
the Southbridge and Northbridge;
the line between these
has somehow blurred.
Of course there is only one
company making these:
an American
company called Intel.
It is a completely opaque construction.
We have absolutely no way
to examine what’s inside.
The advantage is that
it makes the construction of computers,
of laptops, very easy now,
and lots of vendors can
produce sexy little laptops,
like the one I have here.
On this picture we see now some more
elements that are in this processor.
So, when you say “processor”
today, it’s no longer just the CPU.
The processor is now the CPU, GPU,
memory controller hub, PCIe
root complex, some Southbridge logic,
e.g. the SATA controller and so on,
as well as something
called the Management Engine (ME),
which we’ll discuss in a moment.
There are few more elements
here that are important.
The most important for us
is the SPI flash element.
Because what’s interesting is that
with this whole integration
that has happened to the processor
and the other peripherals,
still the storage for the firmware,
so the storage where your BIOS as
well as other firmware is stored,
is still a discrete element.
We’ll see this element in
one of the next slides.
So let’s now consider first
the problem of boot security.
Obviously everybody understands
that boot security, which is
how to start the chain of
trust for whatever software
is gonna be running later,
is of paramount importance.
I think I’m out of range.
Connected with boot security are
malicious peripherals,
which I mentioned shortly before.
So we’ll now be asking: can we assure
that only good code is started,
and how might the peripherals
interfere here?
Again, we will look at this SPI flash.
If we’re now considering boot
security, we would like to understand
what code is loaded on the platform.
And if we think about where this code
is stored, it turns out it is stored on
the SPI flash and potentially also
on some of the discrete elements.
Let me state again that this whole
integrated processor package
has everything and the kitchen
sink except for the flash,
i.e. except for the firmware storage.
Here we have one of the SPI flash chips.
This is from my laptop, actually.
It’s a little microcontroller
and it typically stores the firmware for
these things, that are written here.
Now the question is, let's say I
have got this laptop from a store.
How can I actually verify what
firmware is really on this chip?
Well I can perhaps boot it into some
minimal Linux and try to ask it.
But of course if there is some malicious
something on the motherboard,
not necessarily this chip,
I will not get reliable answers.
Another question: let’s say I somehow
know that there’s something trustworthy
on this SPI chip. Can I somehow enforce
the read-onlyness of its contents?
There have been some efforts to do that,
like a project by Peter Stuge,
who just took a soldering iron
and grounded one of the 8 pins,
the one called “write protect”.
If you ground it, it tells the
chip to discard any write commands.
But again, remember, this chip
is still a little microcontroller,
a little computer. So it might
ignore whatever you requested.
It’s not like you are cutting off
a signal line for write commands;
you are merely asking the
chip to ignore them.
So if you don’t trust the chip in the
first place, this doesn’t give you
a reliable way to enforce read-onlyness.
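The write-protect situation can be sketched as a toy model. The opcodes below are the standard JEDEC ones, but the chip class and its behavior are purely illustrative, not any real part’s datasheet:

```python
# Toy model of an SPI flash chip, illustrating why the WP# pin is only
# advisory: the chip's own internal logic decides whether to honor it.
WRITE_ENABLE = 0x06   # standard JEDEC opcodes
PAGE_PROGRAM = 0x02
READ_DATA    = 0x03

class SpiFlash:
    def __init__(self, size=256, honors_wp=True):
        self.mem = bytearray([0xFF] * size)   # erased flash reads 0xFF
        self.write_enabled = False
        self.wp_grounded = False              # the pin you can solder
        self.honors_wp = honors_wp            # the part you cannot verify

    def command(self, opcode, addr=0, data=b""):
        if opcode == WRITE_ENABLE:
            self.write_enabled = True
        elif opcode == PAGE_PROGRAM:
            # A well-behaved chip discards writes when WP# is grounded,
            # but that check is internal firmware logic, not a cut wire.
            if self.wp_grounded and self.honors_wp:
                return  # silently discard, as the datasheet promises
            if self.write_enabled:
                self.mem[addr:addr + len(data)] = data
                self.write_enabled = False
        elif opcode == READ_DATA:
            return bytes(self.mem[addr:addr + 4])

honest = SpiFlash(honors_wp=True)
honest.wp_grounded = True
honest.command(WRITE_ENABLE)
honest.command(PAGE_PROGRAM, 0, b"\x00\x00\x00\x00")
print(honest.command(READ_DATA, 0))     # still erased: b'\xff\xff\xff\xff'

malicious = SpiFlash(honors_wp=False)
malicious.wp_grounded = True            # grounding the pin changes nothing
malicious.command(WRITE_ENABLE)
malicious.command(PAGE_PROGRAM, 0, b"\x00\x00\x00\x00")
print(malicious.command(READ_DATA, 0))  # b'\x00\x00\x00\x00'
```

The point of the sketch: grounding WP# only flips an input that trusted-by-assumption silicon is supposed to check.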
Finally can I upload my own firmware?
Can I choose to use whatever BIOS I want?
Again, we don’t seem to have luck here.
And as I mentioned, this is just
one of the places on the platform
where the state is stored.
The embedded controller is
a whole other microcontroller,
having its own internal flash,
or, if not, using another
SPI chip for its flash.
A disk is another
microcontroller, a small computer,
typically having its own
flash for its own firmware.
And perhaps the same
goes for the Wi-Fi module.
Now, for many years, myself and
lots of other people believed that
technologies like the TPM and Trusted
Execution Technology (UEFI Secure Boot
I never really liked, but many people
did) could
somehow solve the problem of secure boot.
But all of these technologies
have been shown to fail horribly
on this premise.
So these were the problems,
the tip of the iceberg of
the problems of secure boot.
The short story is: today we
cannot really assure secure boot.
Maybe one more thing before we move on
to the Intel ME: Intel TXT,
Trusted Execution Technology, was
introduced by Intel in the hope
of putting the BIOS outside of the TCB,
the trusted computing base, of the platform.
So, the idea was that TXT,
which you can think of as
a special instruction of the processor,
would be the root of trust.
So, the promise was that
when using Intel TXT
you can start the chain of trust
without trusting the BIOS.
As well as other peripherals,
like the Wi-Fi card, which might
perhaps be malicious.
And that was just great.
And I really like the technology.
With my team we have done
lots of research on TXT.
But one of the first attacks that we
presented, back in 2009,
was that we could bypass TXT
by having malicious SMM code,
and SMM was loaded by the BIOS.
So apparently it turned out that the BIOS
could not be put outside of the TCB
so easily, because if it was
malicious it could provide malicious SMM,
and then the SMM could bypass TXT.
So the response from Intel was: “OK,
but worry not, we have a technology
called STM, the SMI Transfer Monitor.”
That is a little hypervisor to sandbox
the SMM, which might be malicious.
So they actually built a
special technology to sandbox this SMM.
And then it turned out
this is not so easy,
because, as usual, the devil
was in the details.
And 6 years have passed and
we still have not seen
any real STM in the wild,
which is just an example of how
hopeless this approach to providing
secure boot on the x86 platform is.
Another problem with x86 that
has risen to prominence in recent
years is the Intel Management Engine.
One of these things, that Intel
has put into this integrated processor
is called Management Engine (ME).
And this ME is a little microcontroller
that is inside your processor.
It has its own internal RAM,
it has its own internal peripherals.
Like DMA engine, which
has access to the host RAM.
And of course, it loads only
Intel-signed firmware.
And it has also its own private ROM inside
the processor, that nobody can inspect.
And nobody knows what it does.
And it runs a whole bunch
of proprietary programs.
And it runs even Intel’s
own proprietary OS.
And all of this is happening all the time,
whenever your processor has
any power connected,
even if it’s in sleep mode.
It’s running all the time,
here on my computer, right now.
It can be doing anything it wants.
Obviously when I say something
like that, the first thought, at
least for security people,
is: “This is an ideal
backdooring or rootkitting
infrastructure.” Which is true.
However there is another
problem, which is what I call the
zombification of personal
computing; I will discuss it in a moment.
I’m just stressing that these are two
somewhat independent problems with the ME.
About 10 or more years ago I used to be
a very active malware researcher,
especially a rootkit researcher,
and back then,
if I were to imagine an ideal
infrastructure for writing rootkits,
I couldn’t possibly imagine
anything better than the ME.
Because ME has access to essentially
everything that is important.
As I mentioned it has
unconstrained access to DRAM,
to the actual CPU, to GPU.
It can also talk to your networking card,
especially to the Ethernet card,
whose controller is also in the
Southbridge part of the processor.
It can also talk to the SPI
flash: it has its own dedicated
partition on the SPI flash,
which it can use to store
whatever the ME wants to store there.
This is really problematic and
we don’t know what it runs.
But the other problem,
which is perhaps less obvious,
is what I call the zombification of
general-purpose computing.
About a year ago a book
was published by
one of the Intel architects who
designed the Intel ME.
I highly recommend this book.
It’s the only somewhat official source
of information about the Intel ME.
And what the book makes clear is
the model of computing that
Intel envisions for the future.
Take the model that we have
today, which looks like this.
The size of the boxes
attempts to represent
the amount of logic, or involvement,
of each of the layers
in processing the user data.
Obviously we have most of this
processing done in the applications.
But we also have some involvement from the
OS and also from the hardware, of course.
For example, when we want to
generate a random number,
we would usually ask the OS to
return us a random number,
because the OS can generate it using
timings, interrupts, whatever.
But again, most of the logic, most of
the code, is in the application layer.
And this is good, because,
thanks to computing being
general-purpose,
every one of us can write applications.
We can argue about the best
way to implement some crypto:
some people can write it one way,
other people can write it another way.
And that’s good.
Now this is the model
that Intel wants to move to.
It essentially wants to eliminate
all the logic that touches data
from the apps, and even the OS,
and move it into the Intel ME.
Because, remember,
Intel ME is also an OS.
It’s a separate OS, only one that
nobody knows how it works;
an OS that nobody
has any possibility
to inspect at the source code level,
or even to reverse engineer,
because we cannot
really analyse the binaries.
It’s an OS that is fully controlled
by Intel, not to mention that
any functionality it offers is
also fully controlled by Intel,
without anybody being
able to verify what they do.
That might not even be malicious.
They may not even be
doing malicious things.
Perhaps they are just
implementing something wrong.
Bugs. Security bugs, right?
But of course Intel believes
that whatever Intel writes
must be secure.
For some reason they must have missed
the number of papers that my team and others
have published in the last 10 years.
The questions are: Can we disable Intel ME
or can we control what code runs there?
Can we see at least what code is there?
And as far as I’m aware, the
answer is unfortunately: no.
As I mentioned, I have this cool
laptop; it runs Qubes OS, of course,
but it doesn’t run only Qubes OS.
It also runs, side by side, the
Intel ME proprietary OS.
And I can’t do anything about it.
About 6 or 7 years ago my team
did some work on Intel AMT.
I believe this was the first and probably
the only work where we managed
to actually inject code into the
ME. That was back in the times
when the Intel ME was not in the
processor; it was in the Northbridge,
in the Q35 or Q45 chipset,
if I remember correctly.
We demonstrated how we
could inject a rootkit into the ME.
Of course Intel then patched it.
Now they continue to think that whatever
they write will always be secure.
But, the problem is...
For a number of years after that
presentation I used to believe
that we could use VT-d,
Intel’s IOMMU technology, with TXT,
perhaps to effectively sandbox the ME,
because some of the specifications I saw
suggested that VT-d should be able to
sandbox ME accesses to host memory.
And because we use VT-d heavily
in Qubes, thanks to Xen using it,
I was pretty much not
that worried about the ME.
Unfortunately it turned out
that the ME can just bypass VT-d.
And this is a feature of this ME.
Which brings us to this rather
sad conclusion that perhaps
if we look at Intel x86 platform,
then the war is lost here.
It might be lost even
if we didn’t have ME.
Even if we somehow managed to
convince Intel to get rid of the ME,
or at least to offer OEMs, laptop
vendors, an option to disable it
by fusing something.
The problem with secure boot
that I mentioned earlier,
and that I analysed in more detail
in a paper I released 2 months ago,
is that it really is hopeless,
because of the complexity
of the architecture,
where we have ring 3, ring 0, then
we have SMM, then we have virtualisation,
then we have STM to sandbox SMM,
and the interactions between all of these.
This all doesn’t really look like
it could be solved effectively,
which of course bothers me a lot.
At least for purely egoistic reasons,
because I spent the last 5 years
of my life on this Qubes project.
And of course such a state of
things makes my whole Qubes project
somehow meaningless.
If the situation is so bad,
perhaps the only way to solve the problem
is to change the rules of the game.
Because you cannot really
win under the old rules.
That’s why I wanted to share
this approach with you today.
That starts with recognizing that
most of the problems here are
related to the persistent state
that is stored pretty much
everywhere on your platform,
which usually holds the
firmware, but not only.
So let’s imagine, that we could do a
clean separation of state from hardware.
So this is the current picture.
This is your laptop.
The reddish boxes are state,
the persistent state.
That means these are places
where malware can persist.
So you reinstall the OS, but
the malware can still re-infect it.
There are also places where
malware can store secrets,
once it steals them from you.
So imagine malware
that might only be stealing
my disk encryption key.
And it can store it somewhere on
the disk or maybe on SPI flash.
Or maybe in the Wi-Fi module firmware, or
maybe in the embedded controller firmware,
somewhere. Somewhere
there in those red rectangles.
Now if the malware does that,
it is a pretty fatal situation,
because if my laptop
gets stolen or seized,
perhaps the adversary, who has
the malware’s key, can
just decrypt the blobs,
and the blobs would reveal my disk
decryption key. And then the game is over.
And another problem with
this state is that it might be
revealing a lot of user and
personally identifiable information,
however you read this PII acronym.
These are, for example, MAC addresses,
or maybe the processor serial number,
or maybe the ME serial number. Whatever!
Or maybe the list of SSIDs of networks
that the ME has seen recently.
How do you know it’s not being
stored somewhere on your SPI flash?
You don’t know what is stored there.
Even if I take off my SPI flash,
or just connect a programmer to my
SPI flash, an EEPROM programmer,
and read the contents of the SPI
flash, all of this might be encrypted.
Now that we recognize that the
state might be problematic,
imagine a picture where
we have a laptop which has
no persistent state storage,
which is this blue rectangle.
Let’s call it the stateless laptop.
And then we have another element
that we’re gonna call the trusted stick,
for lack of any sexier name for it.
That’s gonna be keeping all the firmware,
all the platform configuration,
all the system partitions,
like boot and root,
all the user partitions.
And of course the
firmware and system partitions
will be exposed in a read-only manner.
So even if malware, perhaps traditional
malware that got into my system
through a malicious attachment,
found a weakness in the BIOS,
or maybe in the chipset, allowing it
to re-flash the BIOS (and we have seen
plenty of such attacks in recent years),
it would not be able
to succeed, because
the trusted stick, which is gonna be a
simple FPGA-implemented device,
will just be exposing
read-only storage.
You see that firmware injection
can be prevented this way.
Also, there is no place
to store stolen secrets.
Again, the same malware running in the ME
can still steal my disk encryption
key or my PGP private key,
but it has no place to store it.
So if somebody now takes my laptop,
they will not be able to find it there.
You might say: maybe it will be
able to store it on the stick.
But then, again, on the stick the firmware
and system partitions are read-only,
and the user partitions
are encrypted by the stick.
So even if the ME can send something to be
stored there, nobody besides the user
can really get their hands on this blob.
Also, we get a reliable way to
verify what firmware we use,
and the ability to choose what
firmware we want to use,
because we can just take this stick,
plug it into a trustworthy computer,
some, I don’t know, Lenovo X60 from 15
years ago that we have running Coreboot,
and just analyse all
the elements, whatever.
So we finally have a way to
upload firmware in a reliable way.
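The verification step on that trusted machine could be as simple as hashing the stick’s read-only firmware partition and comparing it against a value published with the firmware build. A minimal sketch; the device path and the idea of a vendor-published hash are my assumptions:

```python
# Hash the stick's read-only firmware partition on a machine you trust,
# then compare against a hash published by the firmware project.
import hashlib

def firmware_digest(path: str, chunk: int = 1 << 16) -> str:
    """SHA-256 of a block device or image file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Hypothetical usage on the trusted machine:
# expected = "<hash published alongside the Coreboot build>"
# assert firmware_digest("/dev/sdb1") == expected
```

Note this only helps because the stick, not the laptop, enforces that the partition really is read-only.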
Thanks to the actual laptop having no
state, we can have something like Tails
finally doing what it advertises.
I can boot Tails or something like that,
I can use it, I can shut it down, and there
are no more traces of my activity there.
I can give my laptop to somebody else,
or I can boot some other environment,
perhaps some, I don’t know,
Windows to play games, or whatever.
So what would it take to have
such a stateless laptop?
This is the simplest version, which
shows that the only modification
made here was
to take the SPI flash chip,
essentially put it outside
the laptop, on the trusted stick,
and just route the wiring,
just 4 wires, to the trusted stick.
And that’s pretty much it.
That’s the simplest version. Oh,
and I also got rid of the disk.
And I also had to ensure that
whatever discrete devices remain,
which in this case are the embedded
controller and the Wi-Fi module,
do not have flash memory
but use something like OTP memory.
We can further get rid
of the Wi-Fi and use
an external, USB-connected
one if that is not possible.
And for the embedded controller that
should be possible, much easier even,
because the embedded controller is always
something that the OEM chooses.
So we can just talk to whatever
OEM would like to implement
this stateless laptop and ask the
OEM to use an embedded controller
with essentially ROM instead of flash.
So that’s the simplest version,
which is really simple.
This is a more complex version,
where we also fit something
that I call here an SPI multiplexer,
which allows sharing the firmware
not just with the processor, but
also with the embedded controller,
and perhaps also with the disk.
Because maybe we actually
would like to have an internal disk,
since an internal disk will always
be faster and will always be bigger
than whatever disk we can
put on our trusted stick.
You might object: come on, a disk
is actually not a stateless thing, right?
Because a disk is made especially
to store state persistently.
But this is a special disk, which I will
describe in just a few minutes:
a special disk running trusted
firmware and providing read-onlyness
and encryption for everything.
And now for the trusted stick.
As I mentioned, the trusted
stick is envisioned to have
read-only and encrypted partitions,
and the read-only partitions are for
the firmware and the system code.
So the first block is something that we
would like to expose over SPI, typically,
and the system partition is
something that we make visible
to the OS by
pretending to be USB mass storage,
or actually implementing the
USB mass storage protocol.
And for the encrypted partition,
the important thing here is
that the encryption should be
implemented by the stick itself.
So we have some key here, and
the question is
what input should be taken
to derive this key from.
It could be something that
is persistent to the stick,
combined with a
passphrase that the user enters
on a traditional keyboard,
plus maybe a secret from the TPM.
And when I say TPM, I mean a
firmware TPM inside the processor
that uses storage provided by the
encrypted firmware partition.
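One way the stick could combine those three inputs into a partition key is sketched below. The KDF choice (PBKDF2-HMAC-SHA256) and all parameters are my assumptions for illustration, not a specification of the actual design:

```python
# Sketch: derive the stick's partition-encryption key from the three
# inputs mentioned in the talk: a per-stick secret, a user passphrase,
# and an optional TPM-held secret.
import hashlib

def derive_partition_key(stick_secret: bytes,
                         passphrase: str,
                         tpm_secret: bytes = b"") -> bytes:
    # The stick-persistent secret and TPM secret act as the salt, so
    # the same passphrase yields different keys on different sticks.
    salt = hashlib.sha256(stick_secret + tpm_secret).digest()
    return hashlib.pbkdf2_hmac("sha256",
                               passphrase.encode("utf-8"),
                               salt,
                               iterations=200_000,  # illustrative cost
                               dklen=32)            # 256-bit key

key = derive_partition_key(b"per-stick-secret", "correct horse", b"tpm-seed")
assert len(key) == 32
# A different stick secret gives an unrelated key for the same passphrase:
assert key != derive_partition_key(b"other-stick", "correct horse", b"tpm-seed")
```

The design point is that the key never has to leave the stick: the host only ever sees plaintext after the stick decrypts.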
The optional internal disk
that I just mentioned
should essentially do
the same as the stick,
and because it will be
running trusted firmware,
which it will be fetching
from the trusted stick itself,
the disk will not have any flash memory.
So because we will trust
the hardware of the disk
and because we will trust the
firmware, we will trust it
to provide read-only and encrypted
partitions just like the ones
I mentioned on the stick. This is nice
because it relieves the stick from acting
as a mass storage device, which has
nice practical consequences.
So here’s a picture with the internal
trusted disk, which you see just here.
As you can see, it also takes its
firmware from the trusted stick.
And there is even an open source
project, OpenSSD, and it looks like
people have already built an open-hardware,
open-firmware SSD, a very performant disk.
So this is not just science
fiction, even for this SSD.
Okay, so that looks all very nice,
but there is one problem.
Even though malware may not have any
place on the laptop to stash the secrets,
it still might try to leak
them over the network.
And let’s differentiate now between
classic malware and sophisticated malware.
Classic malware is something you get via
an attachment or some drive-by attack,
which we’ll discuss in a moment.
Now, let’s focus on sophisticated malware:
say, a hypothetical rootkit in the ME.
Before we move on: for obvious
reasons, such sophisticated malware
would not be interested
in getting caught easily.
So it would not be establishing a
TCP connection to an NSA.gov server
or whatever, right?
That would be plain stupid.
Having that in mind, let’s
consider a few scenarios.
Scenario number 0 is
an air-gapped system.
Even though it might
be air-gapped, remember,
there is still the ME running there.
If the computer is not
inside a Faraday cage,
there are still plenty of other
networks and devices around it,
which means that the ME can theoretically
use your Wi-Fi card or even the speaker
to establish a covert channel with, say,
your phone, which might be just nearby.
So, in order to make such a
system truly air-gapped,
knowing that we cannot get rid of the ME,
we really need to have kill switches
for any transmitting devices,
including the speakers. And apparently
even that might not be enough,
because some people have shown
covert channels that use
things like power fluctuations
or temperature fluctuations, but let’s
leave those exotic examples aside.
A more interesting scenario is a
closed network of trusted nodes.
In that scenario we assume that
all these people are trusted.
Again, by definition that means that
any of these people can compromise
the security of anybody else.
We really don’t like trusted things,
but, well, let’s start with something.
Now, even though each of these trusted
peers, who run stateless laptops,
still has this potentially
malicious ME inside,
we are gonna fit a small proxy.
So a modification that
we should additionally make,
that I have not shown you before,
is that rather than connecting
your Wi-Fi module directly to the
processor, which is not good
because it gives the processor
full authority over this Wi-Fi module,
we would instead like to
connect it to some proxy.
The proxy would be doing some kind
of tunnelling, something like a VPN,
or maybe Torifying
any traffic that is generated there.
So even though the ME might be willing
to send some traffic,
maybe not explicit traffic,
maybe it will piggyback on
some user-generated traffic
by only modifying, I don’t know, TCP
initial sequence numbers or something,
it will all still be happening
inside the tunnel.
Again, some people might
say: “Yeah, but the ME
might still be modulating the timings
of the generated packets
and this way try to convey some
information using timing.”
We can’t truly do much about that,
but on the other hand it would be
extremely difficult for the ME to
do that, implementation-wise.
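To make the piggybacking idea concrete, here is a toy model of hiding secret bytes in TCP initial sequence numbers. Real ISNs are randomized 32-bit values chosen by the stack; this simplification only illustrates the encoding, and why such a channel disappears from an outside observer’s view once the headers travel inside an encrypted tunnel:

```python
# Toy covert channel: smuggle one secret byte per TCP connection in the
# low byte of the initial sequence number, keeping the high 24 bits
# random so the values still look like ordinary ISNs.
import os

def isns_carrying(secret: bytes):
    """One 32-bit 'ISN' per secret byte; low byte carries the data."""
    return [int.from_bytes(os.urandom(3), "big") << 8 | b for b in secret]

def recover(isns):
    """What a colluding endpoint would do: read back the low bytes."""
    return bytes(isn & 0xFF for isn in isns)

isns = isns_carrying(b"key")
assert recover(isns) == b"key"
# Inside a VPN or Tor tunnel, these TCP headers are encrypted end to
# end, so a passive observer on the path never sees the ISNs at all;
# only whoever terminates the tunnel could attempt recovery.
```

This is why the proxy pushes the recovery point all the way to the tunnel endpoint, as discussed next.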
Finally, a scenario where
we want to connect with anybody,
not just with a trusted computer.
So, say, with some website on the internet,
which might or might not be trusted.
Again, we have this proxy, which by
the way might be implemented inside
this embedded controller, which,
if you remember,
runs trusted firmware because it
fetches its firmware from our trusted stick.
So the proxy is again tunnelling
any potential leakage from the ME,
which means that a malicious ISP, or
any part of the infrastructure here,
still cannot really retrieve the
secrets that the ME tries to leak.
But of course at some point we need to
terminate the tunnel; that might be
at the VPN server, or at a Tor exit
node, or maybe at the server itself if it’s,
say, an onion service, in
which case anybody here can still
presumably get the secrets
from the ME in this cloud here,
which unfortunately we
can do nothing about.
Except for hoping that, at least
if we used Tor
for this first part of the equation,
then at least a malicious
administrator of, say, this server
would not be able to
correlate whose disk
decryption key it really is.
Although this is very tricky: that would
only work if this computer was
a special-purpose computer. If you
used something like Qubes here,
and you wanted to have different
partitions, one for your personal stuff,
and only one of these partitions
was a special secret partition,
that wouldn’t work, because
the ME would still be able to gather
your identifying information
from whatever partitions you have,
because, again, it has unconstrained
access to all the host memory.
But still, using this
proxy pushes the
hypothetical adversary in the
ME into the extreme difficulty
of needing to piggyback on
some higher-level protocols,
establishing exotic covert channels.
Compared to what they can do today,
where they can simply steal the key
and store it on the SPI flash partition,
or maybe on your disk, this is
orders of magnitude more difficult
for them to do.
We discussed sophisticated
malware; as I mentioned,
classic malware is a different story.
Classic malware doesn’t need
to be shy about leaking something
through whatever means you can think of,
perhaps by sending email to somebody. But
obviously the classic malware
problem can be addressed quite
reasonably well at the OS level,
for example using compartmentalization.
But here comes the problem
with a malicious BIOS.
Let me step back a little bit. Because
so far we have been assuming that
we don’t really need to trust the BIOS:
having this stateless laptop
and trusted stick, even if the BIOS was
malicious, it still, again, would not
be able to change anything in its own
firmware partition, and would not be able
to store any stolen secrets
anywhere. So it’s convenient
to figure that the BIOS
might not need to be trusted.
But then again, a compromised
BIOS might instead be providing
privilege-escalation backdoors for
classic malware that executes on your
compartmentalised OS,
such as a VM escape.
Such things are trivial to implement.
And we don’t want classic malware
which means we want to ensure
that the BIOS does not
provide such backdoors.
And to make it short, we need an
open-source BIOS, something like Coreboot.
It’s great that we have Coreboot
and we could help Coreboot
to become such a BIOS
for this stateless laptop.
Even though Coreboot is not fully
open source (it relies on the so-called
Intel FSP, the Firmware Support Package,
an Intel blob that is needed to
initialize your DRAM and other silicon),
it should still be reasonably easy to
ensure that the FSP does not
provide SMM backdoors.
So this is a solvable problem.
Finally there’s this question:
Let’s say half a year from now
or a year from now Purism
or somebody will tell you:
here is the stateless laptop,
you can order it, just 1000 dollars.
So you get the laptop. But how do you
know it really IS a stateless laptop?
Maybe it is full of state-carrying
elements. Maybe it’s full of
radio devices that are
emitting radio signals everywhere.
This comes down to the problem of:
How do we compare 2 different PCBs?
Two different Printed Circuit Boards?
As far as I’m aware, right now our industry
has no way to compare two different PCBs
and to state: yes, they look identical.
Because if we had that, then we could
have the laptop vendor -
which would obviously have to be
open-hardware - publish the schematics
and pictures of the boards, and then
anybody who ordered this laptop
would have an opportunity to,
say, photograph the board and
use a diff tool to compare
whether it really looks the same.
Sure we would not be able to see inside
the chips but at least the geometry-wise
comparison would be a tremendous step
toward making such malicious modifications
by vendors very difficult.
This is a vision problem, kind of, right?
You take 2 photos, you have 2 photos
of 2 PCBs, and you have a tool to compare
them. And I believe Jacob Appelbaum
has already mentioned it,
a year ago probably:
it’s a great research project for all
you academic people sitting here.
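The comparison tool described above could be sketched roughly as follows. This is a minimal, hypothetical illustration of the idea, assuming the two photos are already registered - aligned, same resolution, same lighting - which in practice is exactly the hard computer-vision part the research project would have to solve:

```python
# Hypothetical sketch of a PCB-photo "diff tool": given two
# registered, same-resolution grayscale photos of a board,
# report the fraction of pixels that differ beyond a noise
# threshold. Real use would need image registration and
# lighting normalization first, which this sketch skips.
import numpy as np

def pcb_diff(photo_a: np.ndarray, photo_b: np.ndarray,
             noise_threshold: int = 25) -> float:
    """Return the fraction of pixels that differ significantly."""
    if photo_a.shape != photo_b.shape:
        raise ValueError("photos must be registered to the same size")
    # Widen dtype before subtracting to avoid uint8 wrap-around.
    delta = np.abs(photo_a.astype(np.int16) - photo_b.astype(np.int16))
    return float(np.mean(delta > noise_threshold))

# Demo with synthetic "photos": an identical board, then one with
# an extra "component" (a bright 10x10 patch) soldered on.
reference = np.full((100, 100), 80, dtype=np.uint8)
suspect = reference.copy()
suspect[40:50, 40:50] = 200  # a foreign element

print(pcb_diff(reference, reference))  # 0.0  -> boards look identical
print(pcb_diff(reference, suspect))    # 0.01 -> 1% of pixels differ
```

Even such a crude pixel-level diff would flag an added flash chip or antenna; the open research question is making it robust to camera angle, lighting, and manufacturing tolerances.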
That’s an example of a board that...
I have no idea, I got this laptop,
I opened it, I see this board. Sure, I can
identify some IC elements
like this embedded controller here...
But, really, maybe it’s connected
somehow differently,
maybe there are some other flash
elements there, I don’t know.
I would like to have the ability to
check this against a golden image
that some experts have analyzed
in-depth and declared safe to use.
Many people say that perhaps we should
all give up on Intel x86,
e.g. because of the ME.
applause
Yet this is not such a nice idea.
Or maybe this is not such a
silver bullet, I should have said.
First, we have ARM. Everybody says:
“Why not ARM? Let’s go to ARM!”
First: there’s no such thing as an ARM
processor. Okay?
ARM just sells the specifications, or IP,
and then vendors like Samsung,
Texas Instruments etc. take this IP
and design and make their very own SoC.
This is still a proprietary processor;
they can put whatever they want
inside. E.g. we have TrustZone,
which by itself is not as closed as the ME,
but there is nothing that would
prevent a vendor from actually taking
TrustZone, locking it down,
and ending up with something
like the ME very easily.
It’s just a matter of the
vendor being willing to do that.
Also, the diversity of the processors
makes it difficult for OSes like Qubes
that would like to use advanced
technologies like the IOMMU for isolation
to actually support all of them, because
different SoCs might be implementing
completely different versions, or
even completely different technologies,
for doing that.
Another alternative, a much better one,
is to use open-hardware processors.
Currently that means
FPGA-implemented processors.
In the future maybe we will have 3D
printers that will allow everybody
to print them. That would be great,
but it is probably not coming any time
in the next 10 or 20 years.
And meanwhile the performance, and the
lack of really any security technologies
like an IOMMU or virtualization, do not
make this a viable solution
for the coming, say, 5 years at least.
And even then, even if we have
such an open-source processor,
this clean separation of state
still makes lots of sense.
Right? Again, because firmware
infections can be easily prevented;
because malware, if it gets there
somehow, still has no place
to store stolen secrets; because
it provides a reliable way to verify
or upload firmware. And it makes it
easy to boot multiple environments
and to share laptops with others.
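As an illustration of that “reliable way to verify firmware” which keeping all firmware on a user-controlled stick enables: verification reduces to hashing the image and comparing it to a value published by the (open-source) firmware project. A minimal sketch - the image bytes and filenames here are made up, not a real Coreboot workflow:

```python
# Minimal sketch of firmware verification in a stateless design:
# since the firmware lives on the trusted stick rather than in
# writable flash on the laptop, the user can always re-hash it
# and compare against the hash published by the firmware project.
import hashlib

def firmware_matches(image: bytes, known_good_sha256: str) -> bool:
    """True iff the firmware image hashes to the published value."""
    return hashlib.sha256(image).hexdigest() == known_good_sha256

# Stand-in for a real firmware blob and its published release hash.
image = b"coreboot-demo-image"
published = hashlib.sha256(image).hexdigest()

print(firmware_matches(image, published))                # True
print(firmware_matches(image + b"backdoor", published))  # False
```

The point is not the hashing itself but that statelessness makes the check meaningful: a malicious BIOS cannot silently rewrite its own storage behind the user’s back.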
I know that most of you will now say:
“Yeah, that may be a cool idea, but
the market will never
buy into that!” Right?
Understanding that PCs are really,
as I said, extensions of our brains,
we should stop thinking about
market forces as the ultimate force
shaping how our personal
computing looks.
Just like we didn’t resort
to market forces
to give us human rights, right?
We should not count on market forces
to give us trustworthy personal computers.
Because that might just not be really...
applause
That just might not be in the
interest of the market forces!
So, hopefully, some legislation
could be of help here.
Maybe EU could do something here.
Because it’s really funny: when I
talk with other engineers,
we all know that our world
now really runs on computers,
and yet...
Almost every engineer I talk to
says something like: “Yeah, but the
sales people will never do that,
the business will never agree to that.”
But if the world runs on computers,
shouldn’t it be us, the engineers,
who actually have the
final say in how...
in how computer technology
should look?
Yeah, I’ll just leave it here with this.
Thank you very much!
final applause
postroll music
subtitles created by
c3subtitles.de in 2016