0:00:00.000,0:00:18.660
36c3 intro
0:00:18.660,0:00:23.914
Herald: Good morning again. Thanks. The[br]first talk for today is by Hannes Mehnert. It's
0:00:23.914,0:00:29.390
titled "Leaving Legacy Behind". It's about[br]the reduction of carbon footprint through
0:00:29.390,0:00:33.230
microkernels in MirageOS. Give a warm[br]welcome to Hannes.
0:00:33.230,0:00:39.250
Applause
0:00:39.250,0:00:45.060
Hannes Mehnert: Thank you. So let's talk a[br]bit about legacy, the legacy we have.
0:00:45.060,0:00:50.000
Nowadays we run services usually on a Unix[br]based operating system, which is
0:00:50.000,0:00:55.080
illustrated here on the left, a bit of the[br]layering. So at the lowest layer we have
0:00:55.080,0:01:00.829
the hardware. So some physical CPU, some[br]block devices, maybe a network interface
0:01:00.829,0:01:06.570
card and maybe some memory, some non-[br]persistent memory. On top of that, we
0:01:06.570,0:01:13.740
usually run the Unix kernel, so to say.[br]That is marked here in brown, and it
0:01:13.740,0:01:19.580
consists of a filesystem. Then it[br]has a scheduler, it has some process
0:01:19.580,0:01:25.470
management, it has a network stack, the[br]TCP/IP stack, it also has some user
0:01:25.470,0:01:32.350
management, and hardware drivers. So it[br]has drivers for the physical hard drive,
0:01:32.350,0:01:37.800
for the network interface and so on.[br]That's the brown stuff. So the kernel runs in
0:01:37.800,0:01:46.380
privileged mode. It exposes a system call[br]API and/or a socket API to the
0:01:46.380,0:01:52.350
actual applications we want to[br]run, which are here in orange. So the
0:01:52.350,0:01:56.460
actual application is on top, which is the[br]application binary and may depend on some
0:01:56.460,0:02:02.880
configuration files distributed randomly[br]across the filesystem with some file
0:02:02.880,0:02:08.119
permissions set on. Then the application[br]itself also depends likely on a programming
0:02:08.119,0:02:14.000
language runtime that may either be a Java[br]virtual machine if you run Java or Python
0:02:14.000,0:02:20.140
interpreter if you run Python, or a ruby[br]interpreter if you run Ruby and so on.
0:02:20.140,0:02:25.230
Then additionally we usually have a system[br]library, libc, which is just the runtime
0:02:25.230,0:02:30.790
library basically of the C programming[br]language and it exposes a much nicer
0:02:30.790,0:02:38.470
interface than the system calls. We may as[br]well have OpenSSL or another crypto
0:02:38.470,0:02:45.360
library as part of the application binary,[br]which is also here in orange. So what's the
0:02:45.360,0:02:50.200
job of the kernel? So the brown stuff[br]actually has a virtual memory subsystem
0:02:50.200,0:02:55.110
and it should separate the orange stuff[br]from each other. So you have multiple
0:02:55.110,0:03:01.790
applications running there and the brown[br]stuff is responsible to ensure that the
0:03:01.790,0:03:07.150
different pieces of orange[br]stuff don't interfere with each other, so
0:03:07.150,0:03:12.601
that they are not randomly writing into[br]each other's memory and so on. Now if the
0:03:12.601,0:03:17.420
orange stuff is compromised. So if you[br]have some attacker from the network or
0:03:17.420,0:03:26.540
from wherever else who's able to find a[br]flaw in the orange stuff, the kernel is still
0:03:26.540,0:03:32.420
responsible for strict isolation between[br]the orange stuff. So as long as the
0:03:32.420,0:03:38.070
attacker only gets access to the orange[br]stuff, it should be very well contained.
0:03:38.070,0:03:42.650
But then we look at the bridge between the[br]brown and orange stuff. So between kernel
0:03:42.650,0:03:49.170
and user space and there we have an API[br]which is roughly 600 system calls at
0:03:49.170,0:03:56.360
least on my FreeBSD machine in its syscall[br]list. So it's 600 different functions, or
0:03:56.360,0:04:05.240
the width of this API is 600 different[br]functions, which is quite big. And it's
0:04:05.240,0:04:12.180
quite easy to hide some flaws in there.[br]And as soon as you're able to find a flaw
0:04:12.180,0:04:17.320
in any of those system calls, you can[br]escalate your privileges and then you
0:04:17.320,0:04:22.250
basically run in brown mode, in kernel[br]mode, and you have access to the raw
0:04:22.250,0:04:26.310
physical hardware. And you can also read[br]arbitrary memory from any process
0:04:26.310,0:04:34.440
running there. So now over the years it[br]actually evolved and we added some more
0:04:34.440,0:04:39.350
layers, which is hypervisors. So at the[br]lowest layer, we still have the hardware
0:04:39.350,0:04:45.790
stack, but on top of the hardware we now[br]have a hypervisor, whose responsibility it
0:04:45.790,0:04:51.300
is to split the physical hardware into[br]pieces and slice it up and run different
0:04:51.300,0:04:56.720
virtual machines. So now we have this new[br]layer, which is the hypervisor. And on top
0:04:56.720,0:05:04.360
of that, we have multiple brown things and[br]multiple orange things as well. So now the
0:05:04.360,0:05:12.320
hypervisor is responsible for distributing[br]the CPUs to virtual machines. And the
0:05:12.320,0:05:17.130
memory to virtual machines and so on. It[br]is also responsible for selecting which
0:05:17.130,0:05:21.660
virtual machine to run on which physical[br]CPU. So it actually includes the scheduler
0:05:21.660,0:05:28.950
as well. And the hypervisor's[br]responsibility is again to isolate the
0:05:28.950,0:05:34.360
different virtual machines from each[br]other. Initially, hypervisors were done
0:05:34.360,0:05:39.889
mostly in software. Nowadays, there are a[br]lot of CPU features available, which
0:05:39.889,0:05:47.090
allow you to have hardware support, which[br]makes them fast, and you don't have to
0:05:47.090,0:05:52.449
trust so much software anymore, but you[br]have to trust in the hardware. So that's
0:05:52.449,0:06:00.150
extended page tables and VT-d and VT-x[br]stuff. OK, so that's the legacy we have
0:06:00.150,0:06:08.070
right now. So when you ship a binary, you[br]actually care about some tip of the
0:06:08.070,0:06:12.229
iceberg. That is the code you actually[br]write and you care about. You care about
0:06:12.229,0:06:18.820
deeply because it should work well and you[br]want to run it. But at the bottom you have
0:06:18.820,0:06:23.830
the whole operating system, and that is[br]code the operating system insists that you
0:06:23.830,0:06:30.180
need. So you can't get the tip without the[br]bottom of the iceberg. So you will always
0:06:30.180,0:06:34.669
have a process management and user[br]management and likely as well the
0:06:34.669,0:06:41.100
filesystem around on a UNIX system. Then[br]in addition, back in May, I think there
0:06:41.100,0:06:48.900
was a blog entry from someone who analyzed[br]data from Google Project Zero, which is a
0:06:48.900,0:06:54.540
security research team and red team which[br]tries to find flaws in widely
0:06:54.540,0:07:02.480
used applications. And they found in a[br]year maybe 110 different vulnerabilities
0:07:02.480,0:07:08.330
which they reported and so on. And someone[br]analyzed what these 110 vulnerabilities
0:07:08.330,0:07:13.660
were about, and it turned out that for more[br]than two thirds of them, the root
0:07:13.660,0:07:18.940
cause of the flaw was memory corruption.[br]And memory corruption means arbitrary
0:07:18.940,0:07:22.880
reads or writes of arbitrary[br]memory which the process is not
0:07:22.880,0:07:29.900
supposed to access. So why does that[br]happen? That happens because on a
0:07:29.900,0:07:36.160
Unix system we mainly use programming[br]languages where we have tight control over
0:07:36.160,0:07:40.199
the memory management. So we do it[br]ourselves. So we allocate the memory
0:07:40.199,0:07:44.639
ourselves and we free it ourselves. There[br]is a lot of boilerplate we need to write
0:07:44.639,0:07:53.190
down and that is also a lot of boilerplate[br]which you can get wrong. So now we talked
0:07:53.190,0:07:57.810
a bit about legacy. Let's talk about the[br]goals of this talk. The goals are, on the
0:07:57.810,0:08:06.670
one side, to be more secure, so to reduce[br]the attack vectors, because C is a language
0:08:06.670,0:08:11.870
from the 70s, and we have[br]some languages from the 80s or even from
0:08:11.870,0:08:17.930
the 90s which offer you automated memory[br]management and memory safety, languages
0:08:17.930,0:08:24.699
such as Java or Rust or Python or[br]something like that. But it turns out not
0:08:24.699,0:08:30.490
many people are writing operating systems[br]in those languages. Another point here is
0:08:30.490,0:08:37.159
I want to reduce the attack surface. So we[br]have seen this huge stack here and I want
0:08:37.159,0:08:45.880
to minimize the orange and the brown part.[br]Then, as an implication of that, I also
0:08:45.880,0:08:50.410
want to reduce the runtime complexity[br]because that is actually pretty cumbersome
0:08:50.410,0:08:56.100
to figure out what is now wrong. Why does[br]your application not start? And if the
0:08:56.100,0:09:01.829
whole reason is because some file on your[br]harddisk has the wrong filesystem
0:09:01.829,0:09:09.560
permissions, then it's pretty hard to[br]figure out if you're not yet a Unix expert
0:09:09.560,0:09:16.550
who has lived in the system for years or[br]at least months. And then the final goal,
0:09:16.550,0:09:22.269
thanks to the topic of this conference and[br]to some analysis I did, is to actually
0:09:22.269,0:09:29.750
reduce the carbon footprint. So if you run[br]a service, certainly that service does
0:09:29.750,0:09:37.629
some computation, and this computation[br]takes some CPU ticks. So it takes some CPU
0:09:37.629,0:09:44.759
time in order to be evaluated. And now[br]reducing that means if you condense down
0:09:44.759,0:09:49.860
the complexity and the code size, we also[br]reduce the amount of computation which
0:09:49.860,0:09:57.800
needs to be done. These are the goals. So[br]what are MirageOS unikernels? That is
0:09:57.800,0:10:07.459
basically the project I have been involved[br]in for six years or so. The general idea
0:10:07.459,0:10:14.309
is that each service is isolated in a[br]separate MirageOS unikernel. So your DNS
0:10:14.309,0:10:19.720
resolver or your web server don't run on[br]this general purpose UNIX system as a
0:10:19.720,0:10:25.910
process, but you have a separate virtual[br]machine for each of them. So you have one
0:10:25.910,0:10:31.380
unikernel which only does DNS resolution[br]and in that unikernel you don't even need
0:10:31.380,0:10:35.759
a user management. You don't even need[br]process management because there's only a
0:10:35.759,0:10:41.720
single process. There's a DNS resolver.[br]Actually, a DNS resolver also doesn't
0:10:41.720,0:10:47.199
really need a file system. So we got rid[br]of that. We also don't really need virtual
0:10:47.199,0:10:52.259
memory because we only have one process.[br]So we don't need virtual memory and we
0:10:52.259,0:10:57.089
just use a single address space. So[br]everything is mapped in a single address
0:10:57.089,0:11:03.339
space. We use a programming language called[br]OCaml, which is a functional programming
0:11:03.339,0:11:08.079
language which provides us with memory[br]safety. So it has automated memory
0:11:08.079,0:11:17.279
management, and we use this memory[br]management and the isolation which the
0:11:17.279,0:11:24.329
programming language guarantees us by its[br]type system. We use that to say, okay, we can
0:11:24.329,0:11:28.429
all live in a single address space and[br]it'll still be safe as long as the
0:11:28.429,0:11:34.579
components are safe. And as long as we[br]minimize the components which are by
0:11:34.579,0:11:42.639
definition unsafe, because we need to run[br]some C code there as well.
0:11:42.639,0:11:47.660
Now, if we have a single service, we[br]only put in the libraries or the stuff we
0:11:47.660,0:11:51.699
actually need in that service. So as I[br]mentioned that the DNS resolver won't need
0:11:51.699,0:11:56.589
a user management, it doesn't need a[br]shell. Why would I need a shell? What
0:11:56.589,0:12:02.889
would I need to do there? And so on. So[br]we have a lot of libraries, a lot of OCaml
0:12:02.889,0:12:09.750
libraries which are picked by the individual[br]services, or which are mixed and matched for
0:12:09.750,0:12:14.160
the different services. So libraries are[br]developed independently of the whole
0:12:14.160,0:12:20.010
system or of the unikernel and are reused[br]across the different components or across
0:12:20.010,0:12:26.910
the different services. Some further[br]limitation which I take as freedom and
0:12:26.910,0:12:32.839
simplicity: not only do we have a single[br]address space, we are also only focusing
0:12:32.839,0:12:37.839
on a single core and have a single process.[br]So we don't even know
0:12:37.839,0:12:46.679
the concept of a process yet. We also don't[br]work in a preemptive way. So preemptive
0:12:46.679,0:12:52.790
means that if you run on a CPU as a[br]function or as a program, you can at any
0:12:52.790,0:12:58.019
time be interrupted because something[br]which is much more important than you can
0:12:58.019,0:13:03.970
now get access to the CPU. And we don't do[br]that. We do co-operative tasks. So we are
0:13:03.970,0:13:08.529
never interrupted. We don't even have[br]interrupts. So there are no interrupts.
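As a rough illustration of this cooperative model, here is a minimal sketch using Lwt, the promise library MirageOS builds on. It is run with Lwt_main.run on a Unix host purely for demonstration; inside a unikernel the main loop is provided by the Mirage runtime. Tasks only hand over control when they explicitly yield, and nothing ever preempts them.

(* two cooperative tasks interleave only at explicit yield points *)
let rec worker name n =
  if n = 0 then Lwt.return_unit
  else begin
    Printf.printf "%s: step %d\n%!" name n;
    (* Lwt.pause yields voluntarily so the other task can run *)
    Lwt.bind (Lwt.pause ()) (fun () -> worker name (n - 1))
  end

let () = Lwt_main.run (Lwt.join [ worker "a" 3; worker "b" 3 ])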
0:13:08.529,0:13:13.480
And as I mentioned, it's executed as a[br]virtual machine. So how does that look
0:13:13.480,0:13:17.519
like? So now we have the same picture as[br]previously. We have at the bottom the
0:13:17.519,0:13:22.729
hypervisor. Then we have the host system,[br]which is the brownish stuff. And on top of
0:13:22.729,0:13:29.850
that we have maybe some virtual machines.[br]Some of them run a UNIX system via KVM and qemu,
0:13:29.850,0:13:34.779
using some virtio; that is on the[br]right and on the left. And in the middle
0:13:34.779,0:13:41.899
we have this MirageOS unikernel, where we[br]don't run any qemu,
0:13:41.899,0:13:49.920
but we run a minimized so-called tender,[br]which is this solo5-hvt monitor process.
0:13:49.920,0:13:55.149
So that's something which just tries to[br]allocate or will allocate some host system
0:13:55.149,0:14:01.579
resources for the virtual machine and then[br]does interaction with the virtual machine.
0:14:01.579,0:14:06.989
So what this solo5-hvt does in this[br]case is to set up the memory, load the
0:14:06.989,0:14:12.309
unikernel image which is a statically[br]linked ELF binary and it sets up the
0:14:12.309,0:14:17.829
virtual CPU. So the CPU needs some[br]initialization, and then booting is just a jump
0:14:17.829,0:14:24.740
to an address. It's already in 64 bit mode.[br]There's no need to boot via 16 or 32 bit
0:14:24.740,0:14:34.079
modes. Now solo5-hvt and the MirageOS[br]unikernel also have an interface, and the interface
0:14:34.079,0:14:38.819
is called hypercalls, and that interface[br]is rather small. It only contains in
0:14:38.819,0:14:46.019
total 14 different functions. The main[br]functions are yield, a way to get the argument
0:14:46.019,0:14:52.850
vector, and clocks. Actually two clocks: one is[br]a POSIX clock, which takes care of the
0:14:52.850,0:14:58.339
whole timestamping and timezone business,[br]and the other a monotonic clock, which
0:14:58.339,0:15:06.569
by its name guarantees that time passes[br]monotonically. Then there is the console
0:15:06.569,0:15:12.510
interface. The console interface is only[br]one way: we only output data, we never
0:15:12.510,0:15:18.149
read from the console. Then block devices[br]and network interfaces, and
0:15:18.149,0:15:25.829
that's all the hypercalls we have.
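To make the shape of that interface concrete, here is an illustrative OCaml signature covering the hypercalls just described. The names and types are a sketch, not the actual solo5 bindings.

module type HYPERCALLS = sig
  val yield : deadline:int64 -> unit            (* block until deadline or I/O is ready *)
  val argv : unit -> string array               (* the boot arguments *)
  val clock_wall : unit -> int64                (* POSIX wall clock, nanoseconds *)
  val clock_monotonic : unit -> int64           (* monotonically increasing, nanoseconds *)
  val console_write : bytes -> unit             (* output only; the console is never read *)
  val block_read : sector:int64 -> bytes -> unit
  val block_write : sector:int64 -> bytes -> unit
  val net_read : bytes -> int option            (* None if no packet was received *)
  val net_write : bytes -> unit
end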
0:15:25.829,0:15:34.709
To look a bit further into the details of how[br]a MirageOS unikernel is put together: here I have pictured on the left again the tender at
0:15:34.709,0:15:41.269
the bottom, and then the hypercalls. And[br]then in pink I have the pieces of code
0:15:41.269,0:15:46.939
which still contain some C code in the[br]MirageOS unikernel. And in green I have
0:15:46.939,0:15:55.140
the pieces of code which do not include[br]any C code, but only OCaml code. So
0:15:55.140,0:16:00.429
looking at the C code which is dangerous[br]because in C we have to deal with memory
0:16:00.429,0:16:05.749
management on our own, which means it's a[br]bit brittle. We need to carefully review
0:16:05.749,0:16:10.790
that code. There is obviously the OCaml[br]runtime which we have here, which is around
0:16:10.790,0:16:18.579
25 thousand lines of code. Then we have a[br]library which is called nolibc which is
0:16:18.579,0:16:24.339
basically a C library which implements[br]malloc and string compare and some
0:16:24.339,0:16:29.439
basic functions which are needed by the[br]OCaml runtime. That's roughly 8000 lines
0:16:29.439,0:16:37.060
of code. That nolibc also provides a lot[br]of stubs which just exit or return
0:16:37.060,0:16:46.850
null for the OCaml runtime because we use[br]an unmodified OCaml runtime to be able to
0:16:46.850,0:16:50.749
upgrade our software more easily. We don't[br]have any patches for the OCaml runtime.
0:16:50.749,0:16:57.419
Then we have a library called[br]solo5-bindings, which is basically
0:16:57.419,0:17:03.220
something which translates into hypercalls,[br]or which can access the hypercalls
0:17:03.220,0:17:07.849
and which communicates with the host[br]system via hypercalls. That is roughly
0:17:07.849,0:17:14.910
2000 lines of code. Then we have a math[br]library for sine and cosine and tangent
0:17:14.910,0:17:20.940
and so on. And that is just openlibm,[br]which is originally from the FreeBSD
0:17:20.940,0:17:26.980
project and has roughly 20000 lines of[br]code. So that's it. So I talked a bit
0:17:26.980,0:17:32.270
about solo5, about the bottom layer and I[br]will go a bit more into detail about the
0:17:32.270,0:17:40.120
solo5 stuff, which is really the stuff [br]which you run at the bottom
0:17:40.120,0:17:46.140
of the MirageOS unikernel. There's another choice:[br]you can also run Xen or Qubes OS at
0:17:46.140,0:17:50.870
the bottom of the MirageOS unikernel. But[br]I'm focusing here mainly on solo5. So
0:17:50.870,0:17:56.850
solo5 is a sandboxed execution environment[br]for unikernels. It handles resources from
0:17:56.850,0:18:03.910
the host system, but only statically.[br]So you say at startup time how much memory
0:18:03.910,0:18:09.150
it will take. How many network interfaces[br]and which ones are taken and how many
0:18:09.150,0:18:13.520
block devices and which ones are taken by[br]the virtual machine. You don't have any
0:18:13.520,0:18:19.430
dynamic resource management, so you can't[br]add at a later point in time a new network
0:18:19.430,0:18:28.040
interface. That's just not supported. And it[br]makes the code much easier. We don't even
0:18:28.040,0:18:36.360
have dynamic allocation inside of[br]solo5. We have a hypercall interface. As I
0:18:36.360,0:18:42.330
mentioned, it's only 14 functions. We have[br]bindings for different targets. So we can
0:18:42.330,0:18:49.640
run on KVM, which is the hypervisor developed[br]in the Linux kernel, but also for
0:18:49.640,0:18:57.060
bhyve, which is the FreeBSD hypervisor, or[br]VMM, which is the OpenBSD hypervisor. We also
0:18:57.060,0:19:01.920
target other systems such as Genode,[br]which is an operating system based on a
0:19:01.920,0:19:08.830
microkernel written mainly in C++;[br]virtio, which is a protocol usually spoken
0:19:08.830,0:19:15.490
between the host system and the guest[br]system, and virtio is used in a lot of
0:19:15.490,0:19:22.770
cloud deployments. So qemu, for[br]example, provides you with a virtio
0:19:22.770,0:19:29.430
protocol implementation. And the last set[br]of bindings for
0:19:29.430,0:19:38.570
solo5 is seccomp. Linux seccomp is a[br]filter in the Linux kernel where you can
0:19:38.570,0:19:47.180
restrict your process to only use a[br]certain set of
0:19:47.180,0:19:53.790
system calls, and we use seccomp so you can[br]deploy it without a virtual machine in the
0:19:53.790,0:20:02.270
seccomp case, but you are restricted in[br]which system calls you can use. So solo5
0:20:02.270,0:20:06.500
also provides you with the host system[br]tender where applicable. So in the virtio
0:20:06.500,0:20:11.880
case it is not applicable. In the Genode case[br]it is also not applicable. In the KVM case we
0:20:11.880,0:20:19.220
already saw solo5-hvt, which is a[br]hardware virtualized tender. It is just
0:20:19.220,0:20:25.790
a small binary: if you run qemu, that is at[br]least hundreds of thousands of lines of
0:20:25.790,0:20:36.170
code; in the solo5-hvt case, it's more like[br]thousands of lines of code. So here we
0:20:36.170,0:20:42.930
have a comparison from left to right of[br]solo5 and how the host system or the host
0:20:42.930,0:20:49.100
system kernel and the guest system works.[br]In the middle we have a virtual machine, a
0:20:49.100,0:20:54.490
common Linux qemu KVM based virtual[br]machine for example, and on the right hand
0:20:54.490,0:20:59.970
we have the host system and the container.[br]Container is also a technology where you
0:20:59.970,0:21:08.480
try to restrict as much access as you can[br]from a process. So it is contained, and a
0:21:08.480,0:21:14.940
potential compromise is also very isolated[br]and contained. On the left hand side we
0:21:14.940,0:21:21.270
see that solo5 is basically some bits and[br]pieces in the host system, which is the solo5-
0:21:21.270,0:21:27.380
hvt, and some bits and pieces in the[br]unikernel, which are the solo5 bindings I
0:21:27.380,0:21:31.200
mentioned earlier. And that is to[br]communicate between the host and the guest
0:21:31.200,0:21:37.100
system. In the middle we see that the API[br]between the host system and the virtual
0:21:37.100,0:21:41.310
machine is much bigger,[br]commonly using virtio, and virtio is really
0:21:41.310,0:21:48.920
a huge protocol which does feature[br]negotiation and all sorts of things where
0:21:48.920,0:21:54.010
you can always do something wrong, like[br]you can do something wrong in a floppy
0:21:54.010,0:21:58.650
disk driver. And that led to some[br]exploitable vulnerability, although
0:21:58.650,0:22:04.480
nowadays most operating systems don't[br]really need a floppy disk drive anymore.
0:22:04.480,0:22:08.180
And on the right hand side, you can see[br]that the whole system interface for a
0:22:08.180,0:22:12.530
container is much bigger than for a[br]virtual machine because the whole system
0:22:12.530,0:22:17.620
interface for a container is exactly those[br]system calls you saw earlier. So that's around
0:22:17.620,0:22:24.150
600 different calls. And in order to[br]evaluate the security, you need basically
0:22:24.150,0:22:32.770
to audit all of them. So that's just a[br]brief comparison between those. If we look
0:22:32.770,0:22:38.020
into more detail what shapes solo5[br]can take: here on the left side we can
0:22:38.020,0:22:43.350
see it running with a hardware virtualized[br]tender, where you have Linux,
0:22:43.350,0:22:50.290
FreeBSD, or OpenBSD at the bottom, and you[br]have the solo5 blob, which is the blue thing
0:22:50.290,0:22:54.590
here in the middle. And then on top you[br]have the unikernel. On the right hand side
0:22:54.590,0:23:02.850
you can see the Linux seccomp process, and[br]you have a much smaller solo5 blob because
0:23:02.850,0:23:06.940
it doesn't need to do that much anymore,[br]because all the hyper calls are basically
0:23:06.940,0:23:11.960
translated to system calls. So you[br]actually get rid of them and you don't
0:23:11.960,0:23:16.820
need to communicate between the host and[br]the guest system, because in the seccomp case you
0:23:16.820,0:23:22.610
run as a host system process, so you don't[br]have this virtualization. The advantage of
0:23:22.610,0:23:29.220
using seccomp is that you can deploy[br]it without having access to the virtualization
0:23:29.220,0:23:38.050
features of the CPU. Now, to get it into an even[br]smaller shape, there's another backend I
0:23:38.050,0:23:42.870
haven't talked to you about. It's called[br]Muen. It's a separation kernel
0:23:42.870,0:23:50.870
developed in Ada. So[br]now we try to get rid of this huge Unix
0:23:50.870,0:23:58.320
system below it, which is this big kernel[br]thingy here. Muen is an open source
0:23:58.320,0:24:03.310
project developed in Switzerland in Ada,[br]as I mentioned, and that uses SPARK, which
0:24:03.310,0:24:12.620
is a proof system, which guarantees[br]memory isolation between the different
0:24:12.620,0:24:19.570
components. And Muen now goes a step[br]further and it says, "Oh yeah. For you as
0:24:19.570,0:24:23.540
a guest system, you only do static[br]allocations and you don't do dynamic
0:24:23.540,0:24:28.210
resource management." We as a host system,[br]we as a hypervisor, we don't do any
0:24:28.210,0:24:34.350
dynamic resource allocation as well. So it[br]only does static resource management. So
0:24:34.350,0:24:39.250
at compile time of your Muen separation[br]kernel you decide how many virtual
0:24:39.250,0:24:44.460
machines or how many unikernels you are[br]running and which resources are given to
0:24:44.460,0:24:50.120
them. You even specify which communication[br]channels are there. So if one of your
0:24:50.120,0:24:55.560
virtual machines needs to talk to another[br]one, you need to specify that at
0:24:55.560,0:25:00.970
compile time and at runtime you don't have[br]any dynamic resource management. So that
0:25:00.970,0:25:08.620
again makes the code much easier, much,[br]much less complex. And you get to much
0:25:08.620,0:25:19.060
fewer lines of code. So to conclude this[br]part about MirageOS, Muen
0:25:19.060,0:25:26.370
and solo5, I like to cite[br]Antoine de Saint-Exupéry: "Perfection is achieved, not when
0:25:26.370,0:25:31.660
there is nothing more to add, but when[br]there is nothing left to take away." I
0:25:31.660,0:25:36.621
mean obviously the most secure system is a[br]system which doesn't exist.
0:25:36.621,0:25:40.210
Laughter
0:25:40.210,0:25:41.638
Let's look a bit further
0:25:41.638,0:25:46.440
into the decisions behind MirageOS.[br]Why do we use this strange
0:25:46.440,0:25:50.960
programming language called OCaml and[br]what's it all about? And what are the case
0:25:50.960,0:25:59.170
studies? So OCaml has been around for[br]more than 20 years. It's a multi-paradigm
0:25:59.170,0:26:05.890
programming language. The goal for us and[br]for OCaml is usually to have declarative
0:26:05.890,0:26:14.390
code. To achieve declarative code you need[br]to provide the developers with some
0:26:14.390,0:26:21.200
orthogonal abstraction facilities, such as[br]variables, and functions, which you
0:26:21.200,0:26:24.890
likely know if you're a software[br]developer. Also higher order functions. So
0:26:24.890,0:26:31.500
that just means that a function is able[br]to take another function as input.
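For example, List.map from the standard library is a higher-order function: it takes the function to apply as its first argument.

let doubled = List.map (fun x -> x * 2) [1; 2; 3]   (* evaluates to [2; 4; 6] *)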
0:26:31.500,0:26:37.270
Then in OCaml we try to always focus on[br]the problem and not be distracted by boilerplate. So
0:26:37.270,0:26:43.510
some running example again would be this[br]memory management. We don't manually deal
0:26:43.510,0:26:52.940
with that, but we have computers to[br]actually deal with that. In OCaml you have
0:26:52.940,0:27:00.170
a very expressive static type system,[br]which can spot a lot of invariants or
0:27:00.170,0:27:07.160
violations of invariants at build time.[br]So the program won't compile if you don't
0:27:07.160,0:27:14.196
handle all the potential return types or[br]return values of your function.
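A small example of what gets checked at build time: with a variant type, every constructor has to be handled, otherwise the compiler reports a non-exhaustive match (a warning that is commonly treated as an error).

type response = Success of string | Not_found | Server_error of int

let describe (r : response) =
  match r with
  | Success body -> "200: " ^ body
  | Not_found -> "404"
  | Server_error code -> "error " ^ string_of_int code
  (* dropping any of these three cases is flagged at compile time *)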
0:27:14.196,0:27:20.190
Now, a type system as you may know it[br]from Java is a bit painful if you have to
0:27:20.190,0:27:24.250
express at every location where you want[br]to have a variable, which type this
0:27:24.250,0:27:31.900
variable is. What OCaml provides is type[br]inference similar to Scala and other
0:27:31.900,0:27:37.830
languages. So you don't need to write all[br]the types manually. And types are also,
0:27:37.830,0:27:43.670
unlike in Java, erased during[br]compilation. So types are only information
0:27:43.670,0:27:48.820
about values the compiler has at compile[br]time. But at runtime these are all erased
0:27:48.820,0:27:54.920
so they don't exist. You don't see them.[br]And OCaml compiles to native machine code,
0:27:54.920,0:28:01.580
which I think is important for security[br]and performance. Because otherwise you run
0:28:01.580,0:28:07.470
an interpreter or an abstract machine and[br]you have to emulate something else and
0:28:07.470,0:28:14.890
that is never as fast as it could be. OCaml[br]has one distinct feature, which is its
0:28:14.890,0:28:21.460
module system. So you have all your[br]values, your types and functions. And now
0:28:21.460,0:28:26.840
each of those values is defined inside of[br]a so-called module. And the simplest
0:28:26.840,0:28:32.670
module is just the filename. But you can[br]nest modules so you can explicitly say, oh
0:28:32.670,0:28:39.540
yeah, this value or this binding is now[br]living in a submodule thereof. So each
0:28:39.540,0:28:45.260
module you can also give a type. So it[br]has a set of types and a set of functions
0:28:45.260,0:28:52.600
and that is called its signature, which is[br]the interface of the module. Now you have
0:28:52.600,0:28:59.600
another abstraction mechanism in OCaml[br]which is functors. And functors are
0:28:59.600,0:29:04.470
basically compile-time functions from[br]module to module. So they allow
0:29:04.470,0:29:09.990
parameterization. For example, you can[br]implement a generic map data structure. A
0:29:09.990,0:29:18.740
map is a key-value store whose[br]implementation is maybe a binary tree. All
0:29:18.740,0:29:25.980
you need is some comparison function for[br]the keys, and that is modeled in OCaml by a
0:29:25.980,0:29:32.427
module. So you have a module called Map[br]and a functor called Make. And
0:29:32.427,0:29:38.460
Make takes some module which implements[br]this comparison function and then provides
0:29:38.460,0:29:45.740
you with a map data structure for that[br]key type.
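Here is what that looks like with the standard library: String provides the comparison, Map.Make is the functor, and the result is a map specialised to string keys.

module StringMap = Map.Make (String)

let inventory =
  StringMap.empty
  |> StringMap.add "apples" 3
  |> StringMap.add "oranges" 5

let () =
  match StringMap.find_opt "apples" inventory with
  | Some n -> Printf.printf "apples: %d\n" n
  | None -> print_endline "no apples"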
0:29:45.740,0:29:51.800
And in MirageOS we actually use the module[br]system quite a bit more, because we have all these resources which are
0:29:51.800,0:29:58.330
different between Xen and KVM and so on.[br]So each of the different resources like a
0:29:58.330,0:30:06.740
network interface has a signature and a[br]target-specific implementation. So we have
0:30:06.740,0:30:11.210
the TCP/IP stack, which sits much higher up[br]than the network card, and it doesn't
0:30:11.210,0:30:16.920
really care if you run on Xen or if you[br]run on KVM. You just program against this
0:30:16.920,0:30:22.270
abstract interface, the interface[br]of the network device. But you don't need
0:30:22.270,0:30:27.740
to write in[br]your TCP/IP stack any code specific to running on Xen
0:30:27.740,0:30:38.230
or on KVM.
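A sketch of that idea, with illustrative signature and module names rather than the real Mirage ones: the service is a functor over an abstract device signature, and only the implementation of that signature differs per target.

module type NETWORK = sig
  type t
  val read : t -> bytes           (* blocks until a packet arrives *)
  val write : t -> bytes -> unit
end

(* The echo service never mentions Xen or KVM, only NETWORK; plugging in a
   Xen or a KVM backend module yields the target-specific unikernel. *)
module Echo (N : NETWORK) = struct
  let rec run net = N.write net (N.read net); run net
end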
0:30:38.230,0:30:44.410
MirageOS also doesn't really use the complete[br]OCaml programming language. OCaml provides you with an object system, and we barely
0:30:44.410,0:30:49.720
use that. OCaml also allows you to have mutable
0:30:49.720,0:30:57.610
state, but we barely use that mutable[br]state; we mostly use immutable data
0:30:57.610,0:31:05.429
whenever sensible. We also have a value[br]passing style, so we put state and data as
0:31:05.429,0:31:12.000
inputs. So state is just some abstract[br]state and data is just a byte vector
0:31:12.000,0:31:17.010
in a protocol implementation. And then the[br]output is also a new state which may be
0:31:17.010,0:31:22.179
modified, and maybe some reply, so some[br]other byte vector or some application
0:31:22.179,0:31:31.790
data. Or the output data may as well be an[br]error because the incoming data and state
0:31:31.790,0:31:38.179
may be invalid or may violate some[br]constraints. And errors are also
0:31:38.179,0:31:44.110
explicitly typed, so they are declared in[br]the API, and the caller of a function needs
0:31:44.110,0:31:52.480
to handle all these errors explicitly.
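A minimal sketch of this value-passing style, with made-up message framing: a pure step function takes the old state and the incoming bytes, and returns either a new state plus a reply, or a typed error which the caller is forced to match on.

type state = { counter : int }
type error = Too_short | Bad_version of int

let step (st : state) (data : bytes) : (state * bytes, error) result =
  if Bytes.length data < 2 then Error Too_short
  else match Bytes.get_uint8 data 0 with
    | 1 -> Ok ({ counter = st.counter + 1 }, Bytes.of_string "ack")
    | v -> Error (Bad_version v)

let handle st data =
  match step st data with          (* the compiler insists both cases are handled *)
  | Ok (st', reply) -> st', Some reply
  | Error _ -> st, None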
0:31:52.480,0:32:00.690
As I said, we are single core, but we have[br]promise-based, event-based concurrent programming. And yeah, we
0:32:00.690,0:32:04.450
have the ability to express really[br]strong invariants, like "this is a read-
0:32:04.450,0:32:08.340
only buffer", in the type system. And the[br]type system is, as I mentioned, only
0:32:08.340,0:32:15.161
compile time, no runtime overhead. So it's[br]all pretty nice and good. So let's take a
0:32:15.161,0:32:21.210
look at some of the case studies. The[br]first one is a unikernel called the
0:32:21.210,0:32:29.740
Bitcoin Pinata. It started in 2015, when we[br]were happy with our from-scratch developed
0:32:29.740,0:32:35.100
TLS stack. TLS is transport layer[br]security, what you use if you browse to
0:32:35.100,0:32:41.720
HTTPS sites. So we have a TLS stack in OCaml[br]and we wanted to do some marketing for
0:32:41.720,0:32:50.670
that. The Bitcoin Pinata is basically a[br]unikernel which uses TLS and provides you
0:32:50.670,0:32:57.790
with TLS endpoints, and it contains the[br]private key for a bitcoin wallet which is
0:32:57.790,0:33:05.790
filled with, or used to be filled with,[br]10 bitcoins. And this means it's a
0:33:05.790,0:33:10.770
security bait. So if you can compromise[br]the system itself, you get the private key
0:33:10.770,0:33:16.420
and you can do whatever you want with it.[br]And being on this bitcoin blockchain, it
0:33:16.420,0:33:22.880
also means it's transparent, so everyone[br]can see whether it has been hacked or not.
0:33:22.880,0:33:30.450
Yeah, and it has been online for three years[br]and it was not hacked. But the bitcoins we
0:33:30.450,0:33:35.630
got were only borrowed from friends of ours[br]and they were then reused in other
0:33:35.630,0:33:40.370
projects. It's still online. And you can[br]see here on the right that we had some
0:33:40.370,0:33:49.740
HTTP traffic, like an aggregate of maybe[br]600,000 hits there. Now I have a size
0:33:49.740,0:33:54.600
comparison of the Bitcoin Pinata on the[br]left. You can see the unikernel, which is
0:33:54.600,0:34:00.410
less than 10 megabytes in size or in[br]source code it's maybe a hundred thousand
0:34:00.410,0:34:06.000
lines of code. On the right hand side you[br]have a very similar thing, but running as
0:34:06.000,0:34:16.489
a Linux service, so it runs openssl s_server,[br]which is a minimal TLS server you
0:34:16.489,0:34:22.820
can get basically on a Linux system using[br]OpenSSL. And there we have maybe a
0:34:22.820,0:34:29.019
size of 200 megabytes and maybe two[br]million lines of code. So that's
0:34:29.019,0:34:36.409
roughly a factor of 25. In other examples,[br]we got even less code for a much bigger
0:34:36.409,0:34:45.310
effect. Performance analysis:[br]well, in 2015 we did some evaluation
0:34:45.310,0:34:50.659
of our TLS stack and it turns out we're in[br]the same ballpark as other
0:34:50.659,0:34:56.769
implementations. Another case study is a[br]CalDAV server, which we developed last
0:34:56.769,0:35:04.729
year with a grant from the Prototypefund, which[br]is German government funding. It is
0:35:04.729,0:35:09.279
interoperable with other clients. It stores[br]data in a remote git repository. So we
0:35:09.279,0:35:14.140
don't use any block device or persistent[br]storage, but we store it in a git
0:35:14.140,0:35:18.599
repository, so whenever you add a[br]calendar event, it actually does a git
0:35:18.599,0:35:24.829
push. And we also recently got some[br]integration with a CalDAV web UI, which is a
0:35:24.829,0:35:30.980
user interface written in[br]JavaScript. And we
0:35:30.980,0:35:36.940
just bundle that with the unikernel. It's[br]online, open source, there is a demo
0:35:36.940,0:35:42.440
server and the data repository online.[br]Here are some statistics, and I zoom in
0:35:42.440,0:35:47.970
directly to the CPU usage. So we had the[br]luck that for half of the month we ran
0:35:47.970,0:35:56.170
it as a process on a FreeBSD system. And[br]that was roughly the first half, until
0:35:56.170,0:36:01.420
here. And then at some point we thought,[br]oh yeah, let's migrate it to a MirageOS
0:36:01.420,0:36:06.329
unikernel and not run the FreeBSD system[br]below it. And you can see here on the x
0:36:06.329,0:36:11.460
axis the time. So that was the month of[br]June, starting with the first of June on
0:36:11.460,0:36:16.950
the left and the last of June on the[br]right. And on the y axis, you have the
0:36:16.950,0:36:22.829
number of CPU seconds here on the left or[br]the number of CPU ticks here on the right.
0:36:22.829,0:36:28.650
The CPU ticks are virtual CPU ticks,[br]which are debug counters from the hypervisor.
0:36:28.650,0:36:33.430
So from bhyve on FreeBSD in that[br]system. And what you can see here is this
0:36:33.430,0:36:39.460
massive drop by a factor of roughly 10.[br]And that is when we switched from a Unix
0:36:39.460,0:36:46.040
virtual machine with the process to a[br]freestanding Unikernel. So we actually use
0:36:46.040,0:36:50.910
much less resources. And if we look into[br]the bigger picture here, we also see that
0:36:50.910,0:36:57.710
the memory dropped by a factor of 10 or[br]even more. This is now a logarithmic scale
0:36:57.710,0:37:03.039
here on the y axis, the network bandwidth[br]increased quite a bit because now we do
0:37:03.039,0:37:09.549
all the monitoring traffic, also via net[br]interface and so on. Okay, that's CalDAV.
0:37:09.549,0:37:16.759
Another case study is authoritative DNS[br]servers. And I just recently wrote a
0:37:16.759,0:37:22.329
tutorial on that. Which I will skip[br]because I'm a bit short on time. Another
0:37:22.329,0:37:27.210
case study is a firewall for QubesOS.[br]QubesOS is a reasonably secure operating
0:37:27.210,0:37:33.390
system which uses Xen for isolation of[br]workspaces and applications such as PDF
0:37:33.390,0:37:38.609
reader. So whenever you receive a PDF, you[br]start a virtual machine which is only
0:37:38.609,0:37:48.160
run once, just to open and read your PDF.[br]And the Qubes Mirage
0:37:48.160,0:37:54.039
firewall is a tiny replacement for the[br]Linux-based firewall,
0:37:54.039,0:38:02.160
written in OCaml. And instead of[br]roughly 300 MB, it only uses 32 MB
0:38:02.160,0:38:09.259
of memory. There's now also recently[br]some support for dynamic firewall rules
0:38:09.259,0:38:16.760
as defined by Qubes 4.0. And that is not[br]yet merged into master, but it's under
0:38:16.760,0:38:23.480
review. Libraries in MirageOS: well,[br]since we write everything from scratch
0:38:23.480,0:38:29.750
in OCaml, we don't have[br]every protocol, but we have quite a few
0:38:29.750,0:38:35.280
protocols. There are also more unikernels[br]right now, which you can see here in the
0:38:35.280,0:38:41.849
slides, also online in the Fahrplan, so you[br]can click on the links later. Reproducible
0:38:41.849,0:38:47.509
builds: so far, for security purposes, we[br]don't ship binaries. But I plan to
0:38:47.509,0:38:51.540
ship binaries, and in order to ship[br]binaries I don't want to ship non-
0:38:51.540,0:38:56.549
reproducible binaries. What are reproducible[br]builds? Well, it means that if you have the
0:38:56.549,0:39:05.960
same source code, you should get[br]bit-identical binary output. And the issues are
0:39:05.960,0:39:14.640
temporary filenames and timestamps and so[br]on. In December we managed in MirageOS to
0:39:14.640,0:39:21.270
get some tooling on track to actually test[br]the reproducibility of unikernels and we
0:39:21.270,0:39:27.839
fixed some issues, and now all the[br]MirageOS unikernels we test are reproducible, which
0:39:27.839,0:39:34.009
are basically most of them from this list.[br]Another topic is supply chain security,
0:39:34.009,0:39:42.210
which is important, I think, and[br]this is still a work in progress. We still
0:39:42.210,0:39:48.859
haven't deployed that widely. But there[br]are some test repositories out there to
0:39:48.859,0:39:56.869
provide signatures signed[br]by the actual authors of a library, and
0:39:56.869,0:40:02.670
carrying that across so the user of the[br]library can verify it. And some
0:40:02.670,0:40:09.390
decentralized authorization and delegation[br]of that. What about deployment? Well, in
0:40:09.390,0:40:15.999
conventional orchestration systems such as[br]Kubernetes and so on, we don't yet have
0:40:15.999,0:40:24.220
a proper integration of MirageOS, but we[br]would like to get some proper integration
0:40:24.220,0:40:31.700
there. We already generate[br]libvirt.xml files from Mirage. So for each
0:40:31.700,0:40:37.690
unikernel you get a libvirt.xml and you[br]can run that in your libvirt-
0:40:37.690,0:40:44.529
based orchestration system. For Xen, we[br]also generate those .xl and .xe files,
0:40:44.529,0:40:49.500
which I personally don't really[br]know much about, but that's it. On the
0:40:49.500,0:40:56.289
other side, I developed an orchestration[br]system called Albatross because I was a
0:40:56.289,0:41:02.529
bit worried if I now have those tiny[br]unikernels which are megabytes in size
0:41:02.529,0:41:09.089
and now I should trust the big Kubernetes,[br]which is maybe a million lines of code
0:41:09.089,0:41:15.730
running on the host system with[br]privileges. So I thought, oh well let's
0:41:15.730,0:41:21.339
try to come up with a minimal[br]orchestration system which allows me some
0:41:21.339,0:41:26.630
console access. So I want to see the debug[br]messages or whenever it fails to boot I
0:41:26.630,0:41:32.099
want to see the output of the console.[br]I also want to get some metrics, like the Grafana
0:41:32.099,0:41:38.930
screenshot you just saw. And that's[br]basically it. Then, since I also developed
0:41:38.930,0:41:45.329
a TLS stack, I thought, oh yeah, well why[br]not just use it for remote deployment? So
0:41:45.329,0:41:51.499
in TLS you have mutual authentication, you[br]can have client certificates, and a
0:41:51.499,0:41:57.460
certificate itself is more or less an[br]authenticated key-value store, because you
0:41:57.460,0:42:03.859
have those extensions in X.509 version 3[br]and you can put arbitrary data in there
0:42:03.859,0:42:09.190
with keys being so-called object[br]identifiers and values being whatever
0:42:09.190,0:42:16.539
else. X.509 certificates have the great[br]advantage that
0:42:16.539,0:42:23.550
during a TLS handshake[br]they are transferred on the wire not in
0:42:23.550,0:42:33.950
base64 or PEM encoding as you usually see[br]them, but in the binary ASN.1 encoding, which is much
0:42:33.950,0:42:41.049
nicer in terms of the number of bits you transfer.[br]So it's not transferred in base64, but
0:42:41.049,0:42:45.820
directly in raw binary, basically. And with[br]Albatross you can basically do a TLS
0:42:45.820,0:42:50.769
handshake and in that client certificate[br]you present, you already have the
0:42:50.769,0:42:58.359
unikernel image and the name and the boot[br]arguments and you just deploy it directly.
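A conceptual sketch of that idea (this is not Albatross's actual wire format nor the ocaml-x509 API): an X.509 extension is essentially an OID key with an opaque value, so a deployment request can be modelled as a few such pairs carried inside the client certificate presented during the handshake.

type oid = int list                          (* e.g. an arc under a private enterprise number *)
type extension = { oid : oid; value : string }

type deploy_request = {
  name : string;                             (* unikernel name *)
  image : string;                            (* the statically linked ELF image *)
  boot_args : string list;
}

(* Pack the request into certificate extensions; the TLS handshake then both
   authenticates the client and carries the payload. *)
let to_extensions ~name_oid ~image_oid ~args_oid (r : deploy_request) : extension list =
  [ { oid = name_oid; value = r.name };
    { oid = image_oid; value = r.image };
    { oid = args_oid; value = String.concat " " r.boot_args } ]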
0:42:58.359,0:43:04.229
Also, in X.509 you have a chain[br]of certificate authorities, which you send
0:43:04.229,0:43:09.150
along, and this chain of certificate[br]authorities also contains some extensions
0:43:09.150,0:43:14.720
in order to specify which policies are[br]active. So how many virtual machines are
0:43:14.720,0:43:21.599
you able to deploy on my system? How much[br]memory do you have access to, and which
0:43:21.599,0:43:26.930
bridges or which network interfaces do you[br]have access to? So Albatross is really a
0:43:26.930,0:43:33.779
minimal orchestration system running as a[br]family of Unix processes. It's maybe 3000
0:43:33.779,0:43:41.319
lines of OCaml code or so, but then using[br]the TLS stack and so on. But yeah, it
0:43:41.319,0:43:46.630
seems to work pretty well. I at least use[br]it for more than two dozen unikernels at
0:43:46.630,0:43:52.191
any point in time. What about the[br]community? Well the whole MirageOS project
0:43:52.191,0:43:57.930
started around 2008 at University of[br]Cambridge, so it used to be a research
0:43:57.930,0:44:03.819
project which still has a lot of[br]ongoing student projects at the University of
0:44:03.819,0:44:10.559
Cambridge. But now it's an open source,[br]permissively licensed, mostly BSD-licensed
0:44:10.559,0:44:20.769
thing, where we have a community event every[br]half year and a retreat in Morocco, where
0:44:20.769,0:44:25.819
we also use our own unikernels, like the[br]DHCP server and the DNS resolver and so on.
0:44:25.819,0:44:31.700
We just use them to test them and to see[br]how does it behave and does it work for
0:44:31.700,0:44:40.170
us? We have quite a lot of open source[br]contributors from all over, and
0:44:40.170,0:44:46.420
some of the MirageOS libraries have also[br]been used or are still used in this Docker
0:44:46.420,0:44:51.810
technology, Docker for Mac and Docker for[br]Windows, which emulate the guest system
0:44:51.810,0:45:02.089
or need some wrappers, and there[br]a lot of OCaml code is used. So to finish
0:45:02.089,0:45:07.319
my talk, I would like to show another[br]slide, which is that Rome wasn't built in a
0:45:07.319,0:45:14.920
day. So, to conclude where we are:[br]we have a radical approach to operating
0:45:14.920,0:45:22.089
systems development. We have security[br]from the ground up, with much less code,
0:45:22.089,0:45:30.079
and we also have much fewer attack vectors[br]because we use a memory safe
0:45:30.079,0:45:39.079
language. So we have reduced the carbon[br]footprint, as I mentioned in the start of
0:45:39.079,0:45:45.619
the talk, because we use much less CPU[br]time, but also much less memory. So we use
0:45:45.619,0:45:53.190
less resources. MirageOS itself and OCaml[br]have reasonable performance. We have
0:45:53.190,0:45:56.979
seen some statistics about the TLS stack[br]that it was in the same ballpark as
0:45:56.979,0:46:05.519
OpenSSL and PolarSSL, which is nowadays[br]mbed TLS. And MirageOS unikernels, since
0:46:05.519,0:46:10.589
they don't really need to negotiate[br]features and wait for the SCSI bus and
0:46:10.589,0:46:14.759
so on, actually boot in[br]milliseconds, not in seconds. They do
0:46:14.759,0:46:21.939
no hardware probing and so on, but they[br]know at startup time what to expect. I
0:46:21.939,0:46:27.489
would like to thank everybody who is and[br]was involved in this whole technology
0:46:27.489,0:46:32.769
stack, because I myself program quite a[br]bit of OCaml, but I wouldn't have been
0:46:32.769,0:46:39.009
able to do that on my own. It is just a[br]bit too big. MirageOS currently spans
0:46:39.009,0:46:45.490
around maybe 200 different git[br]repositories with the libraries, mostly
0:46:45.490,0:46:52.500
developed on GitHub and open source. I[br]am at the moment working in a nonprofit
0:46:52.500,0:46:56.890
company in Germany, which is called the[br]Center for the Cultivation of Technology
0:46:56.890,0:47:02.650
with a project called robur. So we work in[br]a collective way to develop full-stack
0:47:02.650,0:47:08.030
MirageOS unikernels. That's why I'm happy[br]to do that from Dublin. And if you're
0:47:08.030,0:47:14.450
interested, please talk to us. I have some[br]selected related talks here; there are many
0:47:14.450,0:47:20.869
more talks about MirageOS, but here is[br]just a short list; if you're
0:47:20.869,0:47:29.529
interested in some certain aspects, please[br]help yourself to view them.
0:47:29.529,0:47:31.761
That's all from me.
0:47:31.761,0:47:37.380
Applause
0:47:37.380,0:47:46.440
Herald: Thank you very much. There's a bit[br]over 10 minutes of time for questions. If
0:47:46.440,0:47:50.010
you have any questions go to the[br]microphone. There's several microphones
0:47:50.010,0:47:54.210
around the room. Go ahead.[br]Question: Thank you very much for the talk
0:47:54.210,0:47:57.210
-[br]Herald: A point of order: thanking the
0:47:57.210,0:48:01.109
speaker can be done afterwards. Questions[br]are questions, so short sentences ending
0:48:01.109,0:48:05.989
with a question mark. Sorry, do go ahead.[br]Question: If I want to try this at home,
0:48:05.989,0:48:08.989
what do I need? Is a raspi sufficient? No,[br]it isn't.
0:48:08.989,0:48:15.309
Hannes: That is an excellent question. So[br]I usually develop it on a ThinkPad
0:48:15.309,0:48:23.019
machine, but we actually support also[br]ARM64 mode. So if you have a Raspberry Pi
0:48:23.019,0:48:28.890
3+, which I think has the virtualization[br]bits and a Linux kernel recent
0:48:28.890,0:48:35.249
enough to support KVM on that Raspberry Pi[br]3+, then you can try it out there.
0:48:35.249,0:48:41.789
Herald: Next question.[br]Question: Well, currently most MirageOS
0:48:41.789,0:48:51.719
unikernels are used for running server[br]applications. And so obviously all this
0:48:51.719,0:48:58.230
static preconfiguration of OCaml and[br]maybe Ada SPARK is fine for that. But what
0:48:58.230,0:49:03.819
do you think about... Will it ever be[br]possible to use the same approach with all
0:49:03.819,0:49:10.009
this static preconfiguration for these very[br]dynamic end user desktop systems, for
0:49:10.009,0:49:15.220
example, like which at least currently use[br]quite a lot of plug-and-play.
0:49:15.220,0:49:19.430
Hannes: Do you have an example? What are[br]you thinking about?
0:49:19.430,0:49:26.410
Question: Well, I'm not that much into[br]the topic of its SPARK stuff, but you said
0:49:26.410,0:49:32.239
that all the communication paths have to[br]be defined in advance. So especially with
0:49:32.239,0:49:37.779
plug-and-play devices like all this USB[br]stuff, we either have to allow everything
0:49:37.779,0:49:46.549
in advance or we may have to reboot parts[br]of the unikernels in between to allow
0:49:46.549,0:49:54.660
rerouting stuff.[br]Hannes: Yes. Yes. So I mean if you want to
0:49:54.660,0:50:01.119
design a USB plug-and-play system, you can[br]think of it as you plug in somewhere the
0:50:01.119,0:50:07.839
USB stick and then you start the unikernel[br]which only has access to that USB stick.
0:50:07.839,0:50:15.319
But having a unikernel... Well I wouldn't[br]design a unikernel which randomly does
0:50:15.319,0:50:23.569
plug and play with the outer world,[br]basically. So, one of the applications
0:50:23.569,0:50:30.800
I've listed here at the top is a[br]picture viewer, which is a unikernel that
0:50:30.800,0:50:37.400
at the moment, I think, has static data[br]embedded in it, but is able, on Qubes
0:50:37.400,0:50:43.819
OS or on Unix with SDL, to display the[br]images. And you can think of some way, via
0:50:43.819,0:50:48.670
a network or so, to access the images.[br]So you don't need to compile
0:50:48.670,0:50:54.380
the images in, but you can have a git[br]repository or TCP server or whatever in
0:50:54.380,0:51:01.079
order to receive the images. So what I[br]didn't mention is that
0:51:01.079,0:51:05.759
MirageOS, instead of being general purpose[br]and having a shell where you can do
0:51:05.759,0:51:11.279
everything with it, is such that each[br]service, each unikernel, is a single-
0:51:11.279,0:51:16.529
service thing. So you can't do everything[br]with it. And I think that is an advantage
0:51:16.529,0:51:23.309
from a lot of points of view. I agree[br]that if you have a highly dynamic system,
0:51:23.309,0:51:27.680
you may have some trouble figuring out how to[br]integrate that.
0:51:27.680,0:51:38.679
Herald: Are there any other questions? [br]No, it appears not. In which case,
0:51:38.679,0:51:41.111
thank you again, Hannes. [br]Warm applause for Hannes.
0:51:41.111,0:51:44.529
Applause
0:51:44.529,0:51:49.438
Outro music
0:51:49.438,0:52:12.000
subtitles created by c3subtitles.de[br]in the year 2020. Join, and help us!