WEBVTT 00:00:00.000 --> 00:00:18.660 36c3 intro 00:00:18.660 --> 00:00:23.914 Herald: Good morning again. Thanks. The first talk for today is by Hannes Mehnert. It's 00:00:23.914 --> 00:00:29.390 titled "Leaving Legacy Behind". It's about the reduction of carbon footprint through 00:00:29.390 --> 00:00:33.230 microkernels in MirageOS. Give a warm welcome to Hannes. 00:00:33.230 --> 00:00:39.250 Applause 00:00:39.250 --> 00:00:45.060 Hannes Mehnert: Thank you. So let's talk a bit about legacy, the legacy we have. 00:00:45.060 --> 00:00:50.000 Nowadays we run services usually on a Unix-based operating system, whose layering is 00:00:50.000 --> 00:00:55.080 demonstrated here on the left a bit. So at the lowest layer we have 00:00:55.080 --> 00:01:00.829 the hardware: some physical CPU, some block devices, maybe a network interface 00:01:00.829 --> 00:01:06.570 card and maybe some memory, some non-persistent memory. On top of that, we 00:01:06.570 --> 00:01:13.740 usually run the Unix kernel, so to say. That is marked here in brown and 00:01:13.740 --> 00:01:19.580 consists of a filesystem. Then it has a scheduler, it has some process 00:01:19.580 --> 00:01:25.470 management, it has network stacks, so the TCP/IP stack; it also has some user 00:01:25.470 --> 00:01:32.350 management and hardware drivers. So it has drivers for the physical hard drive, 00:01:32.350 --> 00:01:37.800 for the network interface and so on. That's the brown stuff. The kernel runs in 00:01:37.800 --> 00:01:46.380 privileged mode. It exposes a system call API and/or a socket API to the 00:01:46.380 --> 00:01:52.350 actual application which we want to run, which is here in orange. So the 00:01:52.350 --> 00:01:56.460 actual application is on top, which is the application binary, and may depend on some 00:01:56.460 --> 00:02:02.880 configuration files distributed randomly across the filesystem with some file 00:02:02.880 --> 00:02:08.119 permissions set.
Then the application itself also depends likely on a programming 00:02:08.119 --> 00:02:14.000 language runtime that may either be a Java virtual machine if you run Java, or a Python 00:02:14.000 --> 00:02:20.140 interpreter if you run Python, or a Ruby interpreter if you run Ruby, and so on. 00:02:20.140 --> 00:02:25.230 Then additionally we usually have a system library, libc, which is basically just the runtime 00:02:25.230 --> 00:02:30.790 library of the C programming language, and it exposes a much nicer 00:02:30.790 --> 00:02:38.470 interface than the system calls. We may as well have OpenSSL or another crypto 00:02:38.470 --> 00:02:45.360 library as part of the application binary, which is also here in orange. So what's the 00:02:45.360 --> 00:02:50.200 job of the kernel? So the brown stuff actually has a virtual memory subsystem 00:02:50.200 --> 00:02:55.110 and it should separate the orange stuff from each other. So you have multiple 00:02:55.110 --> 00:03:01.790 applications running there, and the brown stuff is responsible for ensuring that 00:03:01.790 --> 00:03:07.150 the different pieces of orange stuff don't interfere with each other, so 00:03:07.150 --> 00:03:12.601 that they are not randomly writing into each other's memory and so on. Now if the 00:03:12.601 --> 00:03:17.420 orange stuff is compromised, so if you have some attacker from the network or 00:03:17.420 --> 00:03:26.540 from wherever else who's able to find a flaw in the orange stuff, the kernel is still 00:03:26.540 --> 00:03:32.420 responsible for strict isolation between the orange stuff. So as long as the 00:03:32.420 --> 00:03:38.070 attacker only gets access to the orange stuff, it should be very well contained. 00:03:38.070 --> 00:03:42.650 But then we look at the bridge between the brown and orange stuff.
So between kernel 00:03:42.650 --> 00:03:49.170 and user space, and there we have an API which is roughly 600 system calls, at 00:03:49.170 --> 00:03:56.360 least on my FreeBSD machine here. So it's 600 different functions, or 00:03:56.360 --> 00:04:05.240 the width of this API is 600 different functions, which is quite big. And it's 00:04:05.240 --> 00:04:12.180 quite easy to hide some flaws in there. And as soon as you're able to find a flaw 00:04:12.180 --> 00:04:17.320 in any of those system calls, you can escalate your privileges, and then you 00:04:17.320 --> 00:04:22.250 basically run in brown mode, in kernel mode, and you have access to the raw 00:04:22.250 --> 00:04:26.310 physical hardware. And you can also read arbitrary memory from any process 00:04:26.310 --> 00:04:34.440 running there. So now over the years it actually evolved and we added some more 00:04:34.440 --> 00:04:39.350 layers, which is hypervisors. So at the lowest layer, we still have the hardware 00:04:39.350 --> 00:04:45.790 stack, but on top of the hardware we now have a hypervisor, whose responsibility it 00:04:45.790 --> 00:04:51.300 is to split the physical hardware into pieces and slice it up and run different 00:04:51.300 --> 00:04:56.720 virtual machines. So now we have the white stuff, which is the hypervisor. And on top 00:04:56.720 --> 00:05:04.360 of that, we have multiple brown things and multiple orange things as well. So now the 00:05:04.360 --> 00:05:12.320 hypervisor is responsible for distributing the CPUs to virtual machines, and the 00:05:12.320 --> 00:05:17.130 memory to virtual machines, and so on. It is also responsible for selecting which 00:05:17.130 --> 00:05:21.660 virtual machine to run on which physical CPU. So it actually includes the scheduler 00:05:21.660 --> 00:05:28.950 as well. And the hypervisor's responsibility is again to isolate the 00:05:28.950 --> 00:05:34.360 different virtual machines from each other.
Initially, hypervisors were done 00:05:34.360 --> 00:05:39.889 mostly in software. Nowadays, there are a lot of CPU features available which 00:05:39.889 --> 00:05:47.090 allow you to have some CPU support, which makes them fast, and you don't have to 00:05:47.090 --> 00:05:52.449 trust so much software anymore, but you have to trust the hardware. So that's 00:05:52.449 --> 00:06:00.150 extended page tables and VT-d and VT-x stuff. OK, so that's the legacy we have 00:06:00.150 --> 00:06:08.070 right now. So when you ship a binary, you actually care about some tip of the 00:06:08.070 --> 00:06:12.229 iceberg. That is the code you actually write and you care about. You care about it 00:06:12.229 --> 00:06:18.820 deeply because it should work well and you want to run it. But at the bottom you have 00:06:18.820 --> 00:06:23.830 the whole operating system, and that is the code the operating system insists that you 00:06:23.830 --> 00:06:30.180 need. So you can't get it without the bottom of the iceberg. So you will always 00:06:30.180 --> 00:06:34.669 have a process management and user management and likely as well the 00:06:34.669 --> 00:06:41.100 filesystem around on a UNIX system. Then in addition, back in May, I think, there 00:06:41.100 --> 00:06:48.900 was a blog entry from someone who analyzed the findings of Google Project Zero, which is a 00:06:48.900 --> 00:06:54.540 security research team and red team which tries to find a lot of flaws in widely 00:06:54.540 --> 00:07:02.480 used applications. And they found in a year maybe 110 different vulnerabilities 00:07:02.480 --> 00:07:08.330 which they reported and so on. And someone analyzed what these 110 vulnerabilities 00:07:08.330 --> 00:07:13.660 were about, and it turned out that for more than two thirds of them, the root 00:07:13.660 --> 00:07:18.940 cause of the flaw was memory corruption.
And memory corruption means arbitrary 00:07:18.940 --> 00:07:22.880 reads or writes of arbitrary memory which the process is not 00:07:22.880 --> 00:07:29.900 supposed to access. So why does that happen? That happens because on the 00:07:29.900 --> 00:07:36.160 Unix system, we mainly use programming languages where we have tight control over 00:07:36.160 --> 00:07:40.199 the memory management. So we do it ourselves. So we allocate the memory 00:07:40.199 --> 00:07:44.639 ourselves and we free it ourselves. There is a lot of boilerplate we need to write 00:07:44.639 --> 00:07:53.190 down, and that is also a lot of boilerplate which you can get wrong. So now we talked 00:07:53.190 --> 00:07:57.810 a bit about legacy. Let's talk about the goals of this talk. The goal is on the 00:07:57.810 --> 00:08:06.670 one side to be more secure, so to reduce the attack vectors, because C is a language 00:08:06.670 --> 00:08:11.870 from the 70s, and we have some languages from the 80s or even from 00:08:11.870 --> 00:08:17.930 the 90s which offer you automated memory management and memory safety, languages 00:08:17.930 --> 00:08:24.699 such as Java or Rust or Python or something like that. But it turns out not 00:08:24.699 --> 00:08:30.490 many people are writing operating systems in those languages. Another point here is 00:08:30.490 --> 00:08:37.159 I want to reduce the attack surface. So we have seen this huge stack here and I want 00:08:37.159 --> 00:08:45.880 to minimize the orange and the brown part. Then, as an implication of that, I also 00:08:45.880 --> 00:08:50.410 want to reduce the runtime complexity, because it is actually pretty cumbersome 00:08:50.410 --> 00:08:56.100 to figure out what is now wrong. Why does your application not start?
And if the 00:08:56.100 --> 00:09:01.829 whole reason is because some file on your harddisk has the wrong filesystem 00:09:01.829 --> 00:09:09.560 permissions, then it's pretty hard to get across if you're not yet a Unix expert 00:09:09.560 --> 00:09:16.550 who has lived in the system for years or at least months. And then the final goal, 00:09:16.550 --> 00:09:22.269 thanks to the topic of this conference and to some analysis I did, is to actually 00:09:22.269 --> 00:09:29.750 reduce the carbon footprint. So if you run a service, that service certainly does 00:09:29.750 --> 00:09:37.629 some computation, and this computation takes some CPU ticks. So it takes some CPU 00:09:37.629 --> 00:09:44.759 time in order to be evaluated. And now reducing that means: if we condense down 00:09:44.759 --> 00:09:49.860 the complexity and the code size, we also reduce the amount of computation which 00:09:49.860 --> 00:09:57.800 needs to be done. These are the goals. So what are MirageOS unikernels? That is 00:09:57.800 --> 00:10:07.459 basically the project I have been involved in for six years or so. The general idea 00:10:07.459 --> 00:10:14.309 is that each service is isolated in a separate MirageOS unikernel. So your DNS 00:10:14.309 --> 00:10:19.720 resolver or your web server doesn't run on this general purpose UNIX system as a 00:10:19.720 --> 00:10:25.910 process, but you have a separate virtual machine for each of them. So you have one 00:10:25.910 --> 00:10:31.380 unikernel which only does DNS resolution, and in that unikernel you don't even need 00:10:31.380 --> 00:10:35.759 a user management. You don't even need process management because there's only a 00:10:35.759 --> 00:10:41.720 single process: there's just the DNS resolver. Actually, a DNS resolver also doesn't 00:10:41.720 --> 00:10:47.199 really need a file system. So we got rid of that. We also don't really need virtual 00:10:47.199 --> 00:10:52.259 memory because we only have one process.
So we don't need virtual memory and we 00:10:52.259 --> 00:10:57.089 just use a single address space. So everything is mapped in a single address 00:10:57.089 --> 00:11:03.339 space. We use a programming language called OCaml, which is a functional programming 00:11:03.339 --> 00:11:08.079 language which provides us with memory safety. So it has automated memory 00:11:08.079 --> 00:11:17.279 management, and we use this memory management and the isolation which the 00:11:17.279 --> 00:11:24.329 programming language guarantees us by its type system. We use that to say: okay, we can 00:11:24.329 --> 00:11:28.429 all live in a single address space and it'll still be safe, as long as the 00:11:28.429 --> 00:11:34.579 components are safe, and as long as we minimize the components which are by 00:11:34.579 --> 00:11:42.639 definition unsafe, because we need to run some C code there as well. In addition, 00:11:42.639 --> 00:11:47.660 now, if we have a single service, we only put in the libraries or the stuff we 00:11:47.660 --> 00:11:51.699 actually need in that service. So as I mentioned, the DNS resolver won't need 00:11:51.699 --> 00:11:56.589 a user management, it doesn't need a shell. Why would I need a shell? What 00:11:56.589 --> 00:12:02.889 would I need to do there? And so on. So we have a lot of libraries, a lot of OCaml 00:12:02.889 --> 00:12:09.750 libraries, which are picked by the single services or which are mixed and matched for 00:12:09.750 --> 00:12:14.160 the different services. So libraries are developed independently of the whole 00:12:14.160 --> 00:12:20.010 system or of the unikernel, and are reused across the different components or across 00:12:20.010 --> 00:12:26.910 the different services. Some further limitations, which I take as freedom and 00:12:26.910 --> 00:12:32.839 simplicity: we not only have a single address space, we are also only focusing 00:12:32.839 --> 00:12:37.839 on single core and have a single process. So we don't have processes.
We don't even know 00:12:37.839 --> 00:12:46.679 the concept of a process. We also don't work in a preemptive way. So preemptive 00:12:46.679 --> 00:12:52.790 means that if you run on a CPU as a function or as a program, you can at any 00:12:52.790 --> 00:12:58.019 time be interrupted, because something which is much more important than you can 00:12:58.019 --> 00:13:03.970 now get access to the CPU. And we don't do that. We do cooperative tasks. So we are 00:13:03.970 --> 00:13:08.529 never interrupted. We don't even have interrupts. So there are no interrupts. 00:13:08.529 --> 00:13:13.480 And as I mentioned, it's executed as a virtual machine. So what does that look 00:13:13.480 --> 00:13:17.519 like? So now we have the same picture as previously. We have at the bottom the 00:13:17.519 --> 00:13:22.729 hypervisor. Then we have the host system, which is the brownish stuff. And on top of 00:13:22.729 --> 00:13:29.850 that we have maybe some virtual machines. Some of them run a UNIX system via KVM and 00:13:29.850 --> 00:13:34.779 qemu, using some virtio; that is on the right and on the left. And in the middle 00:13:34.779 --> 00:13:41.899 we have this MirageOS unikernel, where on the host system we don't run any qemu, 00:13:41.899 --> 00:13:49.920 but we run a minimized so-called tender, which is this solo5-hvt monitor process. 00:13:49.920 --> 00:13:55.149 So that's something which just tries to allocate or will allocate some host system 00:13:55.149 --> 00:14:01.579 resources for the virtual machine and then does interaction with the virtual machine. 00:14:01.579 --> 00:14:06.989 So what this solo5-hvt does in this case is to set up the memory, load the 00:14:06.989 --> 00:14:12.309 unikernel image, which is a statically linked ELF binary, and it sets up the 00:14:12.309 --> 00:14:17.829 virtual CPU. So the CPU needs some initialization, and then booting is just a jump 00:14:17.829 --> 00:14:24.740 to an address. It's already in 64-bit mode.
There's no need to boot via 16- or 32-bit 00:14:24.740 --> 00:14:34.079 modes. Now solo5-hvt and the MirageOS unikernel also have an interface, and the interface 00:14:34.079 --> 00:14:38.819 is called hypercalls, and that interface is rather small. So it only contains in 00:14:38.819 --> 00:14:46.019 total 14 different functions. The main functions are: yield, a way to get the argument 00:14:46.019 --> 00:14:52.850 vector, and clocks. Actually, two clocks: one is a POSIX clock, which takes care of this 00:14:52.850 --> 00:14:58.339 whole timestamping and timezone business, and another one is a monotonic clock, which 00:14:58.339 --> 00:15:06.569 by its name guarantees that time will pass monotonically. Then there's the console 00:15:06.569 --> 00:15:12.510 interface. The console interface is only one-way. So we only output data. We never 00:15:12.510 --> 00:15:18.149 read from console. A block device, well, block devices, and network interfaces, and 00:15:18.149 --> 00:15:25.829 that's all the hypercalls we have. To look a bit further down into the detail of what 00:15:25.829 --> 00:15:34.709 a MirageOS unikernel looks like: here I pictured on the left again the tender at 00:15:34.709 --> 00:15:41.269 the bottom, and then the hypercalls. And then in pink I have the pieces of code 00:15:41.269 --> 00:15:46.939 which still contain some C code in the MirageOS unikernel. And in green I have 00:15:46.939 --> 00:15:55.140 the pieces of code which do not include any C code, but only OCaml code. So 00:15:55.140 --> 00:16:00.429 looking at the C code, which is dangerous because in C we have to deal with memory 00:16:00.429 --> 00:16:05.749 management on our own, which means it's a bit brittle and we need to carefully review 00:16:05.749 --> 00:16:10.790 that code: there is definitely the OCaml runtime which we have here, which is around 00:16:10.790 --> 00:16:18.579 25 thousand lines of code.
Then we have a library which is called nolibc, which is 00:16:18.579 --> 00:16:24.339 basically a C library which implements malloc and string compare and some 00:16:24.339 --> 00:16:29.439 basic functions which are needed by the OCaml runtime. That's roughly 8000 lines 00:16:29.439 --> 00:16:37.060 of code. That nolibc also provides a lot of stubs which just exit or return 00:16:37.060 --> 00:16:46.850 null for the OCaml runtime, because we use an unmodified OCaml runtime to be able to 00:16:46.850 --> 00:16:50.749 upgrade our software more easily. We don't have any patches for the OCaml runtime. 00:16:50.749 --> 00:16:57.419 Then we have a library called solo5-bindings, which is basically 00:16:57.419 --> 00:17:03.220 something which translates into hypercalls, or which can access the hypercalls 00:17:03.220 --> 00:17:07.849 and which communicates with the host system via hypercalls. That is roughly 00:17:07.849 --> 00:17:14.910 2000 lines of code. Then we have a math library for sine and cosine and tangent 00:17:14.910 --> 00:17:20.940 and so on. And that is just the openlibm, which is originally from the FreeBSD 00:17:20.940 --> 00:17:26.980 project and has roughly 20000 lines of code. So that's it. So I talked a bit 00:17:26.980 --> 00:17:32.270 about solo5, about the bottom layer, and I will go a bit more into detail about the 00:17:32.270 --> 00:17:40.120 solo5 stuff, which is really the stuff which you run at the bottom 00:17:40.120 --> 00:17:46.140 of the MirageOS. There's another choice: you can also run Xen or Qubes OS at 00:17:46.140 --> 00:17:50.870 the bottom of the MirageOS unikernel, but I'm focusing here mainly on solo5. So 00:17:50.870 --> 00:17:56.850 solo5 is a sandboxed execution environment for unikernels. It handles resources from 00:17:56.850 --> 00:18:03.910 the host system, but only statically. So you say at startup time how much memory 00:18:03.910 --> 00:18:09.150 it will take.
How many network interfaces and which ones are taken, and how many 00:18:09.150 --> 00:18:13.520 block devices and which ones are taken by the virtual machine. You don't have any 00:18:13.520 --> 00:18:19.430 dynamic resource management, so you can't add at a later point in time a new network 00:18:19.430 --> 00:18:28.040 interface. That's just not supported. And it makes the code much easier. We don't even 00:18:28.040 --> 00:18:36.360 have dynamic allocation inside of solo5. We have a hypercall interface. As I 00:18:36.360 --> 00:18:42.330 mentioned, it's only 14 functions. We have bindings for different targets. So we can 00:18:42.330 --> 00:18:49.640 run on KVM, which is the hypervisor developed for the Linux project, but also on 00:18:49.640 --> 00:18:57.060 bhyve, which is the FreeBSD hypervisor, or vmm, which is the OpenBSD hypervisor. We also 00:18:57.060 --> 00:19:01.920 target other systems such as Genode, which is an operating system based on a 00:19:01.920 --> 00:19:08.830 microkernel, written mainly in C++; and virtio, which is a protocol usually spoken 00:19:08.830 --> 00:19:15.490 between the host system and the guest system, and virtio is used in a lot of 00:19:15.490 --> 00:19:22.770 cloud deployments. So qemu, for example, provides you with a virtio 00:19:22.770 --> 00:19:29.430 protocol implementation. And the last implementation of solo5, or bindings for 00:19:29.430 --> 00:19:38.570 solo5, is seccomp. So Linux seccomp is a filter in the Linux kernel where you can 00:19:38.570 --> 00:19:47.180 restrict your process so that it will only use a certain number or a certain set of 00:19:47.180 --> 00:19:53.790 system calls, and we use seccomp so you can deploy it without a virtual machine in the 00:19:53.790 --> 00:20:02.270 seccomp case, but you are restricted in which system calls you can use. So solo5 00:20:02.270 --> 00:20:06.500 also provides you with the host system tender where applicable.
So in the virtio 00:20:06.500 --> 00:20:11.880 case it's not applicable; in the Genode case it is also not applicable. In KVM we 00:20:11.880 --> 00:20:19.220 already saw the solo5-hvt, which is a hardware virtualized tender, which is just 00:20:19.220 --> 00:20:25.790 a small binary: if you run qemu, that's at least hundreds of thousands of lines of 00:20:25.790 --> 00:20:36.170 code; in the solo5-hvt case, it's more like thousands of lines of code. So here we 00:20:36.170 --> 00:20:42.930 have a comparison from left to right of solo5 and how the host system, or the host 00:20:42.930 --> 00:20:49.100 system kernel, and the guest system work. In the middle we have a virtual machine, a 00:20:49.100 --> 00:20:54.490 common Linux qemu/KVM based virtual machine for example, and on the right hand 00:20:54.490 --> 00:20:59.970 side we have the host system and a container. A container is also a technology where you 00:20:59.970 --> 00:21:08.480 try to restrict as much access as you can from a process, so it is contained and a 00:21:08.480 --> 00:21:14.940 potential compromise is also very isolated and contained. On the left hand side we 00:21:14.940 --> 00:21:21.270 see that solo5 is basically some bits and pieces in the host system, that is the solo5- 00:21:21.270 --> 00:21:27.380 hvt, and then some bits and pieces in the unikernel, that is the solo5 bindings I 00:21:27.380 --> 00:21:31.200 mentioned earlier. And that is to communicate between the host and the guest 00:21:31.200 --> 00:21:37.100 system. In the middle we see that the API between the host system and the virtual 00:21:37.100 --> 00:21:41.310 machine is much bigger than this, commonly using virtio, and virtio is really 00:21:41.310 --> 00:21:48.920 a huge protocol which does feature negotiation and all sorts of things where 00:21:48.920 --> 00:21:54.010 you can always do something wrong, like you can do something wrong in a floppy 00:21:54.010 --> 00:21:58.650 disk driver.
And that led to some exploitable vulnerability, although 00:21:58.650 --> 00:22:04.480 nowadays most operating systems don't really need a floppy disk drive anymore. 00:22:04.480 --> 00:22:08.180 And on the right hand side, you can see that the host system interface for a 00:22:08.180 --> 00:22:12.530 container is much bigger than for a virtual machine, because the host system 00:22:12.530 --> 00:22:17.620 interface for a container is exactly those system calls you saw earlier. So it's around 00:22:17.620 --> 00:22:24.150 600 different calls. And in order to evaluate the security, you basically need 00:22:24.150 --> 00:22:32.770 to audit all of them. So that's just a brief comparison between those. If we look 00:22:32.770 --> 00:22:38.020 in more detail at what shapes solo5 can have: here on the left side we can 00:22:38.020 --> 00:22:43.350 see it running in a hardware virtualized tender, where you have Linux, 00:22:43.350 --> 00:22:50.290 FreeBSD or OpenBSD at the bottom and you have the solo5 blob, which is the blue thing 00:22:50.290 --> 00:22:54.590 here in the middle. And then on top you have the unikernel. On the right hand side 00:22:54.590 --> 00:23:02.850 you can see the Linux seccomp process, and you have a much smaller solo5 blob, because 00:23:02.850 --> 00:23:06.940 it doesn't need to do that much anymore, because all the hypercalls are basically 00:23:06.940 --> 00:23:11.960 translated to system calls. So you actually get rid of them and you don't 00:23:11.960 --> 00:23:16.820 need to communicate between the host and the guest system, because in seccomp you 00:23:16.820 --> 00:23:22.610 run as a host system process, so you don't have this virtualization. That is an advantage of 00:23:22.610 --> 00:23:29.220 using seccomp as well: you can deploy it without having access to the virtualization 00:23:29.220 --> 00:23:38.050 features of the CPU. Now, to get it into an even smaller shape:
There's another backend I 00:23:38.050 --> 00:23:42.870 haven't talked about yet. It's called Muen. It's a separation kernel 00:23:42.870 --> 00:23:50.870 developed in Ada. So now we try to get rid of this huge Unix 00:23:50.870 --> 00:23:58.320 system below it, which is this big kernel thingy here. And Muen is an open source 00:23:58.320 --> 00:24:03.310 project developed in Switzerland, in Ada, as I mentioned, and it uses SPARK, which 00:24:03.310 --> 00:24:12.620 is a proof system, which guarantees the memory isolation between the different 00:24:12.620 --> 00:24:19.570 components. And Muen now goes a step further and it says: "Oh yeah, for you as 00:24:19.570 --> 00:24:23.540 a guest system, you do only static allocations and you don't do dynamic 00:24:23.540 --> 00:24:28.210 resource management. We as a host system, we as a hypervisor, we don't do any 00:24:28.210 --> 00:24:34.350 dynamic resource allocation either." So it only does static resource management. So 00:24:34.350 --> 00:24:39.250 at compile time of your Muen separation kernel you decide how many virtual 00:24:39.250 --> 00:24:44.460 machines or how many unikernels you are running and which resources are given to 00:24:44.460 --> 00:24:50.120 them. You even specify which communication channels are there. So if one of your 00:24:50.120 --> 00:24:55.560 virtual machines needs to talk to another one, you need to specify that at 00:24:55.560 --> 00:25:00.970 compile time, and at runtime you don't have any dynamic resource management. So that 00:25:00.970 --> 00:25:08.620 again makes the code much easier, much, much less complex. And you get to much 00:25:08.620 --> 00:25:19.060 fewer lines of code. So to conclude with MirageOS, and also Muen 00:25:19.060 --> 00:25:26.370 and solo5, and how that all fits together:
I like to cite Antoine de Saint-Exupéry: "Perfection is achieved, not when 00:25:26.370 --> 00:25:31.660 there is nothing more to add, but when there is nothing left to take away." I 00:25:31.660 --> 00:25:36.621 mean, obviously the most secure system is a system which doesn't exist. 00:25:36.621 --> 00:25:40.210 Laughter 00:25:40.210 --> 00:25:41.638 Let's look a bit further 00:25:41.638 --> 00:25:46.440 into the decisions of MirageOS. Why do we use this strange 00:25:46.440 --> 00:25:50.960 programming language called OCaml, and what's it all about? And what are the case 00:25:50.960 --> 00:25:59.170 studies? So OCaml has been around for more than 20 years. It's a multi-paradigm 00:25:59.170 --> 00:26:05.890 programming language. The goal for us and for OCaml is usually to have declarative 00:26:05.890 --> 00:26:14.390 code. To achieve declarative code you need to provide the developers with some 00:26:14.390 --> 00:26:21.200 orthogonal abstraction facilities, such as: here we have variables, then functions, which you 00:26:21.200 --> 00:26:24.890 likely know if you're a software developer. Also higher-order functions: 00:26:24.890 --> 00:26:31.500 that just means that a function is able to take a function as input. Then in OCaml 00:26:31.500 --> 00:26:37.270 we try to always focus on the problem and not get distracted by boilerplate. So 00:26:37.270 --> 00:26:43.510 the running example again would be memory management: we don't manually deal 00:26:43.510 --> 00:26:52.940 with that, but we have computers to actually deal with that. In OCaml you have 00:26:52.940 --> 00:27:00.170 a very expressive and static type system, which can spot a lot of invariants, or 00:27:00.170 --> 00:27:07.160 violations of invariants, at build time. So the program won't compile if you don't 00:27:07.160 --> 00:27:14.196 handle all the potential return types or return values of your function. So now, a 00:27:14.196 --> 00:27:20.190 type system, as you may know it from Java, is a bit painful.
You have to 00:27:20.190 --> 00:27:24.250 express at every location where you want to have a variable which type this 00:27:24.250 --> 00:27:31.900 variable has. What OCaml provides is type inference, similar to Scala and other 00:27:31.900 --> 00:27:37.830 languages, so you don't need to type all the types manually. And types are also, 00:27:37.830 --> 00:27:43.670 unlike in Java, erased during compilation. So types are only information 00:27:43.670 --> 00:27:48.820 about values the compiler has at compile time. But at runtime these are all erased, 00:27:48.820 --> 00:27:54.920 so they don't exist, you don't see them. And OCaml compiles to native machine code, 00:27:54.920 --> 00:28:01.580 which I think is important for security and performance. Because otherwise you run 00:28:01.580 --> 00:28:07.470 an interpreter or an abstract machine and you have to emulate something else, and 00:28:07.470 --> 00:28:14.890 that is never as fast as native code. OCaml has one distinct feature, which is its 00:28:14.890 --> 00:28:21.460 module system. So you have all your values, your types and functions. And now 00:28:21.460 --> 00:28:26.840 each of those values is defined inside of a so-called module. And the simplest 00:28:26.840 --> 00:28:32.670 module is just the filename. But you can nest modules, so you can explicitly say: oh 00:28:32.670 --> 00:28:39.540 yeah, this value or this binding is now living in a submodule. Each 00:28:39.540 --> 00:28:45.260 module you can also give a type. So it has a set of types and a set of functions, 00:28:45.260 --> 00:28:52.600 and that is called its signature, which is the interface of the module. Now you have 00:28:52.600 --> 00:28:59.600 another abstraction mechanism in OCaml, which is functors. And functors are 00:28:59.600 --> 00:29:04.470 basically compile-time functions from module to module. So they allow a 00:29:04.470 --> 00:29:09.990 parameterisation.
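As a minimal illustration of such a compile-time functor application, here is a sketch using only the OCaml standard library (not MirageOS code):

```ocaml
(* Map.Make is a functor: it takes a module providing a key type [t] and a
   [compare] function, and returns a module implementing maps over that key
   type. The standard String module already has this shape, so it can be
   passed to the functor directly. *)
module StringMap = Map.Make (String)

let () =
  let m =
    StringMap.empty
    |> StringMap.add "dns" 53
    |> StringMap.add "https" 443
  in
  assert (StringMap.find "https" m = 443);
  print_endline "functor example ok"
```

The functor application happens entirely at compile time: `StringMap` is an ordinary module with its own signature, and passing a module without a suitable `compare` would be rejected by the compiler.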
You can implement your generic map structure, and all you 00:29:09.990 --> 00:29:18.740 require... So a map's implementation is maybe a hash map or a binary tree. And 00:29:18.740 --> 00:29:25.980 what you need to have is some comparison for the keys, and that is modeled in OCaml by a 00:29:25.980 --> 00:29:32.427 module. So you have a module called Map and you have a functor called Make. And 00:29:32.427 --> 00:29:38.460 Make takes some module which implements this comparison method and then provides 00:29:38.460 --> 00:29:45.740 you with a map data structure for that key type. In MirageOS we actually use the 00:29:45.740 --> 00:29:51.800 module system quite a bit more, because we have all these resources which are 00:29:51.800 --> 00:29:58.330 different between Xen and KVM and so on. So each of the different resources, like a 00:29:58.330 --> 00:30:06.740 network interface, has a signature and target-specific implementations. So we have 00:30:06.740 --> 00:30:11.210 the TCP/IP stack, which is much higher than the network card, but it doesn't 00:30:11.210 --> 00:30:16.920 really care if you run on Xen or if you run on KVM. You just program against this 00:30:16.920 --> 00:30:22.270 abstract interface, against the interface of the network device. But you don't need 00:30:22.270 --> 00:30:27.740 to write in your TCP/IP stack any code to run on Xen 00:30:27.740 --> 00:30:38.230 or to run on KVM. So MirageOS also doesn't really use the complete OCaml 00:30:38.230 --> 00:30:44.410 programming language. OCaml also provides you with an object system, and we barely 00:30:44.410 --> 00:30:49.720 use that. We also have in MirageOS... well, OCaml also allows you mutable 00:30:49.720 --> 00:30:57.610 state, and we barely use that mutable state; we use mostly immutable data 00:30:57.610 --> 00:31:05.429 whenever sensible. We also have a value-passing style, so we put state and data as 00:31:05.429 --> 00:31:12.000 inputs.
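A hedged sketch of this value-passing style; the type and function names here (`state`, `handle`) are made up for illustration and are not the actual MirageOS API:

```ocaml
(* A hypothetical protocol step: it takes the current state and an input
   byte vector (a string here for simplicity) and returns either a new
   state plus a reply, or an explicit error. The error is part of the
   function's type, so the caller is forced to match on both cases. *)
type state = { counter : int }
type error = [ `Invalid_input ]

let handle (st : state) (data : string) : (state * string, error) result =
  if String.length data = 0 then Error `Invalid_input
  else Ok ({ counter = st.counter + 1 }, "ack " ^ data)

let () =
  match handle { counter = 0 } "ping" with
  | Ok (st', reply) ->
      assert (st'.counter = 1);
      print_endline reply (* prints "ack ping" *)
  | Error `Invalid_input -> assert false
```

Because `handle` is a pure function from old state and input to new state and output, it can be tested without any network or operating system underneath.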
So state is just some abstract state, and data is just a byte vector 00:31:12.000 --> 00:31:17.010 in a protocol implementation. And then the output is also a new state, which may be 00:31:17.010 --> 00:31:22.179 modified, and maybe some reply, so some other byte vector or some application 00:31:22.179 --> 00:31:31.790 data. Or the output may as well be an error, because the incoming data and state 00:31:31.790 --> 00:31:38.179 may be invalid or may violate some constraints. And errors are also 00:31:38.179 --> 00:31:44.110 explicit types, so they are declared in the API, and the caller of a function needs 00:31:44.110 --> 00:31:52.480 to handle all these errors explicitly. As I said, it is single core, but we have some 00:31:52.480 --> 00:32:00.690 promise-based or event-based concurrent programming. And yeah, we 00:32:00.690 --> 00:32:04.450 have the ability to express really strong invariants, like "this is a read- 00:32:04.450 --> 00:32:08.340 only buffer", in the type system. And the type system is, as I mentioned, 00:32:08.340 --> 00:32:15.161 compile time only, no runtime overhead. So it's all pretty nice and good. So let's take a 00:32:15.161 --> 00:32:21.210 look at some of the case studies. The first one is a unikernel called the 00:32:21.210 --> 00:32:29.740 Bitcoin Pinata. It started in 2015, when we were happy with our from-scratch-developed 00:32:29.740 --> 00:32:35.100 TLS stack. TLS is Transport Layer Security, so what you use if you browse to an 00:32:35.100 --> 00:32:41.720 HTTPS site. So we have a TLS stack in OCaml and we wanted to do some marketing for 00:32:41.720 --> 00:32:50.670 that. The Bitcoin Pinata is basically a unikernel which uses TLS and provides you 00:32:50.670 --> 00:32:57.790 with TLS endpoints, and it contains the private key for a bitcoin wallet which 00:32:57.790 --> 00:33:05.790 used to be filled with 10 bitcoins. And this means it's a 00:33:05.790 --> 00:33:10.770 security bait.
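The value-passing style with explicitly typed errors might look like this minimal sketch (a toy protocol step, not an actual MirageOS API):

```ocaml
(* A protocol step takes an abstract state and an input byte vector,
   and returns either a new state plus an optional reply, or an
   explicitly typed error the caller must handle. *)
type state = { received : int }

(* The error cases are declared in the API as a variant type. *)
type error = [ `Empty_input | `Too_large of int ]

let step (st : state) (data : string) :
    (state * string option, error) result =
  match String.length data with
  | 0 -> Error `Empty_input
  | n when n > 1024 -> Error (`Too_large n)
  | n -> Ok ({ received = st.received + n }, Some "ack")

(* The caller is forced by the type checker to handle every case. *)
let () =
  match step { received = 0 } "ping" with
  | Ok (st', Some reply) ->
    Printf.printf "%d bytes, reply %s\n" st'.received reply
  | Ok (_, None) -> ()
  | Error `Empty_input -> prerr_endline "empty input"
  | Error (`Too_large n) -> Printf.eprintf "too large: %d\n" n
```

Because `step` mutates nothing, the same input state and data always yield the same result, which is what makes this style easy to test and reason about.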
So if you can compromise the system itself, you get the private key 00:33:10.770 --> 00:33:16.420 and you can do whatever you want with it. And being on the bitcoin blockchain, it 00:33:16.420 --> 00:33:22.880 also means it's transparent, so everyone can see whether it has been hacked or not. 00:33:22.880 --> 00:33:30.450 Yeah, and it was online for three years and it was not hacked. But the bitcoins we 00:33:30.450 --> 00:33:35.630 got were only borrowed from friends of ours, and they were then reused in other 00:33:35.630 --> 00:33:40.370 projects. It's still online. And you can see here on the right that we had some 00:33:40.370 --> 00:33:49.740 HTTP traffic, like an aggregate of maybe 600,000 hits there. Now I have a size 00:33:49.740 --> 00:33:54.600 comparison of the Bitcoin Pinata on the left. You can see the unikernel, which is 00:33:54.600 --> 00:34:00.410 less than 10 megabytes in size, or in source code maybe a hundred thousand 00:34:00.410 --> 00:34:06.000 lines of code. On the right-hand side you have a very similar thing, but running as 00:34:06.000 --> 00:34:16.489 a Linux service: it runs openssl s_server, which is the most minimal TLS server you 00:34:16.489 --> 00:34:22.820 can get on a Linux system using OpenSSL. And there we have maybe a 00:34:22.820 --> 00:34:29.019 size of 200 megabytes and maybe two million lines of code. So that's 00:34:29.019 --> 00:34:36.409 roughly a factor of 25. In other examples, we even got a bit less code and a much bigger 00:34:36.409 --> 00:34:45.310 effect. Performance analysis: well, in 2015 we did some evaluation 00:34:45.310 --> 00:34:50.659 of our TLS stack, and it turns out we're in the same ballpark as other 00:34:50.659 --> 00:34:56.769 implementations. Another case study is a CalDAV server, which we developed last 00:34:56.769 --> 00:35:04.729 year with a grant from the Prototype Fund, which is German government funding.
It is 00:35:04.729 --> 00:35:09.279 interoperable with other clients. It stores data in a remote git repository. So we 00:35:09.279 --> 00:35:14.140 don't use any block device or persistent storage, but we store it in a git 00:35:14.140 --> 00:35:18.599 repository, so whenever you add a calendar event, it actually does a git 00:35:18.599 --> 00:35:24.829 push. And we also recently got some integration with a CalDAV web client, which is 00:35:24.829 --> 00:35:30.980 a user interface written in JavaScript. And we 00:35:30.980 --> 00:35:36.940 just bundle that with the thing. It's online, open source; there is a demo 00:35:36.940 --> 00:35:42.440 server and the data repository online. Yes, some statistics, and I zoom in 00:35:42.440 --> 00:35:47.970 directly on the CPU usage. So we had the luck that for half of the month we ran 00:35:47.970 --> 00:35:56.170 it as a process on a FreeBSD system. That was roughly the first half, until 00:35:56.170 --> 00:36:01.420 here. And then at some point we thought, oh yeah, let's migrate it to a MirageOS 00:36:01.420 --> 00:36:06.329 unikernel and not run the FreeBSD system below it. And you can see here on the x 00:36:06.329 --> 00:36:11.460 axis the time. So that was the month of June, starting with the first of June on 00:36:11.460 --> 00:36:16.950 the left and the last of June on the right. And on the y axis, you have the 00:36:16.950 --> 00:36:22.829 number of CPU seconds here on the left or the number of CPU ticks here on the right. 00:36:22.829 --> 00:36:28.650 The CPU ticks are virtual CPU ticks, which are debug counters from the hypervisor, 00:36:28.650 --> 00:36:33.430 so from bhyve on FreeBSD in that system. And what you can see here is this 00:36:33.430 --> 00:36:39.460 massive drop by a factor of roughly 10. And that is when we switched from a Unix 00:36:39.460 --> 00:36:46.040 virtual machine with the process to a freestanding unikernel.
So we actually use 00:36:46.040 --> 00:36:50.910 far fewer resources. And if we look at the bigger picture here, we also see that 00:36:50.910 --> 00:36:57.710 the memory dropped by a factor of 10 or even more. This is now a logarithmic scale 00:36:57.710 --> 00:37:03.039 here on the y axis. The network bandwidth increased quite a bit, because now we do 00:37:03.039 --> 00:37:09.549 all the monitoring traffic via the network interface as well, and so on. Okay, that's CalDAV. 00:37:09.549 --> 00:37:16.759 Another case study is authoritative DNS servers. And I just recently wrote a 00:37:16.759 --> 00:37:22.329 tutorial on that, which I will skip because I'm a bit short on time. Another 00:37:22.329 --> 00:37:27.210 case study is a firewall for QubesOS. QubesOS is a reasonably secure operating 00:37:27.210 --> 00:37:33.390 system which uses Xen for isolation of workspaces and applications such as a PDF 00:37:33.390 --> 00:37:38.609 reader. So whenever you receive a PDF, you start a virtual machine which is only 00:37:38.609 --> 00:37:48.160 run once, just to open and read your PDF. And qubes-mirage- 00:37:48.160 --> 00:37:54.039 firewall is now a tiny replacement for the Linux-based firewall, 00:37:54.039 --> 00:38:02.160 written in OCaml. And instead of roughly 300 MB, you only use 32 MB 00:38:02.160 --> 00:38:09.259 of memory. There's now also, recently, some support for dynamic firewall rules 00:38:09.259 --> 00:38:16.760 as defined by Qubes 4.0. That is not yet merged into master, but it's under 00:38:16.760 --> 00:38:23.480 review. Libraries in MirageOS: since we write everything from scratch 00:38:23.480 --> 00:38:29.750 in OCaml, we don't have every protocol, but we have quite a few 00:38:29.750 --> 00:38:35.280 protocols. There are also more unikernels right now, which you can see here in the 00:38:35.280 --> 00:38:41.849 slides, also online in the Fahrplan, so you can click on the links later.
Reproducible 00:38:41.849 --> 00:38:47.509 builds. So for security purposes we don't yet ship binaries. But I plan to 00:38:47.509 --> 00:38:51.540 ship binaries, and in order to ship binaries, I don't want to ship non- 00:38:51.540 --> 00:38:56.549 reproducible binaries. What are reproducible builds? Well, it means that if you have the 00:38:56.549 --> 00:39:05.960 same source code, you should get bit-identical binary output. And common issues are 00:39:05.960 --> 00:39:14.640 temporary file names and timestamps and so on. In December we managed in MirageOS to 00:39:14.640 --> 00:39:21.270 get some tooling on track to actually test the reproducibility of unikernels, and we 00:39:21.270 --> 00:39:27.839 fixed some issues, and now all the tested MirageOS unikernels are reproducible, which 00:39:27.839 --> 00:39:34.009 are basically most of them from this list. Another topic is supply chain security, 00:39:34.009 --> 00:39:42.210 which is important, I think. This is still a work in progress; we still 00:39:42.210 --> 00:39:48.859 haven't deployed that widely. But there are some test repositories out there to 00:39:48.859 --> 00:39:56.869 provide signatures signed by the actual authors of a library, and 00:39:56.869 --> 00:40:02.670 getting them across so that the user of the library can verify them. And some 00:40:02.670 --> 00:40:09.390 decentralized authorization and delegation of that. What about deployment? Well, in 00:40:09.390 --> 00:40:15.999 conventional orchestration systems such as Kubernetes and so on, we don't yet have 00:40:15.999 --> 00:40:24.220 a proper integration of MirageOS, but we would like to get some proper integration 00:40:24.220 --> 00:40:31.700 there. We already generate some libvirt.xml files from Mirage. So for each 00:40:31.700 --> 00:40:37.690 unikernel you get the libvirt.xml, and you can run that in your libvirt- 00:40:37.690 --> 00:40:44.529 based orchestration system.
For Xen, we also generate those .xl and .xe files, 00:40:44.529 --> 00:40:49.500 which I personally don't really know much about, but that's it. On the 00:40:49.500 --> 00:40:56.289 other side, I developed an orchestration system called Albatross, because I was a 00:40:56.289 --> 00:41:02.529 bit worried if I now have those tiny unikernels which are megabytes in size, 00:41:02.529 --> 00:41:09.089 and now I should trust the big Kubernetes, which is maybe a million lines of code 00:41:09.089 --> 00:41:15.730 running on the host system with privileges. So I thought, oh well, let's 00:41:15.730 --> 00:41:21.339 try to come up with a minimal orchestration system which allows me some 00:41:21.339 --> 00:41:26.630 console access. So I want to see the debug messages, or whenever it fails to boot I 00:41:26.630 --> 00:41:32.099 want to see the output of the console. I want to get some metrics, like the Grafana 00:41:32.099 --> 00:41:38.930 screenshot you just saw. And that's basically it. Then, since I also developed 00:41:38.930 --> 00:41:45.329 a TLS stack, I thought, oh yeah, why not just use it for remote deployment? So 00:41:45.329 --> 00:41:51.499 in TLS you have mutual authentication, you can have client certificates, and a 00:41:51.499 --> 00:41:57.460 certificate itself is more or less an authenticated key-value store, because you 00:41:57.460 --> 00:42:03.859 have those extensions in X.509 version 3 and you can put arbitrary data in there, 00:42:03.859 --> 00:42:09.190 with keys being so-called object identifiers and values being whatever 00:42:09.190 --> 00:42:16.539 else. X.509 certificates have 00:42:16.539 --> 00:42:23.550 the advantage that during a TLS handshake they are transferred on the wire not in 00:42:23.550 --> 00:42:33.950 base64 or PEM encoding, as you usually see them, but in a binary encoding, which is much 00:42:33.950 --> 00:42:41.049 nicer in terms of the amount of bits you transfer.
So it's not transferred in base64, but 00:42:41.049 --> 00:42:45.820 directly in raw binary, basically. And with Albatross you can basically do a TLS 00:42:45.820 --> 00:42:50.769 handshake, and in that client certificate you present, you already have the 00:42:50.769 --> 00:42:58.359 unikernel image and the name and the boot arguments, and you just deploy it directly. 00:42:58.359 --> 00:43:04.229 Also, in X.509 you have a chain of certificate authorities, which you send 00:43:04.229 --> 00:43:09.150 along, and this chain of certificate authorities also contains some extensions 00:43:09.150 --> 00:43:14.720 in order to specify which policies are active. So: how many virtual machines are 00:43:14.720 --> 00:43:21.599 you able to deploy on my system, how much memory you have access to, and which 00:43:21.599 --> 00:43:26.930 bridges or which network interfaces you have access to. So Albatross is really a 00:43:26.930 --> 00:43:33.779 minimal orchestration system running as a family of Unix processes. It's maybe 3000 00:43:33.779 --> 00:43:41.319 lines of OCaml code, but using the TLS stack and so on. But yeah, it 00:43:41.319 --> 00:43:46.630 seems to work pretty well. I at least use it for more than two dozen unikernels at 00:43:46.630 --> 00:43:52.191 any point in time. What about the community? Well, the whole MirageOS project 00:43:52.191 --> 00:43:57.930 started around 2008 at the University of Cambridge, so it used to be a research 00:43:57.930 --> 00:44:03.819 project, which still has a lot of ongoing student projects at the University of 00:44:03.819 --> 00:44:10.559 Cambridge. But now it's an open source, permissively licensed, mostly BSD-licensed 00:44:10.559 --> 00:44:20.769 thing, where we have a community event every half a year and a retreat in Morocco, where 00:44:20.769 --> 00:44:25.819 we also use our own unikernels, like the DHCP server and the DNS resolver and so on.
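The policies carried in the certificate chain can be modeled as a small record plus a check, roughly like this (illustrative only; the field and function names here are hypothetical, not the real Albatross types, and the real system reads them from X.509 extensions):

```ocaml
(* A delegation policy as described in the talk: how many unikernels
   may run, how much memory is available, which bridges are allowed. *)
type policy = {
  max_unikernels : int;
  max_memory_mb : int;
  bridges : string list;
}

(* A deployment request extracted from a client certificate. *)
type request = { memory_mb : int; bridge : string }

(* Check a request against the policy, given how many unikernels the
   client is already running; errors are explicit values, not
   exceptions. *)
let allowed ~(running : int) (p : policy) (r : request) :
    (unit, string) result =
  if running >= p.max_unikernels then Error "unikernel limit reached"
  else if r.memory_mb > p.max_memory_mb then Error "memory limit exceeded"
  else if not (List.mem r.bridge p.bridges) then Error "bridge not permitted"
  else Ok ()

let () =
  let p = { max_unikernels = 2; max_memory_mb = 512; bridges = [ "service" ] } in
  match allowed ~running:0 p { memory_mb = 256; bridge = "service" } with
  | Ok () -> print_endline "deploy allowed"
  | Error msg -> print_endline ("denied: " ^ msg)
```

In a chain of delegating certificates, each link would carry such a policy, and the effective policy is the intersection of all of them.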
00:44:25.819 --> 00:44:31.700 We just use them to test them and to see: how does it behave, and does it work for 00:44:31.700 --> 00:44:40.170 us? We have quite a lot of open source contributors from all over, and 00:44:40.170 --> 00:44:46.420 some of the MirageOS libraries have also been used, or are still used, in Docker 00:44:46.420 --> 00:44:51.810 technology, Docker for Mac and Docker for Windows, which emulates the guest system 00:44:51.810 --> 00:45:02.089 or which needs some wrappers, and there a lot of OCaml code is used. So to finish 00:45:02.089 --> 00:45:07.319 my talk, I would like to show another slide, which is that Rome wasn't built in a 00:45:07.319 --> 00:45:14.920 day. So where we are, to conclude: we have a radical approach to operating 00:45:14.920 --> 00:45:22.089 systems development. We have security from the ground up, with much less code, 00:45:22.089 --> 00:45:30.079 and we also have many fewer attack vectors, because we use a memory-safe 00:45:30.079 --> 00:45:39.079 language. We have reduced the carbon footprint, as I mentioned at the start of 00:45:39.079 --> 00:45:45.619 the talk, because we use much less CPU time, but also much less memory. So we use 00:45:45.619 --> 00:45:53.190 fewer resources. MirageOS itself and OCaml have reasonable performance. We have 00:45:53.190 --> 00:45:56.979 seen some statistics about the TLS stack, that it was in the same ballpark as 00:45:56.979 --> 00:46:05.519 OpenSSL and PolarSSL, which is nowadays mbed TLS. And MirageOS unikernels, since 00:46:05.519 --> 00:46:10.589 they don't really need to negotiate features and wait for the SCSI bus and 00:46:10.589 --> 00:46:14.759 so on, actually boot in milliseconds, not in seconds. They do 00:46:14.759 --> 00:46:21.939 no hardware probing and so on, but know at startup time what to expect.
I 00:46:21.939 --> 00:46:27.489 would like to thank everybody who is and was involved in this whole technology 00:46:27.489 --> 00:46:32.769 stack because I myself I program quite a bit of O'Caml, but I wouldn't have been 00:46:32.769 --> 00:46:39.009 able to do that on my own. It is just a bit too big. MirageOS currently spends 00:46:39.009 --> 00:46:45.490 around maybe 200 different git repositories with the libraries, mostly 00:46:45.490 --> 00:46:52.500 developed on GitHub and open source. I am at the moment working in a nonprofit 00:46:52.500 --> 00:46:56.890 company in Germany, which is called the Center for the Cultivation of Technology 00:46:56.890 --> 00:47:02.650 with a project called robur. So we work in a collective way to develop full-stack 00:47:02.650 --> 00:47:08.030 MirageOS unikernels. That's why I'm happy to do that from Dublin. And if you're 00:47:08.030 --> 00:47:14.450 interested, please talk to us. I have some selected related talks, there are much 00:47:14.450 --> 00:47:20.869 more talks about MirageOS. But here is just a short list of something, if you're 00:47:20.869 --> 00:47:29.529 interested in some certain aspects, please help yourself to view them. 00:47:29.529 --> 00:47:31.761 That's all from me. 00:47:31.761 --> 00:47:37.380 Applause 00:47:37.380 --> 00:47:46.440 Herald: Thank you very much. There's a bit over 10 minutes of time for questions. If 00:47:46.440 --> 00:47:50.010 you have any questions go to the microphone. There's several microphones 00:47:50.010 --> 00:47:54.210 around the room. Go ahead. Question: Thank you very much for the talk 00:47:54.210 --> 00:47:57.210 - Herald: Writ of order. Thanking the 00:47:57.210 --> 00:48:01.109 speaker can be done afterwards. Questions are questions, so short sentences ending 00:48:01.109 --> 00:48:05.989 with a question mark. Sorry, do go ahead. Question: If I want to try this at home, 00:48:05.989 --> 00:48:08.989 what do I need? Is a raspi sufficient? No, it isn't. 
00:48:08.989 --> 00:48:15.309 Hannes: That is an excellent question. So I usually develop it on such a ThinkPad 00:48:15.309 --> 00:48:23.019 machine, but we actually also support ARM64. So if you have a Raspberry Pi 00:48:23.019 --> 00:48:28.890 3+, which I think has the virtualization bits, and a Linux kernel which is recent 00:48:28.890 --> 00:48:35.249 enough to support KVM on that Raspberry Pi 3+, then you can try it out there. 00:48:35.249 --> 00:48:41.789 Herald: Next question. Question: Well, currently most MirageOS 00:48:41.789 --> 00:48:51.719 unikernels are used for running server applications. And so obviously all this 00:48:51.719 --> 00:48:58.230 static preconfiguration of OCaml and maybe Ada SPARK is fine for that. But what 00:48:58.230 --> 00:49:03.819 do you think about... Will it ever be possible to use the same approach, with all 00:49:03.819 --> 00:49:10.009 this static preconfiguration, for these very dynamic end user desktop systems, for 00:49:10.009 --> 00:49:15.220 example, which at least currently use quite a lot of plug-and-play? 00:49:15.220 --> 00:49:19.430 Hannes: Do you have an example? What are you thinking about? 00:49:19.430 --> 00:49:26.410 Question: Well, I'm not that much into the topic of the SPARK stuff, but you said 00:49:26.410 --> 00:49:32.239 that all the communication paths have to be defined in advance. So especially with 00:49:32.239 --> 00:49:37.779 plug-and-play devices, like all this USB stuff, we either have to allow everything 00:49:37.779 --> 00:49:46.549 in advance, or we may have to reboot parts of the unikernels in between to allow 00:49:46.549 --> 00:49:54.660 rerouting stuff. Hannes: Yes. Yes. So I mean, if you want to 00:49:54.660 --> 00:50:01.119 design a USB plug-and-play system, you can think of it as: you plug in the 00:50:01.119 --> 00:50:07.839 USB stick somewhere, and then you start the unikernel, which only has access to that USB stick. 00:50:07.839 --> 00:50:15.319 But having a unikernel...
Well, I wouldn't design a unikernel which randomly does 00:50:15.319 --> 00:50:23.569 plug and play with the outer world, basically. So. And one of the applications 00:50:23.569 --> 00:50:30.800 I've listed here at the top is a picture viewer, which is a unikernel that 00:50:30.800 --> 00:50:37.400 also, at the moment, I think, has static embedded data in it, but is able, on Qubes 00:50:37.400 --> 00:50:43.819 OS or on Unix with SDL, to display the images. And you can think of some way, via 00:50:43.819 --> 00:50:48.670 a network or so, to access the images, so you don't need to compile 00:50:48.670 --> 00:50:54.380 the images in, but you can have a git repository or a TCP server or whatever in 00:50:54.380 --> 00:51:01.079 order to receive the images. So what I didn't mention is that 00:51:01.079 --> 00:51:05.759 MirageOS, instead of being general purpose, having a shell, and you can do 00:51:05.759 --> 00:51:11.279 everything with it, is such that each service, each unikernel, is a single- 00:51:11.279 --> 00:51:16.529 service thing. So you can't do everything with it. And I think that is an advantage 00:51:16.529 --> 00:51:23.309 from a lot of points of view. I agree that if you have a highly dynamic system, 00:51:23.309 --> 00:51:27.680 you may have some trouble with how to integrate that. 00:51:27.680 --> 00:51:38.679 Herald: Are there any other questions? No, it appears not. In which case, 00:51:38.679 --> 00:51:41.111 thank you again, Hannes. Warm applause for Hannes. 00:51:41.111 --> 00:51:44.529 Applause 00:51:44.529 --> 00:51:49.438 Outro music 00:51:49.438 --> 00:52:12.000 subtitles created by c3subtitles.de in the year 2020. Join, and help us!