Welcome back, the next talk will be Jan Kiszka on getting more Debian into our civil infrastructure.

Thank you, Michael. So my name is Jan Kiszka. You may not know me: I'm not a Debian Developer, not a Debian Maintainer, I'm just an upstream hacker. I'm working for Siemens, and I have been part of the Linux team there for more than 10 years now. We have been supporting our business units in getting Linux into their products successfully for that long, even longer actually. Today, I'm representing a collaborative project that has some relationship with Debian, and more soon.

First of all, maybe a surprise to some of you: our civilization heavily runs on Linux. You may now think of the kind of devices with some Linux inside, or of cloud servers running Linux. But actually, this is about devices closer to us. All over our infrastructure there are control systems and management systems, and many, many of them run Linux inside. If you traveled with Deutsche Bahn to this event these days, there was some Linux system on the train as well, and likewise on the control side. Energy generation: power plants are also run with Linux, in very interesting ways, in positive ways. Industrial automation: the factories have control systems inside, and quite a few of them run Linux. And also other systems, like health care diagnostic systems. These big balls up there are magnetic resonance imaging systems, and they have been running on Linux for over a decade now. Building automation, too, not at home but in the professional building area.

Actually, as I said, the train systems are going to be more on Debian soon. We have had Debian in power generation for quite a while; "we", in this case, is Siemens. The box underneath, in the third row, the industrial switch there, is running Debian. And the health care device is still on Ubuntu, but will soon be Debian as well. Just to give some examples. These are the areas where we, as a group, and we, as Siemens, are active.

But there are some problems with this. Just take an example from a railway system. Usually, these kinds of devices and installations have a lifetime of 25 or 30 years. It used to be quite simple with the old devices, simple in the sense that they were mechanical and pretty robust. I was once told that one of these interlocking systems was basically left in a box out there for 50 years, and no one touched the whole thing for 50 years. These times are a little bit over. Nowadays, we have more electronics in these systems, and they of course contain software.

What does that mean? Just to give you an idea of how development looks in this domain: development takes quite a long time until the product is ready, 3 to 5 years. Then, in the railway domain, it's mostly about customizing the systems for specific installations; the railway systems, not only in Europe, are kind of messy regarding their differences. So you have specific requirements from the customers, the railway operators, to adjust these systems to their needs. And you see that by then, after 5 years already, a Debian version would be out of maintenance, and if you add another year, you can start over again.
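To make that timeline concrete, here is a small illustrative sketch using the rough numbers from the talk. The concrete spans are assumptions for the sake of the example (about five years of Debian maintenance including LTS, the upper end of the 3-to-5-year development time, and a 25-to-30-year service life):

```python
# Illustrative timeline arithmetic with the rough numbers from the talk.
# The concrete spans are assumptions for this sketch, not official figures.

DEBIAN_SUPPORT_YEARS = 5   # approx. lifetime of a Debian release incl. LTS
DEVELOPMENT_YEARS = 5      # product development: 3 to 5 years, upper end
SERVICE_LIFE_YEARS = 27    # device installations last 25 to 30 years

def support_left_at_deployment(dev_years: int, support_years: int) -> int:
    """Years of distro security maintenance left when the product ships,
    assuming the release was picked at the start of development."""
    return support_years - dev_years

remaining = support_left_at_deployment(DEVELOPMENT_YEARS, DEBIAN_SUPPORT_YEARS)
uncovered = SERVICE_LIFE_YEARS - max(remaining, 0)

print(f"Distro maintenance left when the product ships: {remaining} year(s)")
print(f"Service years without distro maintenance: {uncovered} of {SERVICE_LIFE_YEARS}")
```

With these numbers, the base system is already at the end of its maintenance window when the product enters service, which is exactly the gap described here.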
During development you may still change the system, but later on it gets hard to change it, because then the interesting part starts in this domain, and not only in this one: safety and security assessment and approval for these systems. And that also takes time. For example, in Germany, you go to the Eisenbahn-Bundesamt and ask for permission to run that train on the track, and if they say "Mmh, not happy with it", you do it over again, and that takes time. And if you change something in the system, it becomes interesting, because some of the certification aspects become invalid and you have to redo them. And then, of course, these trains and installations have a long life, as I mentioned before. So how do you deal with this in electronic, software-driven devices over this long phase? That's our challenge, and this is just one example; there are more in this area.

At the same time, what we see now is these fancy buzzwords from the cloud business entering our conservative, slowly moving domain. We talk about IoT, industrial IoT, so connected devices. We talk about edge computing, which means getting the power of the cloud to the devices in the field, closer to where the real things happen. So networking becomes a topic. In the past, you basically built a system, locked it up physically and never touched it again, unless the customer complained that there was some bug inside. These days, the customers ask us to do frequent updates, and actually the customers [inaudible] ask for this. So you have to have some security maintenance concept for this, which means regular updates, regular fixes, and that of course clashes with the way things are done here, with slow-moving and long-running support cycles.

To summarize, we have to maintain our devices in the field for a very long time, and so far this was mostly done individually. Each company, and quite frequently also each product group or development team inside a company, did it individually. So everyone had their own kernel, everyone had their own base system; it was easy to build up, so it should be easy to maintain. Of course it's not. This was one thing, one important thing. And then, of course, we are not always completely happy with what the free software gives us. There is some need to make things more robust, more secure, more reliable. So we have to work with these components and improve them, mostly upstream, and that, of course, is also a challenge we have to address in this area. And we have to catch up with the trends coming in from the server and cloud space.

So with these challenges, that was the point where we, in this case a number of big users of industrial open source systems, came together and created a new collaborative project; that's what you do in the open source area. This project is called the Civil Infrastructure Platform. It's under the umbrella of the Linux Foundation. There are many Linux Foundation projects you may have seen, but most of them are in the area of cloud computing, media, or automotive computing; this one is actually even more conservative than the other ones, and it's also comparably small. Our goal is to build an open source base layer for these application scenarios, based on free software, based on Linux. We started two years ago. That's basically our structure, to give you an idea: member companies, the three at the top being the founding platinum members, Hitachi, Toshiba and Siemens. We have Codethink and Plat'Home on board, who have been with us from the beginning as well. Renesas joined us, and just recently also Moxa.
So if you compare this with other collaborative projects, it's a comparatively small one, so our budget is also limited. It's still decent enough, and, well, we are growing. Based on this budget, we have some developers being paid; Ben is paid this way, and you will see later on why. And we have people from the member companies working in the communities, and we are ramping up our work with the communities to improve the base layers for our needs. Everything is open source; we have a GitLab repo as well, and you can look up there what's going on.

So, the main areas of activity we are working on right now: four areas. Kernel maintenance: we started by declaring one kernel the CIP kernel, to give it an extended support phase of 10 years. This is what we're aiming for. That is already feasible for some enterprise distros in their specific area, but here we are talking about an industrial, embedded area, so there is some challenge. I'm saying 10 years; sometimes it's written as 15 years, and we will see after 10 years whether we follow up on that. Along with this, of course, comes the need for real-time support. Currently it's a separate branch, but it's going to be integrated eventually, to have PREEMPT_RT in the CIP kernel itself. As I mentioned before, Ben is currently our 4.4 CIP kernel maintainer. This is the core, basically, where we started our activities. We continued by extending this towards test infrastructure: we invested a bit in improving existing test infrastructure, and we are now ramping up an internal test lab, just to enable kernel testing, of course.
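As a concrete starting point for that kernel work, here is a minimal sketch of how one might fetch the CIP kernel sources. The repository URL and the branch names (linux-4.4.y-cip, and linux-4.4.y-cip-rt for the separate real-time branch mentioned above) reflect the layout of the CIP kernel tree on kernel.org, but treat them as assumptions to verify:

```python
# Minimal sketch: fetch the CIP 4.4 kernel sources. The repo URL and branch
# names are assumptions based on the CIP kernel tree on kernel.org -- verify
# them before relying on this.
import subprocess

REPO = "https://git.kernel.org/pub/scm/linux/kernel/git/cip/linux-cip.git"
BRANCH = "linux-4.4.y-cip"        # CIP kernel series with extended support
RT_BRANCH = "linux-4.4.y-cip-rt"  # PREEMPT_RT branch, still separate for now

def clone_cip_kernel(branch: str = BRANCH, dest: str = "linux-cip") -> None:
    """Shallow-clone a single branch of the CIP kernel tree."""
    subprocess.run(
        ["git", "clone", "--depth", "1", "--branch", branch, REPO, dest],
        check=True,
    )

if __name__ == "__main__":
    clone_cip_kernel()  # or clone_cip_kernel(RT_BRANCH) for the RT branch
```

The shallow clone keeps the download small; drop the --depth option if you want the full history of the tree.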