I work at a place called the Sunlight Foundation. I do government transparency and accountability: who's paying whom how much, who's voting which way, stuff like that. I've been doing Ubuntu stuff for a while, and Debian stuff for a little while, since 2009. I'm on the FTP Team, and I've got other stuff that I'm doing, but that's not really as important. Oh great, link not found! Cool! That is a giant beautiful picture of the Sunlight logo. This is just the generic intro to any of my slides, so I apologise, but if anyone's interested in Sunlight, feel free to talk to me. I will gladly tell you all about how awesome Sunlight is and how much fun I have working there.

Right, so, Docker. What is Docker? This is sort of the existential question. No one quite knows, right? Everyone is using Docker for all these different things, and it's kind of super confusing, and that's really disappointing. Basically, Docker is a process-level isolation framework. That's all it is. It uses Linux kernel namespacing to isolate a process, a single process, and tracks the processes inside the container using cgroups. Also, I forgot to mention this because I dove right in: this talk is going to be on the short side, because I'm hoping that we're going to have a bit of discussion about Docker's role in Debian, in what ways we, as the Docker maintenance team, can help Debian, and in what ways this can flow back and forth. Also, I do work on Docker, but I'm just the Debian hacker; I do this for fun, so I will cover some of the cons that maybe people don't talk about as much.

Right, so Docker provides a whole bunch of tools used to manage and wrap these processes. For instance, docker run, which lets you run code inside a container. Remember, it's for a single process; it's not a virtual machine. You just spawn it up and it wraps a process and keeps it semi-isolated so that it runs properly. Or stuff to pull images, if you have images in a central location on the index. So if you 'docker pull paultag/postgres', then you get my particular Postgres flavour. Docker is higher-level than lxc, but lower-level than something like ansible, vagrant or fabric. Docker provides these primitives to work on the system, these things that allow you to run processes in kind of a sane and normal way, but it's not there to solve all of the configuration management problems, and there's definitely configuration management to do once a Docker install is on your system.

So, one technique that I have is: all my containers are read-only, and any data that changes in the container - so for instance with Postgres you have /var/lib/postgres - is volume mounted, which is like a bind mount out of the container, and that lives on the host system in /srv, and then I can just snapshot that and keep it backed up. I often use ansible to provision the stuff that's in /srv. So I won't provision anything inside the container, because it's only running a single process - you can't ssh in there - but using something like ansible, vagrant or fabric you can coordinate a whole bunch of Docker containers to do some stuff that's pretty powerful.

Originally Docker wrapped lxc, just to kind of give you a sense of where it is on the stack, but that ended up getting reimplemented, just sort of in raw Go, so it no longer uses the lxc backend by default. I think you might still be able to turn that on. You can, but don't do it. [laughs] It's probably going to end up breaking stuff in a nasty way. There were a whole bunch of incompatibilities after a couple of versions.
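A rough sketch of that pull-and-run pattern, with the data directory volume-mounted out to the host; the image name is the example from above, and the exact host path under /srv is illustrative:

    # fetch the image from the index, then run it with its data directory
    # bind-mounted from the host, so the container itself stays disposable
    docker pull paultag/postgres
    docker run -d --name postgres \
        -v /srv/pault.ag/postgres:/var/lib/postgres \
        paultag/postgres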
So basically Docker is slightly above lxc, but not quite at the level of vagrant or anything like that. [question]: Is Docker in jessie at the moment? [Paul]: Docker is currently in jessie, yeah. Docker 1.0, which upstream assured me was stable, but they've never released security or patch fixes for it. So we're probably going to upload 1.2 pretty soon, because there was a bug in golang 1.3 which affected us. Until then, I'm waiting on a package in the NEW queue, ironically. [laughter] Yeah, it's embarrassing.

What Docker is not: Docker is not a virtual machine, and I cannot beat this point home enough. It should be a single process; if you start stretching that, weird stuff's going to start to happen and you're going to be in for a bad time. Some people use supervisord, or whatever else, to manage a whole bunch of different stuff, and if you're careful, that's fine. If you know what you're doing, fine. But in general, if you're just trying to "Dockerize" something, use a single process per container. So I have a Postgres container, then a webapp container, and they're linked so they can talk to one another. That's usually the architecture of standard, by-the-book deployments. It's not one container for the entire application. It's not like I 'docker run' - what do people run nowadays, etherpad? That might be kind of outdated. Yeah, there was a question. [question]: Do you mean a single process, or? [Paul]: So the question was, "Single process, question mark?" and the answer is "yes". Single PID. Actually, sorry, no, that's wrong. The Docker instance should be starting a single PID, but that can spawn other things. Perfectly fine. If for instance you have wsgi and that spawns off a whole bunch of workers, that's totally reasonable, if that's how it operates. But not something like etherpad where you're having a database and the application in the same container. Keep it to sort of logically the same stuff.

Docker is not perfect isolation from the host; the goal is to isolate processes, not to prevent exploits. The docker group is root-equivalent, so if you're part of the docker group and you can launch Docker containers, it is trivial to get root on the host, because you can just start a new container with the root of the filesystem mounted inside the container, which you can chroot into, and then be root. So don't think of this as a one-size-fits-all security system; it's just providing basic wrapping around the process to make sure it's running in an environment that you can nail down. Basically this lets my unstable server run Postgres from stable. The web apps that I'm really tightly controlling, because they're all running on Python 3.4, are in unstable containers. So I can have different things for different daemons, which is kind of neat.

So, why? Which is the bigger question. Why are we wasting all of our time wrapping all of this stuff in Docker containers? And that's a good question. Basically it lets you abstract the idea of a process and really not care about the host environment too much. So when I test something locally, on my local machine, and deploy it to one of my VPSs, I can be pretty sure that that process is going to run in roughly the same way. Obviously there might be differences in the kernel; if there are differences in the kernel, okay, fine, that's going to have some problems. But basically it lets you contain and abstract these processes, and it lets me trivially move stuff between servers or test on my local machine.
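To make the docker-group point above concrete, any member of the docker group can do something along these lines (a sketch; don't run it on a box you care about):

    # bind-mount the host's root filesystem into a container and chroot into it;
    # the process inside runs as root, so this is effectively a root shell on the host
    docker run -t -i -v /:/host debian:unstable chroot /host /bin/bash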
You can reproduce the environment with something very lightweight. Contrast that with a virtual machine, where you have the overhead of the entire operating system: you're actually virtualizing the entire OS and all of the daemons included with that, and that's not entirely necessary in a lot of situations. Essentially it doesn't matter what- that's a typo, yeah, "containenr". [laughs] I definitely wrote these kind of late, so I apologise. Essentially it means I can deploy my stuff on whatever host I'm given, because I'm cheap and I really don't like paying for servers. So if someone decides they want to give me access to a Fedora host, I can host my stuff inside stable containers and not stress about it too much. There's a little bit of stuff you have to worry about, but essentially this lets me run stable Postgres, use Python 3.4, play around with code, isolate it, move it around, that sort of thing.

The comparison that upstream makes a lot, where the name comes from, is ISO containers, the ones you see on trucks, the big metal things. They're super cool. Hipsters are living in them now. [laughter] Basically you can just put stuff in them, pack them full of whatever, seal them up, and it doesn't matter if you put them on a boat or a truck; they're just going somewhere, you don't really care. The comparison here is that these are the ISO containers of the future, with processes and computers. Docker itself is the big docker ship full of ISO containers. So you can basically create hosts that host all of your code without really caring what's inside, because they all look the same to you. They all have the same docker run interface, they all have the same docker pull interface. Then inside the container you can just be concerned about how you pack it, but the host doesn't care, and that ends up being pretty important.

For super complex and hard-to-set-up software, this can help remove a lot of complexity in the actual initial setup, because sometimes setting up processes like these can be extremely difficult, as I'm sure everyone here knows. So if you have some weird historic way of setting up an application, or it requires some weird configuration files that are mostly kind of standard, then you can just make sure that's all in place. In fact, at work I've Dockerized a whole bunch of scrapers. A large part of my day job is scraping terrible government websites that have about 3 or 4 tags. I'm not joking! [laughter] We're all laughing, but this is my life! [laughter] The sites are all complicated, and one of them times out every five minutes, so even if you're a human browsing it, it kicks you back to the main page. Augh, it's terrible! A lot of the scrape infrastructure is kind of gnarly, and setting up the actual scrapers can be a bit of a pain. Making sure that those scrapers run the same way in development and production is super handy, and while it's easy for me to get them going, because I wrote a huge chunk of the code, it's not as accessible for other people. So one of the things I did recently, with all of the scrape infrastructure: I'm currently working on a daemon that will run the scrapers inside Docker containers.
So essentially I've packaged up the particular scrapers; I have state scrapers, which are state legislative scrapers, and nightly it will 'docker run paultag/scrapers-us-state-alabama', and it'll go off to Alabama, scrape all of the data down and insert it into Postgres. This lets us build continuously from git, so as soon as I push, it'll rebuild the image, and that image will be used in the actual run later on in the day. So it doesn't require mucking around with, like- I don't know if anyone's used bamboo, which is some non-free Atlassian stuff, which is what we're using, what I guess we're still using. Essentially it makes you rebuild an AMI, one of the Amazon images, every time you update the environment, which is horrendous. And it has a 30-minute, like Indiana Jones, wall that's coming down. After 30 minutes it shuts the machine down, because it thinks it's idle. So you have to make the change in 30 minutes, and then it shuts down, and then you're like, "God, gotta do it again." Before that we used Jenkins, which was good enough, but kind of a pain too. When everything's running in the same environment, it can be a bit of a pain. So by Dockerizing all of this, essentially I can give anyone this scraper, and if they're interested in having the data, they can just docker run this command and everything just kind of works. It's like OpenGov in a box. Which is pretty awesome. I've been working on trying to Dockerize more and more of the periodic jobs that get run, and so far it's pretty thrilling; the results are really, really promising, and I hope that we're going to continue to develop Docker to the point that this becomes a better use case. Because I think it's a really good one.

Now for the fun part: my opinions! [laughter] Docker can let you get away with murder; you can do some pretty gnarly stuff, and people do some pretty gnarly stuff. So I'm just going to bring up a couple of the things I care about. For instance, I only run my Docker containers off systemd unit files - actually, I do use upstart on a couple of machines. Essentially they look like this; here's the spec file for one of them. Basically, the spec file declares that this is for my nginx config, and right there we've got 'docker start nginx' if the container already exists; otherwise it has the setup of the actual Docker container, so it says mount that into /serve, play around with /srv/pault.ag/nginx/serve, with the image, which is paultag/nginx, and the binary it is running, /usr/sbin/nginx, with a couple of flags. The stop command is 'docker stop -t 5 nginx', which means terminate after 5 seconds. That's kind of a lot and it's kind of ugly, I understand that, but that's okay. This basically lets the nginx in Docker be treated like any other system-level process. This means that nginx inside Docker is treated identically in nearly everything else that I do, because I just do 'sudo service nginx restart'. What does it matter? It's just launching commands, and the commands happen to isolate it in Docker.
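For reference, a unit file along the lines of what's described above looks roughly like this. It's a sketch: the image name, mount path and nginx flags are just the examples from the talk, and the start-or-run wrapper is one way of expressing "start it if it already exists, otherwise create it":

    [Unit]
    Description=nginx running inside Docker
    Requires=docker.service
    After=docker.service

    [Service]
    # reuse the container if it already exists, otherwise create it from the image
    ExecStart=/bin/sh -c '/usr/bin/docker start -a nginx || \
        /usr/bin/docker run --name nginx \
        -v /srv/pault.ag/nginx/serve:/serve \
        paultag/nginx /usr/sbin/nginx -g "daemon off;"'
    ExecStop=/usr/bin/docker stop -t 5 nginx

    [Install]
    WantedBy=multi-user.target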
Basically the same thing here for upstart - slightly cleaner, actually, which is awesome. But basically: start on filesystem and started docker, source a file to do some work, and launch essentially the same thing; these are nearly identical. I really don't like deploying Docker unless there's a start-up script in place. I want all of my machines to be able to hard-shutdown in the middle of whatever they're doing, have all of my Docker containers disappear into the transient ether, and when the machine starts up, have it come back up in a state in which I can use it. Having unit files and spec files like this really saves you a lot. As for whether or not systemd will replace Docker, I have no idea. I'm sure the systemd people think that. So 'sudo service docker restart' plays around with the Docker containers. Any questions on that part in particular? Because I feel like I moved a little fast. Asheesh, yes.

[Asheesh]: How did you put nginx into that Docker instance, and what is running in there, Debian something? [Paul]: Yeah, totally. Thanks, Asheesh. Essentially- let's see if I have my Dockerfiles around. Ah yes, right, fonts. Unfortunately xfce-terminal does not let me use Ctrl++, which is disappointing. That's too big; that should be pretty good. Okay, this is gigantic, but we should be able to do something. [laughter] Aah, that's a little bit too big. (inaudible) CoC. Come on man, CoC. This is a little bit better. I'm just going to go a little bit smaller, sorry. Right, this'll do.

So essentially, you declare what base image you start from. This can be any arbitrary image. So I'm saying FROM the debian:unstable image. The debian:unstable image is maintained here by tianon, upstream. It's roughly similar to what you get from a debootstrap. There are some differences; the differences are documented in the creation script that's also shipped with Docker itself, if you want to create it yourself. The actual modifications - we've talked about moving them into a .deb before, but nothing really came of that yet. If anyone's interested in making sure the differences in the Debian Docker image are better documented, I'm sure the Docker upstream, tianon in particular, and myself would love to talk to you about how to make that possible. Thumbs up! So I haven't said anything entirely wrong. Great!

So I'm saying FROM the debian:unstable image. The first part is the name of the image; the second part is a tag. They are very similar to how git tags work, except you're encouraged to change them often. [laughter] Tags essentially point to a given layer, and you can use them for nearly whatever you want. MAINTAINER: a useless bit of metadata, not really important here. RUN means: when you're creating this image, run the following command - 'apt-get update' and then 'apt-get install -y nginx', which will actually install nginx into the container that we're currently building in. And then I 'rm -rf' all of the nginx/sites-*/* stuff, because that gets volume mounted in from my filesystem.
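So far the Dockerfile being walked through is roughly this (a sketch reconstructed from the description; the maintainer line is a placeholder):

    FROM debian:unstable
    MAINTAINER Paul Tagliamonte <paultag@example.org>

    # install nginx into the image being built
    RUN apt-get update && apt-get install -y nginx
    # the site configs get volume-mounted in from the host, so drop the shipped ones
    RUN rm -rf /etc/nginx/sites-*/*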
So that when I configure a new app, I just drop the file in the host's /srv/docker/nginx, and then I just kick the container, and it sees it in its /etc/nginx/sites-enabled/. And then there's the CMD, which is the default command that's run if no arguments are given. There's also ENTRYPOINT; ENTRYPOINT is sort of like CMD, except it is a little bit harder, and it's also put before the CMD. Confusing these two can get... confusing. [laughs] Essentially it's this declarative style - it's sort of a declarative style - and it's powerful enough to basically do what you want. You can actually do this stuff manually in a container and then tag the resulting container, but it's generally good practice to use Dockerfiles so that you can create what are known as automated builds. They used to be called 'trusted builds', but that name's terrible. Automated builds are basically builds that are being done on the Docker index routinely.

[question]: Just a quick question for those who haven't played around with Docker yet. Does that mean if your machine crashes and you boot up again, it'll detect "Okay, I need this image", so you get new versions of packages from unstable and all hell breaks loose, or is it fixed at some point? Somehow. [Paul]: Yes, there are two different concepts here that I think I've failed to delineate: there's the concept of an image and the concept of a container. A container is an instance of an image, so a container is always started from an image. So this declarative style of building things is building an image, and the resulting image here is called paultag/nginx. When I run this with a 'docker run paultag/nginx', the container is assigned a pseudo-random name built from words, so like "feisty_turing" or "angry_stallman". [laughter] 'stallman' was added recently; it's amazing! [laughs] The actual individual containers are given these opaque names. So you start an image and you're given a container that's from that image. If my machine were to shut down and everything were to start up again, it would still be using the same image, unless I rebuilt it in the meantime, in which case that's probably expected behaviour. Any other questions about this stuff so far?

[question]: So how do you deal with security updates? [Paul]: Yes, security updates. That's great. Best practice here is to continuously rebuild your images, and the Docker index has support for this: give it a repo and it'll watch for changes, post-commit hooks. When you change something, it'll rebuild the image and put it up on the index, at which point you just pull it and kick your containers. If you don't use something like that and you're building it locally, you can have something on a cron that rebuilds the images and kicks the containers that are currently active. So the idea is that by using something declarative like this, every time that debian:unstable image updates, it's going to have the latest security fixes, so that when we rerun this and retag the image locally, we get the security updates as well. Essentially containers should be, in my opinion, always read-only and ephemeral. You shouldn't be making any changes inside the containers; if you're writing anything, that should be mounted onto the host, so that at any point I can just trash all of the containers, start them up again, and they then have the latest version, with minor interruptions. Which is similar enough. It's sort of the difference between immutable and mutable: think of virtual machines as sort of mutable, you can update them, you can change their state.
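"Trash the containers and start them up again" is only a couple of commands; roughly something like this, where the image name and service wrapper are the examples from earlier and the build directory is hypothetical:

    # rebuild the image from its Dockerfile to pick up the latest debian:unstable
    docker build -t paultag/nginx /srv/docker/nginx-build
    # throw the old container away and let the unit file recreate it from the new image
    docker stop -t 5 nginx && docker rm nginx
    sudo service nginx start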
Docker containers, really, should be immutable. When you replace them, it should be an atomic replace. So, Lisp versus Python, who's ready? [laughter] Any questions about this so far? Okay, cool, I'm going to continue talking. Basically the only reason I gave this talk was to use the Unicode heart (♥), to see if any of the software would crash. It didn't, which was a huge disappointment. So hopefully this turns into more of a discussion pretty soon.

Again, another strong opinion from myself: you should really only use container linking. SkyDock used to be something that I preferred, but it ended up being really buggy; it ate up all of the free memory on my system and OOM-killed nearly everything, which was not fun - it ended up taking about 2GB. That kind of was a bad day. So I generally use container linking. All Docker containers, when they're spawned, are given a private IP address on a docker0 interface, so they can all talk to each other behind the docker0 interface, and when you bind to a port in a container, it's bound to a container-local IP. Container linking basically rewrites /etc/hosts, which is a bit of a hack, but it works. It essentially rewrites /etc/hosts to point to another container's IP address. So it has the other container's private 172.x.x.x IP address, and this lets two containers talk to each other. So my Postgres container is up, but it's not bound to my public IP; it's bound to its container IP. Then other containers can talk to it using container linking. So my web apps know about Postgres, and you can connect to postgres://postgres@postgres:postgres.postgres.postgres/ [laughter]

The Docker API. I have so many things to say about it; it's not great. Essentially, more and more stuff has been duct-taped to it as time has gone on. So to correctly tell it what ports to map, I think you need to specify them in two places, the host config and the run config, which you need to pass during two different POSTs. And it's kind of a pain with mounting stuff in volumes. The API that Docker exposes is very much an implementation detail, more than a public-facing thing you should be playing around with. I've written plenty of Docker API clients; they are not fun. So if I could basically dissuade you in any way, I really want to. If you really want to play with it, put on a helmet. That's seriously good advice; this API can probably- well, for a while 'id' was spelt three different ways: there was all-uppercase 'ID', 'Id', and all-lowercase 'id'.

Docker images are super cheap; they're all built on each other. Essentially you have different layers in an image. Every time you perform an action, you're building on top of all of the images below it. So when I say FROM debian:unstable, it's basing all of your changes on the debian:unstable layer. So if you only make a couple of minimal changes, it's really cheap, and the more and more layers you add, it's not really that bad. So if you extend FROM debian:unstable in a couple of places, it's not actually duplicating that material on disk; it's all in that one place, that one layer. You should definitely use images for as much as you can. Having good images is definitely a huge improvement over trying to do this stuff raw. Asheesh has a question. [Asheesh]: How are they cheap? Is it using copy-on-write, is it using aufs? Is it using a custom block layer? What? Huh? [Paul]: Great. Thanks, Asheesh. [laughter] Yes, so they are written to the filesystem and mounted on top of each other in a variety of fun ways.
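You can see that stack of layers for any image; for example, with the paultag/nginx image from earlier:

    # list the layers an image is built from; images that share a FROM line
    # share those base layers on disk rather than duplicating them
    docker history paultag/nginx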
You can either use device-mapper, you can use aufs, or you can use btrfs. device-mapper should not be used under any circumstances. I don't know why it's still in the tree; it's pretty bad. I used it on my- What's that? [comment]: (inaudible), aufs. [Paul]: Compared to- yes, aufs is not great, but it is much better than device-mapper. So it is what I'm using until btrfs becomes a bit more stable. I want to switch to it, but I haven't had the chance to switch my VPS to btrfs. So right now the most stable backend, in my opinion, is aufs. Yes, it's deprecated, and there are plenty of operating systems that don't ship aufs bundled anymore, like Arch, so that turns out to be a problem. But whatever you do, avoid device-mapper. So essentially it uses copy-on-write for everything, including the containers. Everything is mounted on top of one another using a variety of different methods, so it's definitely, definitely cheap. Yes, basically: ensure you can hard-reboot your machine, kill all the offline containers, start everything back up, and have it work.

[Russ]: So there's a question on IRC: if everything is layered on top of a base layer, what happens when you upgrade the base layer? Does everything on top of it break? [Paul]: Yes, so this is- No, this is great. So every time that you create a new image, it's given a new hash, a new layer ID. You're recreating the image from something new, so essentially the immutability principle holds: you'll still have the old layers, which you'll still be based on, but they'll basically be unreferenced tags, commits that are hanging out that aren't being referenced by anything. They are given a super descriptive name in the 'docker images' output, which is "<none>". [laughs] [laughter] And these are essentially layers that are sitting around that everything has kind of moved on from. So if you FROM debian:unstable and debian:unstable updates, then in a couple of weeks you're going to have an image based on IDs that aren't referenced by debian:unstable anymore, which is why people like to continuously rebuild these things. Hopefully that answers the question. OK, be sure you can start everything back up and have everything just work. The easiest way of doing this is treating them all as ephemeral, read-only process wrappers.

Some of the most interesting stuff- That was just a small review of Docker for anyone who doesn't know. Now this is the good part. Docker is totally installable by running 'sudo apt-get install docker.io'. All of you should do that; it's great. Upstream, tianon in particular, has a super stripped-down Debian image which is really good to base stuff off of; it's super lightweight and it's pullable from stock Docker. If you're interested in the changes from debootstrap, again, they're documented in a shell script: /usr/share/docker.io/contrib/mkimage-debootstrap.sh - which I think might be the deprecated version, I can't remember. If you're doing a lot with Docker, feel free to check out what that's doing and make your own image. For Debian development, because I feel like this is going to start coming up: don't use the Docker image from the index. Don't dput stuff that you build with that image. If you're really trying to use Docker to package stuff, build the base image yourself. I think that's pretty sane advice. Just like pbuilder or sbuild - you wouldn't trust a chroot that you wget, so don't just trust a Docker image that you're pulling from the Internet. Which brings me to another fun point: dbuilder, or something like that. Someone should totally do that.
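One way to build a base image yourself, rather than pulling one from the index, is roughly this (a sketch using debootstrap plus 'docker import'; the suite, mirror and tag are illustrative):

    # create a minimal unstable chroot, then feed it to Docker as a new base image
    sudo debootstrap unstable ./sid-root http://ftp.debian.org/debian
    sudo tar -C ./sid-root -c . | docker import - local/sid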
Having a backend that's as flexible as Docker would be really interesting. Having something with a pbuilder-like interface that uses Docker containers on the backend is something I've been interested in for a long time. You can even tag images with build-deps installed, so you don't have to have that warm-up time every time. All sorts of crazy stuff. If anyone's interested in doing that, I'd love to talk with you about how to do it. Essentially I want to turn this BoF into "What can Docker do with Debian? What can Debian do with Docker?", because that's sort of what I'm interested in. I see a potential, and hopefully other people do too.

A quick overview of some future plans before we start more discussion. Nightly builds: check-ish. We have nightly builds going to PPAs. I need to set up a debile cluster to get nightly builds for Debian. These are mostly useful for myself and other people interested in testing nightlies, and for making sure packaging works continuously. That's something I've been interested in, something that's mostly kind of working - props to tianon. Backports: we have a lot of stuff backported in a PPA. We need to upload that to Debian pretty soon, but it involves backporting Go, which means that we need to commit to maintaining Go in stable. So as you can probably guess, I'm not super on top of that. I would love to see more Debian people push for content-based IDs of layers. Those layers I was talking about aren't actually given IDs based on the content of the layer; they're just IDs. If we had content-based IDs, then we could do better stuff such as verifying the integrity of an image, or signing of images, which would be really cool: we could gpg-sign an image and then assert that it's the image we think we have, or set up a Docker daemon somewhere that only runs images that are pgp-signed. Which would be awesome. Basically, limit the stuff to only stuff I've signed. Potentially a trusted Debian image, somehow? I'm not sure what that would look like, what the logistics of that would be. For now I think decentralising this and pushing it out to all the people probably makes sense. Docker 1.2.0 was released this week, and I plan to upload it to unstable as soon as md2man is through NEW. So that should be really soon now. [laughs] OK, right. Who's ready to flame?

[question4]: I've kind of been following Docker upstream development, and I've noticed the version numbers: 9 months ago it was 0.2, 0.3, 0.4, just jumping, and we're already at 1.2, and we're talking about a jessie freeze maybe this year. How do you plan to maintain that going forward, or keep up with upstream? Do you have any thoughts there? [Paul]: I don't think there's a good answer for that. The 1.0 release was supposed to be something a little more stable and more maintained; it's not turned out that way. 1.2 is much more stable and much better supported than 1.0 right now. I can't imagine that's true in the future, but I'm hoping that if we can sync Ubuntu and Debian on a particular version, the collected user base will be enough to pressure upstream. Which I think would be something worthwhile. The Docker upstream is super friendly and they're all really awesome. I love them all dearly. I poke fun at them plenty, and I've definitely poked fun at them in this talk, and I'm sure I'm going to hear about it. They're definitely amazing and want good things for the world.
So I think if there was a use case in which this made sense - and I think a stable release of Debian and a couple of versions of Ubuntu maybe - then I think we could probably pull off some support. It's a good point, fair point. But Docker 1.2 outclasses 1.0 in nearly every way, so it's definitely not worth keeping us on a "stable" version that's not better in any way. Oh come on. Flame! [laughter]

[question5]: So you said it's not suitable to prevent exploits. Is it basically the design of Docker, as in the tool, or is it rather the underlying interfaces provided by the kernel that are not sufficient to run, say, student submissions when assessing student work? [Paul]: I'm trying to live-exploit Docker in front of you. [question5]: I guess I wasn't quite clear. Something running inside a Docker container. [Paul]: Oh, inside Docker. See, now I'm root on the host. [question5]: Yes, but you screwed up by calling Docker. So if I'm calling Docker in a sensible way, is it reasonable to run untrusted code inside a well-prepared Docker container? [Paul]: Oh, I see. Yes, if you change the user off root in the Docker container, there is much less of an attack surface. And yes, if you're not a user with permissions, it's a lot harder to do this. It definitely provides some level of isolation; it's just that the kernel namespacing stuff, I don't think, was meant to provide bullet-proof security. It was meant to provide rough security, and I think it does that pretty well, and if you keep users as non-root it's pretty non-trivial to exploit this. So yeah, you're right, this particular exploit works because I can run Docker and the docker group is root-equivalent. But yeah, you should be fine.

[question6]: Just a quick comment on that. If you are running developers' code on production systems, you probably want to use SELinux in combination with Docker. [Paul]: Yeah, that's good advice. [question7]: With OpenShift, they use SELinux to isolate the containers from other things. [Paul]: Awesome. Yes, SELinux sounds like it could be a solution. [question8]: As somebody who helped maintain SELinux for a while, please don't trust SELinux as a single source of security. [laughter] I don't recommend it. It's a great thing as part of a defence-in-depth strategy, but if it's the only thing lying between you and remote root, you're going to have a bad day. [Paul]: So, all software's terrible. [laughs]

[Russ]: So have you experimented at all with the various privilege isolation, system call limitation and similar privilege separation stuff in systemd? Because you're using unit files around Docker, have you tried playing with adding that stuff in to do the containerisation? [Paul]: I have not, and that's a great idea. That would be awesome. Who else has great ideas on how to break Debian with Docker? [question9]: The aufs backend for Docker has a 42-layer limit. [Paul]: Well, that's fun. [question9]: Yeah, you obviously haven't hit that one yet. [Paul]: 127? [question9]: 127 now. [Paul]: I'm so confused. [laughs] So I guess, if it hurts, don't poke it. [laughter]

[question?]: Trying to attract more flames. Would it be reasonable to expect all Debian infrastructure to have Docker run commands, so we could run them on all machines easily and develop on them? [Paul]: So I've been playing around with Dockerizing dak. Yeah, right, um. [laughs] I haven't spent too much time on it, but it's definitely a goal of mine to 'docker run' 3 containers and have a working dak/debile setup.
That will let you dput packages in source form to a directory and end up with an apt-get-able .deb directory somewhere else. It's something I'm definitely interested in: Dockerizing more of the Debian infrastructure so people can run it and test it locally. Having the steps it takes to set it up in a Dockerfile is, like, perfect. It's exactly what I love Docker for. Having something like that, where you can make some changes, then do a 'docker build' of the current directory that you're working in, and be able to test it without worrying about setting it up on the host - that would be key, that would be awesome. I'd love to play with that.

[Asheesh]: Just to make the flame temperature increase. It seems like Docker, by promoting a world of process-based isolation, decreases the importance of things like Debian Policy, which are all about having programs be co-installable and not step on each other's toes. And this seems sort of consistent with, I don't know, the way that the San Francisco bay area based development community operates (of which I'm now a part), where we just sort of install some sort of base operating system and then just pour files all over the system. [laughter] But I guess I'm supposed to ask a question, so the question is: [laughter] [Paul]: Please form your flame in the form of a question. [Asheesh]: Yeah, but the question is really: should Debian take more seriously the idea that things like Policy may be less important over the next 2-15 years, and alter Debian packaging accordingly? Russ!

[Russ]: So there- [laughs] [laughter] So there are several pieces to what Policy does for you. What I would say is: there's a set of problems that Debian has tried to deal with for many years - a bunch of the things that are in Policy - which are, as you say, about being able to install a bunch of stuff that, prior to Debian putting a bunch of work into it, would have actually conflicted with each other; and given all the work Debian did, now they don't conflict with each other. There's a bunch of stuff like alternatives and diversions and all that kind of thing. I think that stuff is still going to be useful in a lot of cases; it's possible that it will not be useful inside the little Docker containers that you're using to run production infrastructure. I think we'd all be pretty happy to see that happen - those are often workarounds for problems, and not as good as just having the one thing installed. For example, one of the things I want to use Docker for is to set up a test MIT KDC and a Heimdal KDC, so I can test Kerberos code against both of them, and right now the packages conflict for a bunch of reasons. And you can kind of fix that with alternatives, except you can't really fix that with alternatives, because the kadmin interface is completely different, and then you get into a big argument. So there are parts of Policy like that that will be less important. But I think that even when you put everything inside Docker, having all of the binaries in /var/tmp is still not useful when something goes wrong and you want to find the command that went wrong and you didn't think to look in /var/tmp for the command. [laughs] So I think there's still some role for "I installed this thing, now where the hell did all the bits of it go?", and "I want to configure this thing; I would like all of the configuration files to be in the configuration file directory and not scattered off in root's home directory." So that part of Policy I don't think really changes.
[question?]: So what Paul gave us was a bunch of recommendations on top of what Docker, if you can call it that, describes. Isn't that something that would be useful as part of a Debian Docker policy - as in, how do you Dockerize applications for Debian? In that case, you can still have alternatives and diversions and everything else that actually allows packages to co-exist inside that debian unstable base image - and you still need that to build your base images, or any images for Docker - and you could have some sane recommendations on how to lay things out with Docker on top of that. [Paul]: Yeah, interesting. I hadn't really thought about that too much. If people would be happy with documenting best practices for Docker in Debian, I'd be happy to spend time and effort on it. I don't know if me dictating that kind of thing is the best idea; I think if other people want to try to form coherent thoughts around this, that would be a lot of fun. Oh come on, you've got more than that!

[question?]: Within the next 20 minutes, can we Dockerize Subsurface? [Paul]: I've got 5 minutes left. [question?]: There's a man here in 20 minutes who's going to be upset about the fact that Subsurface isn't using static linking. [Paul]: Run 'sudo apt-get install subsurface'. Should be good. [question?]: We solve all of our static linking problems that way? [laughs]

[question?]: You said that you were using- Docker was using aufs. Did you have some problems with the stability of aufs? [Paul]: I have not. Most of my problems have been with using non-aufs backends. As a matter of fact, I can't even get the kernel to run on Linode, because the kernel is built without aufs in it. I actually have a blog post where I chainload from the Xen grub, to grub 0.9, to grub 2.0, to the Debian kernel, because the old Xen grub doesn't support .xz compression. Which is great. [laughter] Yeah, it is. So if someone wants to get aufs working on Linode, there's a blog post somewhere. Alright, I think I'm out of time, but we can keep talking about Docker stuff. Cool. [applause]