
Garden City Ruby 2014 - Zero Downtime Deployments with Docker

  • 0:25 - 0:30
    VAMSEE KANAKALA: Hello everyone. Very good
    afternoon.
  • 0:30 - 0:33
    I really enjoyed the lightning talks, thanks
  • 0:33 - 0:38
    everyone. And I hope you guys are slightly
    awake.
  • 0:38 - 0:42
    So, this is slightly off the center topic,
    so,
  • 0:42 - 0:45
    this is more about dev-ops rather than Ruby
    or
  • 0:45 - 0:47
    Rails, per se. So I'd like to know who
  • 0:47 - 0:52
    all are already familiar with Docker, have
    tried out
  • 0:52 - 0:55
    or just know what it does and stuff like
  • 0:55 - 1:02
    that. OK, not too many. Sorry about that.
    So
  • 1:03 - 1:06
    I've been a web developer for eight, for quite
  • 1:06 - 1:08
    some time. And a good part of it is
  • 1:08 - 1:12
    Rails. Actually the, I actually remember the
    days when
  • 1:12 - 1:16
    the fifteen minute video came out. So all
    through
  • 1:16 - 1:19
    these years I have also kind of ended up
  • 1:19 - 1:21
    setting up servers for the Ruby teams that
    I
  • 1:21 - 1:26
    worked with, and have been the Linux guy for
  • 1:26 - 1:28
    as long as I can remember, as in the
  • 1:28 - 1:33
    professional life of mine. And so I thought,
    so
  • 1:33 - 1:38
    lately I've been observing what's happening
    on the production
  • 1:38 - 1:40
    side of things, so I guess a bunch of
  • 1:40 - 1:43
    you- so how many of you have set up
  • 1:43 - 1:48
    your own Rails servers, maintained them? Oh,
    quite a
  • 1:48 - 1:50
few. OK. Then, this should be relevant to
    what
  • 1:50 - 1:53
    you're doing. So the point of, so we're gonna
  • 1:53 - 1:57
    talk about zero downtime deployments with
    Docker. So the
  • 1:57 - 2:02
    point of this is, so what is Docker? So
  • 2:02 - 2:04
    the first thing that Docker does is it basically
  • 2:04 - 2:10
    commoditizes LXC. LXC is Linux Containers.
    So the containers,
  • 2:10 - 2:14
    you can think of them as something like chroot
  • 2:14 - 2:17
    jail, change root jail, you have in Linux.
    So
  • 2:17 - 2:20
    what it, what it basically does is it gives
  • 2:20 - 2:22
    you a separate folder as a root, and you
  • 2:22 - 2:26
    can run off your processes from there, and
    all
  • 2:26 - 2:29
    the children are only allowed to access that
    part
  • 2:29 - 2:33
    of the directory, as a root directory. A container
  • 2:33 - 2:38
    extends this concept by giving you isolation
    on the
  • 2:38 - 2:41
    memory level, it gives you isolation on network
    level,
  • 2:41 - 2:44
    it gives you isolation on hard disk level.
    So
  • 2:44 - 2:51
    LXC is a fairly old technology in Linux, but
  • 2:51 - 2:55
    it has been mostly in the realm of people
  • 2:55 - 2:58
    who understand Linux pretty well, or sysadmins
    who are
  • 2:58 - 3:02
    trying to achieve something on the production
    side. So
  • 3:02 - 3:07
    Docker basically makes this accessible to
    all of us.
  • 3:07 - 3:12
    So it makes portable deployments across machines
    possible, so
  • 3:12 - 3:17
    you can basically have, you have, I have,
    suppose
  • 3:17 - 3:21
    a Vagrant box, which runs a Raring(?)
    install, but
  • 3:21 - 3:26
    you can also, you can actually run production
    systems,
  • 3:26 - 3:29
    production installs of, out of, say, 12.04
  • 3:29 - 3:33
    Precise. So you get a lot of flexibility
  • 3:33 - 3:39
    of moving your production images around. So
    there is
  • 3:39 - 3:43
    efficient and quick provisioning, so it saves
    on a
  • 3:43 - 3:47
    disk, how much disk it uses, by doing copy-on-write
  • 3:47 - 3:51
    installs. I'll talk about it a little bit
    later.
  • 3:51 - 3:57
    There's, it's near-native performance, it's
    basically process virtualization. So
  • 3:57 - 4:01
    you're not, you're not doing hardware virtualization,
    or, you're
  • 4:01 - 4:04
    not trying to support different OSs and stuff
    like
  • 4:04 - 4:08
    that, so the way you, the speed at which
  • 4:08 - 4:11
your Docker instance boots up is very quick.
    So
  • 4:11 - 4:16
    you almost have no overhead at all. It also
  • 4:16 - 4:20
    has Git-like versioning of images. So you
    have basically
  • 4:20 - 4:23
    a base image, which is Ubuntu, and you can
  • 4:23 - 4:27
    install something like emacs on it and you
    can
  • 4:27 - 4:29
    commit it and it'll save it and it'll make
  • 4:29 - 4:33
    it into another image. Which'll also - I'll
    show
  • 4:33 - 4:35
    you a little bit more about that. So that
  • 4:35 - 4:39
    enables a lot of reuse. So you can basically
  • 4:39 - 4:43
    push these images to a public registry that
    Docker
  • 4:43 - 4:47
    maintains, index.docker.io, so if
    they're
  • 4:47 - 4:50
    public, if they're open for sharing, so you
    can
  • 4:50 - 4:53
    push them out there and people can use your
  • 4:53 - 4:59
    own configuration, however you configured
    your image.
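[Editor's note: the pull, modify, commit, push workflow he is describing looks roughly like this as a shell session; the user and image names here are made up for illustration.]

```shell
# Pull the base image from the public registry
docker pull ubuntu

# Run a container off it and make a change inside
docker run -i -t ubuntu /bin/bash
# (inside the container) apt-get update && apt-get install -y emacs, then exit

# Commit the stopped container as a new image, then share it
docker commit <container-id> myuser/emacs-image
docker push myuser/emacs-image
```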
  • 4:59 - 5:03
    And the major difference between how LXC operates,
    or
  • 5:03 - 5:07
    how LXC is talked about and how Docker encourages
  • 5:07 - 5:12
    you to think about containers is that, so
    LXC
  • 5:12 - 5:17
    was initially thought of as, you know, lightweight
    servers,
  • 5:17 - 5:20
    which, where you install basically everything
    and put them
  • 5:20 - 5:22
    up and treat them as, just like any other
  • 5:22 - 5:26
    server. Docker kind of encourages you to look
    at
  • 5:26 - 5:30
    containers as an application. So you install,
    say your
  • 5:30 - 5:33
    database master in one container, you install
    your app
  • 5:33 - 5:38
    server as another container, you install your,
    your RDB,
  • 5:38 - 5:45
    you know, so, in another container. So, what
    is
  • 5:46 - 5:49
    LXC? I actually wanted to take out this slide,
  • 5:49 - 5:55
    probably it's a little too advanced for this
    talk.
  • 5:55 - 5:57
    But let - I'll try to cover this quickly.
  • 5:57 - 6:02
    So at the basic level, it provides OS-level
    virtualization
  • 6:02 - 6:06
    for Linux. So compared to, say, what
    VirtualBox
  • 6:06 - 6:10
does for you, or KVM, or Xen, so these
  • 6:10 - 6:15
    are all hardware virtualization, so OS-level
    virtualization - much
  • 6:15 - 6:20
faster, much more lightweight. Perfect for production
    deployment, so. It
  • 6:20 - 6:25
    basically does this. So LXC basically provides
    you one
  • 6:25 - 6:29
    process space, one network interface, and
    your own init
  • 6:29 - 6:33
    framework. So your, you can be running on
    Ubuntu,
  • 6:33 - 6:38
which uses Upstart as its init - and your container
    can use
  • 6:38 - 6:41
systemd. That's not a problem at all. So the
  • 6:41 - 6:44
    basic isolation is achieved with cgroups.
    Cgroups are control
  • 6:44 - 6:50
    groups. So what cgroups gives you is that
    it
  • 6:50 - 6:53
    lets you put limits on the resource usage,
    basically.
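[Editor's note: the cgroup interface he alludes to is just files under /sys/fs/cgroup; a sketch of capping a group's memory might look like this, assuming the cgroup v1 hierarchy and a made-up group name. It needs root to run.]

```shell
# Create a control group and cap its memory (cgroup v1 layout)
mkdir /sys/fs/cgroup/memory/demo
echo 256M > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes

# Put the current shell into the group by writing its PID
echo $$ > /sys/fs/cgroup/memory/demo/tasks
```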
  • 6:53 - 6:56
    Whether it's network or off your disk or your
  • 6:56 - 7:02
    process usage, so cgroups gives you this nice
    little
  • 7:02 - 7:04
    interface where you can do - this is definitely
  • 7:04 - 7:06
the Linux geek stuff - do not worry, you don't have
  • 7:06 - 7:09
    to worry about that. So the only catch is
  • 7:09 - 7:13
    that it shares the kernel with the host, so
  • 7:13 - 7:16
    you can't do stuff like having an x64 image
  • 7:16 - 7:20
    and put it on i386 or vice versa. So
  • 7:20 - 7:23
    that's pretty much the only catch here and
    it's
  • 7:23 - 7:26
    probably not very, not much of a catch at
  • 7:26 - 7:29
    all. So a typical docker image kind of looks
  • 7:29 - 7:32
    like this. At the most basic level you will
  • 7:32 - 7:36
    see the kernel, and you have cgroups, you
    have
  • 7:36 - 7:42
namespaces and device mapper. So Docker kind
    of
  • 7:42 - 7:46
    achieves this git-like versioning through,
    through a unioning file
  • 7:46 - 7:52
    system. Right now, Debian-based installs use
    AUFS, which
  • 7:52 - 7:55
    is quite popular, but it has some limitations,
    which
  • 7:55 - 8:00
    is, it's integrated into the Debian kernel,
    Debian-based
  • 8:00 - 8:04
    distros' kernels, but it's not really available
    in others,
  • 8:04 - 8:06
    like the CentOSes and Red Hats of the world.
  • 8:06 - 8:11
    So recently they have switched, created a
    storage driver API
  • 8:11 - 8:14
    which kind of lets you swap out AUFS with
  • 8:14 - 8:17
    the device-mapper, and has plans for integrating
ZFS and
  • 8:17 - 8:22
    BTRFS and stuff like that. So beyond that,
    you
  • 8:22 - 8:24
    see the base image, which is shipped out of
  • 8:24 - 8:29
    docker registry, and you also have images.
    So I
  • 8:29 - 8:33
    installed emacs and, committed, it becomes
    a read-only- so
  • 8:33 - 8:38
the bootfs is basically read-only, and once you
    boot up
  • 8:38 - 8:42
    that container you'll get a writable part.
    So once
  • 8:42 - 8:46
    you commit it, it'll become, again, read-only.
    We'll go
  • 8:46 - 8:49
through that. So the basic
    workflow, I
  • 8:49 - 8:53
    will do it now again, so you basically pull
  • 8:53 - 8:56
    docker images from the public registry and
    you run
  • 8:56 - 8:59
    it on your host, and you add your own
  • 8:59 - 9:01
    changes on top of it, push it back to
  • 9:01 - 9:04
    share them, or you could also build it from
  • 9:04 - 9:09
    the ground up using debootstrap and tools
    like that.
  • 9:09 - 9:11
So you also have a Dockerfile which lets
  • 9:11 - 9:16
    you build your own images. Apart from that,
    you
  • 9:16 - 9:19
    can set up a private registry -
  • 9:19 - 9:23
    sorry. So the idea of private registry is
    that
  • 9:23 - 9:24
    you have your own work groups and you want
  • 9:24 - 9:27
    to share these images within your company
    and they're
  • 9:27 - 9:32
    not really useful for the public usage. So
    this
  • 9:32 - 9:35
    is where the private registry comes in
  • 9:35 - 9:38
    and, this is just a simple Python app, you
  • 9:38 - 9:40
    can run it on your own server and set
  • 9:40 - 9:42
    it up. So, oh, you can also sign up
  • 9:42 - 9:45
for something like Quay.io, which also
    lets
  • 9:45 - 9:48
    you push, well, you can have your own account,
  • 9:48 - 9:51
    pay for it, and push your private images there,
  • 9:51 - 9:55
    and it's locked up. So before we go, go
  • 9:55 - 9:57
into the Dockerfile part, so let me show
  • 9:57 - 10:04
    you a simple - workflow. So you have images,
  • 10:07 - 10:11
    so you can probably ignore a bunch of these.
  • 10:11 - 10:14
    Look at the last ones, which is basically
    -
  • 10:14 - 10:21
    should I move that a little bit? OK. So
  • 10:24 - 10:26
    at the most basic level, when you pull from
  • 10:26 - 10:33
    Docker, say, so it will try to, so, it's
  • 10:37 - 10:39
    not gonna pull anything really, it'll just
    check for
  • 10:39 - 10:42
    the layers that are available on my system,
    and
  • 10:42 - 10:47
    it'll just, adjust itself. So you see several
    layers
  • 10:47 - 10:51
    there, so if you look at the Ubuntu part,
  • 10:51 - 10:55
    it's actually, the Ubuntu image is actually
    comprised of Precise
  • 10:55 - 10:57
and Quantal, and you can use any of those
  • 10:57 - 11:01
    to kick off your container. So kicking off
    a
  • 11:01 - 11:08
    container is probably as simple as -
    the end.
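[Editor's note: the command he runs at this point is presumably along these lines.]

```shell
# Start an interactive container from the ubuntu base image,
# with /bin/bash as the entry point
docker run -i -t ubuntu /bin/bash
```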
  • 11:11 - 11:14
    You can give it, you have to give it,
  • 11:14 - 11:17
    I'm taking the Ubuntu image, a basic image,
    and
  • 11:17 - 11:21
    I have to give it an entry point. So
  • 11:21 - 11:26
    it will drop me into a root prompt, and
  • 11:26 - 11:31
    basically I can do, I can run apt-get
  • 11:31 - 11:34
    update, and I can install my own stuff. I'll
  • 11:34 - 11:37
    just install a small, a very tiny package,
    so,
  • 11:37 - 11:44
    in the interest of time. Wow. That takes-
    So
  • 11:48 - 11:51
    the basic starter image is pretty much very
    stripped
  • 11:51 - 11:53
    down, you don't have most of the components
    that
  • 11:53 - 11:55
    you would need. So the idea is that it
  • 11:55 - 11:57
    should be very lightweight to deploy, and
    you can
  • 11:57 - 12:02
    basically add your own, your own software
    on top
  • 12:02 - 12:05
    of it and commit and push it up. So
  • 12:05 - 12:12
    let's say I install nano. There you go, right.
  • 12:15 - 12:19
    That shouldn't take too long. Yeah. So you
    have
  • 12:19 - 12:22
    nano here, and if I switch to the other
  • 12:22 - 12:29
window, you can see. docker ps. So among the
  • 12:35 - 12:38
    other ones, are you- you can see the last
  • 12:38 - 12:40
    one, which is being run here, and if I
  • 12:40 - 12:47
    actually switch, use nano there, you can see
    -
  • 12:47 - 12:54
    docker - So each of the, each of the
  • 12:54 - 12:57
    containers will have its own name, so you
    can
  • 12:57 - 13:00
    do - cranky - you can also set the
  • 13:00 - 13:03
    names, which is a recent feature. But otherwise
    it'll
  • 13:03 - 13:10
    just regen- it'll just generate - oh, sorry.
    So
  • 13:14 - 13:17
    it, it gives you what is happening inside
    the
  • 13:17 - 13:19
    container. So you have a basic idea of what's
  • 13:19 - 13:24
    running inside the container. So you can also
    commit
  • 13:24 - 13:30
    this. So you exit it, and you see, so
  • 13:30 - 13:32
cranky_curie is on there, so you can do
  • 13:32 - 13:39
    something like docker commit cranky_curie.
    Sorry. So you
  • 13:53 - 13:55
    can commit it, and it'll show up in your
  • 13:55 - 14:02
    images. Oh, I have to give it a name.
  • 14:04 - 14:11
    cranky_curie as vamsee/nano. So, docker images.
    You'll see
  • 14:14 - 14:17
the one on top. It has a fancy name. And
  • 14:17 - 14:19
    you can push it, you can push it to
  • 14:19 - 14:23
    the public registry, the public registry kind
    of looks
  • 14:23 - 14:30
    like this. So you can search for your own,
  • 14:33 - 14:36
    whatever images that you might need for your
    deployment
  • 14:36 - 14:42
    and stuff like that. So I've, you know, I've
  • 14:42 - 14:43
    been playing around with it a little bit and
  • 14:43 - 14:46
stuff like that. So this is the public repository.
  • 14:46 - 14:49
    You can install the same thing on your private
  • 14:49 - 14:53
    servers and secure it from outside. And you
    can
  • 14:53 - 14:56
    have your own image. So that's basically the
    workflow
  • 14:56 - 14:59
    that you would work with. And there's a second
  • 14:59 - 15:01
    part to it. What I've done so far is
  • 15:01 - 15:05
    the manual, so I've logged into a container
    and
  • 15:05 - 15:07
    I've installed stuff in it. So you basically
    automate
  • 15:07 - 15:14
it with something called a Dockerfile. Oh,
    sorry.
  • 15:24 - 15:27
    So the Dockerfile, it's very similar to
  • 15:27 - 15:31
    what you have Rakefiles or Makefiles in
  • 15:31 - 15:35
    your projects. So it's a default way to build
  • 15:35 - 15:39
    it from a base image. Basically a build
    script,
  • 15:39 - 15:43
    but definitely easier to maintain. And you
    have directives
  • 15:43 - 15:48
    like FROM, RUN, CMD, EXPOSE. So FROM is
    basically
  • 15:48 - 15:51
    which, based on which image do I want to
  • 15:51 - 15:54
    build my Docker image. And RUN basically
    has
  • 15:54 - 15:57
    your apt-get install commands, whatever is
    done
  • 15:57 - 16:00
    manually. And CMD and ENTRYPOINT are
    very similar.
  • 16:00 - 16:03
    So I've entered into the container through
    /bin/bash,
  • 16:03 - 16:07
    so I've basically put that in ENTRYPOINT.
    And
  • 16:07 - 16:11
    CMD is how you pass some options
    into
  • 16:11 - 16:13
    that. So I'll show you a Dockerfile anyway,
  • 16:13 - 16:16
    so that will, that should put this, all this
  • 16:16 - 16:19
    in context. So you have, you can even add
  • 16:19 - 16:22
    files, you can copy config files from your
    host
  • 16:22 - 16:27
    system into your container. You can have volumes,
    volumes
  • 16:27 - 16:32
    are basically mount points. You just mount
    a whole
  • 16:32 - 16:36
    directory as either read-only or read-write,
    it's up to
  • 16:36 - 16:39
    you. And on the whole there are about a
  • 16:39 - 16:42
    dozen commands. It's very simple to get started
    with.
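[Editor's note: putting those directives together, a minimal Dockerfile of the kind he's describing might look like this; the package, file path, and port are chosen purely for illustration.]

```Dockerfile
# Build on top of the ubuntu base image
FROM ubuntu
# Commands run at build time, like the manual apt-get steps
RUN apt-get update && apt-get install -y nano
# Copy a config file from the host into the image
ADD myapp.conf /etc/myapp.conf
# Port the container will listen on
EXPOSE 8080
# Process to start when the container boots
ENTRYPOINT ["/bin/bash"]
```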
  • 16:42 - 16:48
    No nonsense. And doesn't need a lot of time
  • 16:48 - 16:51
    to learn the whole thing. So, and you can
  • 16:51 - 16:55
    create your own base images using debootstrap
    - debootstrap is
  • 16:55 - 16:58
    basically the tool with which you build a
    Debian base
  • 16:58 - 17:02
    image - but in other
  • 17:02 - 17:04
    distros, like CentOS, you know, you can do it
  • 17:04 - 17:09
    with other tools. So, zero downtime deployment.
    So why,
    So why,
  • 17:09 - 17:13
    why do we need that? So the most important
  • 17:13 - 17:18
    part being, you have, you know, things like
    continuous
  • 17:18 - 17:23
    delivery and continuous deployments, right.
    So they're subtly different
  • 17:23 - 17:25
    from each other, they're very similar concepts,
    of course
  • 17:25 - 17:30
you have continuous delivery where you send
    stuff, you
  • 17:30 - 17:34
    deliver your software on a regular basis and
    you
  • 17:34 - 17:37
    have tight communication loops with your clients
    and all
  • 17:37 - 17:39
    that good stuff - [inaudible] and stuff.
    And
  • 17:39 - 17:42
    continuous deployment is basically taking it
    one step further, and
  • 17:42 - 17:45
    I think Chad did a really good example of
  • 17:45 - 17:51
    that yesterday. So you know instead of making
    your
  • 17:51 - 17:53
    deployments, say, once a week, or you know,
    once
  • 17:53 - 17:56
    every few days, the idea is to make them
  • 17:56 - 18:00
    as continuously as possible with least amount
    of angst
  • 18:00 - 18:05
    around making deployments. So you basically
    have, I'm sure
  • 18:05 - 18:08
    you're all used to long deploys in Rails.
    Migrations,
  • 18:08 - 18:11
    you know, when migrations are happening, you're
    changing the
  • 18:11 - 18:15
    schema, other requests, you usually put in
    a maintenance
  • 18:15 - 18:19
    page, and when other requests come in,
    if
  • 18:19 - 18:22
    you don't put up a maintenance page, you already,
  • 18:22 - 18:25
    you can get some errors and stuff like that.
  • 18:25 - 18:28
    And obviously you know about asset compilation,
    it takes
  • 18:28 - 18:31
    way too long. So but these problems are not,
  • 18:31 - 18:34
    really not limited to Rails, per se. I'm sure
  • 18:34 - 18:37
    you have the same issues when you're deploying
    a Django
  • 18:37 - 18:41
    container, so Docker is basically framework
    agnostic, you
  • 18:41 - 18:44
    can run any apps on it, and. So I'm
  • 18:44 - 18:48
    trying to lay out a problem, so there are
  • 18:48 - 18:50
    two parts to this problem. So one is with
  • 18:50 - 18:54
    migrations and without migrations. Without
    migrations it's usually a
  • 18:54 - 18:58
    little easier because you don't have to worry
    about
  • 18:58 - 19:02
    making sure the databases are in sync and
    stuff
  • 19:02 - 19:04
    like that. So with databases it's a more complex
  • 19:04 - 19:07
    scenario where you have to take a master DB
  • 19:07 - 19:11
    a slave DB, make sure they're in sync, and you
    have
  • 19:11 - 19:15
    something like ZooKeeper kind of keeping track
    of who's
  • 19:15 - 19:17
    master, who's slave, and you switch. So I'll
    try
  • 19:17 - 19:22
    to walk you through the simpler case, so we
  • 19:22 - 19:26
    can extend this to, you know, DB level. I
  • 19:26 - 19:29
    don't think I can cover other DB stuff here.
  • 19:29 - 19:34
    So you basically have HAProxy. HAProxy is
    basically
  • 19:34 - 19:38
    a reverse proxy but on steroids. So it's a
  • 19:38 - 19:41
    load balancer, to be exact. But what it does
  • 19:41 - 19:43
is very similar to what nginx does for
  • 19:43 - 19:46
    you, you have like multiple instances, and
    you are,
  • 19:46 - 19:50
    they're running on different multiple instances
    of your app
  • 19:50 - 19:54
    server, they're running on different ports.
    And basically nginx,
  • 19:54 - 19:56
    once a request comes, nginx will do a round-
  • 19:56 - 19:59
    robin allotment of, you know, the servers.
    So
  • 19:59 - 20:02
HAProxy does that, but also a lot more. It
  • 20:02 - 20:05
    also lets you do funky stuff like what I'm
  • 20:05 - 20:07
    gonna talk about, there's a back-up server,
    and active
  • 20:07 - 20:11
    server, which you can use cleverly to do zero-downtime
  • 20:11 - 20:15
    deployments. But it also has, if you have
    time,
  • 20:15 - 20:18
    I would suggest you go through the configuration
    file,
  • 20:18 - 20:20
    which is very dense and long, but very interesting
  • 20:20 - 20:24
    stuff. So what we're gonna use, in HAProxy
    here,
  • 20:24 - 20:28
    is that you have basically two types of, you
  • 20:28 - 20:31
    can set up two types of servers. And the
  • 20:31 - 20:34
    like, a bunch of active servers and there
    are
  • 20:34 - 20:36
    like a bunch of back-up servers, and the idea
  • 20:36 - 20:40
    is that the back-up servers are not used until
  • 20:40 - 20:43
    all the active servers are down, right. So
    the
  • 20:43 - 20:45
    request won't come through to back-up servers
    until all
  • 20:45 - 20:50
    the active servers are down. So what we are
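[Editor's note: the active/backup rule he just described - backups receive traffic only once every active server is down - can be sketched in a few lines of Ruby. This illustrates the rule only; it is not HAProxy's actual balancing code.]

```ruby
# Illustrative sketch (not HAProxy's code) of the active/backup rule:
# backup servers receive requests only when no active server is up.
Server = Struct.new(:name, :backup, :up)

def eligible_servers(servers)
  active = servers.select { |s| !s.backup && s.up }
  return active unless active.empty?
  # All actives are down: fall back to the backup pool
  servers.select { |s| s.backup && s.up }
end

pool = [
  Server.new("web01", false, true),
  Server.new("web02", false, true),
  Server.new("web03", true,  true),
  Server.new("web04", true,  true),
]

puts eligible_servers(pool).map(&:name).join(",")  # => web01,web02

# Deploy step: take the actives down, traffic shifts to the backups
pool[0].up = pool[1].up = false
puts eligible_servers(pool).map(&:name).join(",")  # => web03,web04
```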
  • 20:50 - 20:53
    gonna, how we are gonna use that, sorry, the
  • 20:53 - 20:59
    slides are very basic, so. So, you basically
    kick
  • 20:59 - 21:03
    off the deploy, kick off the image build with
  • 21:03 - 21:07
    docker, and you take down the back-up servers.
    At
  • 21:07 - 21:10
this point your HAProxy is still serving from
    your
  • 21:10 - 21:15
    active servers, right. So now you bring up
    the
  • 21:15 - 21:18
    new back-up server, new back-up servers with
    your new
  • 21:18 - 21:20
image, which was just built when the
    deploy
  • 21:20 - 21:24
    happened. So, and then you take down the active
  • 21:24 - 21:29
    servers. So after all the active servers are
    down,
  • 21:29 - 21:31
    the requests will come in to the back-up ones,
  • 21:31 - 21:34
    right, so which is now serving your new code,
  • 21:34 - 21:37
    which has just been deployed. So after that
    you
  • 21:37 - 21:40
    restart your active servers, you're back to
    normal again.
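[Editor's note: as a shell sketch, the sequence he just walked through would look something like this. Container names and port numbers are placeholders, and in his setup Capistrano drives these steps rather than a plain script.]

```shell
# 1. Build the new image from the deploy Dockerfile
docker build -t myapp:new .

# 2. Take down the backup servers; HAProxy keeps serving from the actives
docker stop web03 web04 && docker rm web03 web04

# 3. Bring the backups up again from the freshly built image
docker run -d -p 8083:8080 -name web03 myapp:new
docker run -d -p 8084:8080 -name web04 myapp:new

# 4. Stop the actives; HAProxy fails over to the backups, now on new code
docker stop web01 web02 && docker rm web01 web02

# 5. Restart the actives on the new image; you're back to normal
docker run -d -p 8081:8080 -name web01 myapp:new
docker run -d -p 8082:8080 -name web02 myapp:new
```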
  • 21:40 - 21:44
    So at the most basic level, so at least
  • 21:44 - 21:47
you definitely won't be able to do migrations
  • 21:47 - 21:49
    with this set-up. You have to go a little
  • 21:49 - 21:52
    bit advanced for that. But at least you'll
    be
  • 21:52 - 21:56
    able to avoid frustrations with stuff like
    long asset
  • 21:56 - 22:02
    recompilation, you know, long deploys that
    you usually get.
  • 22:02 - 22:05
    So let me walk you through the whole thing,
  • 22:05 - 22:12
    quickly. So I was actually quite upset that
    the
  • 22:12 - 22:15
    talk, the pamphlet which is being given, which
    had
  • 22:15 - 22:18
shortcuts for Sublime and Vim, but it doesn't
    have
  • 22:18 - 22:25
    shortcuts for emacs. Which is, I object! So
    the
  • 22:25 - 22:27
    idea is, OK let me start you off with
  • 22:27 - 22:30
the simple Dockerfile. So this should, this
    should
  • 22:30 - 22:37
    - oh, OK. Yeah. Let me restart it. I'll
  • 22:51 - 22:57
    just show you on my- I think this is
  • 22:57 - 23:04
    a lot more easier to show you. So app
  • 23:05 - 23:12
server, you have the Dockerfile. So at the most
  • 23:18 - 23:20
    basic level, I am picking it up from the
  • 23:20 - 23:25
    Ubuntu image, and basically running some adaptation
    of my
  • 23:25 - 23:30
    sources.list, and you have apt-get update,
    apt-get install
  • 23:30 - 23:33
    -y. So let me run the deploy first, and
  • 23:33 - 23:35
    then I will talk about this, because I don't
  • 23:35 - 23:42
    think we'll have enough time to actually wait
    afterwards.
  • 24:01 - 24:05
    OK. So this will, this will run the deploy
  • 24:05 - 24:09
    process in the background. Let me talk about
    what
  • 24:09 - 24:16
    it actually does out here. OK. So let's get
  • 24:29 - 24:31
back to our Dockerfile. So what it does
  • 24:31 - 24:34
    - this is almost like the shell script that
  • 24:34 - 24:38
    you use for everyday automation, so, but it
    adds
  • 24:38 - 24:42
    a little bit more fun to it, I guess.
  • 24:42 - 24:46
    So what I'm doing is pretty straightforward.
    I'm installing
  • 24:46 - 24:50
chruby, I hate RVM, especially for production
    it sucks,
  • 24:50 - 24:54
    I mean, yeah. There are other opinions about
    it.
  • 24:54 - 24:56
    But at least I have, I've thought it's the
  • 24:56 - 24:58
    easiest way to get started. So I'm basically
    installing
  • 24:58 - 25:03
    some default gems, like bundler and puma,
    and I'm
  • 25:03 - 25:06
    installing other dependencies. So the reason,
    I am actually
  • 25:06 - 25:11
splitting this into two Dockerfiles. So you'll
    also
  • 25:11 - 25:18
    have stuff like, so. What I'm doing here is
  • 25:21 - 25:24
    that, so when I'm doing actually a deploy,
    I
  • 25:24 - 25:27
    am only running this. So I'm picking up, I'm
  • 25:27 - 25:30
    installing all my dependencies in my earlier
    image, and
  • 25:30 - 25:33
    I'm just reusing it for deploys I want, because
  • 25:33 - 25:36
    I want them to be pretty fast. So what
  • 25:36 - 25:38
    this does is pretty simple. It copies over
    the
  • 25:38 - 25:43
    database configurations and it does a bundle
    install, and it
  • 25:43 - 25:48
    does a db:migrate. I'm just using SQLite here,
  • 25:48 - 25:52
    so yeah. It exposes a port. So how the
  • 25:52 - 25:55
    containers talk to each other within a docker,
    in
  • 25:55 - 26:01
    your host, is that through exposing these
    ports. And
  • 26:01 - 26:04
    like I mentioned earlier, my entrypoint is
    basically I'm
  • 26:04 - 26:10
    starting Puma there. And I'm running it in
    a
  • 26:10 - 26:16
[inaudible]. Yeah. So if you look at the,
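[Editor's note: the deploy-stage Dockerfile he's showing - the one built on top of the dependencies image - is roughly of this shape. The base image name, paths, and port are placeholders, since the actual file isn't legible in the recording.]

```Dockerfile
# Build on the prebuilt image that already has chruby, bundler, puma
FROM vamsee/rails-base
WORKDIR /app
# Copy the app, including its database configuration, into the image
ADD . /app
# Install gems and run migrations at build time (SQLite in this demo)
RUN bundle install && bundle exec rake db:migrate
# Port the containers use to talk to each other on the host
EXPOSE 8080
# Boot Puma when the container starts
ENTRYPOINT ["bundle", "exec", "puma", "-p", "8080"]
```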
  • 26:16 - 26:23
    if you. If you look at the deployment, I'm
  • 26:29 - 26:36
    sorry- Yeah. So I don't know how much of
  • 26:36 - 26:39
    this actually makes sense. I'll just show
    you the,
  • 26:39 - 26:43
    our deployment code. Sorry, the cap file,
    so that
  • 26:43 - 26:50
    should make a little bit of more, yeah. OK.
  • 27:02 - 27:06
    So if you see at the bottom, you'll see
  • 27:06 - 27:10
    that I am just doing a bunch of stuff
  • 27:10 - 27:15
    there, so I'm linking from my current deploy
    to
  • 27:15 - 27:22
    the vagrant, sorry, the docker container build
    directory. And
  • 27:22 - 27:25
I'm starting with the back-up servers. So
    I'll, I
  • 27:25 - 27:30
should also show you the, my HAProxy configuration.
    So
  • 27:30 - 27:35
    it starts with your ports set up and you
  • 27:35 - 27:38
    actually search for those docker containers
    and take them
  • 27:38 - 27:40
    down. So the build takes a- a little long,
  • 27:40 - 27:42
    so I've kind of commented it out for now,
  • 27:42 - 27:44
    but I can show you outside if you want
  • 27:44 - 27:47
    to see how that works. And it's pretty simple.
  • 27:47 - 27:50
    So at the end of it I'm just restarting
  • 27:50 - 27:53
    all my containers, so you can basically look
    at
  • 27:53 - 27:59
them here. docker ps -a. You'll see that these
  • 27:59 - 28:01
    top ones are only up for two minutes. These
  • 28:01 - 28:05
    are recently deployed. So all through, if
    you look
  • 28:05 - 28:10
at your HAProxy page, so you basically have
    two
  • 28:10 - 28:15
    active ones here, and two back-up servers
    here. So
  • 28:15 - 28:22
like, and I also should show you the HAProxy
  • 28:28 - 28:35
    stuff, right. So you can pretty much ignore
    all
  • 28:37 - 28:40
    this stuff. The most important part is the
    last
  • 28:40 - 28:42
    two ones. So as you can see the web
  • 28:42 - 28:47
    01 and web 02 are active servers, and web
  • 28:47 - 28:50
    03 and web 04 are back-up servers. So that's
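[Editor's note: in haproxy.cfg, the active/backup split he's pointing at is just the backup keyword on the server lines of a backend; addresses, ports, and names here are placeholders.]

```
# web01/web02 serve requests; web03/web04 only get traffic
# once every non-backup server is down
backend rails_app
    balance roundrobin
    server web01 127.0.0.1:8081 check
    server web02 127.0.0.1:8082 check
    server web03 127.0.0.1:8083 check backup
    server web04 127.0.0.1:8084 check backup
```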
  • 28:50 - 28:54
    all it takes. So you can basically segment
    your,
  • 28:54 - 28:58
    the servers like that and go at it. So
  • 28:58 - 29:05
    that's basically it. So I hope- and, a couple
  • 29:05 - 29:09
    of helpful links if, sorry if it's not very
  • 29:09 - 29:12
    visible. There's docker dot io, where you
    can find
  • 29:12 - 29:15
    all of the documentation and stuff, there's
    haproxy -
  • 29:15 - 29:17
    go look at it if you are doing deployments
  • 29:17 - 29:21
    through your regular day-to-day developer
    life. This is a
  • 29:21 - 29:26
    lifesaving tool to learn well. And there's
    dockerbook, if
  • 29:26 - 29:28
    you're - it's written by James Turnbull one
    of
  • 29:28 - 29:33
    my favorite technical authors. He's written
    Pro Puppet, which
  • 29:33 - 29:36
    is still quite one of my favorite books. And
  • 29:36 - 29:38
    if you want to know a little bit more
  • 29:38 - 29:43
    about the Linux part of what Docker does,
    like
  • 29:43 - 29:47
    the internals of Docker, you can listen to
    Jerome
  • 29:47 - 29:49
    Petazzoni, who is part of the Docker team.
    So
  • 29:49 - 29:52
    he's given a really good talk, in-depth talk
    about
  • 29:52 - 29:54
    it, at our next conference you should look
    at
  • 29:54 - 29:56
    the video. And there are like a bunch of
  • 29:56 - 29:59
    tools you can probably look at. There's Dokku
    which
  • 29:59 - 30:03
    is a PaaS. PaaS is a platform as a
  • 30:03 - 30:05
    service, what- essentially what Heroku does,
    you can build
  • 30:05 - 30:09
    your own Heroku with Docker.
    And Flynn
    And Flynn
  • 30:09 - 30:12
    dot io, CoreOS is also very import- very interesting
  • 30:12 - 30:15
    tool. CoreOS kind of bundles Docker with a
    service
  • 30:15 - 30:18
discovery thing, like, kind of like ZooKeeper,
    but it
  • 30:18 - 30:22
    is called etcd. And it bundles systemd
    in
  • 30:22 - 30:25
    its framework, so if you're into deployments
    this is
  • 30:25 - 30:29
    a very interesting ecosystem to look at. And
    Quay
  • 30:29 - 30:33
    dot io I mentioned. It's, you can basically
    upload
  • 30:33 - 30:37
    your private images there and get started.
    So they're
  • 30:37 - 30:39
    like a bunch of tools. I don't know if
  • 30:39 - 30:42
    I have any time for questions, but you can
  • 30:42 - 30:47
    catch me. Sorry, but I'm available, you can
    catch
  • 30:47 - 30:54
    me at any time. Thanks a lot.
Duration:
31:23