RailsConf 2014 - Heroku 2014: A Year in Review by Terence Lee & Richard Schneeman

  • 0:18 - 0:25
    RICHARD SCHNEEMAN: All right. OK. Hello everyone.
  • 0:25 - 0:26
    AUDIENCE: Hello.
  • 0:26 - 0:29
    R.S.: Thank you. Thank you. Welcome to, welcome,
    let
  • 0:29 - 0:31
    me be the first to welcome you to RailsConf.
  • 0:31 - 0:36
    So, our, our talk today is Heroku 2014: A
  • 0:36 - 0:38
    Year in Review. It is gonna be a play
  • 0:38 - 0:44
    in six acts, featuring Terence Lee and Richard
    Schneeman.
  • 0:44 - 0:46
    So, of course this is a year in review,
  • 0:46 - 0:49
    and Heroku does measure their years by RailsConf.
    So
  • 0:49 - 0:54
    this is from Portland to Chicago RailsConf
    year. The
  • 0:54 - 0:55
    Standard RailsConf Year.
  • 0:55 - 0:58
    As, as some of you might know, we are
  • 0:58 - 1:01
    on the Ruby Task Force, and, in fact, that
  • 1:01 - 1:06
    makes us Ruby Task Force members. And, of
    course,
  • 1:06 - 1:08
    this was a big year. We're gonna be talking
  • 1:08 - 1:12
    a little bit about app performance, some Heroku
    features,
  • 1:12 - 1:16
    and community features. So, first up to the
    stage,
  • 1:16 - 1:19
    I'm gonna be introducing the one, the only
    Mister
  • 1:19 - 1:23
    Terence Lee. You might have recognized him
    in some
  • 1:23 - 1:28
    other roles. He hails from Austin, Texas,
    which has,
  • 1:28 - 1:32
    undoubtedly, the best tacos in the entire
    world. So,
  • 1:32 - 1:32
    he-
  • 1:32 - 1:34
    AUDIENCE: [indecipherable]
  • 1:34 - 1:39
    R.S.: Them's fightin' words, friend. So that
    he, he's
  • 1:39 - 1:42
    also sometimes known as the Chief Taco Officer.
    Or,
  • 1:42 - 1:46
    or the CTO. And something, something very
    interesting about
  • 1:46 - 1:49
    Terence is, recently, he was inducted into
    Ruby Core,
  • 1:49 - 1:52
    so congratulations to, to Terence. All right.
  • 1:52 - 1:56
    So, without further ado, Act 1: Deploy Speed.
  • 1:56 - 2:02
    TERENCE LEE: Thank you, Richard. So, at the
    beginning
  • 2:02 - 2:06
    of the Standard RailsConf Year, we focused
    a
  • 2:06 - 2:08
    lot on deployment speed. We got a lot of
  • 2:08 - 2:12
    feedback and realized deployment was not as
    fast as
  • 2:12 - 2:15
    it could be. And we wanted to make it
  • 2:15 - 2:16
    faster. So, the first thing we set out to
  • 2:16 - 2:18
    do was to actually do a bunch of measurement
  • 2:18 - 2:22
    and profiling to look at where things were
    slow,
  • 2:22 - 2:24
    and how we could make it better, and to
  • 2:24 - 2:28
    kind of gauge, like, the before and after and
  • 2:28 - 2:31
    know when the good points were to kind of
  • 2:31 - 2:35
    stop and move on to other things. Cause you
  • 2:35 - 2:37
    can never make, you can never, you will never
  • 2:37 - 2:40
    be done with, like, performance improvements.
  • 2:40 - 2:44
    So, after about six months of work on this,
  • 2:44 - 2:47
    we managed to cut down the deploy speeds for,
  • 2:47 - 2:51
    across the platform for Ruby by about forty
    percent.
  • 2:51 - 2:54
    So it's a pretty decent speed improvement.
    And, in
  • 2:54 - 2:56
    order to do this, we mainly looked at three
  • 2:56 - 2:59
    various ways to speed this up.
  • 2:59 - 3:03
    The first thing was running code in parallel,
    so
  • 3:03 - 3:07
    running more things, running things, like,
    more than one
  • 3:07 - 3:10
    thing at one time. If you cache stuff you
  • 3:10 - 3:12
    don't have to do it again, and, in general,
  • 3:12 - 3:15
    just like, cutting out code that doesn't need
    to
  • 3:15 - 3:16
    be there.
  • 3:16 - 3:19
    So, with the parallel code, we worked with
    the
  • 3:19 - 3:24
    bundler team on Bundler 1.5. There was a pull
  • 3:24 - 3:27
    request from Cookpad that was sent in to
  • 3:27 - 3:31
    add parallel bundler install for Bundler 1.5.
    So if
  • 3:31 - 3:33
    you actually aren't using this yet, I would
    recommend
  • 3:33 - 3:37
    upgrading your bundle to at least Bundler
    1.5. And
  • 3:37 - 3:41
    Bundler added this -j option, which
    allows
  • 3:41 - 3:45
    you to specify the number of jobs to run.
  • 3:45 - 3:47
    And this is, basically, if you're using MRI,
    it
  • 3:47 - 3:51
    forks and runs that number of sub-processes,
  • 3:51 - 3:54
    and if you're on JRuby or Rubinius it actually
  • 3:54 - 3:56
    just uses threads here.
  • 3:56 - 3:58
    And the benefit of doing this is, when you
  • 3:58 - 4:01
    actually do bundle install, the dependencies
    that get installed
  • 4:01 - 4:04
    get downloaded in parallel, so you're not
    waiting on
  • 4:04 - 4:07
    network traffic sequentially anymore, and
    in addition you're also
  • 4:07 - 4:11
    installing gems in parallel. And this is mostly
    beneficial,
  • 4:11 - 4:14
    especially when you're running native extensions.
    So if you
  • 4:14 - 4:16
    have something like Nokogiri, that takes a
    long time.
  • 4:16 - 4:20
    Oftentimes, you notice, you just like hang
    and wait
  • 4:20 - 4:21
    for it to install and then it installs the
  • 4:21 - 4:24
    next thing, so this allows you to install
    that,
  • 4:24 - 4:26
    basically, in the background and then go and
    install
  • 4:26 - 4:29
    other gems at the same time.
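The parallel install described here is exposed through Bundler's jobs setting; a minimal sketch (the job count of 4 is just an illustrative value to tune for your machine):

```shell
# Install gems with 4 parallel jobs: forked sub-processes on MRI,
# threads on JRuby/Rubinius.
bundle install --jobs 4

# Or persist the setting so every future `bundle install` parallelizes:
bundle config jobs 4
```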
  • 4:29 - 4:35
    Also, in Bundler 1.5, Richard actually added
    this function
  • 4:35 - 4:40
    that allows people, allows bundler to auto-retry
    failed commands,
  • 4:40 - 4:44
    so initially, before this, we would, when
    we run
  • 4:44 - 4:47
    bundle install and something would fail because
    of some
  • 4:47 - 4:50
    odd network timeout, like during one chance,
    you would
  • 4:50 - 4:52
    have to basically repush again, no matter
    where you
  • 4:52 - 4:56
    were in the build process. So, by default
    now,
  • 4:56 - 5:00
    Bundler actually will retry clones and gem
    installs for
  • 5:00 - 5:02
    up to three times by default.
  • 5:02 - 5:07
    So, it will continue going during the deploy
    process.
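The retry behavior has a matching CLI flag as well; a sketch:

```shell
# Retry flaky network operations (gem fetches, git clones) up to 3 times
# instead of failing the whole build on one timeout.
bundle install --retry 3
```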
  • 5:07 - 5:10
    And so is anyone here actually familiar with
    the
  • 5:10 - 5:17
    pigz command? So, just Richard? So, pigz is
    Parallel
  • 5:19 - 5:24
    Gzip, and the build and packaging team at
    Heroku
  • 5:24 - 5:27
    worked on this feature, or worked on implementing
    this
  • 5:27 - 5:31
    at Heroku using the pigz command, and in order
  • 5:31 - 5:33
    to understand the kind of benefit of using
    something
  • 5:33 - 5:36
    like this, when you push an app up on
  • 5:36 - 5:39
    Heroku during the compile process, it actually
    builds these
  • 5:39 - 5:44
    things at Heroku that are called slugs. And
    basically
  • 5:44 - 5:48
    it's just, like, a tar of your app directory
  • 5:48 - 5:50
    of everything after the compile phase is run.
  • 5:50 - 5:55
    And, originally, we were just using SquashFS,
    initially, and
  • 5:55 - 5:57
    then we moved to kind of just tar files,
  • 5:57 - 6:00
    and we noticed that one of the slowest points
  • 6:00 - 6:03
    in the actual build process was actually just
    going
  • 6:03 - 6:06
    through and compressing everything in that,
    that file directory,
  • 6:06 - 6:09
    and then pushing it up onto S3 after that
  • 6:09 - 6:12
    was done. And so one of the things that
  • 6:12 - 6:14
    we looked into was, is there a way we
  • 6:14 - 6:17
    can make that faster? So, if you ever push
  • 6:17 - 6:19
    a Heroku app and then you basically, like,
    wait
  • 6:19 - 6:21
    when it says, like, compressing and then it
    goes
  • 6:21 - 6:24
    to done, like that's the compressing of the
    actual
  • 6:24 - 6:25
    slug.
  • 6:25 - 6:29
    And we managed to use pigz to now
  • 6:29 - 6:31
    improve that by swapping them out. I don't
    remember
  • 6:31 - 6:36
    the actual performance improvement, but it
    was pretty significant,
  • 6:36 - 6:39
    and the only downside was in certain slugs,
    the
  • 6:39 - 6:42
    slug sizes are a little bit bigger. But the
  • 6:42 - 6:46
    performance trade off was worth it at that
    time.
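The compression step being described is, at heart, a pipeline like the following sketch (the directory and file names are hypothetical, for illustration only):

```shell
# gzip compresses on a single core; pigz spreads the same gzip-compatible
# compression across all available cores.
tar -cf - app/ | pigz > slug.tgz

# The output is still ordinary gzip, so standard tools can verify it:
gunzip -t slug.tgz
```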
  • 6:46 - 6:48
    The next thing we started doing was looking
    into
  • 6:48 - 6:54
    caching. So anyone here using Rails 4? So
    pretty
  • 6:54 - 6:57
    good amount of the room. So the one thing
  • 6:57 - 7:01
    that we did, which differs from Rails 3,
  • 7:01 - 7:02
    thanks to a bunch of the work that's happened
  • 7:02 - 7:04
    on the Rails Core team with us, is that
  • 7:04 - 7:07
    we can now cache assets between deploys. This
    wasn't
  • 7:07 - 7:11
    possible in Rails 3 because the cache was,
    you
  • 7:11 - 7:13
    couldn't actually reuse the cache. There was
    times when
  • 7:13 - 7:16
    the cache would basically be corrupted and
    then you
  • 7:16 - 7:19
    would get, like, assets that wouldn't work
    between deploys.
  • 7:19 - 7:20
    So the fix there was you actually have to
  • 7:20 - 7:24
    remove the assets between each deploy in some
    Rails
  • 7:24 - 7:27
    3 builds. But it wasn't consistent, so sometimes
    it
  • 7:27 - 7:30
    would work and sometimes it didn't. And on
    Heroku
  • 7:30 - 7:31
    that's not something we can rely on in an
  • 7:31 - 7:32
    automated fashion.
  • 7:32 - 7:35
    But luckily a lot of that stuff has been
  • 7:35 - 7:37
    fixed for Rails 4, so now we cache assets
  • 7:37 - 7:40
    between deploys in Rails 4. And so if we
  • 7:40 - 7:44
    look at Rails 3, I guess this got cut
  • 7:44 - 7:46
    off, but this is supposed to say about, like,
  • 7:46 - 7:50
    thirty-two seconds for a Rails 3 deploy, and
    then
  • 7:50 - 7:54
    on Rails 4 it got, for the average we,
  • 7:54 - 7:57
    we measured the steps in the, in the build
  • 7:57 - 8:00
    process, and on Rails 4, the perc fifty was
  • 8:00 - 8:03
    about fourteen point something seconds. So
    a pretty significant
  • 8:03 - 8:07
    speed improvement there, both due to caching
    and other
  • 8:07 - 8:11
    improvements inside of Rails 4 for the asset
    pipeline.
  • 8:11 - 8:13
    So the other thing we also looked at was
  • 8:13 - 8:17
    just, if there's code that is doing extra
    work,
  • 8:17 - 8:19
    if we remove that, it will speed up the
  • 8:19 - 8:24
    build process for everyone who's deploying
    every day. So
  • 8:24 - 8:25
    one of the first things that we did was
  • 8:25 - 8:30
    actually stop downloading bundler more than
    once. So, initially,
  • 8:30 - 8:33
    when you, when we do the Ruby version detection,
  • 8:33 - 8:37
    we actually have to download bundler, and
    then basically
  • 8:37 - 8:39
    run that to get the version of Ruby to
  • 8:39 - 8:42
    install on the application. And then again,
    we would
  • 8:42 - 8:45
    then download and install it again because
    it was
  • 8:45 - 8:50
    run in a separate process for the actual,
    like,
  • 8:50 - 8:52
    installing of your dependencies. And one of
    the things
  • 8:52 - 8:55
    we did was to actually just stop doing that
  • 8:55 - 8:58
    and we would cache the Bundler gem so we
  • 8:58 - 9:00
    don't have to download that two or three times
  • 9:00 - 9:03
    during the build process. So, so cutting network
    IO
  • 9:03 - 9:06
    and other things.
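The Ruby version detection being described reads the `ruby` directive out of the app's Gemfile; a sketch (the versions shown are examples only, and the comment assumes Bundler's `bundle platform --ruby` command, which reports that directive):

```ruby
# Gemfile
source "https://rubygems.org"

ruby "2.1.1"          # the buildpack runs Bundler (`bundle platform --ruby`)
                      # to read this before installing anything

gem "rails", "~> 4.0"
```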
  • 9:06 - 9:09
    We also started removing, there was like duplicate
    checks
  • 9:09 - 9:11
    between detection of what kind of app you
    were
  • 9:11 - 9:15
    using, so, in bin/detect. We would use it
  • 9:15 - 9:17
    to figure out what kind of app you have,
  • 9:17 - 9:18
    like if it was a Ruby app, a Rack
  • 9:18 - 9:21
    app, a Rails 3 app, a Rails 4 app,
  • 9:21 - 9:24
    stuff like that. And then, again, since it
    was
  • 9:24 - 9:26
    a separate process in bin/compile, we would
    have
  • 9:26 - 9:30
    to do it again. So, Richard actually did a
  • 9:30 - 9:33
    bunch of work to refactor both detect and
    release,
  • 9:33 - 9:36
    and so now detect is super simple. It literally
  • 9:36 - 9:39
    just checks if you have the Gemfile there,
  • 9:39 - 9:41
    and then all the other work is now deferred
  • 9:41 - 9:43
    to bin/compile. So that means we're only doing
  • 9:43 - 9:46
    a bunch of these checks once, like examining
    your
  • 9:46 - 9:50
    Gemfile, checking what gems you have. So
    not
  • 9:50 - 9:53
    doing that two or more times.
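The simplified detect step can be sketched as a tiny function (a hypothetical reconstruction for illustration, not the actual buildpack code):

```shell
# Hypothetical sketch: succeed (and name the buildpack) if a Gemfile exists
# in the build directory; defer all other inspection to bin/compile.
detect() {
  if [ -f "$1/Gemfile" ]; then
    echo "Ruby"
    return 0
  fi
  return 1
}
```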
  • 9:53 - 9:57
    And, if you haven't watched this talk, he
    gave
  • 9:57 - 9:59
    this talk at Ancient City Ruby. I don't actually
  • 9:59 - 10:03
    know if the videos are quite up yet. But
  • 10:03 - 10:05
    Richard does a talk about testing the untestable,
    so
  • 10:05 - 10:07
    if you're interested in learning how we test
    the
  • 10:07 - 10:11
    build pack, you should go watch this talk.
  • 10:11 - 10:15
    So I'd like to introduce Richard, cause he's
    gonna
  • 10:15 - 10:19
    present on the next section. So Richard loves
    Ruby
  • 10:19 - 10:22
    so much that he got married to her. I
  • 10:22 - 10:23
    think he got married last, last year.
  • 10:23 - 10:24
    R.S.: Right before our last RailsConf.
  • 10:24 - 10:28
    T.L.: Yeah. Right before our last RailsConf.
    I remember
  • 10:28 - 10:32
    that. He's also on the Rails Issue Team, and
  • 10:32 - 10:35
    he's one of the top one hundred Rails contributors,
  • 10:35 - 10:40
    according to the Rails Contributors site.
    And you might
  • 10:40 - 10:44
    also know him for his, this gem called sextant
  • 10:44 - 10:49
    that he released for Rails 3. Basically, I
    remember
  • 10:49 - 10:51
    back in the day, developing Rails apps, when
    I
  • 10:51 - 10:53
    wanted to basically verify routes, I would
    run the
  • 10:53 - 10:55
    rake routes command, and it would, you know,
    boot
  • 10:55 - 10:56
    up the Rails environment and you'd have to
    wait
  • 10:56 - 10:58
    a few seconds and then they would print out
  • 10:58 - 11:00
    all the routes. And then if you wanted to,
  • 11:00 - 11:03
    like, rerun it using grep, you would keep
    running
  • 11:03 - 11:04
    it again.
  • 11:04 - 11:07
    So, a lot of us, when we're doing development,
  • 11:07 - 11:11
    already have, like, Rails running in a server
    while
  • 11:11 - 11:14
    we're testing things and whatnot. And so what
    sextant
  • 11:14 - 11:18
    does is it allows it, supports basically looking
    at
  • 11:18 - 11:20
    the routes that are already in memory and
    just
  • 11:20 - 11:22
    allowing you to query against them programmatically,
    and then
  • 11:22 - 11:25
    it has a view for doing this. And this
  • 11:25 - 11:28
    was also just merged into Rails 4. So if
  • 11:28 - 11:30
    you're using Rails 4 or higher, you actually
    don't
  • 11:30 - 11:34
    need the sextant gem, as it's now built in.
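The workflow being contrasted, sketched:

```shell
# Rails 3 era: every routing question boots the whole app again
rake routes | grep users

# Rails 4: the dev server you already have running answers from its
# in-memory route table -- visit http://localhost:3000/rails/info/routes
```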
  • 11:34 - 11:36
    Richard and I both live in Austin, and so
  • 11:36 - 11:38
    when people come visit, or actually when I'm
    in
  • 11:38 - 11:41
    town, which isn't often, we have Ruby meet
    ups
  • 11:41 - 11:44
    at Franklin's Barbecue, so if you guys are
    ever
  • 11:44 - 11:46
    in town, let us know and we'd be more
  • 11:46 - 11:50
    than happy to take you to a meet up.
  • 11:50 - 11:54
    R.S.: All right. So the, for the first part
  • 11:54 - 11:57
    of this, this act, we're gonna be talking
    about
  • 11:57 - 11:58
    app speed, but before we talk about app speed,
  • 11:58 - 12:03
    we're actually gonna talk about dimensions.
    So, the, the
  • 12:03 - 12:09
    document dimensions are, let me see. Here
    we go.
  • 12:09 - 12:12
    Were originally written in wide-screen, but
    the screens here
  • 12:12 - 12:13
    are standard.
  • 12:13 - 12:18
    There we go. So. You're actually gonna get
    to
  • 12:18 - 12:20
    see all of the slides, as opposed to just
  • 12:20 - 12:22
    having some of them cut off. So, OK.
  • 12:22 - 12:25
    On, on app speed. The first thing I want
  • 12:25 - 12:28
    to talk about is, is tail latencies. Is anybody
  • 12:28 - 12:31
    familiar with tail latencies? OK. The guys
    in the
  • 12:31 - 12:34
    Heroku t-shirts and somebody else.
  • 12:34 - 12:39
    OK. So this is, this is a normalized distribution.
  • 12:39 - 12:42
    We have, on one side, the number of requests.
  • 12:42 - 12:44
    On the, on the other side, we have the
  • 12:44 - 12:46
    time to respond. So the further out you go,
  • 12:46 - 12:49
    the slower it's gonna be. And we can, we
  • 12:49 - 12:51
    can see this is the distribution of our requests.
  • 12:51 - 12:55
    So over here, it's super fast. Like you love
  • 12:55 - 12:57
    to be that customer. You're super happy.
  • 12:57 - 13:00
    Over here, we have a super slow request, and
  • 13:00 - 13:01
    you don't want to be that customer and you're
  • 13:01 - 13:05
    pretty unhappy. So right in the middle is
    our
  • 13:05 - 13:09
    average, and I'm sure they talked a ton about
  • 13:09 - 13:12
    why the average is really misleading in the,
    in
  • 13:12 - 13:15
    the last session with Skylight.io. But, but
    we're basically
  • 13:15 - 13:18
    saying that, roughly fifty percent of your,
    of your
  • 13:18 - 13:20
    customers, fifty percent of your traffic,
    is going to
  • 13:20 - 13:24
    get a response time at this or, or lower.
  • 13:24 - 13:26
    So, like, this is, this is pretty decent.
    We
  • 13:26 - 13:28
    can say, like, fifty percent of the people
    who
  • 13:28 - 13:30
    come to our web site get a response before
  • 13:30 - 13:35
    then. Moving up the distribution, to something
    like perc
  • 13:35 - 13:37
    ninety-five, we say ninety-five percent of
    everyone who visits
  • 13:37 - 13:40
    our site will get a response by now. So
  • 13:40 - 13:42
    I'm gonna be using those terms, perk fifty,
    perk
  • 13:42 - 13:46
    ninety-five, that refers to the percentage
    of, of requests
  • 13:46 - 13:49
    that come in that we can respond by.
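As a toy illustration of those percentile terms (nearest-rank style; the sample times are made up):

```ruby
# Percentile over a list of response times in milliseconds.
def percentile(times, pct)
  sorted = times.sort
  sorted[((pct / 100.0) * (sorted.length - 1)).ceil]
end

times = [12, 15, 20, 22, 25, 30, 45, 80, 120, 3000]
percentile(times, 50)            # => 30   (half the requests finish in 30 ms or less)
percentile(times, 95)            # => 3000 (the slow tail owns the high percentiles)
times.reduce(:+) / times.length  # => 336  (the average alone hides that tail)
```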
  • 13:49 - 13:51
    So that was kind of theorized. This is an
  • 13:51 - 13:57
    actual application. I, and one thing that
    you'll notice
  • 13:57 - 13:59
    is that it's not perfectly normalized. Like,
    it's not,
  • 13:59 - 14:02
    like both sides are not symmetrical. We kind
    of
  • 14:02 - 14:04
    like, steeply shoot up, and then we have this
  • 14:04 - 14:08
    really, really long tail, and, and this is
    kind
  • 14:08 - 14:10
    of the, what I'm referring to when I'm saying
  • 14:10 - 14:11
    tail latencies.
  • 14:11 - 14:14
    So, yes, somebody actually might have gotten
    a response
  • 14:14 - 14:17
    in zero milliseconds. You know, I doubt it,
    but
  • 14:17 - 14:20
    somebody for sure did get a response in 3000
  • 14:20 - 14:23
    milliseconds, and that's a really long time
    to wait
  • 14:23 - 14:26
    for your request to actually come in and,
    and
  • 14:26 - 14:29
    get finished. So even though somebody is getting
    really
  • 14:29 - 14:31
    fast responses, and your average isn't bad
    - your
  • 14:31 - 14:35
    average is under 250 milliseconds - one customer
    might
  • 14:35 - 14:37
    be getting a really slow response and a really
  • 14:37 - 14:39
    fast response, and, and the net is a bad
  • 14:39 - 14:40
    experience.
  • 14:40 - 14:44
    So, the net, it, it just, it's a very
  • 14:44 - 14:48
    inconsistent experience. So whenever we're
    talking about application speed,
  • 14:48 - 14:51
    we have to consider individual request speed
    and average,
  • 14:51 - 14:56
    but also consistency. How consistent is each
    request?
  • 14:56 - 14:59
    So, how do, how do we do this? What?
  • 14:59 - 15:01
    How can we, how can we help with this?
  • 15:01 - 15:03
    Well, one of the things that we launched this
  • 15:03 - 15:06
    year was PX dynos. So a PX dyno -
  • 15:06 - 15:09
    a typical dyno only has 512 megabytes of RAM.
  • 15:09 - 15:13
    It's a shared infrastructure. A PX Dyna has
    six
  • 15:13 - 15:16
    gigabytes of RAM and eight CPU cores, which
    is
  • 15:16 - 15:18
    a little, a little nicer, a little better,
    a
  • 15:18 - 15:20
    little bit more room to play.
  • 15:20 - 15:24
    And, and it's also real hardware. So, or,
    it's,
  • 15:24 - 15:29
    it's not on the same shared infrastructure.
    So you
  • 15:29 - 15:33
    can, you can scale with Dynos, you, you can
  • 15:33 - 15:35
    also scale inside of Dynos. And that's kind
    of
  • 15:35 - 15:37
    two, two important parts that we're gonna,
    gonna have
  • 15:37 - 15:40
    to cover. So, of course, whenever you have
    more
  • 15:40 - 15:43
    requests that you can possibly process, you
    want to
  • 15:43 - 15:45
    scale up and say, I'm gonna have more Dynos.
  • 15:45 - 15:50
    But what happens if, if you're not making
    the
  • 15:50 - 15:53
    best use of everything inside of your dyno?
    Previously
  • 15:53 - 15:55
    with 512 megabytes of RAM, you could just,
    you
  • 15:55 - 15:57
    know, throw a couple Unicorn workers in there
    and
  • 15:57 - 15:59
    you're like, oh, I'm probably using most of
    this.
  • 15:59 - 16:01
    Like, if you put two unicorn workers in a
  • 16:01 - 16:03
    PX Dyno, you're not making the most use of
  • 16:03 - 16:04
    it all.
  • 16:04 - 16:08
    So, recently, I am super in love with, with
  • 16:08 - 16:13
    Puma. Evan, this is Evan Phoenix's web server
    that
  • 16:13 - 16:15
    was originally written at, to kind of show
    case
  • 16:15 - 16:18
    Rubinius. Guess what? It's really nice with
    MRI as
  • 16:18 - 16:22
    well. Recently we've gotten some Puma docs,
    and so
  • 16:22 - 16:24
    I'm gonna talk about Puma for just, for just
  • 16:24 - 16:25
    a little bit.
  • 16:25 - 16:30
    So, if you're, if you're not familiar. I was
  • 16:30 - 16:35
    totally off on the formatting. So, Puma handles
    requests
  • 16:35 - 16:38
    by running multiple processes, or by multiple
    threads. And
  • 16:38 - 16:40
    it can actually run in something called a
    hybrid
  • 16:40 - 16:43
    mode, where each process has multiple threads.
    We, we
  • 16:43 - 16:46
    recommend this, or I recommend this. E- if
    one
  • 16:46 - 16:48
    of your processes crash, it doesn't crash
    your entire
  • 16:48 - 16:51
    web server. It's kind of nice.
  • 16:51 - 16:54
    And so the multiple processes is something
    that we're
  • 16:54 - 16:58
    pretty familiar with. As Rubyists, we're familiar
    with forking
  • 16:58 - 17:02
    processes. We're familiar with Unicorn. But
    the, the, the
  • 17:02 - 17:04
    multiple threads is a little bit different.
  • 17:04 - 17:07
    Even with MRI, even with a, something like
    a
  • 17:07 - 17:10
    global interpreter lock, you are still doing
    enough IO,
  • 17:10 - 17:14
    you're still hitting your database frequently
    enough, maybe making
  • 17:14 - 17:18
    API calls to like, Facebook or GitHub status,
    being
  • 17:18 - 17:20
    like, hey, are you still up?
  • 17:20 - 17:23
    And, and this will give our threads time to
  • 17:23 - 17:25
    kind of jump around and allow others to do
  • 17:25 - 17:27
    work. So you, you can get quite an extra
  • 17:27 - 17:29
    bit of performance with, there.
  • 17:29 - 17:31
    So, we're actually gonna be using Puma to
    scale
  • 17:31 - 17:33
    up inside of our Dynos. So once we give
  • 17:33 - 17:36
    you that eight gigs of RAM, we want to
  • 17:36 - 17:38
    make sure that, that you can, you can, you
  • 17:38 - 17:40
    can make the most use out of it.
  • 17:40 - 17:44
    In general, with Puma, more processes means
    more RAM,
  • 17:44 - 17:47
    and more threads are gonna need more CPU consumption.
  • 17:47 - 17:50
    So, you want to, you want to maximize your
  • 17:50 - 17:53
    processes and maximize your threads, kind
    of without going
  • 17:53 - 17:54
    over. As soon as you start swapping, as soon
  • 17:54 - 17:57
    as you go over that RAM limit, your app's
  • 17:57 - 17:59
    gonna be really slow and that kind of defeats
  • 17:59 - 18:03
    the purpose of trying to add these resources.
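A starting-point config in the spirit of what's being described (the numbers are placeholder defaults to tune against your actual RAM and CPU):

```ruby
# config/puma.rb -- hybrid mode: several forked workers, each with a thread pool.
workers Integer(ENV["WEB_CONCURRENCY"] || 3)   # more processes -> more RAM used

threads_count = Integer(ENV["MAX_THREADS"] || 5)
threads threads_count, threads_count           # more threads -> more CPU used

preload_app!   # load the app before forking so workers share memory pages
```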
  • 18:03 - 18:06
    Another issue is that I had kind of never
  • 18:06 - 18:08
    heard of until I started looking into all
    of
  • 18:08 - 18:11
    these multiple web servers, is slow clients.
    So if somebody's
  • 18:11 - 18:14
    connecting to your web site via, like, 2G
  • 18:14 - 18:18
    on, like, a Nokia candy bar phone, uploading
  • 18:18 - 18:20
    like photos or something like that, like that
    is
  • 18:20 - 18:22
    a slow client, and if you're using something
    like
  • 18:22 - 18:27
    Unicorn, it can DDoS your, your site, because
    each
  • 18:27 - 18:30
    one of those requests takes up an entire Unicorn
  • 18:30 - 18:33
    worker, whereas Puma has a, has a buffer,
    and
  • 18:33 - 18:36
    it buffers those requests, similar to the
    way
  • 18:36 - 18:38
    nginx does.
  • 18:38 - 18:41
    One other thing to consider with Puma is,
    so
  • 18:41 - 18:44
    I'm mentioning threads, I'm talking, talking
    about threads. Ruby,
  • 18:44 - 18:49
    we're not necessarily known as the most thread-safe
    culture.
  • 18:49 - 18:52
    Thread-safe community. And so a lot of apps
    just
  • 18:52 - 18:55
    aren't thread-safe. And so some, you might
    take a
  • 18:55 - 18:56
    look at Puma and be like, hey, that's not
  • 18:56 - 18:59
    for me. You can always set your maximum threads
  • 18:59 - 19:01
    to one, and then now you're behaving just
    like
  • 19:01 - 19:05
    Unicorn, except you have the slow-client protection,
    and whenever
  • 19:05 - 19:08
    you get that gem that's bad or you, like,
  • 19:08 - 19:13
    stop mutating your constants at runtime or
    something, then
  • 19:13 - 19:16
    you can maybe bump up and try multiple threads.
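Dialing the same config down for a not-yet-thread-safe app, as described:

```ruby
# config/puma.rb -- one thread per worker behaves like Unicorn's process
# model, but keeps Puma's slow-client request buffering.
workers 3
threads 1, 1
```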
  • 19:16 - 19:19
    OK. So, I'm, I'm talking about consistency
    and I'm
  • 19:19 - 19:21
    talking a lot about Puma. How does that all
  • 19:21 - 19:25
    kind of boil down and help? So, does anybody
  • 19:25 - 19:30
    think that sharing distributed state across
    multiple machines is
  • 19:30 - 19:37
    like really fast? Maybe. I, OK. Good.
  • 19:37 - 19:39
    What about sharing state inside of memory
    on the
  • 19:39 - 19:45
    same machine? Is that faster? OK. All right.
    I
  • 19:45 - 19:48
    think we're in, in agreement. So, a, a little
  • 19:48 - 19:50
    bit of a point of controversy. You might have
  • 19:50 - 19:54
    heard of the, the Heroku router at some point
  • 19:54 - 19:58
    in time. And this, the router is actually
    designed,
  • 19:58 - 20:01
    not randomly, but it is, it is designed to
  • 20:01 - 20:04
    use a random algorithm. And it basically will
    try
  • 20:04 - 20:07
    to deliver requests as fast as humanly possible,
    or
  • 20:07 - 20:11
    computerly possible, to individual dynos.
    So it's like, it
  • 20:11 - 20:13
    gets the request. It wants to get it to
  • 20:13 - 20:15
    your dyno as fast as it possibly can.
  • 20:15 - 20:18
    And adding any sort of additional overhead
    of distributed
  • 20:18 - 20:24
    locks or queues is gonna be slowing that down.
  • 20:24 - 20:27
    Once inside of your, in your, your process,
    Puma
  • 20:27 - 20:31
    or Unicorn has, in memory, state of all of
  • 20:31 - 20:33
    its own processes, and is capable of saying,
    oh,
  • 20:33 - 20:35
    hey, this process is busy. This process is
    not
  • 20:35 - 20:40
    busy. I can do really intelligent routing
    and, and
  • 20:40 - 20:44
    basically, for free.
  • 20:44 - 20:46
    It's really fast. It, it took a little bit
  • 20:46 - 20:50
    of convincing for me. So does anybody else
    need
  • 20:50 - 20:50
    to be convinced?
  • 20:50 - 20:52
    AUDIENCE: Yeah.
  • 20:52 - 20:55
    R.S.: OK. Good. Cause otherwise I could totally
    just
  • 20:55 - 20:57
    skip over the next section of slides.
  • 20:57 - 21:01
    So, this is, this is a graph produced by
  • 21:01 - 21:06
    the fine developers over at, at RapGenius,
    and on
  • 21:06 - 21:08
    one side we will actually see a percentage
    of
  • 21:08 - 21:11
    requests queued, and on the bottom we are
    gonna
  • 21:11 - 21:13
    be seeing number of dynos. So the goal is
  • 21:13 - 21:16
    actually to minimize request queuing, like
    this, this is
  • 21:16 - 21:18
    time that your customers are waiting that
    you're not
  • 21:18 - 21:20
    actually doing anything.
  • 21:20 - 21:22
    You, so you, you want to minimize that queuing
  • 21:22 - 21:24
    with the smallest number of resources, so
    the smallest
  • 21:24 - 21:28
    number of dynos. This top line, we actually
    have
  • 21:28 - 21:31
    is what we've currently got now. The random
    routing
  • 21:31 - 21:35
    with a, a single-threaded server. And, like,
    this is
  • 21:35 - 21:37
    pretty bad. It, like, starts out bad and it,
  • 21:37 - 21:39
    like, it doesn't even like trend towards zero.
    So
  • 21:39 - 21:41
    this is probably bad. So this is using something
  • 21:41 - 21:44
    like WEBrick in production.
  • 21:44 - 21:48
    So. Don't use WEBrick in production. Or, or
    like
  • 21:48 - 21:55
    even, even thin. In single-threaded mode.
    So, on the,
  • 21:55 - 21:58
    on the very bottom, we actually have a, like,
  • 21:58 - 22:01
    mythological, like, if, if we could do all
    of
  • 22:01 - 22:05
    that distributed shared state without, and
    locks and queues,
  • 22:05 - 22:08
    without having any kind of overhead, we can
    see
  • 22:08 - 22:11
    that, basically, it just drops down to zero
    at
  • 22:11 - 22:14
    about, you know, in their case, about seventy-five
    dynos,
  • 22:14 - 22:16
    and then just, you know, it's straight zero.
    There's
  • 22:16 - 22:17
    no queuing.
  • 22:17 - 22:19
    And it's great. And this would be amazing
    if
  • 22:19 - 22:22
    we could have it. But unfortunately there
    is a
  • 22:22 - 22:25
    little bit of overhead. What was really
    interesting to
  • 22:25 - 22:28
    me is this second one, which is not nearly
  • 22:28 - 22:31
    as nice as that mythological intelligent router,
    but it's
  • 22:31 - 22:34
    kind of not too far off. This is still
  • 22:34 - 22:37
    our random routing, and, and this was actually
    done
  • 22:37 - 22:42
    with Unicorn, and workers set to two. So basically,
  • 22:42 - 22:44
    once we get the, the request to your operating
  • 22:44 - 22:46
    system, it's like one of those two workers
    is
  • 22:46 - 22:49
    free and can immediately start working on
    it.
  • 22:49 - 22:51
    Some, some interesting things known about
    this is, for
  • 22:51 - 22:54
    the non-optimal case, for the, we basically
    don't have
  • 22:54 - 22:56
    enough dynos to handle this, so that might
    happen
  • 22:56 - 22:59
    is, you know, you got on Hacker News or
  • 22:59 - 23:06
    whatever, Slashdotted, Reddited. Snapchatted.
    Secreted. I don't know.
  • 23:08 - 23:12
    And it does actually, eventually approach
    ideal state. So,
  • 23:12 - 23:15
    it, it gets even better, and unfortunately
    they kind
  • 23:15 - 23:18
    of stopped at, at two processes, but it gets
  • 23:18 - 23:20
    better, the more concurrency that you add.
    So if
  • 23:20 - 23:23
    you had three or four workers or, again, if
  • 23:23 - 23:25
    you're using something like Puma, and each
    one of
  • 23:25 - 23:28
    those workers is running, like, four threads,
    now you
  • 23:28 - 23:30
    have, like, a massive amount of concurrency
    that you
  • 23:30 - 23:31
    could deal with all of these requests coming
    in.
  • 23:31 - 23:36
    So, the, the, the, if, again, and we're looking
  • 23:36 - 23:38
    for consistency. We want that request to get
    to
  • 23:38 - 23:41
    our dyno, immediately be able to process it.
    So,
  • 23:41 - 23:44
    you can use Puma or Unicorn to maximize that
  • 23:44 - 23:50
    worker number, and, again, distributed routing is
  • 23:50 - 23:54
    slow. In-memory routing is relatively quick.
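    A minimal Puma config along these lines gives each dyno several
    concurrent slots to absorb randomly routed requests. This is a sketch,
    not the speakers' config; the worker and thread counts are illustrative,
    and WEB_CONCURRENCY / MAX_THREADS are just conventional Heroku env vars.

```ruby
# config/puma.rb -- a sketch; tune the numbers to your app and dyno size.
workers Integer(ENV["WEB_CONCURRENCY"] || 2)   # forked worker processes
threads_count = Integer(ENV["MAX_THREADS"] || 4)
threads threads_count, threads_count           # threads per worker

preload_app!  # load the app before forking so Copy on Write can share memory

on_worker_boot do
  # Re-establish per-process connections (e.g. ActiveRecord) after fork.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end
```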
  • 23:54 - 24:00
    On, again, just in, in the whole context of
  • 24:00 - 24:05
    speed, Ruby 2.0 came out and this was awhile
  • 24:05 - 24:07
    ago. Its GC is optimized for Copy
    on
  • 24:07 - 24:12
    Write. In, in Ruby, extra processes, process
    forks actually
  • 24:12 - 24:14
    become cheaper. So the first process might
    take seventy
  • 24:14 - 24:17
    megabytes, the second one twenty, and ten
    and seven
  • 24:17 - 24:20
    and six. So, if you get a larger box,
  • 24:20 - 24:23
    you can actually run more processes on them.
    If
  • 24:23 - 24:24
    you get eight gigs on one box, you can
  • 24:24 - 24:27
    run more processes than you can if you had
  • 24:27 - 24:29
    eight gigs across eight boxes.
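    The Copy on Write point can be sketched in plain Ruby (a toy
    illustration, not a benchmark): a forked child starts out sharing the
    parent's memory pages, and only pages it writes to get copied, which is
    why each additional worker costs less than the first.

```ruby
# A sketch of why forked workers are cheap under a CoW-friendly GC:
# the child reads the parent's data without copying those pages.
big = Array.new(100_000) { |i| "item-#{i}" }  # allocated once, in the parent

pid = fork do
  # The child sees `big` via shared pages; reading it copies nothing.
  exit!(big.size == 100_000 ? 0 : 1)
end
Process.wait(pid)
puts $?.success?  # => true
```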
  • 24:29 - 24:33
    So, again, more processes mean more concurrency,
    and more
  • 24:33 - 24:39
    concurrency means consistency. If you are
    using workers, you
  • 24:39 - 24:43
    can, you can also scale out with resque-pool,
  • 24:43 - 24:46
    and if your application's still slow, we rolled
    out
  • 24:46 - 24:48
    a couple of really neat platform features.
    One of
  • 24:48 - 24:52
    them is, is called HTTP Request ID. So as
  • 24:52 - 24:55
    a request comes into our system, we will actually
  • 24:55 - 24:57
    give it a uuid, and you can see this
  • 24:57 - 25:00
    in your router log. And then we've got documentation
  • 25:00 - 25:03
    on how to configure your Rails app so it
  • 25:03 - 25:05
    will actually pick this up and use that uuid
  • 25:05 - 25:07
    in tagged logs.
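    Per the Heroku docs, wiring that up in a Rails app is one line (the
    router sends the id as the X-Request-ID header, which Rails exposes as
    :request_id):

```ruby
# config/environments/production.rb -- tag every Rails log line with the
# request id the Heroku router assigned.
Rails.application.configure do
  config.log_tags = [:request_id]
end
```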
  • 25:07 - 25:10
    So, like, how is this useful? So, if you
  • 25:10 - 25:12
    are getting, like, an out of memory error,
    or
  • 25:12 - 25:14
    if your request is taking a really long time
  • 25:14 - 25:16
    and you're like, ah, like, that request is
    timing
  • 25:16 - 25:19
    out and, you know, Heroku's returning a response
    and
  • 25:19 - 25:22
    we don't even know why. Now, if the request
  • 25:22 - 25:24
    id is tagged, you can actually follow along
    between
  • 25:24 - 25:26
    your two logs and be like, oh, it's hitting
  • 25:26 - 25:29
    that controller action. Maybe I should be
    sending that
  • 25:29 - 25:31
    email in the background as opposed to having
    to
  • 25:31 - 25:34
    actually block on it. So you can trace specific
  • 25:34 - 25:35
    errors.
  • 25:35 - 25:38
    We also launched Log Runtime Metrics awhile
    ago, and
  • 25:38 - 25:43
    this is something that we'll actually put
    your, your
  • 25:43 - 25:45
    runtime information directly into your logs.
    You can check
  • 25:45 - 25:50
    it out. Librato will automatically pick it
    up for
  • 25:50 - 25:52
    you and make you these, these really nice
    graphs.
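    Enabling it is a single labs flag (the app name here is a placeholder):

```shell
# Turn on per-dyno runtime metrics (memory, CPU load) in the log stream.
heroku labs:enable log-runtime-metrics -a your-app-name
heroku restart -a your-app-name   # takes effect after a dyno restart
```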
  • 25:52 - 25:55
    And, again, if you're doing something like
    Unicorn or,
  • 25:55 - 25:57
    or Puma, then you want to get as close
  • 25:57 - 26:01
    to your RAM limit without actually going over.
  • 26:01 - 26:05
    OK. So, the, the next act in, in our
  • 26:05 - 26:10
    play, again, introducing Terence, is, we'll
    be talking about
  • 26:10 - 26:12
    Ruby on the Heroku stack and in the community.
  • 26:12 - 26:16
    T.L.: Thank you. So I know we're at RailsConf,
  • 26:16 - 26:18
    but I've been doing a bunch of work with
  • 26:18 - 26:21
    Ruby, so I wanted to talk about some Ruby
  • 26:21 - 26:25
    stuff. So who here is actually using Ruby
    1.8.7?
  • 26:25 - 26:32
    Wow. No one. That's pretty awesome. Oh, wait.
    One
  • 26:32 - 26:35
    person. You should probably get off of it.
  • 26:35 - 26:36
    [laughter]
  • 26:36 - 26:41
    But. Who is using Ruby 1.9.2? A few more
  • 26:41 - 26:48
    people. 1.9.3? Good amount of people here.
  • 26:49 - 26:51
    So, I don't know if you guys were following
  • 26:51 - 26:55
    along, but Ruby 1.8.7 and 1.9.2 got end-of-lifed
    at
  • 26:55 - 26:59
    one point. And then there was a security incident,
  • 26:59 - 27:04
    and Zachary Scott and I have volunteered to
    maintain
  • 27:04 - 27:07
    security patches till the end of June. So,
    if
  • 27:07 - 27:13
    you are on 1.8.7 and 1.9.2, I would recommend
  • 27:13 - 27:17
    hopefully getting off some time soon, unless
    you don't
  • 27:17 - 27:19
    care about security or want to back port your
  • 27:19 - 27:22
    own patches.
  • 27:22 - 27:25
    And then Ru- we recently announced that Ruby
    1.9.3
  • 27:25 - 27:28
    is also getting an end of life in February
  • 27:28 - 27:32
    2015, which is coming up relatively quickly.
    It's a
  • 27:32 - 27:34
    little less than a year away now, at this
  • 27:34 - 27:39
    point. So, please upgrade to at least 2.0.0
    or
  • 27:39 - 27:40
    later.
  • 27:40 - 27:44
    And, during this past Rails Standard Year,
    we also
  • 27:44 - 27:46
    moved the default Ruby on Heroku from 1.9.2
    to
  • 27:46 - 27:50
    2 dot 0 dot 0. We believe people should
  • 27:50 - 27:52
    be at least using this version of Ruby or
  • 27:52 - 27:54
    higher.
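    On Heroku that upgrade is a one-line Gemfile change, since Bundler's
    ruby directive is what the buildpack reads (the versions shown are just
    examples):

```ruby
# Gemfile -- the `ruby` directive tells both Bundler and Heroku
# which interpreter to install.
source "https://rubygems.org"

ruby "2.1.1"

gem "rails", "~> 4.1"
```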
  • 27:54 - 27:57
    And, if you don't know yet, you can declare
  • 27:57 - 28:00
    your Ruby version in the gemfile on Heroku
    to
  • 28:00 - 28:05
    get that version. And, we've also are pretty,
    pretty
  • 28:05 - 28:09
    serious about supporting the latest versions
    of Ruby. Basically,
  • 28:09 - 28:10
    the same day that they come out. So we
  • 28:10 - 28:15
    did this for 2.0.0, 2.1.0 and 2.1.1, and in
  • 28:15 - 28:18
    addition we also try to support any of the
  • 28:18 - 28:22
    preview releases, whenever they get a formal
    release, so
  • 28:22 - 28:25
    we can, as a community, help, help find bugs
  • 28:25 - 28:28
    and test things like, put your staging app
    on
  • 28:28 - 28:31
    new versions of Ruby. If you find bugs then,
  • 28:31 - 28:33
    hopefully, we can fix them before they actually
    make
  • 28:33 - 28:36
    it to the final release.
  • 28:36 - 28:40
    And, with regards to security patches, if
    there are
  • 28:40 - 28:42
    any security releases that come out, we make
    sure
  • 28:42 - 28:45
    to release them that day as well. We take
  • 28:45 - 28:47
    security pretty seriously.
  • 28:47 - 28:51
    So, once a security patch has been released and
    we've
  • 28:51 - 28:54
    patched those Rubies, you have to push your
    app
  • 28:54 - 28:57
    again to get that release. And the reason
    we,
  • 28:57 - 28:59
    well, a lot of people ask us, like, why
  • 28:59 - 29:02
    we don't do that. Why we don't just automatically
  • 29:02 - 29:05
    upgrade peoples' Rubies in place. And the
    reason it,
  • 29:05 - 29:08
    the reasoning here is that, not, there might
    be
  • 29:08 - 29:10
    a regression to the security patch, or maybe
    the
  • 29:10 - 29:13
    patch level is not 100% backwards compatible.
    There's a
  • 29:13 - 29:16
    bug that slipped through. But you probably
    want to
  • 29:16 - 29:19
    be there when you're actually deploying your
    application, in
  • 29:19 - 29:21
    case something does go wrong.
  • 29:21 - 29:23
    You probably wouldn't want us to deploy something
    and
  • 29:23 - 29:25
    then have your site go down and then you're
  • 29:25 - 29:27
    like, not at your computer at all. You're
    at
  • 29:27 - 29:29
    dinner somewhere and it's like super inconvenient
    to get
  • 29:29 - 29:31
    paged there.
  • 29:31 - 29:35
    So, we publish all of this information, all
    of
  • 29:35 - 29:37
    the updates to the platform, but also all
    of
  • 29:37 - 29:41
    the Ruby updates, including security updates
    to the devcenter
  • 29:41 - 29:45
    changelogs. So, if you don't, this is, I think,
  • 29:45 - 29:48
    devcenter dot heroku dot com slash changelog.
    And if
  • 29:48 - 29:51
    you don't subscribe to it, I would recommend
    subscribing
  • 29:51 - 29:54
    to it just to keep up to date with
  • 29:54 - 29:57
    what is happening on Heroku for platform changes
    in
  • 29:57 - 30:03
    addition to updates to Ruby specifically on
    Heroku. And,
  • 30:03 - 30:05
    there isn't too much traffic. Like, you won't
    get,
  • 30:05 - 30:07
    like, a hundred emails a day. So, I highly
  • 30:07 - 30:10
    recommend subscribing to this to just keep
    up to
  • 30:10 - 30:14
    date with things like that here.
  • 30:14 - 30:15
    So the next thing I would like to talk
  • 30:15 - 30:19
    about is Matz's Ruby team. So if you didn't
  • 30:19 - 30:22
    know, back in 2012 we hired three people from
  • 30:22 - 30:27
    Ruby core. We hired Matz himself, Koichi and
    Nobu.
  • 30:27 - 30:30
    And, as I've gone around over the last years
  • 30:30 - 30:32
    talking and interacting with people, I realize
    a lot
  • 30:32 - 30:36
    of people have no idea who, besides Matz,
    who
  • 30:36 - 30:37
    Koichi and Nobu are.
  • 30:37 - 30:38
    So I wanted to take the time to kind
  • 30:38 - 30:42
    of update people on who these people were
    and
  • 30:42 - 30:44
    kind of what they've actually, like, we've
    been paying
  • 30:44 - 30:47
    them money, and what they've actually been
    doing to
  • 30:47 - 30:51
    kind of move Ruby forward in a positive direction.
  • 30:51 - 30:55
    So, if you run a git log, since 2012,
  • 30:55 - 30:57
    since we've hired them, you can see the number
  • 30:57 - 31:02
    of commits they've made to Ruby itself. So,
    the,
  • 31:02 - 31:05
    so Nobu here, who we've hired, has basically
    more
  • 31:05 - 31:08
    commits than like the second guy by many,
    many
  • 31:08 - 31:14
    commits. And then, Koichi's the third highest
    committer as
  • 31:14 - 31:16
    well.
  • 31:16 - 31:18
    And you're probably wondering why I have six
    names
  • 31:18 - 31:21
    here on a list for the top five. And
  • 31:21 - 31:23
    so there is this, so there is actually someone
    on
  • 31:23 - 31:26
    the Ruby Core Team who has the handle svn.
  • 31:26 - 31:29
    It's not actually a person. So I found out,
  • 31:29 - 31:33
    the hard way, who this person was. So when
  • 31:33 - 31:35
    I made my first patch to Ruby after being
  • 31:35 - 31:40
    on core, I found out that if, all the
  • 31:40 - 31:43
    date information is done in JST, and I, of
  • 31:43 - 31:46
    course, did not know that, but, like, scumbag
  • 31:46 - 31:49
    American dates. And so there's basically this
    bot that
  • 31:49 - 31:51
    will go through and, like, fix your commits
    for
  • 31:51 - 31:54
    you, and like, so he does like another commit,
  • 31:54 - 31:56
    and it's like, ah, you actually put the wrong
  • 31:56 - 31:57
    date. Let me fix that for you.
  • 31:57 - 32:00
    So there's like 710 of those commits. I think
  • 32:00 - 32:03
    I did this like a month ago. So these
  • 32:03 - 32:06
    are the number of commits from a month ago.
  • 32:06 - 32:08
    So, the first person I like to talk about
  • 32:08 - 32:13
    is Nobuyoshi Nakada. Also known as Nobu. And
    he's
  • 32:13 - 32:15
    known, I think on Ruby Core as The Patch
  • 32:15 - 32:22
    Monster. So, we'll go into why he's known
    by
  • 32:22 - 32:22
    this.
  • 32:22 - 32:26
    So, what do you think the result of Time
  • 32:26 - 32:31
    dot now equals empty string? I'm sure you
    thought
  • 32:31 - 32:36
    it was an infinite loop, right. Or using the
  • 32:36 - 32:40
    rational, like, so if you're using the rational
    number
  • 32:40 - 32:42
    library in standard lib, like, what do you
    think
  • 32:42 - 32:45
    the result of doing this operation?
  • 32:45 - 32:46
    AUDIENCE: Segfault.
  • 32:46 - 32:48
    T.L.: Yeah. So this is a segfault.
  • 32:48 - 32:50
    AUDIENCE: [indecipherable - 00:32:40]
  • 32:50 - 32:55
    T.L.: Thank you. Thank you for reporting the
    bug.
  • 32:55 - 32:57
    So these, so Eric Hodel actually reported
    the other
  • 32:57 - 32:59
    bug, the time thing, and he found this in
  • 32:59 - 33:02
    RubyGems I believe. But these are real issues
    that
  • 33:02 - 33:05
    are in Ruby itself. So if you actually run
  • 33:05 - 33:07
    those two things now and you're using later
    patch
  • 33:07 - 33:10
    levels, you should not see them. But, they're
    real
  • 33:10 - 33:14
    issues, and someone has to go and fix all
  • 33:14 - 33:15
    them.
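    The first of those reads as the expression below. The Rational slide
    isn't captured in the transcript, so only the Time example can be
    reconstructed here; on patched Rubies it is simply false, with no
    infinite loop:

```ruby
# Comparing a Time to a non-Time is just false on fixed Ruby versions.
p Time.now == ""  # => false
```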
  • 33:15 - 33:18
    And so the person who actually does this is
  • 33:18 - 33:22
    Nobu. And he actually gets paid full time
    and,
  • 33:22 - 33:25
    to basically do bug fixes for Ruby. So all
  • 33:25 - 33:27
    those two thousand seven hundred and some
    commits are
  • 33:27 - 33:30
    bug fixes to Ruby trunk to make Ruby run
  • 33:30 - 33:34
    better. And, I thanked him when I was just
  • 33:34 - 33:37
    in Japan last week for all the work he's
  • 33:37 - 33:40
    done. It's pretty incredible. Like, there's
    so many times
  • 33:40 - 33:42
    when things segfault and other things, and
    he's basically
  • 33:42 - 33:44
    made it better.
  • 33:44 - 33:46
    I was at Oweito, and there was actually someone
  • 33:46 - 33:49
    giving a presentation about, like, thirty
    tips of, like,
  • 33:49 - 33:52
    how to use Ruby. And someone was talking about
  • 33:52 - 33:55
    open uri, and there was code on the screen,
  • 33:55 - 33:58
    and he found a bug during, like, the guy's
  • 33:58 - 34:00
    presentation, and during it, he committed
    a patch to
  • 34:00 - 34:04
    trunk during that guy's presentation. So,
    he's pretty awesome.
  • 34:04 - 34:08
    He doesn't, he doesn't do, he hasn't done
    any
  • 34:08 - 34:10
    talks, but I think people should know about
    the
  • 34:10 - 34:12
    work he's been doing.
  • 34:12 - 34:15
    So, this last bug, actually, that I wanted
    to
  • 34:15 - 34:17
    talk about, that he fixed was, are any of
  • 34:17 - 34:20
    you familiar with the regression from, in
    Ruby 2.1.1,
  • 34:20 - 34:23
    with regard to hashes?
  • 34:23 - 34:27
    So, I'm sure you're familiar with the fact
    that,
  • 34:27 - 34:29
    if you use Ruby 2.1.1 on Rails 4.0.3, it
  • 34:29 - 34:35
    just doesn't work. Like, there, in Rails,
    we, we
  • 34:35 - 34:39
    use this, we fetch, in hashes we use objects
  • 34:39 - 34:41
    as keys. And if you override the hash and
  • 34:41 - 34:44
    eql? method, and when you fetch you won't
    get
  • 34:44 - 34:48
    the right result back. So inside of Rails
    in
  • 34:48 - 34:51
    4.0.4, they actually had to work around this
    bug.
  • 34:51 - 34:53
    And, Nobu actually was the one who fixed this
  • 34:53 - 34:55
    inside of Ruby itself.
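    The shape of the bug is easy to sketch. This is a minimal stand-in, not
    the Rails code: a key class that overrides hash and eql? should still be
    found by Hash#fetch, which is exactly what regressed in 2.1.1.

```ruby
# An object used as a hash key with custom hash/eql? semantics:
# two Keys with the same name should be the same hash key.
class Key
  attr_reader :name

  def initialize(name)
    @name = name
  end

  def hash
    name.hash
  end

  def eql?(other)
    other.is_a?(Key) && name == other.name
  end
end

h = { Key.new("a") => 1 }
p h.fetch(Key.new("a"))  # => 1 on fixed Rubies; raised KeyError on 2.1.1
```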
  • 34:55 - 34:59
    So, these were just, like, the three most
    interesting
  • 34:59 - 35:02
    bugs that I found from within the last year
  • 35:02 - 35:05
    or two of stuff he's worked on. But, if
  • 35:05 - 35:08
    you look on site Ruby Core, you can find,
  • 35:08 - 35:10
    like, hundreds and hundreds of bugs that he's
    done
  • 35:10 - 35:13
    within the last year of, just like, rip segfaults
  • 35:13 - 35:17
    and other things. So he's great to work with.
  • 35:17 - 35:19
    So the next person I want to talk about
  • 35:19 - 35:26
    is Koichi Sasada. He's also known as ko1,
    ko1.
  • 35:27 - 35:30
    And he doesn't have a nickname in Ruby Core,
  • 35:30 - 35:33
    so me and Richard spent a good amount of
  • 35:33 - 35:35
    our talk preparation trying to come up with
    a
  • 35:35 - 35:37
    nickname for him. So we came up with the
  • 35:37 - 35:39
    Performance Pro. And this is a picture of
    him
  • 35:39 - 35:43
    giving a talk in Japanese.
  • 35:43 - 35:48
    So, if you use Ruby 1.9 at all, he
  • 35:48 - 35:53
    worked on YARV. So basically the new VM stuff
  • 35:53 - 35:56
    that made Ruby 1.9, I think it was like
  • 35:56 - 36:00
    thirty percent faster than 1.8 for like longer-running
    processes.
  • 36:00 - 36:05
    More recently he's worked on the RGenGC. And
    this
  • 36:05 - 36:08
    was introduced in Ruby 2.1, and it allows
    faster
  • 36:08 - 36:13
    code execution by having, basically, shorter
    GC pauses. So
  • 36:13 - 36:15
    instead of doing full GC every time, like,
    you
  • 36:15 - 36:18
    can have these minor ones.
  • 36:18 - 36:22
    So, just, he spends all of his time thinking
  • 36:22 - 36:25
    about performance in Ruby, and that's like
    what he's
  • 36:25 - 36:29
    paid to work on. So if anyone actually cares
  • 36:29 - 36:32
    about Ruby performance, you should thank this
    guy for
  • 36:32 - 36:34
    the work he's done. If you've looked at the
  • 36:34 - 36:37
    performance of Ruby since, in the last few
    years,
  • 36:37 - 36:40
    like it's improved a lot. A lot due to
  • 36:40 - 36:41
    this guy's work.
  • 36:41 - 36:44
    And, I was just, I was talking to him,
  • 36:44 - 36:47
    and he was telling me that he basically, like,
  • 36:47 - 36:49
    when he was working on RGenGC, he like, he
  • 36:49 - 36:51
    was just like, walking around the park and
    he
  • 36:51 - 36:53
    had a breakthrough. So he like spends a lot
  • 36:53 - 36:56
    of his time, even, off of work hours, just
  • 36:56 - 36:59
    thinking about this stuff.
  • 36:59 - 37:00
    Other stuff that he's been working on as well
  • 37:00 - 37:04
    is profiling work. So, if you've used any
    of
  • 37:04 - 37:09
    the Man stuff for 2.1.1, with the MemProfiler
    and
  • 37:09 - 37:12
    other things, he's been working with him
    to
  • 37:12 - 37:15
    introduce hooks into the internal API to make
    stuff
  • 37:15 - 37:18
    like that work. So we, I think we understand
  • 37:18 - 37:20
    that profiling, being able to measure your
    application for
  • 37:20 - 37:25
    Ruby is super important. So, if you have basically
  • 37:25 - 37:29
    comments or suggestions on things that you
    need or
  • 37:29 - 37:31
    think we could improve, like
    it's
  • 37:31 - 37:34
    worth talking, reaching out and talking to
    Koichi about
  • 37:34 - 37:35
    this.
  • 37:35 - 37:37
    And some of the stuff he's been working on
  • 37:37 - 37:41
    in this vein has been, like, the gc_tracer
    gem.
  • 37:41 - 37:44
    So, using this to basically get more information
    about
  • 37:44 - 37:48
    your garbage collector, an allocation_tracer
    gem to see how
  • 37:48 - 37:51
    long live, like, objects are. And then even
    in
  • 37:51 - 37:55
    2.2, we're, as a team, we're working on, there
  • 37:55 - 37:59
    is an incremental GC patch, and then also.
    Or,
  • 37:59 - 38:01
    he's working on making the GC better with
    incremental
  • 38:01 - 38:04
    GC and there is symbol GC for security things,
  • 38:04 - 38:07
    which'll be super good for Rails. So we can't
  • 38:07 - 38:10
    get, like, DoS'd because of the symbol table
    being
  • 38:10 - 38:11
    filled up.
  • 38:11 - 38:14
    Another, so one of the things, when I was
  • 38:14 - 38:16
    in Japan, we had a Ruby Core meeting, and
  • 38:16 - 38:20
    we talked about Ruby releases. And releasing
    Ruby is
  • 38:20 - 38:24
    kind of a slow process, and I was, I
  • 38:24 - 38:27
    wasn't really sure why it took so long. And
  • 38:27 - 38:30
    so I kind of asked the question, and, and
  • 38:30 - 38:35
    Naruse, who's the release manager of 2.1,
    was
  • 38:35 - 38:38
    telling me that it requires lots of human
    and
  • 38:38 - 38:41
    machine resources. Basically, Ruby has to
    work on many
  • 38:41 - 38:46
    configurations, Linux distros, you know, on
    OS X and
  • 38:46 - 38:48
    other things. And in order to release, like,
    the
  • 38:48 - 38:50
    CI server has to pass and, like, you kind
  • 38:50 - 38:53
    of have to pass on like various vendors and
  • 38:53 - 38:54
    what not. And so like, there's a lot of
  • 38:54 - 38:59
    coordination and like checking to like make
    an actual
  • 38:59 - 39:02
    release happen. Which is why things don't
    release super
  • 39:02 - 39:02
    fast.
  • 39:02 - 39:07
    So, some of the stuff that Koichi and my
  • 39:07 - 39:08
    team and other people on Ruby Core have been
  • 39:08 - 39:11
    working on is, like, working on infrastructure
    and services
  • 39:11 - 39:14
    to help with, basically, testing of Ruby,
    to kind
  • 39:14 - 39:17
    of hopefully automate and, like, basically
    do that per,
  • 39:17 - 39:20
    either nightly or per commit or something
    along those
  • 39:20 - 39:20
    lines.
  • 39:20 - 39:23
    So hopefully we can get releases that are
    faster
  • 39:23 - 39:27
    and are out to users sooner.
  • 39:27 - 39:30
    If you have ideas for Ruby 2.2, like, I
  • 39:30 - 39:33
    would love to hear them. We have a meeting
  • 39:33 - 39:36
    next month in May, about what is gonna go
  • 39:36 - 39:39
    into Ruby 2.2. So I'd be more than happy
  • 39:39 - 39:42
    to talk to you about ideas that you have
  • 39:42 - 39:46
    that you would like to see there. I'm just
  • 39:46 - 39:48
    gonna skip this stuff since I talked about
    it
  • 39:48 - 39:50
    earlier, and we're running short on time.
    So, here's
  • 39:50 - 39:51
    Schneems to actually talk about Rails.
  • 39:51 - 39:52
    R.S.: OK.
  • 39:52 - 39:55
    Has anybody used Rails? Have we covered that
    question
  • 39:55 - 40:00
    yet? OK. Welcome to RailsConf. OK, so Rails
    4.1
  • 40:00 - 40:00
    on Heroku.
  • 40:00 - 40:03
    A lot of things in a very short amount
  • 40:03 - 40:05
    of time. We are secure by default. Have you
  • 40:05 - 40:09
    heard of the secrets dot yml file? OK. So
  • 40:09 - 40:10
    secrets dot yml file is actually reading out of
    an
  • 40:10 - 40:13
    environment variable by default, which is
    great. We love
  • 40:13 - 40:18
    environment variables. It separates your config
    from your source.
  • 40:18 - 40:21
    And, so whenever you push your app, we're
    gonna
  • 40:21 - 40:23
    set this environment variable to just, like,
    literally a
  • 40:23 - 40:26
    random value. And if, for some reason, you
    ever
  • 40:26 - 40:29
    need to like change that, you can do so
  • 40:29 - 40:33
    by just setting your, the, the secret key
    base
  • 40:33 - 40:35
    environment variable to, to whatever you want.
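    The generated file is tiny. This mirrors the Rails 4.1 default shape
    (the development/test values are placeholders for the keys `rails new`
    generates); only production reads from the environment:

```yaml
# config/secrets.yml -- the Rails 4.1 default shape.
development:
  secret_key_base: # long hex string generated by `rails new`
test:
  secret_key_base: # long hex string generated by `rails new`
production:
  secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>
```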
  • 40:35 - 40:37
    Maybe, you know, like another OpenSSL bug
    comes out
  • 40:37 - 40:42
    or something. So, another thing that was worked
    on
  • 40:42 - 40:46
    a bunch is the DATABASE_URL environment variable.
    This is
  • 40:46 - 40:47
    something that we have spent a lot of time
  • 40:47 - 40:49
    looking at. And it's actually, support has
    been in
  • 40:49 - 40:52
    Rails for a surprisingly large amount of time,
    to
  • 40:52 - 40:55
    just read from the environment variable, but
    never quite
  • 40:55 - 40:58
    worked due to some edge cases and random rake
  • 40:58 - 41:01
    tasks and so on and so forth. So this,
  • 41:01 - 41:04
    this December, around Christmas time, I spent
    a lot
  • 41:04 - 41:05
    of time getting that to work.
  • 41:05 - 41:09
    So I'd like to happily announce that Rails
    4,
  • 41:09 - 41:13
    4.1 actually does support the DATABASE_URL
    environment variable out
  • 41:13 - 41:18
    of the box. Whoo! And, so, some, to describe
  • 41:18 - 41:23
    a little, like, the behavior bears going
    over.
  • 41:23 - 41:25
    If DATABASE_URL is present, we're just
    gonna connect
  • 41:25 - 41:28
    to that database. It's, that's pretty simple.
    Makes sense.
  • 41:28 - 41:31
    If the database.yml is present but there's
    no environment
  • 41:31 - 41:33
    variable, then we're gonna use that. That
    also just
  • 41:33 - 41:35
    kind of makes sense.
  • 41:35 - 41:37
    If both are present, then we're gonna merge
    the
  • 41:37 - 41:42
    values. Makes sense, right? OK.
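    Concretely, in Rails 4.1 you can let the connection details come from
    DATABASE_URL and keep only ActiveRecord tuning in database.yml; the two
    get merged. The values below are illustrative, not recommendations:

```yaml
# config/database.yml -- connection details come from DATABASE_URL;
# these ActiveRecord settings are merged on top of it in Rails 4.1.
production:
  pool: 15                    # connection pool size
  prepared_statements: false  # e.g. when pooling through pgbouncer
```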
  • 41:42 - 41:47
    So, we, that sounds crazy. Bear with me. But,
  • 41:47 - 41:48
    a lot of people, you, you want to put
  • 41:48 - 41:52
    your connection information in your DATABASE_URL
    environment variable. But,
  • 41:52 - 41:55
    there's also other values you can use inside
    of
  • 41:55 - 41:59
    your database.yml file to configure ActiveRecord
    itself. Not your
  • 41:59 - 42:01
    database. So you can turn off and on prepared
  • 42:01 - 42:03
    statements. You can change your pool size.
    All this
  • 42:03 - 42:05
    kind of thing.
  • 42:05 - 42:07
    And, we wanted to still enable you to be
  • 42:07 - 42:10
    able, able to do this. So the, the results
  • 42:10 - 42:15
    are actually merged, and for, for somebody
    like Heroku
  • 42:15 - 42:19
    or, like, if you're using another container,
    we don't
  • 42:19 - 42:21
    have to have as much magic. If you, if
  • 42:21 - 42:24
    you didn't know, with DATABASE_URL we actually
    had to,
  • 42:24 - 42:26
    whatever your database.yml was, we were just
    writing a
  • 42:26 - 42:29
    file over top of it. And it's like, forget
  • 42:29 - 42:30
    that. We're gonna write a custom file.
  • 42:30 - 42:33
    So people would put stuff in their database_url,
    or
  • 42:33 - 42:35
    their database.yml file, and they'd be surprised
    when it
  • 42:35 - 42:37
    wasn't there. Like, a different file was there.
    So,
  • 42:37 - 42:39
    we no longer, we no longer have to do
  • 42:39 - 42:41
    that. And Rails plays a little bit nicer with,
  • 42:41 - 42:45
    with this containerized style environment.
  • 42:45 - 42:48
    It also means that, you could actually start
    putting
  • 42:48 - 42:52
    your ActiveRecord configuration in that file.
    Another note, if
  • 42:52 - 42:56
    you were manually setting that, your pool
    size or
  • 42:56 - 42:59
    any of those things via a, after reading an
  • 42:59 - 43:02
    article on our devcenter, go back and revisit
    that
  • 43:02 - 43:05
    please, before upgrading to Rails 4.1. Some
    of the
  • 43:05 - 43:08
    syntax did change between Rails 4.0 and 4.1.
    So,
  • 43:08 - 43:11
    if you can't connect to a database, then maybe,
  • 43:11 - 43:13
    just like, email Schneems and be like, I hate
  • 43:13 - 43:14
    you. What's the link to that thing? And I'll,
  • 43:14 - 43:16
    I'll help you out.
  • 43:16 - 43:21
    OK. I think, probably, actually, the last
    thing that
  • 43:21 - 43:25
    we have time for, is asset pipeline. Who,
    like,
  • 43:25 - 43:29
    if asked in an interview, would say that their
  • 43:29 - 43:32
    favorite thing in the whole world is Rails
    asset
  • 43:32 - 43:35
    pipeline? Oh. Oh.
  • 43:35 - 43:37
    AUDIENCE: Just Raphael.
  • 43:37 - 43:39
    R.S.: Just Raphael. We have a bunch of, like,
  • 43:39 - 43:42
    Rails Core here, by the way. So you should,
  • 43:42 - 43:44
    you should come and thank them afterwards.
    For, for
  • 43:44 - 43:46
    other things. Not for the asset pipeline.
  • 43:46 - 43:47
    [laughter]
  • 43:47 - 43:50
    So, the asset pipeline is the number one source
  • 43:50 - 43:53
    of, of Ruby support tickets at Heroku. Just
    people
  • 43:53 - 43:55
    being like, hey, this worked locally, and
    like, didn't
  • 43:55 - 43:58
    work in production. And we're like, yeah,
    that's just
  • 43:58 - 44:01
    how asset pipeline works. That's not Heroku.
  • 44:01 - 44:05
    So, so Rails 4.1 added, added a couple things.
  • 44:05 - 44:08
    It's gonna warn you in development if you're
    doing
  • 44:08 - 44:11
    something that's gonna break production. Like,
    if you've ever
  • 44:11 - 44:14
    forgotten to add something to your precompile
    list, well
  • 44:14 - 44:17
    now, guess what, you get an error. If you
  • 44:17 - 44:20
    are not properly declaring your asset dependencies,
    then you're
  • 44:20 - 44:22
    gonna get an error.
  • 44:22 - 44:26
    And this is even better, actually, in Rails
    4.2.
  • 44:26 - 44:28
    As some of these checks aren't even needed
    anymore,
  • 44:28 - 44:30
    we can just automatically do them for you.
    But,
  • 44:30 - 44:34
    unfortunately, those have, are not in Rails
    4.1 yet.
  • 44:34 - 44:37
    So, in general, I have a, a personal belief
  • 44:37 - 44:40
    that, in programming, or, really in life,
    the only
  • 44:40 - 44:47
    thing that should fail silently is. This.
    This joke.
  • 44:48 - 44:54
    So. Thank you all very much for, for coming.
  • 44:54 - 45:00
    We, we have a booth, and later on, what.
  • 45:00 - 45:01
    What time, three o' clock?
  • 45:01 - 45:02
    T.L.: Between 3:00 and 4:30.
  • 45:02 - 45:03
    R.S.: Yeah. From 3:00 to 4:30, we'll actually
    have
  • 45:03 - 45:10
    a bunch of Rails contributors coming to, to
    talk
  • 45:10 - 45:14
    about. Oh yeah, the slides. Yeah. Yeah.
  • 45:14 - 45:17
    T.L.: Yeah. 3:00 to 4:30, we'll have community
    office
  • 45:17 - 45:21
    hours with some nice people from Rails Core,
    contrib.
  • 45:21 - 45:23
    R.S.: Yeah. So come ask.
  • 45:23 - 45:27
    T.L.: Basically any Rails questions or anything
    you want.
  • 45:27 - 45:28
    And then Schneeman will actually be doing
    a book
  • 45:28 - 45:31
    signing of his Heroku Up & Running book today
  • 45:31 - 45:34
    and tomorrow at 2:30. So if you want that.
  • 45:34 - 45:36
    R.S.: Yeah. So get a, get a free book,
  • 45:36 - 45:38
    and then come and ask questions and just,
    like,
  • 45:38 - 45:42
    hang out. And, any time you stop by the
  • 45:42 - 45:44
    booth, feel free to ask Heroku questions.
    And thank
  • 45:44 - 45:46
    you all very much for coming.
Title:
RailsConf 2014 - Heroku 2014: A Year in Review by Terence Lee & Richard Schneeman
Duration:
46:12
