RICHARD SCHNEEMAN: All right. OK. Hello everyone.

AUDIENCE: Hello.

R.S.: Thank you. Let me be the first to welcome you to RailsConf. Our talk today is Heroku 2014: A Year in Review. It's going to be a play in six acts, featuring Terence Lee and Richard Schneeman.

This is a year in review, and Heroku measures its years by RailsConf, so this is the Portland-to-Chicago RailsConf year. The Standard RailsConf Year.

As some of you might know, we are on the Ruby Task Force, and that, in fact, makes us Ruby Task Force members. This was a big year. We're going to talk a little bit about app performance, some Heroku features, and community features.

First up to the stage, I'm going to introduce the one, the only Mister Terence Lee. You might have recognized him in some other roles. He hails from Austin, Texas, which has, undoubtedly, the best tacos in the entire world.

AUDIENCE: [indecipherable]

R.S.: Them's fightin' words, friend. He's also sometimes known as the Chief Taco Officer, or the CTO. And something very interesting about Terence: he was recently inducted into Ruby Core, so congratulations to Terence. All right. So, without further ado, Act 1: Deploy Speed.

TERENCE LEE: Thank you, Richard. At the beginning of the Rails Standard Year, we focused a lot on deployment speed. We got a lot of feedback and realized deployment was not as fast as it could be, and we wanted to make it faster. The first thing we set out to do was a bunch of measurement and profiling, to look at where things were slow, how we could make them better, and to gauge the before and after so we would know when the good stopping points were and we could move on to other things. Because you will never be done with performance improvements.

After about six months of work on this, we managed to cut deploy speeds across the platform for Ruby by about forty percent, which is a pretty decent improvement. To do this, we mainly looked at three ways to speed things up.
The first was running code in parallel: running more than one thing at a time. Second, if you cache stuff, you don't have to do it again. And third, in general, cutting out code that doesn't need to be there.

For the parallel code, we worked with the Bundler team on Bundler 1.5. There was a pull request from Cookpad that added parallel gem installation to Bundler 1.5. If you aren't using it yet, I'd recommend upgrading your bundle to at least Bundler 1.5. Bundler added the -j option, which lets you specify the number of jobs to run. On MRI it forks that number of sub-processes, and on JRuby or Rubinius it just uses threads.

The benefit is that when you run bundle install, your dependencies get downloaded in parallel, so you're not waiting on network traffic sequentially anymore, and in addition you install gems in parallel. This is especially beneficial when you're installing native extensions. If you have something like Nokogiri, that takes a long time; often you just hang and wait for it to install before the next thing installs. Parallel install lets that build in the background while other gems install at the same time.

Also in Bundler 1.5, Richard added a feature that lets Bundler auto-retry failed commands. Before this, when bundle install failed because of some odd network timeout, you would have to re-push, no matter where you were in the build process. Now, by default, Bundler will retry git clones and gem installs up to three times, so the deploy process can keep going.
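For reference, here is a minimal sketch of what those two Bundler 1.5 features look like in use. The --jobs and --retry flags are real Bundler options; the small Ruby wrapper around them is just illustrative, roughly the shape of what a deploy script might run.

```ruby
# Illustrative only: invoking Bundler 1.5+ with parallel jobs and automatic retries.
jobs    = 4  # parallel install workers: sub-processes on MRI, threads on JRuby/Rubinius
retries = 3  # Bundler retries failed git clones and gem fetches up to this many times

system("bundle", "install",
       "--jobs", jobs.to_s,
       "--retry", retries.to_s) or abort("bundle install failed")
```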
Is anyone here actually familiar with the pigz command? So, just Richard?

pigz is parallel gzip, and the build and packaging team at Heroku worked on implementing this at Heroku using the pigz command. To understand the benefit of something like this: when you push an app to Heroku, the compile process builds these things called slugs. A slug is basically just a tar of your app directory, of everything left after the compile phase is run. Originally we were using SquashFS, then we moved to plain tar files, and we noticed that one of the slowest points in the build process was just going through and compressing everything in that directory and then pushing it up to S3. So one of the things we looked into was whether we could make that faster. If you've ever pushed a Heroku app and waited while it says "compressing" and then "done", that's the compressing of the actual slug.

We swapped in pigz to improve that. I don't remember the exact performance improvement, but it was pretty significant. The only downside is that certain slugs come out a little bit bigger, but the performance trade-off was worth it at the time.
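To make the compression step concrete, here is a small illustrative sketch of the idea, not Heroku's actual slug compiler: tar the build directory and pipe it through pigz, which compresses blocks on multiple cores but still produces an ordinary gzip stream. The paths are made up.

```ruby
# Illustrative sketch: compress an app directory with pigz instead of plain gzip.
app_dir  = "/tmp/build"    # hypothetical directory left after the compile phase
slug_out = "/tmp/slug.tgz"

# pigz compresses independent blocks on all available cores; the output is
# still readable by a normal `tar -xzf`.
system("tar -c -C #{app_dir} . | pigz > #{slug_out}") or abort("slug compression failed")
```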
The next thing we started looking into was caching. Is anyone here using Rails 4? A pretty good amount of the room. One thing we did that differs from Rails 3, thanks to a bunch of the work the Rails Core team has done with us, is that we can now cache assets between deploys. This wasn't possible in Rails 3 because the cache couldn't reliably be reused: there were times when the cache would be corrupted and you would get assets that didn't work between deploys. The fix was to remove the assets between each deploy in some Rails 3 builds, but it wasn't consistent; sometimes it would work and sometimes it didn't, and on Heroku that's not something we can rely on in an automated fashion. Luckily a lot of that has been fixed in Rails 4, so now we cache assets between deploys on Rails 4.

If we look at Rails 3 (I guess this got cut off, but it's supposed to say about thirty-two seconds for a Rails 3 deploy) and then on Rails 4: we measured the steps in the build process, and on Rails 4 the p50 was about fourteen-point-something seconds. That's a pretty significant speed improvement, due both to caching and to other improvements inside Rails 4's asset pipeline.

The other thing we looked at was code doing extra work: if we remove it, it speeds up the build process for everyone who deploys every day. One of the first things we did was stop downloading Bundler more than once. Initially, when we do the Ruby version detection, we have to download Bundler and run it to get the version of Ruby to install for the application. And then we would download and install it again, because the actual installing of your dependencies ran in a separate process. We changed that so we cache the Bundler gem and don't have to download it two or three times during the build process, cutting network IO among other things.

We also started removing duplicate checks in the detection of what kind of app you were using. bin/detect figures out what kind of app you have: a Ruby app, a Rack app, a Rails 3 app, a Rails 4 app, things like that. And then, since bin/compile ran as a separate process, we would have to do it all again. Richard did a bunch of work to refactor both detect and release, so now detect is super simple: it literally just checks whether you have a Gemfile, and all the other work is deferred to bin/compile. That means we only do those checks once, examining your Gemfile and checking what gems you have, not two or more times.
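As a rough picture of what "detect is super simple now" means, here is an illustrative Ruby sketch of a buildpack detect step that only checks for a Gemfile. The real buildpack scripts differ in their details; this just shows the shape of the check.

```ruby
#!/usr/bin/env ruby
# Illustrative sketch of a minimal detect step: answer "is this a Ruby app?"
# by checking for a Gemfile, and defer everything else to the compile step.
build_dir = ARGV[0] || "."

if File.exist?(File.join(build_dir, "Gemfile"))
  puts "Ruby"   # name reported on success
  exit 0
else
  exit 1        # non-zero exit means "not my kind of app"
end
```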
If you haven't watched it yet, Richard gave a talk at Ancient City Ruby about testing the untestable. I don't actually know if the videos are up yet, but if you're interested in learning how we test the buildpack, you should go watch that talk.

So I'd like to introduce Richard, because he's going to present the next section. Richard loves Ruby so much that he got married to her. I think he got married last year.

R.S.: Right before our last RailsConf.

T.L.: Yeah, right before our last RailsConf. I remember that. He's also on the Rails Issue Team, and he's one of the top one hundred Rails contributors, according to the Rails contributors site. You might also know him for this gem called sextant that he released for Rails 3. I remember, back in the day, developing Rails apps, when I wanted to verify routes I would run the rake routes command, and it would boot up the Rails environment, you'd wait a few seconds, and then it would print out all the routes. And if you wanted to re-run it with grep, you would keep running it again.

A lot of us, when we're doing development, already have Rails running in a server while we're testing things. What sextant does is let you look at the routes that are already in memory and query them, and it has a view for doing this. This was also merged into Rails 4, so if you're using Rails 4 or higher, you don't need the sextant gem; it's built in.

Richard and I both live in Austin, and when people come visit, or when I'm in town, which isn't often, we have Ruby meetups at Franklin Barbecue. So if you're ever in town, let us know and we'd be more than happy to take you to a meetup.

R.S.: All right. For the first part of this act, we're going to talk about app speed. But before we talk about app speed, we're actually going to talk about dimensions. Let me see. Here we go. The slides were originally written in widescreen, but the screens here are standard. There we go. So you're actually going to get to see all of the slides, as opposed to having some of them cut off. OK. On to app speed.
The first thing I want to talk about is tail latencies. Is anybody familiar with tail latencies? OK, the guys in the Heroku t-shirts and somebody else.

This is a normalized distribution. On one axis we have the number of requests; on the other, the time to respond. The further out you go, the slower it is, and this curve is the distribution of our requests. Over here it's super fast: you love to be that customer, you're super happy. Over here we have a super slow request: you don't want to be that customer, and you're pretty unhappy. Right in the middle is our average, and I'm sure they talked a ton about why the average is really misleading in the last session, with Skylight.io. But we're basically saying that roughly fifty percent of your customers, fifty percent of your traffic, is going to get a response in this time or lower. That's pretty decent: we can say fifty percent of the people who come to our web site get a response before then. Moving up the distribution, to something like the 95th percentile, we say ninety-five percent of everyone who visits will get a response by now. I'm going to be using those terms, p50 and p95; they refer to the percentage of incoming requests that we can respond to by a given time.
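Since p50 and p95 come up throughout the rest of this section, here is a tiny sketch of what they mean, with made-up response times. It's just the "sort and index" definition of a percentile, not a statistics library.

```ruby
# Toy percentile calculation over a list of response times (milliseconds).
def percentile(times, pct)
  sorted = times.sort
  index  = ((pct / 100.0) * (sorted.length - 1)).round
  sorted[index]
end

response_ms = [12, 40, 55, 60, 75, 90, 120, 250, 900, 3000]  # made-up sample
puts "p50: #{percentile(response_ms, 50)} ms"  # half the requests finish by this time
puts "p95: #{percentile(response_ms, 95)} ms"  # the slow tail shows up here
```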
That was the theory; this is an actual application. One thing you'll notice is that it's not perfectly normalized: the two sides are not symmetrical. It shoots up steeply, and then there's this really, really long tail, and that's what I'm referring to when I say tail latencies.

So, yes, somebody might actually have gotten a response in zero milliseconds. I doubt it. But somebody for sure did get a response in 3000 milliseconds, and that's a really long time to wait for your request to actually finish. So even though somebody is getting really fast responses, and your average isn't bad, your average is under 250 milliseconds, one customer might be getting a really slow response and another a really fast response, and the net is a bad experience.

It's a very inconsistent experience. So whenever we're talking about application speed, we have to consider individual request speed and the average, but also consistency: how consistent is each request?

So how can we help with this? Well, one of the things we launched this year was PX dynos. A typical dyno only has 512 megabytes of RAM, on shared infrastructure. A PX dyno has six gigabytes of RAM and eight CPU cores, which is a little nicer, a little more room to play. And it's also real hardware; it's not on the same shared infrastructure.

You can scale with dynos, and you can also scale inside of dynos, and those are the two important parts we're going to cover. Of course, whenever you have more requests than you can possibly process, you want to scale up and say, I'm going to have more dynos. But what happens if you're not making the best use of everything inside your dyno? Previously, with 512 megabytes of RAM, you could throw a couple of Unicorn workers in there and think, I'm probably using most of this. If you put two Unicorn workers in a PX dyno, you're not making the most of it at all.

Recently, I am super in love with Puma. This is Evan Phoenix's web server, originally written to showcase Rubinius. Guess what: it's really nice with MRI as well. We recently got some Puma docs, so I'm going to talk about Puma for a little bit.

If you're not familiar (I was totally off on the formatting): Puma handles requests by running multiple processes, or multiple threads, and it can run in something called hybrid mode, where each process has multiple threads. We recommend this, or at least I recommend this: if one of your processes crashes, it doesn't take down your entire web server, which is kind of nice. The multiple processes part is something we're pretty familiar with; as Rubyists, we're familiar with forking processes, we're familiar with Unicorn. But the multiple threads part is a little bit different.
Even with MRI, even with something like a global interpreter lock, you are still doing enough IO. You're still hitting your database frequently enough, maybe making API calls to Facebook or the GitHub status endpoint, asking, hey, are you still up? That gives your threads time to switch around and let others do work, so you can get quite a bit of extra performance there.

So we're going to use Puma to scale up inside of our dynos. Once we give you all of that RAM, we want to make sure you can make the most use of it. In general, with Puma, more processes means more RAM, and more threads means more CPU consumption. You want to maximize your processes and maximize your threads without going over. As soon as you start swapping, as soon as you go over that RAM limit, your app is going to be really slow, and that kind of defeats the purpose of adding these resources.

Another issue, one I had never really heard of until I started looking into all of these web servers, is slow clients. If somebody is connecting to your web site over 2G on a Nokia candy-bar phone, uploading photos or something like that, that is a slow client. If you're using something like Unicorn, it can effectively denial-of-service your site, because each one of those requests takes up an entire Unicorn worker. Puma has a buffer, and it buffers those requests, similar to the way nginx does.

One other thing to consider with Puma: I keep talking about threads, and Ruby is not necessarily known as the most thread-safe community. A lot of apps just aren't thread-safe. So you might take a look at Puma and think, hey, that's not for me. You can always set your maximum threads to one, and then you're behaving just like Unicorn, except you have the slow-client protection. And whenever you get rid of that gem that's bad, or stop mutating your constants at runtime or whatever, then you can bump it up and try multiple threads.
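Putting the process and thread advice together, here is a minimal config/puma.rb sketch of the hybrid mode described above. The worker and thread counts are placeholders to tune against your RAM and CPU, and, as mentioned, you can drop the thread count to one if the app is not thread-safe yet.

```ruby
# config/puma.rb -- minimal sketch of hybrid mode: forked workers, each running
# a small thread pool. The numbers are placeholders, not recommendations.
workers Integer(ENV["WEB_CONCURRENCY"] || 3)       # processes: cost RAM
threads_count = Integer(ENV["MAX_THREADS"] || 5)   # threads per process: cost CPU, overlap IO
threads threads_count, threads_count               # use 1, 1 if the app isn't thread-safe

preload_app!                                       # load the app once, then fork
port        ENV["PORT"] || 3000
environment ENV["RACK_ENV"] || "development"

on_worker_boot do
  # Per-process setup after fork, e.g. re-establish database connections.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end
```

A Procfile web entry would then typically run something like `bundle exec puma -C config/puma.rb`.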
OK. So I'm talking about consistency and I'm talking a lot about Puma. How does that all boil down and help?

Does anybody think that sharing distributed state across multiple machines is really fast? Maybe. OK, good. What about sharing state in memory on the same machine: is that faster? OK. All right, I think we're in agreement.

So, a little bit of a point of controversy: you might have heard of the Heroku router at some point in time. The router is designed, not randomly, but it is designed to use a random algorithm. It basically tries to deliver requests as fast as humanly possible, or computerly possible, to individual dynos. It gets the request and wants to get it to your dyno as fast as it possibly can. Adding any sort of additional overhead of distributed locks or queues would slow that down.

Once the request is inside your dyno, Puma or Unicorn has the state of all of its own processes in memory, and is capable of saying, hey, this process is busy, this process is not busy. It can do really intelligent routing, basically for free. It's really fast. It took a little bit of convincing for me. So, does anybody else need to be convinced?

AUDIENCE: Yeah.

R.S.: OK, good. Because otherwise I could totally just skip over the next section of slides.

This is a graph produced by the fine developers over at Rap Genius. On one side we see the percentage of requests queued, and on the bottom, the number of dynos. The goal is to minimize request queuing; this is time your customers are waiting while you're not actually doing anything. You want to minimize that queuing with the smallest number of resources, the smallest number of dynos. The top line is what we've currently got: random routing with a single-threaded server. And this is pretty bad. It starts out bad and it doesn't even trend toward zero. So this is probably bad. This is using something like WEBrick in production. So: don't use WEBrick in production. Or even Thin,
in single-threaded mode.

On the very bottom, we have a mythological line: if we could do all of that distributed shared state and routing, with locks and queues but without any kind of overhead, it basically just drops to zero at, in their case, about seventy-five dynos, and then it's straight zero. There's no queuing. It's great, and it would be amazing if we could have it. But unfortunately there is a little bit of overhead.

What was really interesting to me is the second line, which is not nearly as nice as that mythological intelligent router, but it's not too far off. This is still our random routing, and it was actually done with Unicorn, with workers set to two. Basically, once we get the request to your operating system, one of those two workers is likely free and can immediately start working on it.

Some interesting things to note: in the non-optimal case, where we basically don't have enough dynos to handle the load because you got on Hacker News or whatever, Slashdotted, Reddited, SnapChatted, Secreted, I don't know, it does eventually approach the ideal state. It gets even better, and unfortunately they stopped at two processes, but it gets better the more concurrency you add. If you had three or four workers, or, again, if you're using something like Puma and each of those workers is running four threads, now you have a massive amount of concurrency to deal with all of these requests coming in.

Again, we're looking for consistency. We want the request to get to our dyno and immediately be processed. You can use Puma or Unicorn to maximize that worker number. And, again, distributed routing is slow; in-memory routing is relatively quick.
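For comparison with the Puma config above, here is a sketch of the "Unicorn with workers set to two" setup from that graph. The numbers and hooks are placeholders, not an exact recommended file.

```ruby
# config/unicorn.rb -- sketch of a couple of forked workers per dyno, so a
# randomly routed request usually finds a free worker waiting.
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 2)
timeout 15
preload_app true

before_fork do |server, worker|
  # The master's connections shouldn't be shared with forked workers.
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord)
end

after_fork do |server, worker|
  # Each worker opens its own database connection.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end
```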
Also, in the whole context of speed: Ruby 2.0 came out, and this was a while ago. Its GC is optimized for copy-on-write, so in Ruby, extra process forks become cheaper. The first process might take seventy megabytes, the second one twenty, then ten, and seven, and six. So if you get a larger box, you can run more processes on it. If you get eight gigs on one box, you can run more processes than if you had eight gigs spread across eight boxes.

Again, more processes mean more concurrency, and more concurrency means consistency. If you're using background workers, you can also scale out with resque-pool.

And if your application is still slow, we rolled out a couple of really neat platform features. One of them is called HTTP Request ID. As a request comes into our system, we give it a UUID, and you can see it in your router log. We've got documentation on how to configure your Rails app so it picks this up and uses that UUID in tagged logs. How is this useful? If you're getting, say, an out-of-memory error, or if a request is taking a really long time and you're thinking, that request is timing out, Heroku is returning a response, and we don't even know why: now, if the request ID is tagged, you can follow along between the two logs and say, oh, it's hitting that controller action. Maybe I should send that email in the background instead of blocking on it. So you can trace specific errors.

We also launched log-runtime-metrics a while ago, which puts your runtime information directly into your logs. Check it out. Librato will automatically pick it up for you and make these really nice graphs. And, again, if you're running something like Unicorn or Puma, you want to get as close to your RAM limit as you can without actually going over.
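A small sketch of the log-tagging side of that, assuming a Rails 4 app. Rails' request-id middleware exposes the incoming X-Request-Id as request.uuid, and the built-in :uuid log tag prints it on every log line, which is what lets you line app log entries up with the router log.

```ruby
# config/environments/production.rb (excerpt) -- tag every Rails log line with
# the request's UUID so it can be correlated with the router log entry.
Rails.application.configure do
  config.log_tags = [:uuid]   # calls request.uuid for each request
end
```

With that in place, each Rails log line starts with the request's UUID, and the same ID appears in the Heroku router line for that request, so the two can be matched up.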
OK. The next act in our play, again introducing Terence, is Ruby, on the Heroku stack and in the community.

T.L.: Thank you. I know we're at RailsConf, but I've been doing a bunch of work with Ruby, so I wanted to talk about some Ruby stuff. Who here is actually using Ruby 1.8.7? Wow, no one. That's pretty awesome. Oh, wait. One person. You should probably get off of it.

[laughter]

But who is using Ruby 1.9.2? A few more people. 1.9.3? A good amount of people here.

I don't know if you were following along, but Ruby 1.8.7 and 1.9.2 were end-of-lifed at one point. Then there was a security incident, and Zachary Scott and I volunteered to maintain security patches until the end of June. So if you are on 1.8.7 or 1.9.2, I would recommend getting off sometime soon, unless you don't care about security or want to backport your own patches.

We also recently announced that Ruby 1.9.3 is getting end-of-lifed in February 2015, which is coming up relatively quickly; it's a little less than a year away at this point. So please upgrade to at least 2.0.0 or later.

During this past Rails Standard Year, we also moved the default Ruby on Heroku from 1.9.2 to 2.0.0. We believe people should be using at least that version of Ruby or higher. And, if you don't know yet, you can declare your Ruby version in the Gemfile on Heroku to get that exact version.
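Declaring the version looks like this. The `ruby` directive is standard Bundler Gemfile syntax; the particular version strings here are just examples.

```ruby
# Gemfile -- declare the Ruby version the app should run on.
source "https://rubygems.org"

ruby "2.1.1"          # any supported version; the talk recommends at least 2.0.0

gem "rails", "4.1.0"  # example dependency
```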
We're also pretty serious about supporting the latest versions of Ruby, basically the same day they come out. We did this for 2.0.0, 2.1.0 and 2.1.1, and in addition we try to support the preview releases whenever they come out, so that we, as a community, can help find bugs and test things. Put your staging app on new versions of Ruby; if you find bugs, hopefully we can fix them before they make it into the final release.

With regard to security patches, if any security releases come out, we make sure to release them that same day as well. We take security pretty seriously. Once a security patch is out and we've patched those Rubies, you have to push your app again to get that release. A lot of people ask us why we don't just automatically upgrade people's Rubies in place. The reasoning is that there might be a regression in the security patch, or maybe the patch level is not one hundred percent backwards compatible and a bug slipped through. You probably want to be there when your application is actually being deployed, in case something does go wrong. You probably wouldn't want us to deploy something, have your site go down, and then you're not at your computer at all; you're at dinner somewhere and it's super inconvenient to get paged.

So we publish all of this information, all of the updates to the platform but also all of the Ruby updates, including security updates, to the Dev Center changelog. I think it's devcenter.heroku.com/changelog. If you don't subscribe to it, I would recommend subscribing, just to keep up to date with what's happening on Heroku: platform changes, plus updates to Ruby specifically on Heroku. There isn't too much traffic; you won't get a hundred emails a day. So I highly recommend subscribing to keep up to date with things like that.

The next thing I'd like to talk about is Matz's Ruby team. If you didn't know, back in 2012 we hired three people from Ruby core: Matz himself, Koichi and Nobu. As I've gone around over the last few years talking and interacting with people, I've realized a lot of people have no idea who Koichi and Nobu are, besides Matz. So I wanted to take the time to tell people who they are and what they've actually been doing, since we've been paying them money, to move Ruby forward in a positive direction.

If you run a git log since 2012, since we hired them, you can see the number of commits they've made to Ruby itself. Nobu has more commits than the second person by many, many commits. And Koichi is the third-highest committer as well. You're probably wondering why I have six names on a list of the top five. There's an account on the Ruby core team with the handle svn; it's not actually a person. I found out the hard way who this "person" was. When I made my first patch to Ruby after joining core, I found out that all the date information is done in JST, and I, of course, did not know that. Scumbag American dates.
So there's basically this bot that will go through and fix your commits for you. It does another commit, saying, ah, you actually put the wrong date, let me fix that for you. There are about 710 of those commits. I pulled these numbers about a month ago, so these are the commit counts from a month ago.

The first person I'd like to talk about is Nobuyoshi Nakada, also known as Nobu. He's known on Ruby core, I think, as The Patch Monster, and we'll get into why.

So, what do you think the result of Time.now == "" is? I'm sure you thought it was an infinite loop, right? Or, if you're using the rational number library in the standard lib, what do you think the result of doing this operation is?

AUDIENCE: Segfault.

T.L.: Yeah, this is a segfault.

AUDIENCE: [indecipherable - 00:32:40]

T.L.: Thank you. Thank you for reporting the bug. Eric Hodel actually reported the other bug, the Time one; he found it in RubyGems, I believe. But these are real issues that were in Ruby itself. If you run those two things now, on later patch levels, you should not see them. But they were real issues, and someone has to go and fix all of them.

The person who actually does that is Nobu. He gets paid full time to do bug fixes for Ruby. All those two thousand seven hundred and some commits are bug fixes to Ruby trunk to make Ruby run better. I thanked him when I was in Japan last week for all the work he's done. It's pretty incredible: there are so many times when things segfault or misbehave, and he's basically made it better.

I was at Oweito, and there was someone giving a presentation about thirty tips for using Ruby. Someone was talking about open-uri, and there was code on the screen, and Nobu found a bug during the guy's presentation, and he committed a patch to trunk during that presentation. So he's pretty awesome. He hasn't done any talks, but I think people should know about the work he's been doing.
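For reference, the first of those two expressions written out as code. This is just restating the example from the talk; the exact Rational expression wasn't shown in the transcript, so it isn't reproduced here.

```ruby
# The talk's first example, reported by Eric Hodel and fixed by Nobu:
# on the affected patch levels this reportedly looped forever; on patched
# Rubies it simply returns false.
Time.now == ""   # => false
```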
The last bug I wanted to talk about that he fixed: are any of you familiar with the regression in Ruby 2.1.1 with regard to Hash? I'm sure you're familiar with the fact that if you use Ruby 2.1.1 with Rails 4.0.3, it just doesn't work. In Rails, we use objects as hash keys, and if you override the hash and eql? methods, then when you fetch, you won't get the right result back. So inside Rails 4.0.4 they actually had to work around this bug, and Nobu was the one who fixed it inside Ruby itself.

Those were just the three most interesting bugs I found from the last year or two of stuff he's worked on. But if you look on the Ruby core site, you can find hundreds and hundreds of bugs he's fixed within the last year, segfaults and other things. He's great to work with.
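An illustrative sketch of the kind of code that regression bit. The class here is made up; the real case was inside Rails, where objects overriding hash and eql? are used as Hash keys and lookups like fetch could miss them on 2.1.1.

```ruby
# A value object used as a Hash key: #hash and #eql? decide key equality.
class Key
  attr_reader :name

  def initialize(name)
    @name = name
  end

  def eql?(other)
    other.is_a?(Key) && other.name == name
  end

  def hash
    name.hash
  end
end

table = { Key.new("users") => :users_table }
table.fetch(Key.new("users"))
# => :users_table on a correct Ruby; the 2.1.1 regression could raise KeyError here
```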
The next person I want to talk about is Koichi Sasada, also known as ko1. He doesn't have a nickname on Ruby core, so Richard and I spent a good amount of our talk preparation trying to come up with one for him. We came up with the Performance Pro. This is a picture of him giving a talk in Japanese.

If you've used Ruby 1.9 at all: he worked on YARV, the new VM that made Ruby 1.9, I think, something like thirty percent faster than 1.8 for longer-running processes. More recently he worked on RGenGC. This was introduced in Ruby 2.1, and it allows faster code execution by having shorter GC pauses: instead of doing a full GC every time, you can have these minor collections.

He spends all of his time thinking about performance in Ruby; that's what he's paid to work on. So if anyone here cares about Ruby performance, you should thank this guy for the work he's done. If you've looked at the performance of Ruby over the last few years, it's improved a lot, and a lot of that is due to his work. I was talking to him, and he was telling me that when he was working on RGenGC, he was just walking around the park and had a breakthrough. So he spends a lot of his time, even off work hours, thinking about this stuff.

Other stuff he's been working on is profiling. If you've used any of the profiling tooling for 2.1, the memory profiler and other things, he's been working with the people behind those tools to introduce hooks into the internal API to make that stuff work. I think we understand that profiling, being able to measure your application, is super important for Ruby. So if you have comments or suggestions on things you need, or things you can't measure yet, it's worth reaching out and talking to Koichi about it.

Some of the things he's been working on in this vein are the gc_tracer gem, for getting more information out of your garbage collector, and the allocation_tracer gem, for seeing how long objects live. And for 2.2, as a team, we're working on an incremental GC patch, so he's making the GC better with incremental GC, and there's symbol GC, for security, which will be super good for Rails: we can't get denial-of-serviced by the symbol table filling up.
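As a tiny taste of the generational GC that RGenGC added, GC.stat on Ruby 2.1+ exposes separate counters for minor and major collections; gc_tracer and allocation_tracer, mentioned above, are separate gems with much richer output. The counts you see obviously depend on the Ruby you run this on.

```ruby
# Peek at minor vs. major GC activity on Ruby 2.1+.
before = GC.stat

100_000.times { "short-lived string" }  # churn some young objects

after = GC.stat
puts "minor GCs: #{after[:minor_gc_count] - before[:minor_gc_count]}"
puts "major GCs: #{after[:major_gc_count] - before[:major_gc_count]}"
```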
One other thing: when I was in Japan, we had a Ruby core meeting, and we talked about Ruby releases. Releasing Ruby is kind of a slow process, and I wasn't really sure why it took so long, so I asked. Naruse, who's the release manager for 2.1, told me that it requires a lot of human and machine resources. Ruby has to work on many configurations: Linux distros, OS X, other things. In order to release, the CI server has to pass, and it has to pass on various vendors and whatnot. There's a lot of coordination and checking to make an actual release happen, which is why releases don't come out super fast.

So some of the stuff that Koichi, my team, and other people on Ruby core have been working on is infrastructure and services to help with testing Ruby, to hopefully automate that and run it nightly or per commit or something along those lines. Then hopefully we can get releases that are faster and out to users sooner.

If you have ideas for Ruby 2.2, I would love to hear them. We have a meeting next month, in May, about what is going to go into Ruby 2.2, so I'd be more than happy to talk with you about ideas you'd like to see there. I'm going to skip this next stuff since I talked about it earlier and we're running short on time. So here's Schneems to actually talk about Rails.

R.S.: OK. Has anybody used Rails? Have we covered that question yet? OK. Welcome to RailsConf. So: Rails 4.1 on Heroku. A lot of things in a very short amount of time.

We are secure by default. Have you heard of the secrets.yml file? The secrets.yml file actually reads from an environment variable by default, which is great. We love environment variables; they separate your config from your source. So whenever you push your app, we're going to set that environment variable to, literally, a random value. And if for some reason you ever need to change it, you can do so by just setting the SECRET_KEY_BASE environment variable to whatever you want. Maybe another OpenSSL bug comes out or something.
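A minimal sketch of what that looks like from the app's side, assuming a Rails 4.1 app whose generated secrets.yml points the production secret_key_base at the environment. The heroku config:set line in the comment is one way to rotate the value by hand.

```ruby
# Rotating the secret is just changing config, for example:
#   heroku config:set SECRET_KEY_BASE=$(ruby -rsecurerandom -e "puts SecureRandom.hex(64)")
#
# At runtime, Rails 4.1 reads it out of config/secrets.yml, which in turn
# reads the environment variable in production:
ENV["SECRET_KEY_BASE"]                      # set to a random value on deploy
Rails.application.secrets.secret_key_base   # what the app actually uses
```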
So, another thing that was worked on 0:40:42.150,0:40:45.650 a bunch is the DATABASE_URL environment variable. This is 0:40:45.650,0:40:47.300 something that we have spent a lot of time 0:40:47.300,0:40:49.380 looking at. Support has actually been in 0:40:49.380,0:40:52.010 Rails for a surprisingly long time to 0:40:52.010,0:40:54.510 just read from the environment variable, but it never quite 0:40:54.510,0:40:58.380 worked, due to some edge cases and random rake 0:40:58.380,0:41:01.360 tasks and so on and so forth. So this 0:41:01.360,0:41:03.560 December, around Christmas time, I spent a lot 0:41:03.560,0:41:05.260 of time getting that to work. 0:41:05.260,0:41:08.910 So I'd like to happily announce that Rails 0:41:08.910,0:41:13.490 4.1 actually does support the DATABASE_URL environment variable out 0:41:13.490,0:41:18.100 of the box. Whoo! And so, to describe 0:41:18.100,0:41:22.810 it a little, the behavior bears going over. 0:41:22.810,0:41:25.250 If DATABASE_URL is present, we're just gonna connect 0:41:25.250,0:41:28.430 to that database. That's pretty simple. Makes sense. 0:41:28.430,0:41:30.800 If the database.yml is present but there's no environment 0:41:30.800,0:41:33.230 variable, then we're gonna use that. That also just 0:41:33.230,0:41:35.190 kind of makes sense. 0:41:35.190,0:41:37.369 If both are present, then we're gonna merge the 0:41:37.369,0:41:41.920 values. Makes sense, right? OK. 0:41:41.920,0:41:46.600 So, that sounds crazy. Bear with me. But 0:41:46.600,0:41:48.369 a lot of people, you want to put 0:41:48.369,0:41:52.440 your connection information in your DATABASE_URL environment variable. But 0:41:52.440,0:41:54.840 there are also other values you can use inside of 0:41:54.840,0:41:58.860 your database.yml file to configure ActiveRecord itself. Not your 0:41:58.860,0:42:00.990 database. So you can turn prepared 0:42:00.990,0:42:03.230 statements on and off. You can change your pool size. All this 0:42:03.230,0:42:04.790 kind of thing. 0:42:04.790,0:42:07.420 And we wanted to still enable you to be 0:42:07.420,0:42:10.300 able to do this. So the results 0:42:10.300,0:42:15.369 are actually merged, and for somebody like Heroku, 0:42:15.369,0:42:19.310 or if you're using another container, we don't 0:42:19.310,0:42:21.260 have to have as much magic. If 0:42:21.260,0:42:24.460 you didn't know, whatever 0:42:24.460,0:42:26.490 your DATABASE_URL was, we were just writing a 0:42:26.490,0:42:28.850 database.yml file over top of it. It was like, forget 0:42:28.850,0:42:30.110 that. We're gonna write a custom file. 0:42:30.110,0:42:32.580 So people would put stuff in their DATABASE_URL, or 0:42:32.580,0:42:34.869 their database.yml file, and they'd be surprised when it 0:42:34.869,0:42:37.070 wasn't there. Like, a different file was there. So 0:42:37.070,0:42:39.390 we no longer have to do 0:42:39.390,0:42:41.380 that. And Rails plays a little bit nicer 0:42:41.380,0:42:45.110 with this containerized-style environment. 0:42:45.110,0:42:47.900 It also means that you could actually start putting 0:42:47.900,0:42:52.260 your ActiveRecord configuration in that file. Another note: if 0:42:52.260,0:42:55.500 you were manually setting your pool size or 0:42:55.500,0:42:59.150 any of those things after reading an 0:42:59.150,0:43:01.840 article on our Dev Center, go back and revisit that, 0:43:01.840,0:43:04.560 please, before upgrading to Rails 4.1. Some of the 0:43:04.560,0:43:08.060 syntax did change between Rails 4.0 and 4.1. So, 0:43:08.060,0:43:10.619 if you can't connect to a database, then maybe 0:43:10.619,0:43:12.930 just email Schneemz and be like, I hate 0:43:12.930,0:43:14.460 you. What's the link to that thing? And 0:43:14.460,0:43:16.480 I'll help you out.
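A toy sketch of that merge rule, not the real ActiveRecord resolver; the helper name effective_db_config and the sample URL and pool values are invented purely to show which side wins for which keys:

  require 'uri'

  # Toy illustration of the Rails 4.1 behavior described above: connection
  # details come from DATABASE_URL, ActiveRecord options (pool,
  # prepared_statements, ...) come from database.yml, and when both are
  # present the two are merged, with the URL winning for connection keys.
  def effective_db_config(database_url, yaml_config)
    return yaml_config if database_url.nil? || database_url.empty?

    uri = URI.parse(database_url)
    from_url = {
      'adapter'  => (uri.scheme == 'postgres' ? 'postgresql' : uri.scheme),
      'host'     => uri.host,
      'port'     => uri.port,
      'username' => uri.user,
      'password' => uri.password,
      'database' => uri.path.to_s.sub(%r{\A/}, '')
    }.reject { |_, value| value.nil? }

    yaml_config.merge(from_url)
  end

  yaml = { 'pool' => 15, 'prepared_statements' => false }
  url  = 'postgres://user:secret@db.example.com:5432/myapp'

  p effective_db_config(nil, yaml)  # database.yml only: use it as-is
  p effective_db_config(url, {})    # DATABASE_URL only: connect to it
  p effective_db_config(url, yaml)  # both: URL connection info plus AR options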
0:43:16.480,0:43:20.520 OK. I think, probably, the last thing that 0:43:20.520,0:43:25.110 we have time for is the asset pipeline. Who, 0:43:25.110,0:43:29.040 if asked in an interview, would say that their 0:43:29.040,0:43:31.510 favorite thing in the whole world is the Rails asset 0:43:31.510,0:43:35.460 pipeline? Oh. Oh. 0:43:35.460,0:43:37.210 AUDIENCE: Just Raphael. 0:43:37.210,0:43:38.880 R.S.: Just Raphael. We have a bunch of 0:43:38.880,0:43:41.510 Rails Core here, by the way. So you should, 0:43:41.510,0:43:44.380 you should come and thank them afterwards. For 0:43:44.380,0:43:46.260 other things. Not for the asset pipeline. 0:43:46.260,0:43:47.140 [laughter] 0:43:47.140,0:43:49.650 So, the asset pipeline is the number one source 0:43:49.650,0:43:52.860 of Ruby support tickets at Heroku. Just people 0:43:52.860,0:43:55.220 being like, hey, this worked locally and didn't 0:43:55.220,0:43:58.020 work in production. And we're like, yeah, that's just 0:43:58.020,0:44:00.970 how the asset pipeline works. That's not Heroku. 0:44:00.970,0:44:04.800 So Rails 4.1 added a couple things. 0:44:04.800,0:44:07.860 It's gonna warn you in development if you're doing 0:44:07.860,0:44:10.550 something that's gonna break production. Like, if you've ever 0:44:10.550,0:44:13.530 forgotten to add something to your precompile list, well, 0:44:13.530,0:44:16.600 now, guess what, you get an error. If you 0:44:16.600,0:44:19.619 are not properly declaring your asset dependencies, then you're 0:44:19.619,0:44:21.730 gonna get an error. 0:44:21.730,0:44:26.440 And this is even better, actually, in Rails 4.2, 0:44:26.440,0:44:28.260 as some of these checks aren't even needed anymore; 0:44:28.260,0:44:30.490 we can just automatically do them for you. But, 0:44:30.490,0:44:34.220 unfortunately, those are not in Rails 4.1 yet. 0:44:34.220,0:44:37.340 So, in general, I have a personal belief 0:44:37.340,0:44:40.200 that, in programming, or really in life, the only 0:44:40.200,0:44:47.200 thing that should fail silently is. This. This joke. 0:44:48.369,0:44:54.369 So. Thank you all very much for coming. 0:44:54.369,0:44:59.760 We have a booth, and later on, what. 0:44:59.760,0:45:00.990 What time, three o'clock? 0:45:00.990,0:45:02.020 T.L.: Between 3:00 and 4:30. 0:45:02.020,0:45:03.480 R.S.: Yeah. From 3:00 to 4:30, we'll actually have 0:45:03.480,0:45:09.510 a bunch of Rails contributors coming to talk 0:45:09.510,0:45:14.330 about. Oh yeah, the slides. Yeah. Yeah. 0:45:14.330,0:45:16.750 T.L.: Yeah. 3:00 to 4:30, we'll have community office 0:45:16.750,0:45:20.720 hours with some nice people from Rails Core and contrib. 0:45:20.720,0:45:22.869 R.S.: Yeah. So come ask. 0:45:22.869,0:45:26.590 T.L.: Basically any Rails questions or anything you want. 0:45:26.590,0:45:28.010 And then Schneeman will actually be doing a book 0:45:28.010,0:45:30.990 signing of his Heroku Up & Running book today 0:45:30.990,0:45:33.850 and tomorrow at 2:30. So if you want that. 0:45:33.850,0:45:36.010 R.S.: Yeah. So get a free book, 0:45:36.010,0:45:38.160 and then come and ask questions and just, like, 0:45:38.160,0:45:41.580 hang out. And any time you stop by the 0:45:41.580,0:45:44.160 booth, feel free to ask Heroku questions. And thank 0:45:44.160,0:45:45.880 you all very much for coming.