
RailsConf 2014 - Keynote: 10 Years! by Yehuda Katz

  • 0:18 - 0:19
    YEHUDA KATZ: So it's been ten years.
  • 0:19 - 0:21
    Unlike DHH, I can't get up here and
  • 0:21 - 0:23
    tell you I've been doing Rails for ten years.
  • 0:23 - 0:25
    I've been doing Rails for nine or eight or
    something
  • 0:25 - 0:27
    like that. But, Rails has been around
  • 0:27 - 0:29
    for ten years, and so it's not surprising
  • 0:29 - 0:30
    that a bunch of people are gonna get up here
  • 0:30 - 0:33
    and do a little bit of a retrospective.
  • 0:33 - 0:36
    So this is sort of my feeling. Oh my god,
  • 0:36 - 0:39
    I remember sort of thinking back in 2008,
  • 0:39 - 0:43
    when DHH was giving his talk, sort of
  • 0:43 - 0:46
    a look back around the same year that Merb
  • 0:46 - 0:49
    was becoming a thing, and we were
  • 0:49 - 0:52
    eventually gonna, you know, merge a little
    bit later.
  • 0:52 - 0:54
    But in 2008, when DHH gave the Great Surplus
  • 0:54 - 0:57
    talk, that was sort of a retrospective year
    too,
  • 0:57 - 0:59
    because we had gotten to the point
  • 0:59 - 1:01
    where Rails was big enough that it could
  • 1:01 - 1:02
    actually host a competitor.
  • 1:02 - 1:04
    And I think it's really great that we ended
  • 1:04 - 1:06
    up merging in and we ended up getting another
  • 1:06 - 1:10
    five, six years of a great
  • 1:10 - 1:13
    framework community. But now it's another
    opportunity to look
  • 1:13 - 1:15
    back and think about sort of what the
  • 1:15 - 1:18
    Rails community is and how you should think
    about
  • 1:18 - 1:21
    taking the lessons of Rails to other environments.
    And
  • 1:21 - 1:23
    that's sort of what my talk is about today.
  • 1:23 - 1:26
    So, I'm gonna start by saying, I think, if
  • 1:26 - 1:28
    you think about one thing about Rails, if
    you
  • 1:28 - 1:32
    want to think about what Rails is above anything
  • 1:32 - 1:36
    else, I think Rails popularized the idea of
    convention
  • 1:36 - 1:38
    over configuration. And you've been hearing
    the term convention
  • 1:38 - 1:42
    over configuration for ten years now. So probably
    it's
  • 1:42 - 1:44
    become sort of a meaningless term. It's sort of
  • 1:44 - 1:45
    like when you hear the same word over and
  • 1:45 - 1:48
    over and over again, eventually you reach
    semantic saturation.
  • 1:48 - 1:50
    I think we've all reached semantic saturation
    of the
  • 1:50 - 1:52
    term convention over configuration.
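
To make the term concrete before unpacking it: here is a minimal plain-Ruby sketch of convention over configuration — the framework derives a database table name from a model class name by convention, instead of asking the developer to configure it. The pluralization rule here is a naive stand-in, not Rails' real ActiveSupport inflector (which also handles irregular plurals like "Person" -> "people").

```ruby
# Convention over configuration, sketched: rather than requiring a
# configured table name for every model, derive it from the class name.
def table_name_for(class_name)
  # "BlogPost" -> "blog_post": insert underscores at case boundaries, downcase.
  snake = class_name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
  # Naive pluralization: append "s" unless it already ends in "s".
  snake.end_with?("s") ? snake : "#{snake}s"
end

table_name_for("BlogPost") # => "blog_posts"
table_name_for("User")     # => "users"
```

The point is the default: zero decisions on the conventional path, and you only spend a choice when you genuinely need to deviate.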
  • 1:52 - 1:55
    I want to unpack it a little bit. I
  • 1:55 - 1:58
    think one way of thinking about it is, is
  • 1:58 - 2:00
    this other term called the paradox of choice.
    This
  • 2:00 - 2:03
    idea that people, well I'll let DHH say what
  • 2:03 - 2:06
    it is, people like choices a lot more than
  • 2:06 - 2:08
    they like having to actually choose. Right,
    so there's
  • 2:08 - 2:11
    this, I think this sort of narrow point, but
  • 2:11 - 2:13
    it's still a very important point, which is
    that
  • 2:13 - 2:17
    people go into environments, they go into
    programming environments
  • 2:17 - 2:20
    or groceries or whatever, and they like the
    idea
  • 2:20 - 2:22
    of having a whole lot of choices, a lot
  • 2:22 - 2:25
    more than they like having to actually choose
    what
  • 2:25 - 2:25
    to do.
  • 2:25 - 2:27
    And this is sort of the state of the
  • 2:27 - 2:29
    art, this is what we knew in 2008 when
  • 2:29 - 2:33
    David gave the Great Surplus talk back then.
    And
    And
  • 2:33 - 2:34
    what I want to do is go a little
  • 2:34 - 2:38
    bit beyond sort of these, these pithy points
    and
  • 2:38 - 2:41
    talk a little bit about what, what's happening,
    what
  • 2:41 - 2:45
    is actually going on that's causing this idea
    to
  • 2:45 - 2:48
    occur. What's causing the paradox of
    choice? And
  • 2:48 - 2:50
    actually there's been a lot of science, even
    in
  • 2:50 - 2:53
    2008 there was science, but in between 2008
    and
  • 2:53 - 2:55
    now, or certainly 2004 and now, there's been
    a
  • 2:55 - 3:00
    tremendous amount of science about what is
    causing the
  • 3:00 - 3:03
    paradox of choice. What is causing convention
    over configuration
  • 3:03 - 3:05
    to be effective.
  • 3:05 - 3:06
    And if you want to Google it, if you
  • 3:06 - 3:09
    want to go find more information
    about this
  • 3:09 - 3:13
    on Wikipedia, the term is called ego depletion.
    Sometimes
  • 3:13 - 3:16
    the idea of cognitive depletion. And in order
    to
  • 3:16 - 3:19
    understand what's happening here, you first
    need to understand,
  • 3:19 - 3:20
    you first need to think about sort of, like,
  • 3:20 - 3:24
    your everyday job, how
    you feel
  • 3:24 - 3:25
    during the day.
  • 3:25 - 3:28
    So you wake up in the morning. You get
  • 3:28 - 3:31
    out, you go out of the house, and you're
  • 3:31 - 3:33
    pretty much fully charged. You're ready to
    attack the
  • 3:33 - 3:36
    world, you, hopefully it's sunny and you can
    skip
  • 3:36 - 3:38
    down the street. And you're, you're ready
    to do
  • 3:38 - 3:40
    anything. You're ready. You have all the
  • 3:40 - 3:43
    cognitive resources in the world.
  • 3:43 - 3:45
    And then, you know, you get to your desk.
  • 3:45 - 3:49
    I find it amusing that, that character is
    a
  • 3:49 - 3:53
    programmer. It's so perfect. So you get to
    your
  • 3:53 - 3:54
    desk and you know you've done a little
  • 3:54 - 3:57
    bit of work, and your cognitive resources
    start to
  • 3:57 - 3:59
    deplete a little bit. You've got a little bit
  • 3:59 - 4:02
    fewer cognitive resources. You know, eventually
    something happens during
  • 4:02 - 4:06
    the day that you might not, that might not
  • 4:06 - 4:10
    be so pleasant, and your cognitive resources
    deplete and
  • 4:10 - 4:12
    then you reach a point, at some point during
  • 4:12 - 4:13
    the day - this is a Van Gogh painting
  • 4:13 - 4:15
    - you reach a point some time during the
  • 4:15 - 4:17
    day where you're really flagging. You're feeling
    like you
  • 4:17 - 4:19
    don't have a lot of capacity left
  • 4:19 - 4:21
    to do anything or to think hard. And eventually
  • 4:21 - 4:24
    you totally run out. You run out of cognitive
  • 4:24 - 4:26
    resources entirely and you're done.
  • 4:26 - 4:28
    And so the idea here is the concept of
  • 4:28 - 4:31
    cognitive depletion or ego depletion, you
    have a certain
  • 4:31 - 4:33
    amount of resources and they run out. And
    I
  • 4:33 - 4:35
    think most people think about this in terms
    of,
  • 4:35 - 4:37
    like, your day job. Right, so you wake up
  • 4:37 - 4:40
    in the morning. Throughout the day your resources
    deplete.
  • 4:40 - 4:41
    You get to the end of the day, you're
  • 4:41 - 4:43
    out of resources. Rinse and repeat. I think
    that's
  • 4:43 - 4:44
    how most people think about it and that's
    sort
  • 4:44 - 4:46
    of how I framed it here.
  • 4:46 - 4:48
    But the really interesting thing about ego
    depletion or
  • 4:48 - 4:51
    cognitive depletion is that what actually
    turns out to
  • 4:51 - 4:53
    be the case is that there's sort of this
  • 4:53 - 4:57
    one, there's this one big pool of resources,
    this
  • 4:57 - 5:00
    one battery, that's actually used for a lot
    of
  • 5:00 - 5:02
    different things. And so there's a lot of
    studies
  • 5:02 - 5:06
    about things like grocery stores, right. So
    why is
  • 5:06 - 5:08
    it that when you go to a grocery store,
  • 5:08 - 5:10
    why do you find yourself so willing to buy
  • 5:10 - 5:13
    something at the impulse aisle?
    Right,
  • 5:13 - 5:14
    at the end of
  • 5:14 - 5:16
    the grocery trip.
  • 5:16 - 5:17
    And the reason is that you spent all this
  • 5:17 - 5:20
    time in the grocery store doing not very difficult
  • 5:20 - 5:23
    activities, but you're making a lot of choices.
    So
  • 5:23 - 5:25
    you're making choices the entire time, as
    you're walking
  • 5:25 - 5:28
    around the grocery store. And eventually your
    brain just
  • 5:28 - 5:30
    runs out of resources to do anything else,
    and
  • 5:30 - 5:33
    it's actually drawing from the same pool of
    resources
  • 5:33 - 5:35
    that will power comes from. So even though
    choice
  • 5:35 - 5:37
    making and will power feel like two totally
    different
  • 5:37 - 5:39
    things, when you get to the end of the
  • 5:39 - 5:41
    grocery trip and you've used all of your resources
  • 5:41 - 5:44
    on choice making, you're out of resources
    to not
  • 5:44 - 5:47
    buy the candy bar
  • 5:47 - 5:49
    from the impulse aisle.
  • 5:49 - 5:51
    And the same thing is true about a lot
  • 5:51 - 5:53
    of things. They've done studies where they'll
    just take
  • 5:53 - 5:54
    two halves of a room, like this one, and
  • 5:54 - 5:58
    they'll say, you guys memorize two numbers,
    you guys
  • 5:58 - 6:01
    memorize seven numbers, and then when they're
    totally done
  • 6:01 - 6:03
    and you've done the memorization, they'll
    take you into
  • 6:03 - 6:06
    a processing room and they'll say, OK. You
    have,
  • 6:06 - 6:09
    you know, cookies, and you have some fruit.
    And
  • 6:09 - 6:12
    you get to basically decide which one to eat.
  • 6:12 - 6:13
    And it turns out that the people who did
  • 6:13 - 6:16
    the not much more difficult job of memorizing
    seven
  • 6:16 - 6:19
    numbers are far more likely to eat the cookies.
  • 6:19 - 6:20
    And the same thing is true in the other
  • 6:20 - 6:22
    direction. If you have people eat cookies
    first, and
  • 6:22 - 6:25
    then go and do a cognitively difficult task,
    so
  • 6:25 - 6:28
    a task that requires effort. One of the most famous
  • 6:28 - 6:30
    experiments has to do with an impossible task.
    How
  • 6:30 - 6:32
    long can people persevere doing a task that
    you
  • 6:32 - 6:35
    can literally never finish? And it turns out
    that
  • 6:35 - 6:38
    people who eat the cookies first actually
    spend
  • 6:38 - 6:40
    a lot more time trying to do the impossible
  • 6:40 - 6:43
    task than the people who
    had
  • 6:43 - 6:44
    to sit in front of a tray of cookies
  • 6:44 - 6:46
    and were told not to eat it.
  • 6:46 - 6:48
    And so there's, there's all these experiments,
    there's, by
  • 6:48 - 6:50
    now in 2014 there is a ton of them,
  • 6:50 - 6:53
    and basically what they show in aggregate
    is that
  • 6:53 - 6:55
    there is this pool of resources that we have
  • 6:55 - 6:59
    to do our job, to do challenging tasks, to handle cognitive
  • 6:59 - 7:02
    dissonance. So there's studies around, if
    you just tell
  • 7:02 - 7:03
    people, you need to get up and give a
  • 7:03 - 7:05
    speech about something that's different than
    what you actually
  • 7:05 - 7:09
    believe, people who do that actually have
    less will
  • 7:09 - 7:11
    power afterwards. They have less ability to
    do challenging
  • 7:11 - 7:14
    tasks. They have less general cognitive resources
    than people
  • 7:14 - 7:16
    who are asked to give a speech about something
  • 7:16 - 7:18
    they do believe. Even if the actual act of
  • 7:18 - 7:22
    giving the speech is equally difficult.
  • 7:22 - 7:24
    And so I think what's kind of interesting
    about
  • 7:24 - 7:26
    this is that, really what we want in programming
  • 7:26 - 7:28
    is we want to have as much
  • 7:28 - 7:30
    time as we can for the challenging tasks,
    right.
  • 7:30 - 7:33
    We want to be spending all of our time
  • 7:33 - 7:35
    on challenging tasks and as little of those
    very
  • 7:35 - 7:38
    scarce resources as we can on things like
    the
  • 7:38 - 7:40
    will power to write your code in exactly the
  • 7:40 - 7:44
    right way, or making a lot of choices.
  • 7:44 - 7:46
    And here are some terms that you might have
  • 7:46 - 7:50
    heard, basically, about this paradox of choice
    or ego
  • 7:50 - 7:53
    depletion or the concept of decision fatigue.
    These are
  • 7:53 - 7:55
    all ways that you've heard that describe this
    general
  • 7:55 - 7:58
    concept, this general concept of you just
    have this
  • 7:58 - 8:00
    battery, and it runs out at some point, and
  • 8:00 - 8:04
    there's all these really counter-intuitive
    things that don't seem
  • 8:04 - 8:06
    like they're very hard but are taking away
    resources
  • 8:06 - 8:08
    that you need to work on hard problems.
  • 8:08 - 8:12
    So, how do we, how do we solve this?
  • 8:12 - 8:15
    How do we actually solve this problem? Because
    obviously
  • 8:15 - 8:17
    it's the case that you, if you want to
  • 8:17 - 8:19
    be spending a lot of time on your challenging
  • 8:19 - 8:22
    problems, if you just ignore the problem of
    will
  • 8:22 - 8:25
    power or the problem of, of choices, you're
    just
  • 8:25 - 8:27
    gonna end up making a lot of mindless choices
  • 8:27 - 8:29
    all day. And, so what we need to do
  • 8:29 - 8:31
    is we need to find some psychological hacks
    that
  • 8:31 - 8:34
    we can apply that will keep us doing the
  • 8:34 - 8:37
    right thing, basically all the time. Keep
    us from
  • 8:37 - 8:39
    wasting cognitive resources.
  • 8:39 - 8:43
    And I think my favorite study about
    this,
  • 8:43 - 8:46
    about the kinds of cognitive hacks that work
    effectively,
  • 8:46 - 8:51
    is the idea of what happens to organ donation
  • 8:51 - 8:54
    if the organ donation requirement is opt in,
    in
  • 8:54 - 8:55
    other words, you go to the DMV and there's
  • 8:55 - 8:58
    a form that says, yes, I agree to donate
  • 8:58 - 8:59
    my organs and here are the ones I agree
  • 8:59 - 9:02
    to donate. And when the organ donation is
    opt
  • 9:02 - 9:04
    out, in other words, you have to explicitly
    say,
  • 9:04 - 9:06
    I do not want
  • 9:06 - 9:08
    my organs to be donated. And what you can
  • 9:08 - 9:11
    see is that, in the countries where it's opt
  • 9:11 - 9:15
    in, it's actually a very, very low rate and
  • 9:15 - 9:17
    in the countries where it's opt out, it's
    a
  • 9:17 - 9:20
    surprisingly high rate. It's basically almost
    universal.
  • 9:20 - 9:22
    And I think to some degree
  • 9:22 - 9:24
    you might expect that this is the case. But
  • 9:24 - 9:28
    I think this difference is really counter-intuitive,
    because you
  • 9:28 - 9:30
    would expect that if somebody, you know, goes
    and
  • 9:30 - 9:32
    they're sitting at a form and the form says,
  • 9:32 - 9:34
    Do you want to donate your organs? And the
  • 9:34 - 9:36
    excuse that they're telling themselves in
    their head for
  • 9:36 - 9:38
    not checking the check box is, you know, my
  • 9:38 - 9:40
    mom would be super angry if she found out
  • 9:40 - 9:42
    or my religion tells me that I shouldn't do
  • 9:42 - 9:45
    this or, you know, growing up, I heard people
  • 9:45 - 9:47
    say negative things or whatever, whatever
    the excuse is
  • 9:47 - 9:49
    you tell yourself to not check the check box,
  • 9:49 - 9:52
    you would think that some of those people,
    more
  • 9:52 - 9:54
    than, you know, zero point one percent of
    those
  • 9:54 - 9:57
    people, would pick up the pen and opt out.
  • 9:57 - 10:01
    But the interesting thing is that, by just
    changing
  • 10:01 - 10:04
    the default from yes to no, all of the
  • 10:04 - 10:06
    sudden, all those excuses, all those things
    that people
  • 10:06 - 10:09
    tell themselves about the reasons that they
    really shouldn't
  • 10:09 - 10:11
    check the check box, suddenly go away.
  • 10:11 - 10:13
    And what's, I think even more interesting
    about this
  • 10:13 - 10:17
    is that, these choices are actually made on,
    on
  • 10:17 - 10:19
    really big DMV forms. So basically what
    you
  • 10:19 - 10:21
    can see is that people have already gone through
  • 10:21 - 10:27
    this really complicated, somewhat trivial
    but very choice-heavy process
  • 10:27 - 10:29
    of filling out this DMV form, and by the
  • 10:29 - 10:31
    time they get to the bottom and are asked
  • 10:31 - 10:33
    about organ donation, they're so cognitively
    depleted that they
  • 10:33 - 10:35
    have no energy left to even really think about
  • 10:35 - 10:38
    it. They basically just do the defaults.
  • 10:38 - 10:41
    So I think, honestly, defaults are our most
    powerful
  • 10:41 - 10:45
    psychological hack, our most powerful weapon
    in trying to
  • 10:45 - 10:47
    deal with the fact that we have
  • 10:47 - 10:50
    this limited store of cognitive capacity
    that we want
  • 10:50 - 10:53
    to make good use of when we're programming.
    And
  • 10:53 - 10:56
    the really cool thing about defaults in general
    is
  • 10:56 - 10:59
    that defaults are actually really effective
    on both sides
  • 10:59 - 11:01
    of the spectrum. So some days, you get to
  • 11:01 - 11:04
    work, you're ready to go, you're like, in
    an
  • 11:04 - 11:10
    amazing mood. Everything is perfect. Everything
    is awesome.
  • 11:10 - 11:12
    And on those days, you have
    a
  • 11:12 - 11:15
    big store of cognitive resources, and the
    defaults keep
  • 11:15 - 11:18
    you high up. They keep you in the charged
  • 11:18 - 11:19
    state for a longer time. You don't have to
  • 11:19 - 11:21
    make choices that would deplete them. And remember,
  • 11:21 - 11:24
    choice-making doesn't deplete per minute.
    It's not that every minute
  • 11:24 - 11:27
    of choices depletes. It's that every choice depletes
    your cognitive
  • 11:27 - 11:30
    resources. So, having a set of defaults that
    tells
  • 11:30 - 11:31
    you, here is what you're gonna do in general
  • 11:31 - 11:33
    and you have to, you know,
  • 11:33 - 11:36
    think hard to opt out. That's really great
    when
  • 11:36 - 11:38
    you're in a good mood, when things are charged.
  • 11:38 - 11:39
    But it's actually also really great when you're
    in
  • 11:39 - 11:41
    a bad mood. When you're really depleted and
    you
  • 11:41 - 11:42
    still have to go to work and do your
  • 11:42 - 11:44
    job, because the default keeps you on the
    straight
  • 11:44 - 11:46
    and narrow, right. You don't have enough energy
    left
  • 11:46 - 11:49
    to really think about what you're doing, and
    so
  • 11:49 - 11:51
    everybody has bad days. Everybody works on
    teams with
  • 11:51 - 11:58
    developers who aren't great. Let me say it
    a
  • 11:58 - 11:59
    different way. Everyone has at some point
    worked on
  • 11:59 - 12:02
    a team like that, hopefully not. But you have junior
    developers.
  • 12:02 - 12:04
    You know, you hire people who are, who are
  • 12:04 - 12:06
    new to whatever it is that you're doing, or
  • 12:06 - 12:09
    you have a bad day, or you're stressed out
  • 12:09 - 12:11
    because your mom gave you a call at lunch
  • 12:11 - 12:13
    and now you're in a bad mood.
  • 12:13 - 12:14
    Right, so everybody gets to a point where
    they
  • 12:14 - 12:21
    have, cognitively. My mother's gonna be so
    angry now.
  • 12:22 - 12:28
    Let's say, an ex-girlfriend or whatever. So
    everyone has
  • 12:28 - 12:33
    days where they're cognitively depleted. And
    on those days,
  • 12:33 - 12:35
    defaults are also really powerful, because
    they keep you on track.
  • 12:35 - 12:37
    Instead of having you be sort of in
  • 12:37 - 12:39
    a bad mood where you'll just sort of do
  • 12:39 - 12:42
    whatever you feel like, you're
    basically kept
  • 12:42 - 12:43
    on the straight and narrow. You're kept on
    the
  • 12:43 - 12:43
    right path.
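
This opt-out shape is exactly what convention-heavy APIs look like in code. A hedged sketch — the generator name here is hypothetical, though real Rails generators follow the same shape (e.g. `rails g model` creates test files unless you opt out):

```ruby
# Defaults as a psychological hack, in API form: the conventional path
# costs zero decisions; deviating requires an explicit opt-out.
def generate_model(name, tests: true)
  files = ["app/models/#{name}.rb"]
  # Test files are created by default; skipping them is the explicit act.
  files << "test/models/#{name}_test.rb" if tests
  files
end

generate_model("user")               # => ["app/models/user.rb", "test/models/user_test.rb"]
generate_model("user", tests: false) # => ["app/models/user.rb"]
```

On a good day or a bad one, the zero-decision path is the well-trodden one, and opting out is the thing that costs a choice.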
  • 12:43 - 12:46
    And I think this actually helps to explain
    why
  • 12:46 - 12:50
    yak-shaving doesn't feel as good as you might
    think.
  • 12:50 - 12:52
    So, yak-shaving isn't the most terrible activity
    in the
  • 12:52 - 12:54
    world. I think sometimes you need to yak-shave.
    But
  • 12:54 - 12:56
    I think if you think about doing, like, four
  • 12:56 - 12:58
    hours of yak-shaving in an eight hour day,
    pretty
  • 12:58 - 13:00
    much after four hours of, you know, let
  • 13:00 - 13:03
    me set up my, you know, my Vagrant box,
  • 13:03 - 13:05
    or let me go set up my testing environment.
  • 13:05 - 13:08
    After like four hours of that, you're totally
    cognitively
  • 13:08 - 13:10
    depleted. Doesn't matter that you only spent
    four hours
  • 13:10 - 13:12
    out of an eight hour day. Basically you have
  • 13:12 - 13:14
    no more cognitive resources left. And I think
  • 13:14 - 13:17
    this means we should be very careful about
    yak-shaving.
  • 13:17 - 13:19
    Because yak-shaving may feel good and it may
    be
  • 13:19 - 13:21
    important in a lot of cases, but we need
  • 13:21 - 13:23
    to be very honest about the fact that there
  • 13:23 - 13:26
    is a certain amount of cognitive
    resources that
  • 13:26 - 13:28
    we have and yak-shaving takes up more of them
  • 13:28 - 13:31
    than you would expect. And even two
  • 13:31 - 13:34
    or three hours of it
  • 13:34 - 13:35
    don't leave us a lot of cognitive resources
    to
  • 13:35 - 13:38
    actually do the task that we were yak-shaving
    towards.
  • 13:38 - 13:41
    So, obviously, occasionally you know you need
    to refactor
  • 13:41 - 13:44
    and do all kinds of these
  • 13:44 - 13:46
    tasks, but you should be careful about thinking
    that
  • 13:46 - 13:48
    you'll get a lot done afterwards.
  • 13:48 - 13:50
    So, I think this is sort of the unpacking.
  • 13:50 - 13:52
    This is a scientific unpacking of what it
    is
  • 13:52 - 13:56
    that we're talking about. And I think
    everyone
  • 13:56 - 13:58
    in this room can nod their heads along with
  • 13:58 - 14:00
    what I'm saying. They can agree with what
    I'm
  • 14:00 - 14:04
    saying. Makes sense. But what ends up happening
    in
  • 14:04 - 14:07
    the rest of the world, and also there's usually
  • 14:07 - 14:11
    a devil on your shoulder, is that people find
  • 14:11 - 14:13
    all kinds of excuses to argue against the
    thing
  • 14:13 - 14:14
    I just said.
  • 14:14 - 14:17
    So I just outlined sort of an unpacking of
  • 14:17 - 14:24
    sort of the convention over configuration story,
    and somehow, we,
  • 14:24 - 14:26
    as a human race, actually find a lot of
  • 14:26 - 14:30
    ways to argue against these things. And
    one
  • 14:30 - 14:32
    of these ways that we find
  • 14:32 - 14:35
    to argue against it is to tell ourselves that
  • 14:35 - 14:39
    we're unique and we're special, and I'll just
    let
  • 14:39 - 14:45
    David from 2008 talk about this for a second.
  • 14:45 - 14:49
    DHH: One point I keep coming back to, over
  • 14:49 - 14:51
    and over again when I talk about Ruby and
  • 14:51 - 14:56
    Rails, is that we confessed commonality. We
    confessed the
  • 14:56 - 14:59
    fact that we're not as special as we like
  • 14:59 - 15:02
    to believe. We confessed that we're not the
    only
  • 15:02 - 15:07
    ones trying to climb the same mountain. And
    I
  • 15:07 - 15:09
    think this is a real important point because
    it's
  • 15:09 - 15:11
    somewhat counter-intuitive, I think, for a
    lot of
  • 15:11 - 15:14
    developers to think that they're not that
    special. I
  • 15:14 - 15:17
    think it's counter-intuitive for humans in
    general to
  • 15:17 - 15:18
    think they're not that special.
  • 15:18 - 15:21
    But, when they do think that they're special,
    when
  • 15:21 - 15:23
    they do think that they're the only ones climbing
  • 15:23 - 15:26
    that mountain, they kind of get these assumptions
    that
  • 15:26 - 15:29
    they need very unique and special tools that
    will
  • 15:29 - 15:32
    only work for them. And I think that's a
  • 15:32 - 15:37
    really bad way to approach getting greater
    productivity. Because
  • 15:37 - 15:40
    I think what really makes this special and
    makes
  • 15:40 - 15:43
    it work is all the points where we recognize
  • 15:43 - 15:45
    that we're exactly the same.
  • 15:45 - 15:48
    Y.K.: And I think that's really the point,
    is
  • 15:48 - 15:51
    that the way we gain better productivity is
    by
  • 15:51 - 15:54
    pushing back against this impulse. I think,
    we have
  • 15:54 - 15:56
    it in the Rails community to some degree.
    I
  • 15:56 - 15:59
    think it's especially significant outside
    of the Rails community,
  • 15:59 - 16:02
    where people didn't already come together
    around the idea
  • 16:02 - 16:04
    that we're gonna build shared tools and shared
    solutions.
  • 16:04 - 16:06
    But I think we really do have to push
  • 16:06 - 16:07
    back against this idea.
  • 16:07 - 16:09
    And I think my favorite example of this sort
  • 16:09 - 16:16
    of, taken to an absurdist extreme, is this sort
    of
  • 16:16 - 16:18
    famous interview. What is your most surprising
    app on
  • 16:18 - 16:20
    the home screen? Well, it's Operator. It's
    a custom-designed,
  • 16:20 - 16:23
    one-of-a-kind bespoke app I had built for
    my assistant
  • 16:23 - 16:26
    and I to communicate and collaborate. Did
    this person
  • 16:26 - 16:29
    need a custom bespoke one-of-a-kind application
    to communicate with
  • 16:29 - 16:32
    their assistant? No. Almost certainly not.
  • 16:32 - 16:34
    But they decided they were so special,
  • 16:34 - 16:37
    they themselves were so one-of-a-kind,
    such a unique
  • 16:37 - 16:41
    snowflake, that they needed a special tool
    to communicate
  • 16:41 - 16:43
    with their assistant. And I think this is
    sort
  • 16:43 - 16:45
    of how, this is how we act. This is
  • 16:45 - 16:47
    how we behave. And if you look at, sort
  • 16:47 - 16:49
    of, how people talk about software, you see
    things
  • 16:49 - 16:51
    like, this is a tool set for building the
  • 16:51 - 16:55
    framework most suited to your application
    development. Your application,
  • 16:55 - 16:58
    your company, your industry is so special,
    that you
  • 16:58 - 17:01
    can't use general-purpose tools. You need
    to use a
  • 17:01 - 17:03
    tool set to build your own framework.
  • 17:03 - 17:08
    Or, in an ecosystem where overarching, decides-everything-for-you
    frameworks are
  • 17:08 - 17:10
    commonplace, and many libraries require your
    site to be
  • 17:10 - 17:12
    reorganized to suit their look, feel, and
    default behavior
  • 17:12 - 17:13
    - we should continue to be a tool that
  • 17:13 - 17:16
    gives you the freedom to design the full experience
  • 17:16 - 17:16
    of your web application.
  • 17:16 - 17:20
    And, who could be against freedom? Right?
    Freedom is
  • 17:20 - 17:25
    a really effective thing to put on the wall
  • 17:25 - 17:26
    to say, this is the thing that we're arguing
  • 17:26 - 17:29
    for. We're arguing for freedom. But this is
    just
  • 17:29 - 17:32
    another way, it's just another way that
  • 17:32 - 17:37
    people's brains sneak in the arguments
    that helped
  • 17:37 - 17:39
    us create the paradox of choice in the first
  • 17:39 - 17:43
    place, right. People say, you know, I'm special.
    I
  • 17:43 - 17:45
    can't use these shared tools. I can't use
    these
  • 17:45 - 17:47
    tools that were built for everybody. I need
    to
  • 17:47 - 17:51
    use special tools. I need to use small libraries
  • 17:51 - 17:54
    that help me build my own abstractions. I
    can't
  • 17:54 - 17:56
    share with the community.
  • 17:56 - 17:59
    And then, even if people come to the conclusion
  • 17:59 - 18:03
    that maybe abstractions, maybe shared solutions
    are a good
  • 18:03 - 18:05
    idea, then you get another argument. The devil
    on
  • 18:05 - 18:08
    your shoulder or the devil in your community.
    It
  • 18:08 - 18:10
    makes another argument, which is the law of
    leaky
  • 18:10 - 18:12
    abstractions. And this is sort
    of
  • 18:12 - 18:16
    like the law of Demeter. It's not a suggestion,
  • 18:16 - 18:20
    or an observation. It's a law. The law of
  • 18:20 - 18:21
    leaky abstractions.
  • 18:21 - 18:24
    And I think any time somebody couches an observation
  • 18:24 - 18:28
    about software development as a law, you know
    something
  • 18:28 - 18:31
    fishy is going on. You know that something's
    not
  • 18:31 - 18:35
    right. Because software development isn't
    a science. Basically
  • 18:35 - 18:38
    people want you to put on your science hat,
  • 18:38 - 18:41
    and say, aha! It's a law! It's like the
  • 18:41 - 18:44
    law of gravity. I can derive some conclusions
    from
  • 18:44 - 18:46
    this law. What conclusions do
    they want
  • 18:46 - 18:49
    you to derive? Abstractions are bad. You should
    never
  • 18:49 - 18:52
    use abstractions. You should do everything
    yourself.
  • 18:52 - 18:55
    And, so this law of leaky abstractions was
    originally
  • 18:55 - 18:59
    coined, or written, by Joel Spolsky, and
    Jeff
  • 18:59 - 19:02
    Atwood, who was his partner at Stack Overflow,
    actually
  • 19:02 - 19:05
    responded, I think, kind of brilliantly to
    this. And
  • 19:05 - 19:07
    he said, you know, I'd argue that virtually
    all
  • 19:07 - 19:10
    good programming abstractions are failed abstractions.
    I don't think
  • 19:10 - 19:11
    I've ever used one that didn't leak like a
  • 19:11 - 19:14
    sieve. But I think that's an awfully architecture
    astronaut
  • 19:14 - 19:17
    way of looking at things. Instead, let's ask
    ourselves
  • 19:17 - 19:20
    a more pragmatic question: does this abstraction
    make our
  • 19:20 - 19:22
    code at least a little easier to write? To
  • 19:22 - 19:24
    understand? To troubleshoot? Are we better
    off with this
  • 19:24 - 19:26
    abstraction than we were without it?
  • 19:26 - 19:28
    It's our job as modern programmers not to
    abandon
  • 19:28 - 19:31
    abstractions due to these deficiencies, but
    to embrace the
  • 19:31 - 19:34
    useful elements of them. To adapt the working
    parts
  • 19:34 - 19:36
    and construct ever so slightly less leaky
    and broken
  • 19:36 - 19:38
    abstractions over time.
  • 19:38 - 19:42
    And I think people use ideas like this,
  • 19:42 - 19:45
    things like the law of leaky abstractions,
    to give
  • 19:45 - 19:49
    an excuse for themselves to not share solutions.
    And
  • 19:49 - 19:51
    I think sort of the hilarious thing, and this
  • 19:51 - 19:54
    is sort of a compressed, super-conflated set
    of
  • 19:54 - 19:57
    abstractions. Every single one of us is sitting
    on
  • 19:57 - 20:01
    top of abstractions that maybe occasionally
    leak, but really,
  • 20:01 - 20:03
    how many people ever have to drop down into
  • 20:03 - 20:05
    the x86 or the ARM level? Or even the
  • 20:05 - 20:10
    C level? Right. People. We can build higher
    and
  • 20:10 - 20:12
    higher sets of abstractions, and we can keep,
    we
  • 20:12 - 20:15
    can keep building on top of these abstractions,
    and
  • 20:15 - 20:18
    build, and, and allow us to sort of eliminate
  • 20:18 - 20:20
    more and more code that we had to write
  • 20:20 - 20:22
    before. That we had to write in 1960, 1970,
  • 20:22 - 20:25
    1980. Sort of every year is another set of
  • 20:25 - 20:27
    things that we have discovered as a community
    that
  • 20:27 - 20:29
    we don't have to worry about, that were shared.
  • 20:29 - 20:31
    And I think, sort of, people look at us
  • 20:31 - 20:32
    and they say, oh my god it's a pile
  • 20:32 - 20:34
    of hacks. It's hacks on hacks on hacks on
  • 20:34 - 20:36
    hacks. But actually it's not. Actually what's
    going on
  • 20:36 - 20:39
    here is that every single time you start off
  • 20:39 - 20:42
    with this sort of experimental playground,
    people are building,
  • 20:42 - 20:43
    you know, at the bottom layer, people were
    building
  • 20:43 - 20:46
    their own hardware. And eventually people
    came to the
  • 20:46 - 20:48
    conclusion that you don't have to build your
    own
  • 20:48 - 20:51
    hardware. We can standardize around things
    like x86.
  • 20:51 - 20:54
    And then we standardized around it and people
    stopped
  • 20:54 - 20:56
    worrying about all the craziness that was
    underneath. And
  • 20:56 - 20:59
    then people said, we can build C, and if
  • 20:59 - 21:01
    we build C, people can stop worrying, most
    of
  • 21:01 - 21:04
    the time, about what's below it. So really
    every
  • 21:04 - 21:05
    one of these layers is not a pile of
  • 21:05 - 21:07
    hacks built on a pile of hacks. It's us,
  • 21:07 - 21:10
    as a group of people, as a community of
  • 21:10 - 21:12
    programmers, deciding that 90% of the things
    that we're
  • 21:12 - 21:15
    doing, we've figured out we don't actually
    need to,
  • 21:15 - 21:16
    to worry about.
  • 21:16 - 21:19
    And, I think, fundamentally, this is about,
    sort of
  • 21:19 - 21:22
    the history of programming is that we have
    shared
  • 21:22 - 21:26
    solutions. We make progress by building up
    the stack.
  • 21:26 - 21:28
    By eliminating code that we didn't have to
    write.
  • 21:28 - 21:31
    And Steve Jobs actually talked about this
    in 1995.
  • 21:31 - 21:33
    Sort of exactly the same thing. So let me
  • 21:33 - 21:34
    let him talk.
  • 21:34 - 21:37
    STEVE JOBS: Because it's all about managing
    complexity, right.
  • 21:37 - 21:40
    You're developers. You know that. It's all
    about managing
  • 21:40 - 21:44
    complexity. It's, like, scaffolding, right.
    You erect some scaffolding,
  • 21:44 - 21:46
    and if you keep going up and up and
  • 21:46 - 21:50
    up, eventually the scaffolding collapses of
    its own weight,
  • 21:50 - 21:54
    right. That's what building software is. It's,
    how much
  • 21:54 - 21:57
    scaffolding can you erect before the whole
    thing collapses
  • 21:57 - 21:57
    of its own weight.
  • 21:57 - 21:59
    Doesn't matter how many people you have working
    on
  • 21:59 - 22:01
    it. Doesn't matter if you're Microsoft with
    three, four
  • 22:01 - 22:04
    hundred people, five hundred people on the
    team. It
  • 22:04 - 22:06
    will collapse under its own weight. You've
    read the
  • 22:06 - 22:09
    Mythical Man Month, right. Basic premise of
    this is,
  • 22:09 - 22:12
    a software development project gets to a certain
    size
  • 22:12 - 22:14
    where if you add one more person, the amount
  • 22:14 - 22:16
    of energy to communicate with that person
    is actually
  • 22:16 - 22:19
    greater than their net contribution to the
    project, so
  • 22:19 - 22:20
    it slows down.
  • 22:20 - 22:22
    So you have local maximum and then it comes
  • 22:22 - 22:25
    down. We all know that about software. It's
    about
  • 22:25 - 22:30
    managing complexity. These tools allow you
    to not have
  • 22:30 - 22:33
    to worry about ninety percent of the stuff
    you
  • 22:33 - 22:36
    worry about, so that you can erect your five
  • 22:36 - 22:40
    stories of scaffolding, but starting at story
    number twenty-three
  • 22:40 - 22:43
    instead of starting at story number six. You
    get
  • 22:43 - 22:45
    a lot higher.
  • 22:45 - 22:48
    Y.K.: And I think that's fundamentally what
    we do
  • 22:48 - 22:50
    as software people. For all of the complaints
    that
  • 22:50 - 22:53
    people make about, you know, oh my god, every
  • 22:53 - 22:58
    abstraction leaks. All we've ever done, even
    in, even
  • 22:58 - 23:00
    as far back as, you know, in the 60s,
  • 23:00 - 23:04
    but even in 1995, Steve Jobs was already talking
  • 23:04 - 23:06
    about this idea that we can build higher by
  • 23:06 - 23:08
    building shared solutions. And I'm gonna let
    him speak
  • 23:08 - 23:12
    one more time, because I think, really, it's
    really
  • 23:12 - 23:15
    fascinating how much this idea of how you
    get
  • 23:15 - 23:19
    better programmer productivity hasn't really
    changed, fundamentally, since that
  • 23:19 - 23:19
    time.
  • 23:19 - 23:22
    STEVE JOBS: But, on top of that, we're gonna
  • 23:22 - 23:29
    put something called OpenStep. And OpenStep
    lets
  • 23:34 - 23:38
    you start developing your apps on the twentieth
    floor.
  • 23:38 - 23:40
    And the kinds of apps you can deliver are
  • 23:40 - 23:44
    phenomenal. But there's another hidden advantage.
  • 23:44 - 23:48
    Most of the great breakthroughs, the PageMakers,
  • 23:48 - 23:52
    the Illustrators, et cetera, the Directors,
    come from smaller
    come from smaller
  • 23:52 - 23:55
    software companies. That's been said a few
    times today.
  • 23:55 - 23:57
    They don't come from the large software companies.
    They
  • 23:57 - 23:59
    come from the smaller ones. And one of the
  • 23:59 - 24:03
    greatest things is that using this new technology,
    two
  • 24:03 - 24:07
    people or three people in a garage can build
  • 24:07 - 24:09
    an app and get it from concept to market
  • 24:09 - 24:12
    in six to nine months, that is every bit
  • 24:12 - 24:16
    as feature-rich, every bit as reliable, and
    every bit
  • 24:16 - 24:18
    as exciting as a giant software company can
    do
  • 24:18 - 24:21
    with a hundred fifty person team.
  • 24:21 - 24:22
    It's phenomenal.
  • 24:22 - 24:26
    Y.K.: So, I think what's kind of cool about
  • 24:26 - 24:32
    this is that, this idea that we can take
  • 24:32 - 24:35
    shared problems that everyone has, shared
    problems that a
  • 24:35 - 24:38
    community of people have, solving the same
    problem, and
  • 24:38 - 24:40
    we can build up shared solutions. This is
    not
  • 24:40 - 24:44
    new. It's not, it shouldn't be controversial.
    It's kind
  • 24:44 - 24:47
    of fundamental to what we do as software developers.
  • 24:47 - 24:50
    And yet, if someone isn't up here telling
    you
  • 24:50 - 24:52
    this, it's so easy to forget. There are so
  • 24:52 - 24:55
    many excuses that people tell themselves.
  • 24:55 - 24:57
    Sort of what happens in reality is you have
  • 24:57 - 24:59
    this bulk of shared solutions, you have an
    area
  • 24:59 - 25:01
    of experimentation - sort of the wild west
    -
  • 25:01 - 25:04
    and you let the area of experimentation fold
    back
  • 25:04 - 25:06
    into shared solutions. This is sort of how
    Rails
  • 25:06 - 25:08
    works, right. So you build higher and higher
    and
  • 25:08 - 25:11
    higher stacks. You get to a point where you,
  • 25:11 - 25:13
    you know, you could build something like Devise
    in
  • 25:13 - 25:15
    the Rails community, because there's so much
    of what
  • 25:15 - 25:19
    underpins Devise, you know, everybody uses
    the same set
  • 25:19 - 25:22
    of model abstractions, everyone has similar
    ways of talking
  • 25:22 - 25:25
    about users, right. So you can build an abstraction
  • 25:25 - 25:27
    on top of that because everyone has sort of
  • 25:27 - 25:29
    built up this shared understanding of what
    it is
  • 25:29 - 25:30
    that we're doing.
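    [A minimal Ruby sketch of the point above. The names here
    — Authenticatable, valid_password?, the in-memory User —
    are invented for illustration and are not Devise's actual
    code. The idea: because the community converged on a shared
    model interface (a find_by-style lookup, plain attribute
    accessors), a library can mix behavior into any app's model
    without knowing anything else about it.]

```ruby
# Hypothetical auth mixin: it only assumes the shared convention
# that the host class responds to find_by(email: ...).
module Authenticatable
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def authenticate(email, password)
      user = find_by(email: email)
      user if user && user.valid_password?(password)
    end
  end
end

# A tiny in-memory "model" that follows the convention, standing
# in for an ActiveRecord class.
class User
  include Authenticatable

  attr_accessor :email, :password

  @records = []

  class << self
    def create(attrs)
      user = new
      attrs.each { |key, value| user.public_send("#{key}=", value) }
      @records << user
      user
    end

    def find_by(email:)
      @records.find { |u| u.email == email }
    end
  end

  def valid_password?(candidate)
    password == candidate
  end
end
```

    [Any class exposing the same two-method surface could include
    the mixin unchanged — that shared surface, not any one
    implementation, is what makes a Devise possible.]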
  • 25:30 - 25:33
    And, it's so easy to let yourself be confused
  • 25:33 - 25:36
    by the fact that the area of experimentation
    is
  • 25:36 - 25:38
    the wild west, and forget that that area of
  • 25:38 - 25:42
    experimentation is sitting on top of huge,
    a huge
  • 25:42 - 25:44
    stack of abstractions. And this is sort of,
    I
  • 25:44 - 25:46
    think, to me, the answer to why the node
  • 25:46 - 25:50
    community seems, they're sitting on top of
    maybe, you
  • 25:50 - 25:53
    know, the most advanced dynamic language JIT
    in the
  • 25:53 - 25:57
    world, on top of all kinds of abstractions.
    And
  • 25:57 - 25:59
    they sit on top and they say, oh my
  • 25:59 - 26:01
    god, we need to build a lot of tiny
  • 26:01 - 26:03
    modules, because if we don't build tiny modules,
    this,
  • 26:03 - 26:04
    the abstraction's gonna kill us.
  • 26:04 - 26:07
    But, for me, this has always been a paradox.
  • 26:07 - 26:09
    You're sitting on top of a stack of abstractions
  • 26:09 - 26:11
    that's far higher than anything that you're
    claiming to
  • 26:11 - 26:12
    be afraid of. So, why are you so afraid?
  • 26:12 - 26:14
    And I think it's because of this area of
  • 26:14 - 26:17
    experimentation, right. When you're in an
    area of experimentation,
  • 26:17 - 26:19
    of course abstractions are gonna leak. You're
    still figuring
  • 26:19 - 26:21
    out what it is that you're doing.
  • 26:21 - 26:22
    But the goal of a good community that's gonna
  • 26:22 - 26:25
    help people be more productive is to eventually
    notice
  • 26:25 - 26:29
    that the area of experimentation is over.
    And move
  • 26:29 - 26:32
    into a conventional system where you can say,
    we
  • 26:32 - 26:33
    don't need to argue about this anymore. It
    was
  • 26:33 - 26:35
    a worthy thing for us to discuss when we
  • 26:35 - 26:37
    were thinking about the problem, but we can
    take
  • 26:37 - 26:39
    that and we can roll it in, into our
  • 26:39 - 26:40
    set of shared defaults, and we can climb up
  • 26:40 - 26:41
    the ladder, right.
  • 26:41 - 26:43
    And this is what, this is what Steve was
  • 26:43 - 26:46
    saying. He was saying, you know, instead of
    having
  • 26:46 - 26:48
    everybody start from some, some floor, sort
    of this
  • 26:48 - 26:53
    is how iOS programming works today, right.
    Ironically. Is
  • 26:53 - 26:55
    everyone starts from the same base. There
    is not
  • 26:55 - 26:57
    a lot of shared programming. A lot of shared
  • 26:57 - 27:01
    solutions. And I think, fundamentally, open
    source has been
  • 27:01 - 27:05
    something that has really kick started this
    idea of
  • 27:05 - 27:07
    experimentation merging through the shared
    solutions.
  • 27:07 - 27:09
    Because trying to essentially plan it the
    way big
  • 27:09 - 27:12
    companies like Apple or Microsoft do it, is
    just
  • 27:12 - 27:14
    not gonna get the job done across the board.
  • 27:14 - 27:17
    It'll, it'll solve some problems, but getting
    the open
  • 27:17 - 27:19
    source communities, the power of the open
    source community,
  • 27:19 - 27:21
    means that you can have all these little verticals,
  • 27:21 - 27:24
    all these little areas where people are trying
    to
  • 27:24 - 27:27
    build higher abstractions for shared communities.
  • 27:27 - 27:31
    And interestingly, it's not enough to make
    these abstractions
  • 27:31 - 27:33
    cheap. I think when you think about how people
  • 27:33 - 27:36
    actually go and they build on top of these
  • 27:36 - 27:38
    stacks, it's not enough, if every layer in
    the
  • 27:38 - 27:41
    abstraction, in fact, if x86 and then C and,
  • 27:41 - 27:43
    you know, Linux and POSIX. If every one
    of
  • 27:43 - 27:45
    those things cost a little, by the time you
  • 27:45 - 27:47
    actually got to build software, you would
    be so
  • 27:47 - 27:49
    overwhelmed with the weight of the abstraction
    that you
  • 27:49 - 27:50
    would never be able to do anything.
  • 27:50 - 27:53
    So, it's really fundamental that the abstractions
    that we
  • 27:53 - 27:56
    build eventually get to the point where they're
    basically
  • 27:56 - 27:58
    free. Where they have no cognitive cost.
    So that
  • 27:58 - 28:00
    we can keep building higher and higher and
    higher,
  • 28:00 - 28:03
    right. And the Rails philosophy is basically,
    how do
  • 28:03 - 28:04
    you do that? How do you, how do, how
  • 28:04 - 28:05
    do you say, you know, we're gonna experiment
    for
  • 28:05 - 28:08
    a little bit, but eventually we're gonna work
    really
  • 28:08 - 28:10
    hard, we're gonna push really hard at making
    the
  • 28:10 - 28:12
    cost of that thing that everyone was just
    experimenting
  • 28:12 - 28:14
    with a minute, a little, a minute ago free,
  • 28:14 - 28:16
    so that we can go build another level up
  • 28:16 - 28:17
    and another level up.
  • 28:17 - 28:20
    And, I have a few sort of closing points
  • 28:20 - 28:23
    to make about the ecosystem. So, first of
    all,
  • 28:23 - 28:25
    Rails is not the only way that we have
  • 28:25 - 28:28
    to share. I think I was pretty sad when
  • 28:28 - 28:32
    the queue abstraction didn't end up in Rails,
    but
  • 28:32 - 28:34
    I kind of am sad that we didn't get
  • 28:34 - 28:36
    to see sort of what you can build up
  • 28:36 - 28:38
    on top of the queue abstraction, there's really
    no
  • 28:38 - 28:41
    reason that all the queue guys didn't get
    together
  • 28:41 - 28:44
    and say, you know, we're gonna build some
    abstraction
  • 28:44 - 28:46
    on top of that. And once you build the
  • 28:46 - 28:47
    abstraction on top of that, then you can see
  • 28:47 - 28:49
    how high you can go, right.
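    [A hedged sketch of what such a shared queue abstraction
    might have looked like — the names InlineQueue, TestQueue,
    and EmailJob are hypothetical, not the proposed Rails queue
    API. The whole contract is a single #push method: any backend
    that honors it can sit behind the same facade, and everything
    built above it stops caring which backend is configured.]

```ruby
# Backend 1: run the job immediately (development / simple apps).
class InlineQueue
  def push(job)
    job.run
  end
end

# Backend 2: record jobs instead of running them (tests).
class TestQueue
  attr_reader :jobs

  def initialize
    @jobs = []
  end

  def push(job)
    @jobs << job
  end
end

# Application code written against the shared interface. A real
# backend (Resque, Sidekiq, Delayed Job) could slot in behind
# the same #push contract without this class changing.
class EmailJob
  def initialize(address)
    @address = address
    @delivered = false
  end

  def run
    # Deliver the mail here; stubbed out for the sketch.
    @delivered = true
  end

  def delivered?
    @delivered
  end
end
```

    [Once everyone enqueues through one interface, higher layers
    — retries, scheduling, deliver-later mailers — can be built
    once and shared, which is the "see how high you can go" step.]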
  • 28:49 - 28:51
    And a sort of similar thing happened in the
  • 28:51 - 28:52
    JavaScript community. In the JavaScript community
    there were a
  • 28:52 - 28:58
    lot of different promise implementations,
    and what happened over
  • 28:58 - 28:59
    time is that people realized that not having
    a
  • 28:59 - 29:01
    standard way to talk about this was actually
    making
  • 29:01 - 29:03
    it hard to build up.
  • 29:03 - 29:05
    So we said, let's actually get together and
    let's
  • 29:05 - 29:07
    decide that we're gonna have a standard way
    of
  • 29:07 - 29:10
    talking about that. We'll call it Promises/A+.
    And now
  • 29:10 - 29:12
    Promises are in the DOM. And now, you know,
  • 29:12 - 29:15
    we can build up another level and make asynchronous
  • 29:15 - 29:17
    things look synchronous. And then we can build
    up
  • 29:17 - 29:19
    another level, and we can put that idea into
  • 29:19 - 29:20
    the language.
  • 29:20 - 29:22
    And, you know, I don't know where we're gonna
  • 29:22 - 29:24
    go from there. But we can start building higher
  • 29:24 - 29:27
    and higher abstractions. But it requires taking
    the first
  • 29:27 - 29:30
    step. So I, I guess what I'm saying is,
  • 29:30 - 29:32
    getting something into Rails core is not the
    only
  • 29:32 - 29:34
    way that you can build these abstractions.
    It, it
  • 29:34 - 29:37
    requires some discipline to actually get to
    the point
  • 29:37 - 29:39
    where we're agreeing on something, but I think
    if
  • 29:39 - 29:41
    you find that there is some topic, like, for
  • 29:41 - 29:46
    example, jobs or, you know, queuing or jobs,
    and
  • 29:46 - 29:47
    everybody does them or a lot of people do
  • 29:47 - 29:49
    them but we don't have a good way of
  • 29:49 - 29:51
    building on top of them, that's a good opportunity
  • 29:51 - 29:53
    for someone to go and say, I'm gonna do
  • 29:53 - 29:55
    the hard work to say, let's create a shared
  • 29:55 - 29:56
    idea of what it is that we're doing.
  • 29:56 - 29:58
    And sometimes it's via an inter-op layer
    and sometimes
  • 29:58 - 30:01
    it's via standardizing around one solution.
  • 30:01 - 30:04
    And, another part of this is, when I started
  • 30:04 - 30:07
    working on Ember in the JavaScript community,
    I thought
  • 30:07 - 30:09
    a lot of these ideas were obvious. I thought
  • 30:09 - 30:11
    it was gonna be, you know, a slam dunk.
  • 30:11 - 30:13
    Everybody should agree that building a shared
    set of
  • 30:13 - 30:16
    solutions is the right thing. And what I found
  • 30:16 - 30:20
    was that, what I found instead is how powerful
  • 30:20 - 30:24
    the unique snowflake bias is, and how powerful
    the
  • 30:24 - 30:27
    leaky abstraction fallacy can be in communities
    that don't
  • 30:27 - 30:29
    place a high value on shared solutions.
  • 30:29 - 30:31
    So, if you don't see the power of shared
  • 30:31 - 30:34
    solutions, if you're not familiar with the
    idea, with
  • 30:34 - 30:36
    the wins, it's really easy to pull out those
  • 30:36 - 30:38
    old canards, the I am a unique snowflake,
  • 30:38 - 30:41
    I can't use a tool like Ember, because I
  • 30:41 - 30:44
    have special needs. I need to use a toolkit
  • 30:44 - 30:46
    that lets me build my own framework, because
    my
  • 30:46 - 30:47
    needs are oh-so-special.
  • 30:47 - 30:51
    Or, you know, you know, I looked at Ember
  • 30:51 - 30:53
    when it was new, and Ember leaked all over
  • 30:53 - 30:55
    the place. So the law of leaky abstractions
    means
  • 30:55 - 30:57
    you can't build in JavaScript a shared solution.
    But,
  • 30:57 - 30:58
    of course these things are not true. And I
  • 30:58 - 31:01
    think, what I want to say is, I think
  • 31:01 - 31:04
    Rails, ten years on, has basically proved
    that these
  • 31:04 - 31:07
    things are not true.
  • 31:07 - 31:09
    Before Rails, people spent a lot of time working
  • 31:09 - 31:11
    on their own bespoke solutions, convinced
    that their problem
  • 31:11 - 31:15
    was just too special for shared solutions.
    And when
  • 31:15 - 31:17
    Rails came out, they looked at the very idea
  • 31:17 - 31:20
    of convention over configuration as a joke.
    And then,
  • 31:20 - 31:23
    one day, Rails developers started beating
    the pants off
  • 31:23 - 31:27
    those people. And I think, in closing, if
    you
  • 31:27 - 31:30
    find yourself in an ecosystem where developers
    still start
  • 31:30 - 31:33
    from floor one every time, learn the lessons
    of
  • 31:33 - 31:34
    Rails.
  • 31:34 - 31:38
    Everybody should band together, push back,
    both in your
  • 31:38 - 31:40
    own brain and on other people, on the excuses
  • 31:40 - 31:43
    that drive us apart instead of the things
    that
  • 31:43 - 31:47
    bind us together. The legacy of Rails isn't
    MVC
  • 31:47 - 31:50
    or even Ruby. It's powerful ten years of evidence
  • 31:50 - 31:52
    that by sticking to our guns, we can build
  • 31:52 - 31:54
    far higher than anyone ever imagined.
  • 31:54 - 31:57
    Thank you very much.
Title:
RailsConf 2014 - Keynote: 10 Years! by Yehuda Katz
Duration:
32:24
