
Garden City Ruby 2014 - Pharmacist or a Doctor - What does your code base need?

  • 0:26 - 0:28
    PAVAN SUDARSHAN: Hi. My name is Pavan.
  • 0:28 - 0:31
    ANANDHA KRISHNAN: And I'm Anandha Krishnan.
    I'm
  • 0:31 - 0:36
    also called Jake. Not Anandha. I know. We
    work
  • 0:36 - 0:37
    at MavenHive technologies.
  • 0:37 - 0:40
    P.S.: This is probably the only
  • 0:40 - 0:43
    talk where there are two speakers, and we
    haven't
  • 0:43 - 0:45
    rehearsed who says what, so we'll be stepping
    on
  • 0:45 - 0:48
    each other's toes, so yeah. Bear with us.
    So
  • 0:48 - 0:50
    yeah. Quick disclaimer: Most of what we are
    going
  • 0:50 - 0:53
    to talk about is actually platform and language
    independent,
  • 0:53 - 0:56
    or at least, in a sense, learned. But the
  • 0:56 - 0:58
    reason we are talking here at a Ruby conf
  • 0:58 - 1:01
    is because of the Ruby community. We really
    think
  • 1:01 - 1:03
    that a lot of things we are going to
  • 1:03 - 1:05
    talk about resonate really well with the
    community, and
  • 1:05 - 1:07
    we are hoping to, you know, drive a good
  • 1:07 - 1:11
    discussion, conversation, whatever it is,
    from this audience. So
  • 1:11 - 1:13
    that's really why we are talking here on this
  • 1:13 - 1:14
    topic.
  • 1:14 - 1:15
    A.K.: And a lot of the points that
  • 1:15 - 1:17
    we're going to talk about kind of naturally
    apply
  • 1:17 - 1:22
    to Rails and Ruby. And most of these we
  • 1:22 - 1:26
    learned, and we sort of experience in our
    projects,
  • 1:26 - 1:29
    which are mostly in Ruby and Rails, so.
  • 1:29 - 1:34
    P.S.: Yeah, so let's start with a confession.
    We have screwed
  • 1:34 - 1:36
    up a lot in our careers. Right, between me
  • 1:36 - 1:39
    and Jake, we have no idea how many mistakes
  • 1:39 - 1:41
    we have made. And on those rare occasions,
    we
  • 1:41 - 1:43
    have actually learned from it, or at least
    we
  • 1:43 - 1:46
    would like to think so. So yeah, this talk
  • 1:46 - 1:53
    is about one such mistake from which we learned,
  • 1:53 - 1:53
    and yeah.
  • 1:53 - 1:55
    A.K.: And, yes, I think most of
  • 1:55 - 1:57
    it, we just put it up front, just based
  • 1:57 - 2:00
    on projects, and you know, as we were talking
  • 2:00 - 2:04
    about what happened with each of us, and things
  • 2:04 - 2:06
    like that. So yeah, just trying to make it,
  • 2:06 - 2:09
    you know, presentable and that stuff. But
    we-
  • 2:09 - 2:12
    P.S.: Yeah, like, though we screwed up, we
    would like
  • 2:12 - 2:14
    to believe that no employers or paranoid androids
    were
  • 2:14 - 2:17
    hurt in the process of our mistakes, so yeah.
  • 2:17 - 2:21
    OK, about three months back, so why pharmacists
    and
  • 2:21 - 2:24
    doctors, right? So about three months back
    I was
  • 2:24 - 2:28
    in this pharmacy buying diapers for my daughter,
    and
  • 2:28 - 2:30
    in walks this guy - he just goes straight
  • 2:30 - 2:34
    to the pharmacist and he's like, hey, can
    you
  • 2:34 - 2:37
    give me something for a toothache? There was
    something
  • 2:37 - 2:40
    very interesting and weird about this, and
    Jake and
  • 2:40 - 2:43
    I, we carpool. So the next morning I was
  • 2:43 - 2:45
    just telling Jake, and between the two of
    us,
  • 2:45 - 2:50
    we realized that
  • 2:50 - 2:54
    we have seen people ask for all sorts of
  • 2:54 - 2:57
    medicines in a pharmacy. Right. Headaches,
    fever, like, like,
  • 2:57 - 3:00
    true story, I even once saw this guy with
  • 3:00 - 3:05
    a lot of cuts on his face from a
  • 3:05 - 3:07
    knife, and yeah. Insane. Insane, right. So
    when
  • 3:07 - 3:09
    we were talking about this, we thought there
    was
  • 3:09 - 3:11
    something fundamentally wrong with this. Is
    there anyone here
  • 3:11 - 3:14
    who thinks that it's totally OK for you to
  • 3:14 - 3:21
    just walk up to a pharmacist and ask for
  • 3:25 - 3:28
    a medicine? Oh yeah, cool. Yeah, so. Nice.
    OK.
  • 3:28 - 3:31
    Hold that thought. Yeah, but like. So what
    we
  • 3:31 - 3:34
    think, yes, a pharmacy potentially has a cure
    for
  • 3:34 - 3:37
    pretty much any, most ailments that you could
    think
  • 3:37 - 3:41
    of. But the important thing is, though,
    you
  • 3:41 - 3:43
    need to know what ailment you have. Right,
    there's
  • 3:43 - 3:47
    that small implementation detail, right. And
    if it was
  • 3:47 - 3:49
    that easy for you to just go to the
  • 3:49 - 3:52
    right medicine and get it, this world would
    be
  • 3:52 - 3:57
    filled with only pharmacists and not doctors,
    right. Yeah,
  • 3:57 - 3:59
    so that's, so that's where the whole analogy
    starts,
  • 3:59 - 4:01
    and then we'll get to how we connect to
  • 4:01 - 4:02
    what we're going to talk about.
  • 4:02 - 4:06
    A.K.: Yeah, and that's, that's where we
    sort of thought
  • 4:06 - 4:09
    that we'd use this metaphor to drive home that
  • 4:09 - 4:12
    point. Of course, a lot of you might have
  • 4:12 - 4:15
    your opinions about self-medication and the
    whole thing. So
  • 4:15 - 4:19
    we'll stop it at this, and we will give
  • 4:19 - 4:21
    you our definition of what we think
    these
  • 4:21 - 4:26
    two sort of mindsets actually are and, you
    know.
  • 4:26 - 4:30
    So starting off with like doctors, right.
    They
  • 4:30 - 4:33
    rarely try to treat just the
    symptoms;
  • 4:33 - 4:35
    it's about how you deal with them. So they
  • 4:35 - 4:37
    go about just figuring out what the problem
    could
  • 4:37 - 4:40
    be, you know, and then probably, you know,
    a
  • 4:40 - 4:43
    lot of tests, make you run through a few
  • 4:43 - 4:46
    tests or try and figure out what's what, if
  • 4:46 - 4:49
    indeed it is the problem, and then try and,
  • 4:49 - 4:54
    based on that, prescribe a treatment and,
    of course,
  • 4:54 - 4:56
    make sure that you're OK at the end of
  • 4:56 - 4:58
    it, right. The symptoms are gone.
  • 4:58 - 5:01
    P.S.: And we didn't take a look through a
    medical textbook, so
  • 5:01 - 5:03
    we don't know if this is right, but assuming
  • 5:03 - 5:06
    it is, though, this is what worked for us,
  • 5:06 - 5:07
    so.
  • 5:07 - 5:11
    A.K.: And, again, in contrast, a pharmacist's
    job,
  • 5:11 - 5:14
    we think, is very different: it should be more about
    understanding
  • 5:14 - 5:19
    the medicines, the medicines themselves. Probably
    even figure out
  • 5:19 - 5:21
    what the disease is based on the medicine,
    right.
  • 5:21 - 5:25
    But definitely they don't really think about,
    you know,
  • 5:25 - 5:27
    what was the problem originally or what are
    we
  • 5:27 - 5:29
    prescribing the treatment for. And they usually
    don't do
  • 5:29 - 5:31
    it. Hopefully they don't do it.
  • 5:31 - 5:32
    P.S.: Yeah. OK.
  • 5:32 - 5:34
    So now with this context of what we mean
  • 5:34 - 5:38
    by a doctor and a pharmacist and medicines
    and
  • 5:38 - 5:41
    self-medication, all right. Let's get back
    to our mistake,
  • 5:41 - 5:44
    which we want to talk about. Right. So our
  • 5:44 - 5:46
    mistake that we want to talk about is a
  • 5:46 - 5:48
    way we have dealt with, or rather we used
  • 5:48 - 5:51
    to deal with bad symptoms on our code bases
  • 5:51 - 5:55
    and on our projects, right. So you, a lot
  • 5:55 - 5:58
    of times you see these in your code bases.
  • 5:58 - 6:01
    There's the symptom, or there are some issues,
    right.
  • 6:01 - 6:04
    So we obviously had a lot of those, in
  • 6:04 - 6:07
    every single project we have worked on, and
    this
  • 6:07 - 6:09
    is about how we dealt with that, right.
  • 6:09 - 6:13
    A.K.: Let's start off with one very simple
    one, or
  • 6:13 - 6:15
    at least the one which was the most easily
  • 6:15 - 6:15
    expressible.
  • 6:15 - 6:19
    P.S.: Yeah, as in, Tejas Dinkar, who has
  • 6:19 - 6:22
    already been mentioned several times in different
    talks, he
  • 6:22 - 6:24
    threatened to throw us off the stage if we
  • 6:24 - 6:28
    took anything more than twenty-nine minutes,
    fifty-nine seconds. So
  • 6:28 - 6:32
    we had to like really dumbify our, you know,
  • 6:32 - 6:35
    anecdotes. But you know, we have like a quick
  • 6:35 - 6:36
    mention of different things which we would
    love to
  • 6:36 - 6:40
    talk about offline, but yeah. So thanks Tejas.
  • 6:40 - 6:43
    A.K.: So we'll get started on the first one.
    So
  • 6:43 - 6:46
    this was a project where we had a familiar
  • 6:46 - 6:49
    problem of regression bugs. We added new features,
    and
  • 6:49 - 6:53
    that kept breaking things. So this is what
    we
  • 6:53 - 6:56
    designed. We want this down. We want the number
  • 6:56 - 6:59
    of bugs down from 10 to 5, you know.
  • 6:59 - 7:01
    At that time it was, like, let's set up
  • 7:01 - 7:03
    this goal, let's try and achieve this. What
    did
  • 7:03 - 7:05
    we do? Oh, before that. Some facts about the
  • 7:05 - 7:09
    project, right. This was not a project we
    started
  • 7:09 - 7:12
    from scratch, it was a lot of legacy
  • 7:12 - 7:13
    code, a lot of code that we did not
  • 7:13 - 7:18
    understand, and probably that's why we thought
    it was
  • 7:18 - 7:19
    bad. And the test coverage was-
  • 7:19 - 7:22
    P.S.: How many of you have taken over a code
    base from
  • 7:22 - 7:24
    another team and thought the code base was
    awesome?
  • 7:24 - 7:29
    Very small sample, so you realize what we
    mean
  • 7:29 - 7:32
    when we thought it was bad. So.
  • 7:32 - 7:33
    A.K.: Sure.
  • 7:33 - 7:36
    So test-coverage was low, which was probably
    one of
  • 7:36 - 7:39
    the reasons why people complained about the
    code base,
  • 7:39 - 7:42
    of course. So what's your guess?
  • 7:42 - 7:46
    P.S.: OK, so the problem we had was every
    time we checked
  • 7:46 - 7:49
    in something, built a new feature, touched any
    part
  • 7:49 - 7:50
    of the code base, we ended up breaking a
  • 7:50 - 7:52
    whole bunch of other things. And we would
    not
  • 7:52 - 7:54
    even know it right away, we would know it
  • 7:54 - 7:55
    over a period of time. Right, so this was
  • 7:55 - 7:58
    a regression problem. And given the facts,
    what would
  • 7:58 - 7:59
    you probably have done?
  • 7:59 - 8:01
    A.K.: I'll try and just
  • 8:01 - 8:04
    go back again, and then hopefully forward
    - we're
  • 8:04 - 8:05
    done [00:08:04]??.
  • 8:05 - 8:08
    P.S.: Yeah, based on some facts.
  • 8:08 - 8:14
    A.K.: Sure, so the low coverage was definitely
    a problem.
  • 8:14 - 8:17
    We thought, I mean, everybody agreed that
    we need
  • 8:17 - 8:20
    to start working on that, and you know, fix
  • 8:20 - 8:23
    the coverage. So we went in there, put the
  • 8:23 - 8:26
    coverage tool in place, you know. And then
    we
  • 8:26 - 8:28
    decided we would write tests for every bug
    we
  • 8:28 - 8:33
    caught. We got the coverage up, not very surprising,
  • 8:33 - 8:36
    I mean, we didn't manage to- you know, improving
  • 8:36 - 8:37
    the coverage-
  • 8:37 - 8:41
    P.S.: Yeah like, so we spent like the whole
    time-
  • 8:41 - 8:43
    A.K.: over a period of time.
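
[A sketch of the kind of coverage setup described here, using SimpleCov,
the current Ruby line-coverage tool; the project in the talk predates it
and used rcov, but the idea is the same.]

        # spec/spec_helper.rb -- SimpleCov must start before the app code
        # loads, or already-loaded files are invisible to the report.
        require 'simplecov'
        SimpleCov.start 'rails' do
          add_filter '/spec/'   # don't count the tests themselves as covered code
        end
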
  • 8:43 - 8:45
    P.S.: OK, so, like so this was a problem,
  • 8:45 - 8:46
    right. When you look at that, when you try
  • 8:47 - 8:48
    to abstract it into what our thought
    process
  • 8:48 - 8:57
    was. It was something like this, right. Check.
    Yeah.
  • 8:57 - 8:59
    So, we had a symptom. In this case it
  • 8:59 - 9:04
    was low test-coverage, and - Jake - and we
  • 9:04 - 9:07
    had, we decided on what metric and tool to
  • 9:07 - 9:10
    use. In our case it was a simple line
  • 9:10 - 9:12
    coverage, right, and rcov, this was back in
  • 9:12 - 9:15
    the days. And then we started solving that
    problem
  • 9:15 - 9:18
    that we had, and then, you know, hopefully,
    for
  • 9:18 - 9:21
    example, we were TDDing most of the new code
  • 9:21 - 9:24
    that we wrote, so coverage was good on that,
  • 9:24 - 9:28
    then started writing tests for any
  • 9:28 - 9:30
    part of the code base which we touched where
  • 9:30 - 9:32
    there were no tests, we started adding tests
    there.
  • 9:32 - 9:34
    You know, a bunch of different things. So
    basically,
  • 9:34 - 9:36
    like, the idea was to improve the coverage
    and
  • 9:36 - 9:40
    keep on writing, right. So cool, so. And what
  • 9:40 - 9:43
    was the result? We ended up with, of course,
  • 9:43 - 9:45
    drastically improving our test coverage, so
    we were in
  • 9:45 - 9:48
    the high nineties for most of the
  • 9:48 - 9:51
    code base, which was awesome.
  • 9:51 - 9:53
    A.K.: A hundred, man, a hundred.
  • 9:53 - 9:56
    P.S.: A hundred, yeah. Or, yeah, sure.
  • 9:56 - 10:00
    Very good coverage. But things got only marginally
    better.
  • 10:00 - 10:02
    At this point, this was when we realized that
  • 10:02 - 10:04
    in spite of putting so much effort into actually
    improving
  • 10:04 - 10:07
    our test coverage, our actual goal was to
  • 10:07 - 10:11
    reduce the number of regression bugs. So we,
    we
  • 10:11 - 10:14
    were still no better than what we started
    off,
  • 10:14 - 10:17
    about two months back. So the developers were
    generally
  • 10:17 - 10:20
    very happy, now they were doing a lot more
  • 10:20 - 10:22
    TDD, and they had successfully
  • 10:22 - 10:25
    convinced the, you know, the product
    manager and
  • 10:25 - 10:28
    the stakeholders to spend more time on the
  • 10:28 - 10:30
    tech debt and things like that. They
  • 10:30 - 10:32
    also were happy. But the project manager was
    extremely
  • 10:32 - 10:34
    frustrated, because in spite of spending so
    much effort,
  • 10:34 - 10:37
    there was no real benefit from any of it,
  • 10:37 - 10:40
    right. So it's like one of those very classic
  • 10:40 - 10:42
    moments from Dilbert where, you know,
  • 10:42 - 10:45
    OK, in Dilbert developers are never happy,
    but
  • 10:45 - 10:47
    at least here we were happy and the project
  • 10:47 - 10:49
    manager was sad, right. A.K.: And we weren't
    happy
  • 10:49 - 10:50
    with the project managers either.
  • 10:50 - 10:53
    P.S.: Yeah, and eventually we felt there was
    something wrong,
  • 10:53 - 10:56
    it's not like we took pleasure in it.
    So, we
  • 10:56 - 10:59
    think this is a very big mistake, where we
  • 10:59 - 11:01
    spent almost two months of time without really
    realizing
  • 11:01 - 11:04
    where we were going wrong, right. So this
    mistake
  • 11:04 - 11:07
    and several mistakes across different projects
    which Tejas won't
  • 11:07 - 11:12
    let us go into, ended up making us realize
  • 11:12 - 11:16
    something very basic, right. And that's, this
    is basically
  • 11:16 - 11:17
    what, this is what we were going to say.
  • 11:17 - 11:18
    A.K.: OK, tell us a little bit.
  • 11:18 - 11:21
    P.S.: If we had like a lightning talk, this
    is
  • 11:21 - 11:22
    probably the only thing we would have put
  • 11:22 - 11:26
    up and left. So never improve a metric. Solving
  • 11:26 - 11:30
    a problem should automatically improve the
    metric that you're
  • 11:30 - 11:34
    measuring, right. So the focus is on, is never
  • 11:34 - 11:37
    on making a metric better. It's always about
    solving
  • 11:37 - 11:38
    the problem. And, the metric-
  • 11:38 - 11:39
    A.K.: This is like
  • 11:39 - 11:40
    one of those, one of those things that is,
  • 11:40 - 11:42
    it's very easily said and-
  • 11:42 - 11:42
    P.S.: Yeah, and it-
  • 11:42 - 11:43
    A.K.: You have to, at least for us, we
  • 11:43 - 11:45
    always fell in that trap of-
  • 11:45 - 11:46
    P.S.: Yeah, like
  • 11:46 - 11:48
    it almost sounds like 'do the right thing,'
    but,
  • 11:48 - 11:51
    yeah, like, it's very, it fits your common
    sense
  • 11:51 - 11:53
    very well, but then when you're caught up
    in
  • 11:53 - 11:55
    the daily, the day-to-day stuff in what you
    do
  • 11:55 - 11:57
    in a project, it becomes very easy for you
  • 11:57 - 11:59
    to miss the point here. So yeah, like this
  • 11:59 - 12:03
    is really the essence of what we
  • 12:03 - 12:06
    are trying to say, right.
  • 12:06 - 12:08
    A.K.: So, so what really happened here.
  • 12:08 - 12:09
    Let's go a little bit more
  • 12:09 - 12:12
    into what we were trying earlier and what
    we
  • 12:12 - 12:15
    think we should have probably done. Instead
    of something
  • 12:15 - 12:19
    like this, which, which actually ended up
    attacking the
  • 12:19 - 12:23
    symptoms, or you know, targeting the symptom,
    we want
  • 12:23 - 12:25
    to do something like this: There is a symptom,
  • 12:25 - 12:26
    so just like always-
  • 12:26 - 12:27
    P.S.: This is where the
  • 12:27 - 12:30
    whole, like, the pharmacist and the doctor
    approach, yeah,
  • 12:30 - 12:32
    like, it's a very long-shot metaphor, we agree,
    but-
  • 12:32 - 12:35
    A.K.: The doctor thinking, which we hopefully
    want to
  • 12:35 - 12:40
    do, is first is just try and take a
  • 12:40 - 12:41
    guess at least at the problem, at least in
  • 12:41 - 12:43
    our context, maybe not the doctor's. But in
    our
  • 12:43 - 12:47
    context, take a guess at the problem, right.
    Think
  • 12:47 - 12:50
    what it might be. Then that hopefully will
    tell
  • 12:50 - 12:52
    you what you could do to probably solve the
  • 12:52 - 12:55
    problem, solve, you know, that could be the
    solution
  • 12:55 - 12:59
    which hope- will hopefully fix the problem,
    right. So
  • 12:59 - 13:02
    this is kind of similar: we keep iterating
    over
  • 13:02 - 13:05
    this and hopefully, you know, improving on
    what we
  • 13:05 - 13:10
    thought was the problem. And how do you know,
  • 13:10 - 13:12
    then, that you know we are in fact improving
  • 13:12 - 13:14
    on the problem? How do we know that is
  • 13:14 - 13:14
    the problem?
  • 13:14 - 13:14
    P.S.: Like the whole, how do you
  • 13:14 - 13:15
    define them, right. So, how do we define them?
  • 13:15 - 13:18
    A.K.: And that's where we think, that's where
    we
  • 13:18 - 13:21
    think the metrics come in. Again, not metric,
    hopefully
  • 13:21 - 13:24
    metrics, because that lets us measure the
    problem. It
  • 13:24 - 13:28
    tells us at every point that, you know, you're
  • 13:28 - 13:30
    doing better, you know, it's improving. And
    hopefully it
  • 13:30 - 13:35
    also tells you when you're done, right. So, yeah.
    So
  • 13:35 - 13:37
    this is, this is probably the approach that
    we
  • 13:37 - 13:40
    would like to try from now on,
  • 13:40 - 13:44
    also. And, right, there, you know, the problem
    may
  • 13:44 - 13:46
    not be the one that we
  • 13:46 - 13:48
    started out to fix. Like, like, probably in
    our
  • 13:48 - 13:51
    previous case, you know, it, you should always
    be
  • 13:51 - 13:53
    open to the idea that what
  • 13:53 - 13:55
    you thought was the problem was never the
    case,
  • 13:55 - 13:58
    and it was not showing up. I mean, you
  • 13:58 - 14:00
    were trying to, you were seeing the metrics
    improve,
  • 14:00 - 14:02
    but then the symptom never went away, right.
    So
  • 14:02 - 14:04
    be open to the notion that the problem could
  • 14:04 - 14:07
    be different, in which case, the important
    thing is
  • 14:07 - 14:09
    the solution is different and the metrics
    are different,
  • 14:09 - 14:10
    right. So yeah.
  • 14:10 - 14:12
    P.S.: Any guesses on what could
  • 14:12 - 14:15
    have been the problem on that project? Where
    the
  • 14:15 - 14:20
    regression bug rate was very high? It
    was duplication,
  • 14:20 - 14:23
    as in there was, like, rampant duplication
    all over
  • 14:23 - 14:26
    the place. And we would change something but
    forget
  • 14:26 - 14:28
    to change the same thing in some other place.
  • 14:28 - 14:30
    But because we didn't know the
    code
  • 14:30 - 14:34
    base, yeah. We were just blindly adding tests.
    And
  • 14:34 - 14:37
    incrementally going through different places,
    where each place where
  • 14:37 - 14:39
    we found that there were no tests, we added
  • 14:39 - 14:40
    a test, right. But that didn't really help
    us
  • 14:40 - 14:43
    with actually solving the problem of duplication.
    Yeah. So-
  • 14:43 - 14:45
    A.K.: The coverage number is something which
    is, it
  • 14:45 - 14:47
    easily drives you to just keep adding the
    specs,
  • 14:47 - 14:50
    and we'll talk more about that soon.
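
[If rampant duplication is the real problem, a duplication score is the
metric that matches it, not line coverage. A sketch using the flay gem,
which scores structural similarity in Ruby code; the tool choice here is
illustrative, the speakers don't name one.]

        # Gemfile (development group)
        gem 'flay'

        # Then, from the project root:
        #   flay app/ lib/
        # flay parses the code and reports similar or identical structures,
        # weighted by size; a falling total score is direct evidence the
        # duplication problem is actually shrinking.
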
  • 14:50 - 14:53
    P.S.: Yeah, so, if you really think about
    it,
  • 14:53 - 14:56
    basically, at least, we would love
    to
  • 14:56 - 14:59
    believe that we have stopped doing this, right.
    So
  • 14:59 - 15:02
    Yogi mentioned this in the panel discussion
    yesterday, blog-post
  • 15:02 - 15:06
    driven decisions, right. So that was really
    what we
  • 15:06 - 15:08
    were doing, essentially. Like, we were a bunch
    of
  • 15:08 - 15:11
    kids on this project, who'd see a problem
    at
  • 15:11 - 15:14
    the first trigger, we would
  • 15:14 - 15:17
    just start crawling the web, start crawling
    blog posts,
  • 15:17 - 15:21
    see different github projects, find a gem,
    install it,
  • 15:21 - 15:23
    start, like, monitoring it, measuring it,
    try to improve
  • 15:23 - 15:26
    it, you know. We didn't really spend too
    much
  • 15:26 - 15:29
    time figuring out what was really the
    problem,
  • 15:29 - 15:36
    right. And, especially, so in, OK - where
    does-
  • 15:37 - 15:40
    Especially so in Rails projects, where, you
    know, or
  • 15:40 - 15:43
    Ruby projects, where we believe that the number
    of
  • 15:43 - 15:47
    gems which actually bundle best practices
    is actually very
  • 15:47 - 15:49
    high, right. Here it's very easy for us to
  • 15:49 - 15:51
    fall into the trap of, OK, just choose a
  • 15:51 - 15:53
    gem, start using it, and then two months or
  • 15:53 - 15:55
    three months down the way you have no idea
  • 15:55 - 15:57
    where you used it in the first place. But
  • 15:57 - 15:59
    it's just there in your process, right. Yeah.
    Like
  • 15:59 - 16:01
    at least we used to find ourselves in that
  • 16:01 - 16:05
    trap all the time. So yeah. This is basically
  • 16:05 - 16:08
    what we stopped doing. OK, at this-
  • 16:08 - 16:09
    A.K.: [indecipherable]
  • 16:09 - 16:11
    P.S.: So this, this dude is basically Hari,
  • 16:11 - 16:14
    he's sitting way over there. Yesterday we
    were doing
  • 16:14 - 16:17
    a write-in?? [00:16:14], and he was the only
    dude that said, after this
  • 16:17 - 16:19
    point it's fine, but it's getting monotonous,
    it's very
  • 16:19 - 16:22
    black and white. And you need more images.
    And,
  • 16:22 - 16:25
    like, Jake and I were really, we really think
  • 16:25 - 16:25
    that-
  • 16:25 - 16:26
    A.K.: That's our image, Hari!
  • 16:26 - 16:29
    P.S.: We really think that we don't know how
    to add images
  • 16:29 - 16:31
    to a presentation. And we were like, OK fine
  • 16:31 - 16:35
    Hari, we'll just add your picture. And, yeah,
    so
  • 16:35 - 16:37
    thanks for- and as you notice, we are very
  • 16:37 - 16:39
    receptive to feedback, so.
  • 16:39 - 16:41
    A.K.: He hasn't spoken yet,
  • 16:41 - 16:42
    but his deck is full of interesting ones.
  • 16:42 - 16:45
    P.S.: Like it would have been funny if like
    his was
  • 16:45 - 16:47
    the presentation before we got the schedule,
    but yeah,
  • 16:47 - 16:51
    anyway. He'll be talking next. So yeah, when
    you
  • 16:51 - 16:53
    look at test coverage, right, how many of
    you
  • 16:53 - 16:59
    think you understand test coverage very well?
    Well enough.
  • 16:59 - 17:04
    OK, I mean, sure, yeah, like-
  • 17:04 - 17:08
    A.K.: This was one thing which at least struck
    us as a
  • 17:08 - 17:10
    very obvious metrics thing, which we always
    get into,
  • 17:10 - 17:11
    and-
  • 17:11 - 17:12
    P.S.: When we were young and stupid as
  • 17:12 - 17:14
    against now old and stupid, right, we were,
    we
  • 17:14 - 17:16
    used to think oh, test coverage, what's that,
    it's
  • 17:16 - 17:19
    just - meh. It's so easy, right. But then
  • 17:19 - 17:22
    we realized even something so seemingly obvious
    had, like,
  • 17:22 - 17:25
    so many different shades of details. And once
    you
  • 17:25 - 17:28
    start understanding it and interpreting it
    in context is
  • 17:28 - 17:32
    when you really understand the, the complexity
    of that,
  • 17:32 - 17:36
    of that thing that you're measuring, right.
    Now think
  • 17:36 - 17:38
    of all the things that people at Flipkart
    measure.
  • 17:38 - 17:41
    I don't even know if they have a rationale
  • 17:41 - 17:44
    behind every one of them. I'm hoping they do,
  • 17:44 - 17:46
    but you know like, it becomes very OK, we
  • 17:46 - 17:48
    just need to monitor these ten things. Why?
    Why
  • 17:48 - 17:51
    are you doing it? Right. So it should not
  • 17:51 - 17:53
    be like a checklist at the start of every project.
  • 17:53 - 17:56
    You just start using it. So yeah, test coverage
  • 17:56 - 17:58
    is definitely one thing that we found where,
    on
  • 17:58 - 17:59
    the start of every project we just set up a
  • 17:59 - 18:01
    coverage tool. We just wanted to talk about
    some
  • 18:01 - 18:04
    details on what we learned when doing that,
    so
  • 18:04 - 18:05
    yeah.
  • 18:05 - 18:07
    A.K.: So first we want to start off
  • 18:07 - 18:12
    with the obvious one: the measuring. So controller
    specs versus
  • 18:12 - 18:16
    model specs coverage. I- I'm guessing it,
    does it
  • 18:16 - 18:21
    ring any bells? I'm- so, so I'm, think of
  • 18:21 - 18:24
    it this way, like, he has a question for
  • 18:24 - 18:27
    you, right? You have, let's take a simple
    case.
  • 18:27 - 18:29
    There is a single controller, you have a spec
  • 18:29 - 18:32
    around it, there's a corresponding model,
    you have a
  • 18:32 - 18:34
    spec around it, right. And then, you, with
    these
  • 18:34 - 18:35
    two tests you run your coverage, right.
  • 18:35 - 18:37
    P.S.: And you get some coverage from-
  • 18:37 - 18:39
    A.K.: I'm guessing it's going to be a hundred
    percent.
  • 18:39 - 18:44
    P.S.: Do you see a problem with this? Could
    there, rather, OK,
  • 18:44 - 18:49
    could there be a problem with this?
  • 18:49 - 18:54
    A.K.: What if you just removed the model spec?
  • 18:54 - 18:57
    P.S.: What will the coverage for model be?
    Is there a
  • 18:57 - 19:02
    chance that model's coverage is not zero?
  • 19:02 - 19:05
    A.K.: Your controller spec is still gonna
    be loading the model.
  • 19:05 - 19:05
    So your coverage-
  • 19:05 - 19:06
    P.S.: Depends on-
  • 19:06 - 19:06
    A.K.: -is still in question.
  • 19:06 - 19:08
    P.S.: The answer is it depends, right,
  • 19:08 - 19:10
    like, really depends on how you are testing
    your
  • 19:10 - 19:12
    controller. But most of the time, what we
    have
  • 19:12 - 19:14
    seen is, not every model is mocked out in
  • 19:14 - 19:17
    the controller. Well, it's a totally different
    debate, whether
  • 19:17 - 19:20
    you should mock your models or not, but if
  • 19:20 - 19:23
    you are not mocking, your models
    are
  • 19:23 - 19:25
    being loaded in your controller. So the controller
    spec,
  • 19:25 - 19:28
    when it runs, like when the coverage
  • 19:28 - 19:31
    is reported, your models are being reported
    as well.
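
[A minimal RSpec sketch of this point; Post and PostsController are
hypothetical names. Without stubbing, the controller spec executes the
model's code, so the model can look covered even if its own spec is
deleted.]

        # spec/controllers/posts_controller_spec.rb (loaded via rails_helper)
        # Unmocked: `get :index` runs Post.recent for real, so Post's lines
        # count as covered by this controller spec alone.
        RSpec.describe PostsController, type: :controller do
          it 'lists posts' do
            get :index
            expect(response).to be_ok
          end
        end

        # Mocked: the model call is stubbed out, so Post's lines are not
        # exercised here and its coverage must come from its own spec.
        RSpec.describe PostsController, type: :controller do
          it 'lists posts without touching the model' do
            allow(Post).to receive(:recent).and_return([])
            get :index
            expect(response).to be_ok
          end
        end
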
  • 19:31 - 19:34
    A.K.: While, the
  • 19:34 - 19:36
    point we are trying to make here is
  • 19:36 - 19:38
    not whether you should, how you should test
    your
  • 19:38 - 19:42
    model specs and controller specs. What we
    do implore
  • 19:42 - 19:44
    you to do is make sure that when you're
  • 19:44 - 19:46
    doing, when you're looking at your coverage,
    you do
  • 19:46 - 19:48
    have in mind your testing strategy, which
    is, am
  • 19:48 - 19:51
    I actually mocking the model out or is it
  • 19:51 - 19:53
    also getting exercised as a part of my controller
  • 19:53 - 19:55
    spec? Is my controller spec also hitting the
    models,
  • 19:55 - 19:57
    right? Think about these things when, when
    you're looking
  • 19:57 - 20:00
    at these numbers, right. Or something that
    worked for
  • 20:00 - 20:02
    us which we tried to do was we started
  • 20:02 - 20:06
    monitoring the model specs coverage independently,
    and then started
  • 20:06 - 20:13
    looking at the controller specs in light of
    the,
  • 20:13 - 20:14
    in light of the model spec coverage. We wanted
  • 20:14 - 20:17
    the model spec coverage to be high, because
    at
  • 20:17 - 20:20
    least we wanted all, hopefully all our business
    logic
  • 20:20 - 20:23
    was in the models, and you know, that's
  • 20:23 - 20:29
    what we were keen on. Yes.
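
[One way to watch those slices separately: SimpleCov groups, which break
the report down by app area; standard Rails paths assumed. Measuring
model spec coverage truly independently also means running the model
specs on their own.]

        SimpleCov.start 'rails' do
          add_group 'Models',      'app/models'
          add_group 'Controllers', 'app/controllers'
        end
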
  • 20:29 - 20:32
    Yeah, and then the next one, the line coverage itself, I
    think
  • 20:32 - 20:33
    most commonly when we talk about coverage
    we just
  • 20:33 - 20:35
    talk about line coverage. But then there is
    a
  • 20:35 - 20:38
    bunch of other things as well, branch coverage,
    and
  • 20:38 - 20:40
    then unique path coverage.
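
[Why line coverage alone can mislead: in this hypothetical method, a
single test covers 100% of the lines while an entire branch never runs.
SimpleCov 0.18+ can track this via enable_coverage :branch.]

        # One test calling discount(100, true) covers every line here,
        # yet the else arm of the ternary is never exercised.
        def discount(price, member)
          member ? price * 0.9 : price
        end

        # spec/spec_helper.rb
        SimpleCov.start do
          enable_coverage :branch   # report branches, not just lines
        end
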
  • 20:40 - 20:41
    P.S.: How many of you
  • 20:41 - 20:42
    pay attention to branch coverage?
  • 20:42 - 20:45
    A.K.: Or even monitor it?
  • 20:45 - 20:47
    P.S.: How many of you don't think it's
  • 20:47 - 20:53
    important? How many of you have no opinions?
    Cool.
  • 20:53 - 20:58
    Yeah. I mean, sure. We have it on projects
  • 20:58 - 21:01
    where it's been important, it's not been important,
    it's
  • 21:01 - 21:03
    fine. But it's just that, you just need to
  • 21:03 - 21:06
    know that it exists, and you need to train
  • 21:06 - 21:08
    your data, right, so.
  • 21:08 - 21:09
    A.K.: Just, hopefully it should
  • 21:09 - 21:12
    not be a single metric. Something usually
    seems wrong
  • 21:12 - 21:13
    if it is just gonna be about that one
  • 21:13 - 21:19
    metric. Next one. So reporting. Yeah, this
    one is
  • 21:19 - 21:21
    a bit tricky. What I usually don't like about
  • 21:21 - 21:24
    coverage tools and these tools in general
    is they
  • 21:24 - 21:27
    sometimes miss out this aspect of it. And
    they,
  • 21:27 - 21:28
    in an attempt to try and be nice to
  • 21:28 - 21:31
    you and give you a very simple answer, they try
  • 21:31 - 21:33
    and give you a number which inherently makes
    it
  • 21:33 - 21:36
    good or bad. There's nothing in between, and
    then
  • 21:36 - 21:39
    the focus is lost. Like you either start liking
  • 21:39 - 21:41
    it or you don't like it. You don't really
  • 21:41 - 21:46
    think about what is there in between, right.
    Yeah.
  • 21:46 - 21:48
    So the focus on, focus on some of the
  • 21:48 - 21:49
    aspects of where is the coverage, you know,
    what
  • 21:49 - 21:52
    does it mean, right. One thing that worked
    for
  • 21:52 - 21:55
    us was Code Climate on recent projects.
    I
  • 21:55 - 21:57
    really like it because they put a lot of
  • 21:57 - 22:00
    focus into the presentation aspect of it.
    Not just
  • 22:00 - 22:03
    the collection aspect of it. It really, you
    know,
  • 22:03 - 22:07
    takes you down to the, to the code, which
  • 22:07 - 22:10
    is missing the specs. Of course, they do also
  • 22:10 - 22:12
    other metrics like code quality, which I really
    like,
  • 22:12 - 22:14
    by the way. They have some notification things
    like
  • 22:14 - 22:17
    that, like, it alerts us, whenever a
  • 22:17 - 22:19
    commit goes in and the coverage goes down
    or
  • 22:19 - 22:21
    something like that. [00:22:19] It doesn't
  • 22:21 - 22:24
    break the build, but it lets
  • 22:24 - 22:25
    you know, and then you can deal with it
  • 22:25 - 22:27
    if you think it is important to deal with.
  • 22:27 - 22:30
    P.S.: OK, speaking of breaking the build, how
    many of
  • 22:30 - 22:36
    you know what ratcheting is, in builds? OK. So
    the
  • 22:36 - 22:38
    idea of ratcheting is basically you will never
    leave
  • 22:38 - 22:41
    your code base worse than what it already
    was,
  • 22:41 - 22:45
    right. So every commit basically makes sure,
    even if
  • 22:45 - 22:47
    it doesn't do any good, it doesn't do any
  • 22:47 - 22:49
    bad to your code base. So for example, if
  • 22:49 - 22:52
    you're, your current code coverage is at 70%,
    and
  • 22:52 - 22:54
    if this check-in makes it 69%, it will break
  • 22:54 - 22:56
    the build. Even though there's nothing functionally
    wrong with
  • 22:56 - 23:00
    it, you know, it's bad, right. We really think
  • 23:00 - 23:03
    it's a double-edged sword. I will, this is
    one
  • 23:03 - 23:05
    of those things which, in theory
  • 23:05 - 23:09
    sounds very, very good, and direct, but in
    practice,
  • 23:09 - 23:11
    what it typically ends up doing is people
    end
  • 23:11 - 23:15
    up fretting about the metric, and never about
    what
  • 23:15 - 23:18
    the problem is, right. Because this goes against what
  • 23:18 - 23:20
    we said in the previous slide, which is coverage
  • 23:20 - 23:24
    is never red or green, right. Sometimes you
    are
  • 23:24 - 23:26
    OK with taking this hit because you want to
  • 23:26 - 23:29
    do something. I mean there are all, there
    are
  • 23:29 - 23:34
    so many reasons why, in
    reality
  • 23:34 - 23:36
    you may have to do some bad things, and
  • 23:36 - 23:38
    eventually have to pay for it, but it's OK,
  • 23:38 - 23:41
    it's a conscious decision, right. But ratcheting
    invariably stops
  • 23:41 - 23:45
    that. It makes it very, you know, black and
  • 23:45 - 23:45
    white, right.
  • 23:45 - 23:47
    A.K.: Difficult for you to proceed at
  • 23:47 - 23:48
    that very moment.
  • 23:48 - 23:49
    P.S.: Yeah and it has a
  • 23:49 - 23:53
    more behavioral impact on the team, which
    is your,
  • 23:53 - 23:57
    your team members start hating either the
    ratcheting, or
  • 23:57 - 24:00
    they start hating the metric, or you know,
    we
  • 24:00 - 24:04
    had this one person who did not commit for
  • 24:04 - 24:06
    four days because they thought they did not
    have
  • 24:06 - 24:09
    enough test coverage. And, like, when we preached
    the
  • 24:09 - 24:12
    whole, OK, frequent check-ins, small check-ins,
    and this person
  • 24:12 - 24:14
    was actually scared that
    they would
  • 24:14 - 24:18
    break the build, which is actually a very, very
    bad
  • 24:18 - 24:21
    sign from your team, right.
  • 24:21 - 24:23
    Now well, you can always argue, OK, this person
  • 24:23 - 24:25
    totally missed the point, we can say a bunch
  • 24:25 - 24:27
    of things, right, but this is the reality.
    So
  • 24:27 - 24:30
    we think that it's a good idea, so that
  • 24:30 - 24:32
    you might want to do it at certain points
  • 24:32 - 24:35
    in time. But yeah, be very, very careful about
  • 24:35 - 24:38
    the Freakonomics-style implications it has on
    your team,
  • 24:38 - 24:38
    right, always keep that in mind.
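
[A minimal sketch of a coverage ratchet as a CI step. It assumes
SimpleCov's coverage/.last_run.json output (the key name varies by
version); the .coverage_ratchet file is our own invention.]

        # ratchet_check.rb -- run after the test suite in CI
        require 'json'

        RATCHET_FILE = '.coverage_ratchet'

        result  = JSON.parse(File.read('coverage/.last_run.json'))['result']
        current = result['covered_percent'] || result['line'] # older / newer SimpleCov
        floor   = File.exist?(RATCHET_FILE) ? File.read(RATCHET_FILE).to_f : 0.0

        # Fail the build if coverage dropped below the high-water mark...
        abort "Coverage fell: #{current}% < #{floor}%" if current < floor

        # ...otherwise ratchet the floor up; it only ever rises.
        File.write(RATCHET_FILE, [current, floor].max.to_s)
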
  • 24:38 - 24:40
    A.K.: And then
  • 24:40 - 24:43
    the other very popular thing, which is, you
    just
  • 24:43 - 24:46
    write that test to bump up the coverage, if
  • 24:46 - 24:47
    you're not just-
  • 24:47 - 24:47
    P.S.: Yeah, so there was, there
  • 24:47 - 24:50
    was just one more philosophy of full-on coverage??
    [00:24:50],
  • 24:50 - 24:52
    which is saying, OK, you just checked-in code,
    but
  • 24:52 - 24:54
    you don't have a test for this, but that's
  • 24:54 - 24:57
    OK. There's this other class which does not
    have
  • 24:57 - 24:59
    a test for it, which is easy, so let's
  • 24:59 - 25:01
    add tests there and keep my coverage maintained
    right
  • 25:01 - 25:04
    there. There's so many, like, weird things
    people end
  • 25:04 - 25:06
    up doing, just because they are now worried
    about
  • 25:06 - 25:09
    coverage and not really worried about what
    it means
  • 25:09 - 25:11
    to- you know, what it means with what they
  • 25:11 - 25:12
    are doing, right. So it's-
  • 25:12 - 25:15
    A.K.: -is done with the best of intentions, but then
  • 25:15 - 25:17
    you get really, really bad, cause, like-
  • 25:17 - 25:17
    P.S.: Yeah.
  • 25:17 - 25:20
    So, OK, how, how did we go, I mean,
  • 25:20 - 25:22
    how do we now improve, like, so we still
  • 25:22 - 25:26
    take on a lot of customer projects, we are,
  • 25:26 - 25:28
    we are mainly now in consulting, so we do
  • 25:28 - 25:30
    end up taking over code bases, right. So
    what
  • 25:30 - 25:32
    do we do to improve coverage if we have
  • 25:32 - 25:35
    a bad one? One thing we realized is adding
  • 25:35 - 25:38
    unit tests to classes, existing classes, could
    be a
  • 25:38 - 25:41
    very dangerous thing, right. What that essentially
    means, is
  • 25:41 - 25:43
    you have, you know, you are cementing the
    current
  • 25:43 - 25:46
    design. Now I won't prejudge it; it might
  • 25:46 - 25:49
    be wrong to say it's not a good design, right.
  • 25:49 - 25:51
    But you are cementing it, right. If ever you
  • 25:51 - 25:54
    want to cement something, cement features.
    Cement functionality, right.
  • 25:54 - 25:57
    Which means you might want to write like a
  • 25:57 - 26:00
    much higher level test and, you know,
    ensure
  • 26:00 - 26:02
    that the functionality is the same, so that
    you
  • 26:02 - 26:04
    can go and refactor later. On this one project,
  • 26:04 - 26:07
    which Yogi, Jake, and I worked together on
    at
  • 26:07 - 26:11
    ThoughtWorks, it worked beautifully for us,
    where we were
  • 26:11 - 26:16
    strangling ?? to Hibernate [00:26:14]. Which
    meant that
  • 26:16 - 26:19
    the entire unit test and database level
    test suites were
  • 26:19 - 26:22
    completely invalidated, they were useless,
    right. And because of
  • 26:22 - 26:24
    the way, and because we wanted to turn on
  • 26:24 - 26:26
    to caches?? [00:26:24], transactions changed.
    Like that meant the
  • 26:26 - 26:30
    whole app was rendered useless from a testing
    point
  • 26:30 - 26:31
    of view, right, from a safety net point of
  • 26:31 - 26:35
    view. But what came to our help was controller
  • 26:35 - 26:38
    and beyond level tests. We
    had
  • 26:38 - 26:41
    such good coverage there, that we went in
    and
  • 26:41 - 26:43
    we just modified a whole bunch of things,
    and
  • 26:43 - 26:45
    we like started deleting tests, you know.
    You get
  • 26:45 - 26:48
    a lot more flexibility and freedom inside
    your code
  • 26:48 - 26:50
    base when you know that you're not breaking
    any
  • 26:50 - 26:53
    functionality. So yeah like it's a really,
    really, like,
  • 26:53 - 26:56
    good thing, so definitely think about it when
    you're
  • 26:56 - 26:57
    inheriting a legacy codebase.
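
[What "cementing functionality" can look like: a hypothetical end-to-end
request spec. It pins observable behaviour, not the current class
design, so the internals stay free to be refactored.]

        # spec/requests/checkout_spec.rb
        require 'rails_helper'

        RSpec.describe 'Checkout', type: :request do
          it 'accepts an order and confirms it' do
            post '/orders', params: { sku: 'ABC-1', quantity: 2 }

            expect(response).to have_http_status(:created)
            expect(JSON.parse(response.body)['status']).to eq('confirmed')
          end
        end
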
  • 26:57 - 26:59
    A.K.: Unit tests, unit tests
  • 26:59 - 27:01
    are great, but do keep in mind that they
  • 27:01 - 27:03
    might also come in the way of-
  • 27:03 - 27:04
    P.S.: Changes.
  • 27:04 - 27:04
    A.K.: Refactoring, and changes.
  • 27:04 - 27:05
    P.S.: Yeah.
  • 27:05 - 27:06
    A.K.: Big, big refactoring-
  • 27:06 - 27:08
    P.S.: So, OK. What - the whole measurement
  • 27:08 - 27:11
    reporting, ratcheting, improvement, all of
    this is basically saying,
  • 27:11 - 27:13
    always keep the problem you're solving in
    mind, right.
  • 27:13 - 27:16
    Ratchet or don't ratchet - how do you decide? Well,
  • 27:16 - 27:18
    is it helping you achieve your goal, or solve
  • 27:18 - 27:20
    the problem you have at hand? Sure, you know,
  • 27:20 - 27:26
    so do it. Otherwise don't do it, right. It's-
  • 27:26 - 27:27
    Yeah.
  • 27:27 - 27:29
    A.K.: I think-
  • 27:29 - 27:30
    P.S.: So, the second-
  • 27:30 - 27:31
    A.K.: Let's-
  • 27:31 - 27:33
    P.S.: -anecdote- I'll just be, quickly talk
    about-
  • 27:33 - 27:35
    A.K.: We have five minutes, so-
  • 27:35 - 27:36
    P.S.: Yeah, like,
  • 27:36 - 27:37
    the second anecdote was basically, OK, we
    had this
  • 27:37 - 27:42
    server which was under a very heavy load.
    And,
  • 27:42 - 27:49
    like thousands of requests a minute, and
    only about
  • 27:49 - 27:52
    5% of those requests, in very seemingly arbitrary
    periods
  • 27:52 - 27:56
    of time, and on arbitrary controllers and actions,
    would pause,
  • 27:56 - 27:59
    and it would take a very long time, right.
  • 27:59 - 28:02
    So the- this problem was way more technical.
    It
  • 28:02 - 28:04
    had nothing to do with the behavior or, you
  • 28:04 - 28:09
    know, like, the practices side
    of
  • 28:09 - 28:11
    things. It was a pure technical problem. And
    the goal was
  • 28:11 - 28:13
    for us to find the root cause of this
  • 28:13 - 28:17
    unpredictable behavior and fix it, right.
    And, yeah, like
  • 28:17 - 28:20
    we were, like we can definitely talk about
    how
  • 28:20 - 28:22
    we went through solving
  • 28:22 - 28:25
    a lot of different symptoms, at one point
    even
  • 28:25 - 28:28
    suspecting JRuby. You know, like, so it's
    very, like,
  • 28:28 - 28:31
    it becomes very hard for you to figure out,
  • 28:31 - 28:32
    OK, what is a problem, what is a symptom,
  • 28:32 - 28:35
    and, like, be very methodical about it, right.
    So
  • 28:35 - 28:38
    that's what this, this problem was going to
    be
  • 28:38 - 28:39
    about. But let's take this offline. At this
    point
  • 28:39 - 28:39
    we can-
  • 28:39 - 28:39
    A.K.: I mean, before that, yeah, let's
  • 28:39 - 28:43
    get to some questions.
  • 28:43 - 28:46
    P.S.: Yeah, let's take some
  • 28:46 - 28:48
    questions if you have any.
  • 28:48 - 28:49
    A.K.: Cool.
  • 28:49 - 28:51
    P.S.: Cool, thanks.
  • 28:51 - 28:52
    A.K.: Thanks.
  • 28:52 - 28:53
    P.S.: Thanks guys.