
How to Stop Hating Your Tests

Alright, high energy, I love it. The doors are closing and now we're covered. Alright, great. Can I get my slides up on the monitors? Alright, great. Let me start my timer. Where's my phone? Uh oh. Who has my phone? Where's my timer? Alright, we'll start. Alright.
So, there's this funny thing where every year conference season lines up with Apple's operating system release schedule, and I'm a big Apple fanboy. So on one hand I really want to upgrade, and on the other hand I really want my slide deck to work. This year, because they announced the iPad Pro, I was pretty excited. I was like, maybe this year OS 9 is finally going to be ready for me to give a real talk out of and to build my entire talk in. So this talk was built entirely in OS 9. Let's just start it up and see how it goes. I'm a little nervous.

Alright, so, it's a little retro. It takes a while to start up. I built my entire presentation in AppleWorks, so I've got to open up my AppleWorks presentation. OK, there it is. I've got to find the play button. And here we go. And good, alright.
So this talk is "How to Stop Hating Your Tests." My name is Justin. I play a guy named "searls" on the Internet, and I work at the best software agency in the world: Test Double.
So, why do people hate their tests? Well, I think a lot of teams start off in experimentation mode. Everything's fun and free, they're "pivoting" all the time, and having a big test suite would really just slow down their rate of change and discovery. But eventually we get to a point where we're worried that if we make a new change we might break things, and it's important that things stay working. So people start writing some test suites, so they have a build, so that when they push new code they know whether they just broke stuff. But if we write our tests in a haphazard, unorganized way, they tend to be slow and convoluted, and every time we want to change a thing we spend all day just updating tests. Eventually teams get to this point where they just yearn for the good old days, when they got to change stuff and move quickly. And I see this pattern repeat so much that I'm starting to believe that an ounce of prevention is worth a pound of cure in this instance, because once you get to the end there's not much you can do.
You can say, "Our test approach isn't working," and a lot of people will be like, "Well, I guess we're just not testing hard enough." But when you see a problem over and over again, I personally don't believe the "work harder, comrade" approach is appropriate. You should always be inspecting your workflow and your tools and trying to make them better if you keep running into the same issue. Some other people might say, "OK, well, let's just buckle down, remediate, testing is job one, let's really focus on testing for a while." But from the perspective of the people who pay us to build stuff, testing is not job one. It's at best job two. From their perspective they want to see us shipping stuff, shipping new features. And the longer we go with that impedance mismatch, the more friction and tension we're going to have. So that's not sustainable.
I said we're talking about prevention, but if you're working in a big, legacy, monolithic application, you know, and you're not greenfield, this is not a problem at all, because I've got this cool thing to show you. There's this one weird trick to starting fresh with your test suite. That's right, you're going to learn what the one weird trick is. Basically, you just move your tests into a new directory. And then you make another directory. And then you have two directories. And then you can write this thing called a shell script, get this, that runs both test suites. And then, you know, you eventually port them over and you're able to decommission the old test suite.
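A minimal sketch of that trick, written as a Rake task rather than a literal shell script since this is a Ruby context; the test/ and test_legacy/ directory names are hypothetical:

```ruby
# Rakefile -- run the legacy suite and the fresh suite side by side.
require "rake/testtask"

Rake::TestTask.new(:legacy) do |t|
  t.libs << "test_legacy"
  t.pattern = "test_legacy/**/*_test.rb"  # the old tests, quarantined
end

Rake::TestTask.new(:fresh) do |t|
  t.libs << "test"
  t.pattern = "test/**/*_test.rb"         # the new, better-organized tests
end

# `rake` now exercises both suites; delete :legacy once everything is ported.
task default: [:legacy, :fresh]
```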
But I would hesitate to even give a talk about testing, because I am the worst kind of expert. I have too much experience navel-gazing about testing, building and open-sourcing tools around testing. I've been on many, many teams as the guy who cared just a little bit more about testing than everyone else, and I've been in lots of highfalutin, philosophical, and nuanced Twitter arguments that really are not pertinent to anyone's life. So my advice is toxic. I'm overly cynical, I'm very risk-averse, and if I told you what I really thought about testing, it would just discourage all of you. So instead, my goal here today is to distill my advice down into just a few component parts.
The first part, we're going to talk about structure, the physicality of our tests, like what the lines and files look like on disk. We're going to talk about isolation, because I really believe that how we choose to isolate the code that we're testing is the best way to communicate the concept and the value that we hope to get out of a test. And we're going to talk about feedback: do our tests make us happy or sad? Are they fast or are they slow? Do they make us more or less productive? And keep in mind we're thinking about this from the perspective of prevention, because these are all things that are much easier to do on day one than to try to shoehorn in on day 100.
So at this point, in keeping with the Apple theme, my brother dug up this Apple II copy of Family Feud, and it turns out it's really hard to make custom artwork in AppleWorks 6, so I just ripped off the artwork from this Family Feud board. We're going to use that to organize our slides. It's a working board, which means if I point at the screen and say, "Show me potato salad!" I get an X. But unfortunately I didn't have 100 people to survey. I just surveyed myself 100 times, so I know all the answers already. So, first round, we're going to talk about test structure, and I'm going to say, "Show me 'Too Big To Fail.'"
People hate tests of big code. In fact, have you ever noticed that the people who are really into testing and TDD seem to hate big objects and big functions more than normal people do? I mean, we all understand that big objects are harder to deal with than small objects, but one thing I've learned over the years is that tests actually make big objects even harder to manage, which is counterintuitive; you'd expect the opposite. And I think part of the reason is that when you've got big objects, they might have many dependencies, which means you have lots of test setup. They might have multiple side effects in addition to whatever they return, which means you have lots of verifications. But what's most interesting is that they have lots of logical branches. Depending on the arguments and the state, there are a lot of test cases that you have to write. And this is the one that I think is most significant.
So let's take a look at some code. At this point I realized that OS 9 is not Unix, so I found a new terminal. It's actually a cool new one; it just came out this week. So let's boot that up. Yep, here we go. Alright, we're almost there. It's a little slow. Alright, so, this is a fully operational terminal. We're going to type an arbitrary Unix command. That works fine.

I'm going to start a new test. It's a validation method of a timesheet object, to see whether or not people have notes entered. And so we're going to say: if you have notes, and you're an admin, and it's an invoice week or an off week, and whether or not you've entered time, all four of those boolean attributes factor into whether or not that record is considered valid. At this point I wrote the first test, but I'm like, "Uh, I've got a lot of other contexts to write." Let's start planning those out. And like, "Damn, this is a lot of tests that I would need to write to cover this case of just four booleans."
What I fell victim to there is a thing called the Rule of Product, which is a thing from the school of combinatorics in math. It's a real math thing, because it has a Wikipedia page. What it says, essentially, is that if you've got a method with four arguments, you take each of those arguments and the number of possible values of each of them, multiply them together, and that gives you the total number of potential combinations, or the upper bound on the number of test cases you might need to write. So in this case, with all booleans, it's 2^4, so we have 16 test cases that we may have to write.
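A rough sketch of that blow-up, using the four booleans from the hypothetical timesheet example above (the Timesheet API here is reconstructed from the description, not taken from the talk):

```ruby
# Rule of Product: 2 * 2 * 2 * 2 possible inputs for four boolean attributes.
booleans = [true, false]
combinations = booleans.product(booleans, booleans, booleans)
combinations.size  # => 16

# Each combination is potentially its own test case for the
# hypothetical Timesheet validation:
combinations.each do |has_notes, admin, invoice_week, entered_time|
  timesheet = Timesheet.new(
    notes:        has_notes ? "worked on stuff" : nil,
    admin:        admin,
    invoice_week: invoice_week,
    hours:        entered_time ? 40 : 0
  )
  # ...one assertion about timesheet.valid? per combination
end
```

Adding a fifth boolean argument doubles that upper bound to 32, which is the point of the next paragraph.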
And if you're a team that's used to writing a lot of big objects and big functions, you're probably in the habit of thinking, "Oh, I have some new functionality, I'll just add one more little argument, what more harm could that do?" Other than double the number of test cases that I have to write! And so, as somebody who trains people on testing a lot, I'm not surprised at all to see a lot of teams who are used to big objects want to get serious about testing, and then they're like, "Wow, this is really hard. I quit."

So if you want to get serious about testing and have a lot of tests of your code, I encourage you: stop the bleeding. Don't keep adding on to your big objects. I try to limit objects to one public method and at most three dependencies, which, to that particular audience, is shocking. The first thing they'll say is, "But then we'll have too many small things! How will we possibly deal with all the organized and carefully named, comprehensible small things?" And, you know, people get off on their own complexity, right? That's what makes them feel like a serious software developer. That's how hard their job is. They're like, "That sounds like programming on easy mode," and I'm like, "It is easy!" It's actually not rocket science to build an enterprise CRUD application, but you're making it that way. Just write small stuff. It works.
Next I want to talk about how we hate when our tests go off script. Code can do anything. Our programs should be unique and creative, special unicorns of awesomeness, but tests can and should do only three things. They all follow the same script. Every test ever sets stuff up, invokes a thing, and then verifies a behavior. We're writing the same things over and over again. And it has these three phases: Arrange, Act, and Assert. A more natural, English-like way to say that would be: Given, When, Then.

When I'm writing a test I always intentionally call out those phases really clearly and consistently. For example, if I'm writing a Minitest method, I always put exactly two empty lines in every single xUnit-style test that I write: one after my arrange, one after my act. Then it's really clear at a glance what's my arrange, what's my act, what's my assert. I always make sure to go in the correct order as well, which is something people get wrong a lot.
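A minimal sketch of that blank-line convention in Minitest; the Timesheet class is a hypothetical stand-in for the code under test:

```ruby
require "minitest/autorun"

class TimesheetTest < Minitest::Test
  def test_invalid_when_notes_are_missing
    timesheet = Timesheet.new(notes: nil, admin: false)  # arrange

    result = timesheet.valid?                            # act

    refute result                                        # assert
  end
end
```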
If I'm using something like RSpec, I have a lot of constructs available to me to specify what the intent is. I can say 'let' and give it a value to do my setup, so 'let' says I'm setting up a new thing. I can use 'before' to call out that this is an action with a side effect, this is my act. And that allows me to split up those assertions, if I so choose, into separate blocks. So now, at a glance, somebody who knows RSpec will know exactly what phase each of those lines belongs in. I also try to minimize each phase to one action per line, so that test-scoped logic doesn't sneak in.
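A small sketch of that RSpec shape, again with the hypothetical Timesheet (the errors API in the second example is also an assumption):

```ruby
RSpec.describe Timesheet do
  describe "#valid?" do
    let(:timesheet) { Timesheet.new(notes: nil, admin: false) }  # given / arrange

    before { @result = timesheet.valid? }                        # when / act

    it "is not valid" do                                         # then / assert
      expect(@result).to eq(false)
    end

    it "records a validation error about the missing notes" do
      expect(timesheet.errors[:notes]).not_to be_empty
    end
  end
end
```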
The late, great Jim Weirich wrote an awesome Ruby gem, and I hope you check it out: it's called rspec-given. I help maintain it now. He and Mike Moore ported it to Minitest as well. I ported it a few years ago to Jasmine, and somebody else has taken it on and ported it to Mocha. It's a really cool given-when-then conscious testing API. And what it does is, you start from the same place in RSpec as you may have before, and we'll just say 'Given' instead of 'let', because that's more straightforward, and 'When' instead of 'before', so it's clear. But where it really shines is that you'll see 'Then' is just a little one-liner, and there's no custom assertions API, because it's actually interpreting the Ruby inside that block and able to split it up to give you great error messages. So it's a really, really terse and yet sufficiently expressive testing API.
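For reference, a tiny rspec-given example in the style of the gem's documentation (the Stack class is the usual illustrative stand-in, not something from this talk):

```ruby
require "rspec/given"

RSpec.describe Stack do
  Given(:stack) { Stack.new }

  context "after pushing an item" do
    When { stack.push(:item) }

    Then { stack.depth == 1 }      # a "natural assertion": plain Ruby, no matcher API
    And  { stack.top == :item }
  end
end
```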
Now, you don't have to use that tool to write your tests in a way that is conscious of given-when-then. They're easier to read regardless. They point out superfluous bits of test code that don't fit one of those test phases. And they can highlight certain design smells. For instance, if you've got a lot of 'given' steps, maybe you have too many dependencies on your subject, or your arguments are too complex. If it takes more than one 'when' step, then it's probably the case that your API is confusing or hard to invoke; there's something awkward in how you use that object. And if you've got too many 'then' steps, then your code is probably doing too much, or it's returning too complex a type.
Next up, I want to talk about hard-to-read, hard-to-skim code. Some people are fond of saying "test code is code," but test code is untested code, so I try to minimize it. I try to make it as boring as possible for that reason. Because what I find is that a good test tells me a story of what the code under test should look like. But if there's logic in the test, it confuses that story, and I'm spending most of my time reading that logic and making sure I got it right, because I know there's no test of that test. So test-scoped logic is not only hard to read, but if there are any errors, they're very easy to miss. Maybe it's passing green for fantasy reasons. Maybe only the last item in this loop of data is actually executing over and over again.
A lot of times people have this impulse. They say, "Hey, I've got a lot of redundancy in my tests. I could really DRY this up by just generating all of my test cases." For example, this person did a roman numeral kata, and they can see really clearly, "Oh, I could just have a data structure and make this really much more terse," looping over that data structure and then generating, using define_method, a new test method that would give a good message. And it's a perfectly reasonable test, and in this case it totally works fine. But I still think it's problematic. And the reason is that person experienced test pain, and their reaction was to go and make the test cleaner. Usually when we experience test pain, the first thing I look at is: maybe there's something wrong with my production code that led me there.

And so, if you look at that person's production code, you can see all that data is hiding in ifs and elses. They've got all this dense logic in there. I would much rather take a look at the same thing and extract the same sort of data structure from the production code, so that instead of having all those ifs and elses, I'm looping over the same data structure and figuring out whatever rule I have to. So now I only need a few test cases; in fact, I can just keep adding additional keys to that hash. And now I've covered a lot of cases without needing a whole bunch of really explicit test cases. It's much cleaner this way.
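A rough sketch of that refactoring, assuming the roman numeral kata; the names and exact test cases here are hypothetical, the point is that the lookup table lives in the production code and the tests stay flat:

```ruby
require "minitest/autorun"

# Production code: the data drives the algorithm instead of nested ifs and elses.
class RomanNumeral
  CONVERSIONS = {
    1000 => "M", 900 => "CM", 500 => "D", 400 => "CD",
    100  => "C", 90  => "XC", 50  => "L", 40  => "XL",
    10   => "X", 9   => "IX", 5   => "V", 4   => "IV",
    1    => "I"
  }.freeze

  def self.convert(number)
    CONVERSIONS.each_with_object(+"") do |(value, numeral), result|
      while number >= value
        result << numeral
        number -= value
      end
    end
  end
end

# Tests: a few plain, explicit cases; no loops or define_method needed.
class RomanNumeralTest < Minitest::Test
  def test_simple_numeral
    assert_equal "III", RomanNumeral.convert(3)
  end

  def test_subtractive_numeral
    assert_equal "XC", RomanNumeral.convert(90)
  end
end
```

New behavior becomes one more key in CONVERSIONS rather than one more branch, and one more explicit test only if that case is interesting enough to call out.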
Sandi Metz, who's around. Is she here? Sandi, where are you at? Hey, Sandi! So she's got a thing called the Squint Test. It helps her understand and cope with really big file listings, and she can draw a few conclusions from it. I don't have anything nearly as fancy, but when I'm reading your test suite, I really hope that I'm able to understand at a glance what the thing under test is. Specifically: where are all the methods? Are they in order, and are they symmetrical? Is it easy for me to find all the tests of one method? If I'm using RSpec, for example, I like to use 'context' to point out every logical branch, and all the subordinate behavior underneath each logical branch. It's very easy to organize this way, and when you do it consistently, it's easy to read tests.
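A sketch of that context-per-branch layout in RSpec, reusing the hypothetical timesheet validation from earlier (the pending one-liner examples are just placeholders):

```ruby
RSpec.describe Timesheet do
  describe "#valid?" do
    context "when notes are present" do
      context "and the user is an admin" do
        it "is valid"
      end

      context "and the user is not an admin" do
        it "is valid only during an invoice week"
      end
    end

    context "when notes are missing" do
      it "is invalid"
    end
  end
end
```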
Additionally, as I said, arrange, act, and assert should really pop in a consistent way. If I'm using an xUnit-style testing tool like Minitest, I at least like to see arrange, act, and assert laid out straightforwardly throughout every single file listing, and the names of the tests should mean something.
Alright, next up: let's talk about tests that are too magic. A lot of people hate tests that are too magic, or not magic enough, as it turns out, because all software is a balancing act, right? And test libraries are no different. The expressiveness of our testing APIs exists along a long spectrum. Smaller APIs are generally slightly less expressive than larger APIs, because the larger ones have more features, but you have to learn those features. So if you look at something like Minitest, it's very cool because it's classes and methods. We know that. Every test is a class; we overwrite 'setup' and 'teardown' to override that behavior; every new test is another method; assert is very easy to use. Ryan's a funny guy, so he's got some fun ones like i_suck_and_my_tests_are_order_dependent! to get some custom behavior.
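To illustrate how small that surface is, here's a rough Minitest sketch; the Timesheet class is the same hypothetical example as before:

```ruby
require "minitest/autorun"

class TimesheetTest < Minitest::Test
  i_suck_and_my_tests_are_order_dependent!  # one of the "fun ones": opt in to ordered tests

  def setup                        # override to arrange common state before each test
    @timesheet = Timesheet.new(notes: nil)
  end

  def test_invalid_without_notes   # every new test is just another method
    refute @timesheet.valid?
  end

  def teardown                     # override to clean up after each test
    @timesheet = nil
  end
end
```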
But when you compare that to RSpec, it's night and day. RSpec has 'describe' and 'context', and they're synonyms; 'subject' and 'let', and they're similar; 'before', 'after', and 'around', with 'each', 'suite', and 'all' for each of those. You've got 'it' and you've got 'specify', which are similar. You've got 'object.should have_...' and all those matchers, and 'expect().to be_...' and the mostly similar matchers. You've got shared example groups, tagging, an advanced CLI. There's a lot to learn in RSpec.
Jim tried to have it both ways when he designed *-given. He wanted a terse API, 'Given', 'When', 'Then', with just a handful of other things he came across: 'And', 'Invariant', and his "natural assertion" API. So it's very, very terse, but it's also sufficiently expressive for most people's tests. Now, because it's not a standalone testing library, you're still standing on top of all of Minitest or RSpec, so it is still physically complicated, but it's still nice to live with on a day-to-day basis. I'm not here to say there's some right or wrong testing library or level of expressiveness; you just have to keep yourself aware of the trade-offs. Smaller testing APIs are easier to learn, but they might encourage more one-off test helpers that we write ourselves, and you carry that complexity. Whereas a bigger testing API, something like RSpec, might help you yield really terse tests, but to an uninitiated person they're just going to look like magic, and you'll have to eat that onboarding cost if someone doesn't know RSpec.
Finally, this category: people hate tests that are accidentally creative, because in testing, consistency is golden. If we look at a similar test to what we had before, we're going to use 'let' to set up an author, a blog, and a comment, but it's not clear at all what the thing under test is. So I'm going to rename it 'subject'. I always call the thing under test 'subject', and I always call the thing I get back, the thing I'm going to assert on, 'result' or 'results', 100% of the time. So if I'm reading a really big, nasty test, at least I know what's being tested and what's being asserted on. This is a surprisingly daunting task in a lot of people's test suites, so if you learn one thing today and you start calling the thing you're testing 'subject', this will have been worth all of this preparation.
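A sketch of that naming convention; the blog, author, and comment objects are hypothetical stand-ins:

```ruby
RSpec.describe Blog do
  # The thing under test is always called "subject"...
  subject(:blog) { Blog.new(author: author) }

  let(:author) { Author.new(name: "pants") }

  describe "#comments_by" do
    # ...and the thing being asserted on is always "result" (or "results").
    let(:results) { blog.comments_by(author) }

    it "returns only that author's comments" do
      expect(results).to all(have_attributes(author: author))
    end
  end
end
```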
And when you're consistent, inconsistency can actually carry nuanced meaning. For example, if I've got a handful of tests here, I'm going to look at them and be like, "Oh, wait! There's something weird about test C. That implies there must be something interesting about object C. I should look into that." That's really useful; that really speeds me up. But when every test is inconsistent, when every test looks way different, I have to bring that same level of scrutiny to each and every test, and I have to read very carefully to understand what's going on, to understand the story of the test. So, as a result, if I'm adopting your test suite, I would much rather see hundreds of consistent tests, even if they're mediocre, even if they're crappy, than a handful of beautifully crafted, brilliant, custom tests that are all way different, because then every time I fix anything, it's just a one-off thing.
Also, readers are silly, right? They've got this funny habit of assuming all of your code has meaning, but especially in testing, very often the stuff we put in our tests is just plumbing to make our code execute properly. So I try to point out meaningless stuff to help my reader out. In particular, I make unimportant test code obviously silly and meaningless to the reader. In this instance, I'm setting up a new author object, and he's got a fancy name, a phone number, and an email, and they're all validatable, but none of that is necessary for this method. So here I'll just change his name to 'pants', and I'll remove his phone number because it's not necessary, and I'll just change his email to 'pantsmail', and update my assertion. And now everyone in the room can implement this method understanding exactly what it really needs to do. Before, you might have assumed you needed a real author, but you didn't.
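A sketch of what that "obviously meaningless" data looks like in practice; the Author attributes and the display_name method are hypothetical:

```ruby
# Before: realistic-looking data implies every attribute matters to this test.
author = Author.new(
  name:  "Jane Q. Developer",
  phone: "555-867-5309",
  email: "jane@example.com"
)

# After: obviously silly values tell the reader these fields are just plumbing.
author = Author.new(
  name:  "pants",
  email: "pants@pantsmail.com"
)

assert_equal "pants", author.display_name  # hypothetical method under test
```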