03-09 Improving Coverage


We're going to go to the test suite, to the test of remove. After we've removed everything from the tree, we're going to remove an element that we know is not in the tree, and of course, after we've removed everything from the tree, anything we choose should not be in the tree, so minus 999 will work as well as any. So we're going to go ahead and save that and run the coverage tool again.
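
As a sketch, the added test might look something like this; the class, method, and module names here (SplayTree, insert, remove, splay) are assumptions for illustration, not the course's actual identifiers.

    import unittest
    from splay import SplayTree   # hypothetical module and class name

    class TestRemove(unittest.TestCase):
        def test_remove(self):
            t = SplayTree()
            for i in range(10):
                t.insert(i)
            for i in range(10):
                t.remove(i)   # empty the tree completely
            # Nothing is left in the tree, so any key we pick, such as
            # -999, should be absent. The tree is supposed to raise its
            # "key not found in tree" exception here.
            t.remove(-999)

If you're using coverage.py, rerunning the suite under the tool is typically a matter of coverage run tests.py followed by coverage report -m.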
So this time something interesting happens. What happened is that the remove method for this splay tree, on removal of minus 999, that is, on the line that we just added, causes an exception to be thrown deep in this splay function, and so let's go back and look at the splay tree code.
So when we remove an element from the tree that wasn't there, it's supposed to raise the exception "key not found in tree." On the other hand, what it's actually doing is failing quite a bit below that, in the middle of the splay function, when the code does a comparison against an element of type None, and that's probably not what the developer intended. By adding just a little bit to our test suite, we seem to have found a bug not anticipated by the developer of the splay tree.
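
Here is a minimal sketch of that failure mode, using a simplified search in place of the course's actual splay code: when the key is absent, the descent walks off the bottom of the tree and ends up operating on None instead of raising the intended exception.

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key = key
            self.left = left
            self.right = right

    def find(node, key):
        # Naive descent standing in for the real splay logic.
        while key != node.key:     # once node is None, node.key raises
            if key < node.key:     # AttributeError (and in Python 3 a
                node = node.left   # comparison against a None key would
            else:                  # raise TypeError), so the caller never
                node = node.right  # sees "key not found in tree"
        return node

    root = Node(1, Node(0), Node(2))
    find(root, -999)   # crashes inside the search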
I think this example is illustrative for a couple of reasons.
First of all, the coverage tool, the very first time we ran it, told us something that we didn't know, and in my experience this is generally what happens when you use a coverage tool. It's very similar to the first time you run a profiling tool on a piece of code, where it usually turns out that the functions using up CPU time are not the ones that you thought were using up CPU time. Well, coverage tools are very similar: it often turns out that some of the stuff you thought was going to run doesn't get run, or only some of it does. So it told us something interesting, and that's nice.
Now, on the other hand, if the coverage tool hadn't told us anything interesting, that is to say, if it had told us that everything we hoped was executing when we ran the unit tests was in fact executing, well, then that's good too; we get to sleep a little easier.
The second thing to notice is a much more subtle point, and this point is that we wrote a test case to execute this one line of code, but it turned out that the bug wasn't right there. The bug was somewhere completely different, buried in the splay routine, and if you go back and look at the coverage information, it's going to turn out that the splay routine is entirely covered; that is to say, every line of the splay routine was executed during the execution of the unit tests for the splay tree.
This serves to make a couple of points. First of all, just because some code was covered, especially at the statement level, that does not tell us anything about whether it contains a bug. It just means that it ran at least once.
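
A contrived sketch of that point (not from the course): this test achieves 100% statement coverage of abs_value, and it passes, yet the bug on the negative branch survives because the assertions are too weak.

    def abs_value(x):
        if x < 0:
            return x               # bug: should be -x, yet the line "runs"
        return x

    def test_abs_value():
        abs_value(-5)              # executes the buggy line...
        assert abs_value(3) == 3   # ...but only checks the easy case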
The second thing is that we have to ask the question, "What do we really want to read into the fact that we failed to cover something?" The thing not to read into it is that a failed piece of coverage is a mandate to write a test that covers that code. That's what we did here, but that's not a good general lesson. Rather, the way we should think about this is that the coverage tool is giving us a bit of evidence: it has given us an example suggesting that our test suite is poorly thought out. That is to say, our test suite is failing to exercise functionality that is present in our code, and what that means is that we haven't thought about the problem very well and we need to rethink the test suite.
So, to summarize: when coverage fails, it's better to try to think about where we went wrong rather than just blindly writing a test case to exercise the code that wasn't covered.