
Random Browser Input - Software Testing


Showing Revision 3 created 05/25/2016 by Udacity Robot.

  1. Let's take a slightly different view of the same problem.
  2. So, what I want to do is draw a graph here of the level of code coverage
  3. that random test cases are inducing on the system under test.
  4. So again, we're testing a web browser, and on the x axis,
  5. this is going to be a little bit fuzzy unfortunately.
  6. But what I want to show is how far into the web browser we execute.
  7. The first stage is just checking, for example, if the incoming data is even valid HTTP.
  8. Once we get valid HTTP, the browser is going to scan and make sure it got valid HTML.
  9. It's going to be doing lexical analysis and checking of HTML.
  10. If the input isn't rejected by this kind of code, it's going to go on to the rendering engine,
  11. and finally, it might have sort of some more advanced processing
  12. which is dealing with things like forms, cookies, scripting and such.
  13. Okay so, we have this graph.
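The pipeline described above can be sketched as a toy model. Everything here is illustrative: the stage checks, the status-line string, and the tag names are invented stand-ins for real browser code, which is far more complex:

```python
# A toy model of the browser's input-processing pipeline from the lecture.
# Each stage rejects invalid input; deeper stages are reached only when
# every earlier check passes.

def deepest_stage_reached(data: bytes) -> int:
    """Return how far into the pipeline the input gets (0..4)."""
    # Stage 1: is this even a plausible HTTP response?
    if not data.startswith(b"HTTP/1.1 200 OK\r\n"):
        return 0
    body = data.split(b"\r\n\r\n", 1)[-1]
    # Stage 2: lexical check: does the body contain HTML-ish tags at all?
    if b"<" not in body or b">" not in body:
        return 1
    text = body.decode("ascii", errors="replace")
    # Stage 3: structural check, a crude stand-in for real HTML parsing.
    stripped = text.strip()
    if not (stripped.startswith("<html>") and stripped.endswith("</html>")):
        return 2
    # Stage 4: "advanced processing": scripts, forms, and so on.
    if "<script>" in text or "<form>" in text:
        return 4
    return 3
```

A fuzzing campaign's coverage profile is then just the distribution of how deep its inputs get through a chain of gates like this.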
  14. So, now let's see what happens when we fuzz the web browser using totally random bits.
  15. Well, what's most likely to happen is that most of those bits that come in
  16. are not even going to be valid HTTP responses.
  17. So, we're going to get coverage that rapidly drops off,
  18. and what's left is almost always going to fail somewhere else.
  19. What we're going to see is, we're going to spend the bulk of our testing effort
  20. rejecting random sequences of bits very early on,
  21. and very little of our testing effort testing code farther in.
  22. Again, as I said, if that's what we're trying to do,
  23. if we really want to be stressing the early parts of the web browser code, then that's great.
  24. And random testing is perfectly good at that, but on the other hand,
  25. if what we're interested in is broad coverage of the software under test, then we're going to fail.
  26. The red color indicates random bits.
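Just how steep that early drop-off is can be seen with a small experiment. This is only a sketch: the 64-byte input size is arbitrary, and the "first gate" is a made-up check for an HTTP status-line prefix:

```python
# Fuzz with totally random bytes and count how many inputs even pass the
# very first check (starting with an HTTP status line).
import random

random.seed(0)
trials = 10_000
passed_http_check = sum(
    bytes(random.randrange(256) for _ in range(64)).startswith(b"HTTP/")
    for _ in range(trials)
)
# Each trial passes with probability (1/256)**5, so this count is
# essentially always zero: every input dies at the first gate.
print(passed_http_check)
```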
  27. The next thing we can do is build a random input generator that
  28. totally respects the constraints of the HTTP protocol.
  29. Furthermore, we can adapt it, so the text that it generates contains valid lexical elements of HTML.
  30. That is to say, it's composed of things like brackets with tag names in them
  31. and other kinds of tags, and this isn't too hard to do.
  32. So, we can do something like that, and I'm going to
  33. use green to represent protocol-correct input.
  34. I'm using sort of fuzzy terms of my own devising here; I'm not trying to use any kind of standard terms.
  35. So now what's going to happen is, hopefully we'll get pretty good coverage of the protocol code still.
  36. We'll get quite good coverage of lexical HTML processing, and then we're going to fall off the cliff again.
  37. Because as soon as we get to the renderer, it's going to become apparent
  38. that we didn't try hard enough to generate valid HTML,
  39. and we're not going to get anything to render very often.
  40. So now, while we've pushed the coverage that we're getting
  41. on the software under test farther into the system, that is, farther into the HTML processing chain,
  42. we still haven't pushed it very far.
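A generator at this second level might look like the following sketch: it always emits a well-formed HTTP response, and fills the body with lexically valid but structurally random HTML tokens. The tag names and token choices are invented for illustration:

```python
# "Protocol-correct" generator: valid HTTP wrapper, random HTML-ish tokens.
import random

def random_lexical_html(rng: random.Random, n_tokens: int = 20) -> bytes:
    tokens = []
    tags = ["p", "b", "i", "div", "span"]
    for _ in range(n_tokens):
        tag = rng.choice(tags)
        # Emit open tags, close tags, and text with no regard for nesting:
        # lexically valid HTML, but structurally garbage most of the time.
        tokens.append(rng.choice([f"<{tag}>", f"</{tag}>", "hello"]))
    body = "".join(tokens).encode("ascii")
    header = b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body)
    return header + body
```

Inputs like these sail past the HTTP and lexing gates, so the cliff moves to the structural (parsing) stage.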
  43. So, the next thing that we can do is use some sort of a grammar
  44. or some sort of a structural definition of HTML to generate random but valid HTML.
  45. The next color indicates valid HTML.
  46. And so what's going to happen is, our coverage of the protocol code and the lexer may decrease,
  47. while on the other hand, we're going to be able to push
  48. into the HTML processing code quite deeply before falling off a cliff.
  49. So what have we done? We've traded off coverage in the early parts of the web browser,
  50. which may well be so simple that we don't care much about them, for coverage farther in.
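The grammar-based stage can be sketched with a tiny recursive generator. The three-tag grammar below is a toy stand-in for a real HTML grammar, but every document it produces is well-nested by construction:

```python
# Grammar-based generation: random but structurally valid documents.
import random

def random_valid_html(rng: random.Random, depth: int = 3) -> str:
    # Base case: a leaf of plain text.
    if depth == 0 or rng.random() < 0.3:
        return "text"
    tag = rng.choice(["div", "p", "span"])
    children = "".join(random_valid_html(rng, depth - 1)
                       for _ in range(rng.randrange(1, 4)))
    # Every open tag is matched by its close tag, so nesting is always valid.
    return f"<{tag}>{children}</{tag}>"
```

Because the generator mirrors the grammar, the parser accepts these inputs and the rendering code downstream actually gets exercised.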
  51. And so finally, what we could do is generate random HTML that includes
  52. elements of scripting, forms, whatever else that we're interested in testing
  53. and we can run that through and now we can start randomly testing our browser with this.
  54. What's going to happen now is, our coverage might decrease
  55. even a little bit more in the early parts because we're spending more time doing other things,
  56. but we're probably not going to fall off a cliff at all.
  57. And so, you can see that in most cases when we do random testing,
  58. what we're looking for is something like this kind of flat line,
  59. and what this flat line indicates is that we're covering
  60. all parts of the software under test roughly equally.
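One way to check for that flat line empirically is to histogram how deep each generated input gets. This sketch uses toy gates and a deliberately mixed generator; all names, checks, and probabilities are invented to show the idea, not taken from any real fuzzer:

```python
# Histogram the deepest pipeline stage reached over many generated inputs.
import collections
import random

def deepest_stage(data: bytes) -> int:
    # Toy gates mirroring the pipeline sketched earlier in the lesson.
    if not data.startswith(b"HTTP/"):
        return 0
    if b"<" not in data:
        return 1
    if b"</html>" not in data:
        return 2
    return 3

def generate(rng: random.Random) -> bytes:
    # Deliberately mix input qualities so every stage gets exercised.
    pieces = [b"HTTP/1.1 200 OK\r\n\r\n"]
    if rng.random() < 0.75:
        pieces.append(b"<p>hi</p>")
    if rng.random() < 0.5:
        pieces.append(b"<html></html>")
    # Occasionally emit pure garbage to keep the earliest gate covered too.
    return b"".join(pieces) if rng.random() < 0.9 else bytes([rng.randrange(256)])

rng = random.Random(0)
hist = collections.Counter(deepest_stage(generate(rng)) for _ in range(10_000))
print(sorted(hist.items()))
```

A roughly uniform histogram corresponds to the flat coverage line; a spike at stage 0 or 1 is the cliff.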
  61. What we're going to see, as we look through more random testing examples,
  62. is that getting a coverage curve like this often requires quite a lot of work, quite a lot of sensitivity
  63. to the structure of the input domain, but on the other hand,
  64. we get paid back for that work with random tests that can exercise the entire software under test
  65. and that's going to be a valuable thing in many cases.