
Title:
0332 Automated Whitebox Testing

Description:
0332 Automated Whitebox Testing

So the next topic that we're going to cover is called "Automated Whitebox Testing."

This isn't a form of code coverage, but rather a way to get software tools to automatically generate tests for your code. So you wrote some code, and the question we're going to ask is, "How do we generate good test cases for it?"

And of course one answer is we can use the kinds of techniques that I've been talking about for this entire class: we can think about the code, we can make up inputs, we can basically just work hard to get good test coverage. But another answer is we can run one of these automated whitebox testing tools, so let's see how that works.

This tool's goal is to generate good path coverage of the code that you wrote.

So let's start off by basically just making up random values for the code's inputs; let's say one and one. Now let's just go ahead and execute the code.

The first question is: is a a prime number? It's not, so we skip the first if. Since a wasn't changed, it's still not prime at the second if either, so we're going to return zero.
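The function under test is never shown in this lecture, so here is a minimal Python reconstruction consistent with the walkthrough; the names `foo` and `is_prime` and the exact structure are assumptions.

```python
def is_prime(n):
    # Naive check; anything below 2 (including negatives) counts as not prime.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def foo(a, b):
    if is_prime(a):          # first if
        b += 3
        a -= 10
    if is_prime(a):          # second if: tests the *updated* a
        if b % 20 == 0:      # innermost branch
            return 7
    return 0

print(foo(1, 1))  # random starting inputs: both ifs are skipped, prints 0
```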

So now the automated testing tool has seen a path through the code that didn't take either of the if branches, and it will try to construct a new set of inputs that make the function take a different path.

The most obvious choice point to start with is the first if, so the question the tool is going to ask is, "How can I generate an input that's prime?"

And to do that, of course, it's going to have to look at the code for the primality test. It's going to end up with a set of constraints on the value of a, which are going to be passed to a constraint-solving tool, and the answer, if the solver succeeds, is going to be a new value of a that passes the primality test. So let's say a is three.
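Real tools hand these path constraints to an SMT solver; as a sketch, a brute-force search over small integers can play the solver's role. The `solve` and `is_prime` helpers here are illustrations, not any real tool's API.

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def solve(constraints, candidates=range(-50, 51)):
    # Stand-in for a constraint solver: return some value meeting all constraints.
    for a in candidates:
        if all(c(a) for c in constraints):
            return a
    return None  # a real solver can also fail

# The constraint collected from the first if: a must be prime.
print(solve([is_prime]))  # prints 2; the lecture happens to pick 3
```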

Once the automated whitebox testing tool comes up with a new set of inputs to this function, it's going to go ahead and run it again.

So this time the first test is going to succeed: a is prime. We're going to increment b by 3 and decrement a by 10, and now a is going to fail the primality test, since, let's assume, our primality check only counts positive numbers as prime. So the new value of a, minus 7, fails the primality test, and we're again going to return zero.

So the question we have is: what has the tool learned? What it has learned is one execution that falls straight through, and another execution that takes the first if branch. So now what it's going to do is try to build on that knowledge to generate inputs that also take the second branch.

So it's going to take the first set of constraints, that is, the constraints that force a to be prime. It's going to add another set of constraints that force the updated value of a, that is to say a value 10 less than the original value of a, to be prime.

It's going to turn all of that into a set of constraints and pass it to the solver, and the solver is either going to succeed in coming up with a new value of a, or possibly it will fail. But let's assume it succeeded, and let's say that the value of a it comes up with this time is 13.
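In the same sketch style as before, the second round keeps the old constraint and conjoins the new one on the updated value of a (the helpers are again illustrative stand-ins for a real solver):

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def solve(constraints, candidates=range(-50, 51)):
    # Brute-force stand-in for a constraint solver.
    for a in candidates:
        if all(c(a) for c in constraints):
            return a
    return None

# Both a and a - 10 (the value after the first branch runs) must be prime.
path = [lambda a: is_prime(a), lambda a: is_prime(a - 10)]
print(solve(path))  # prints 13: 13 is prime, and 13 - 10 = 3 is prime
```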

We're going to execute the function again: 13 is prime, so we add 3 to b and subtract 10 from a, giving 3. 3 is prime, so now we ask if b is an even multiple of 20. If it were, we would return 7, but it's not, so we're going to return zero.

The third time through the function, the tool is going to add a new constraint. So not only are we keeping all of our constraints on a, but we're adding a new constraint on b: the updated b mod 20 has to come out to be zero. And so this time the solver, let's say, comes up with a is 13 and b is 17, so that after the function adds 3, b comes out to 20.
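With two inputs involved, the same brute-force sketch just searches over pairs; `solve2` and `is_prime` are illustrative assumptions, not part of any real tool.

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def solve2(constraints):
    # Two-variable brute-force stand-in for the solver; searches
    # small non-negative values only, for illustration.
    for a in range(51):
        for b in range(51):
            if all(c(a, b) for c in constraints):
                return a, b
    return None

# Keep the constraints on a, and add one on b: the value of b after
# the function adds 3 must be a multiple of 20.
path = [lambda a, b: is_prime(a),
        lambda a, b: is_prime(a - 10),
        lambda a, b: (b + 3) % 20 == 0]
print(solve2(path))  # prints (13, 17); inside the function b becomes 20
```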

Now it's going to execute the function another time, this time returning 7.

And so by iterating this process multiple times, that is to say, by running the code and then using what it learned about the code to build up a set of constraints and explore different paths, what we can do is generate a set of test inputs that, taken together, achieve good coverage for the code under test.
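Taken together, the inputs discovered round by round form a small suite exercising every path of the example; `foo` and `is_prime` below are reconstructions of the code shown in the lecture, not its exact source.

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def foo(a, b):
    if is_prime(a):
        b += 3
        a -= 10
    if is_prime(a):
        if b % 20 == 0:
            return 7
    return 0

# One input pair per path, in the order the tool discovered them.
suite = [(1, 1), (3, 1), (13, 1), (13, 17)]
print([foo(a, b) for a, b in suite])  # prints [0, 0, 0, 7]
```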

Unfortunately, I don't know of any automated whitebox testing tools that exist for Python, but if you're a C programmer, there's a tool called Klee that implements these techniques, and I encourage you to try it out; Klee is a really interesting tool.

And so, as you might expect, in real situations a tool like this might fail to come up with a useful system of constraints, or fail to solve them, for really big code bases, and in fact that's absolutely the case.

These tools blow up and fail on very large code bases, but for smaller code, like the kind of thing I'm showing you here, they actually do a really nice job of automatically generating good test inputs. And as it turns out, these techniques have been used fairly heavily by Microsoft over the last several years to test their products, finding a very large number of bugs in real products.