
Removing any of these subsets will result in a passing test,

which means that we again have to increase the granularity by doubling n here.

Now we split the input into six subsets,

resulting in six more attempts to remove individual parts.

All of them will pass,

so we have to increase the granularity again, doubling n to 12.

But now our input only has 8 characters,

and we can't split an 8-character input into 12 subsets

unless some of these subsets were empty.

So we're going to fix this a bit by setting up this minimum function over here.

We make sure that n cannot grow beyond the size of the input,

which means that in our case n would now be 8.
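The capping step described here can be sketched in one line of Python (the function name is illustrative, not from the course's code):

```python
def next_granularity(n, input_length):
    # Double the granularity, but never beyond the input length:
    # an 8-character input cannot be split into 12 non-empty subsets.
    return min(input_length, 2 * n)

print(next_granularity(6, 8))  # capped at 8 instead of 12
print(next_granularity(2, 8))  # doubling still allowed: 4
```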

This is the final stage of delta debugging

where it tries to remove one character after another,

but all of these removals result in passing test cases,

so what we get in the end is this very string

and we have now shown that every single character in here

is relevant, because removing it changes the outcome of the test from fail to pass.

So, what delta debugging gives us in the end is a simplified input

where every single part is relevant.

At this point you may wonder: how many tests does delta debugging actually take?

You can see that at the very end when every single character has

to be removed individually, the number of tests of course

is proportional to the length of the simplified input.

In the worst case (that is, for very pathological examples),

the complexity of delta debugging is quadratic in the length of its input.
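To make these counts concrete, here is a self-contained instrumented sketch of the subset-removal loop (names are illustrative) that counts test invocations for an input in which every character is relevant, so every removal passes. The granularity doubles through 2, 4, 8, ..., length, each level costs as many tests as there are subsets, and the total is about twice the input length, i.e. proportional to the length of the simplified input; the quadratic worst case only arises when removals keep succeeding and the search repeatedly starts over:

```python
def count_tests(length):
    """Count test invocations when reducing an input in which every
    character is relevant (removing anything makes the test pass)."""
    inp = "x" * length

    def test(s):
        # Fails only for the complete input: every character matters.
        return "FAIL" if s == "x" * length else "PASS"

    runs = 0
    n = 2
    while len(inp) >= 2:
        subset_len = max(1, len(inp) // n)
        reduced = False
        for start in range(0, len(inp), subset_len):
            candidate = inp[:start] + inp[start + subset_len:]
            runs += 1
            if test(candidate) == "FAIL":
                inp = candidate
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(inp):
                break
            n = min(len(inp), n * 2)
    return runs


print(count_tests(8))   # 2 + 4 + 8 = 14 tests
print(count_tests(16))  # 2 + 4 + 8 + 16 = 30 tests
```

Doubling the input length roughly doubles the number of tests here, in line with the "proportional to the length" observation above.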

However, there is a nice situation in which delta debugging

simply becomes a binary search and is just as efficient.

So, when does delta debugging need a logarithmic number of tests?

Is this when all tests fail?

Is this when there is always one half that can be removed and still fails?

Is this when all tests pass?

Or is this never? Check all that apply.