
And the answer is that difference in values can, at most, be X. So if the difference in value between B and A were larger than X,

then the value of A divided by X and the value of B divided by X would round down to different numbers.

So that's why we know the amount of value lost in each mistake that we make is actually smaller than X.

And that is a very cool thing to know, because these three things, knowing how many objects there can be, at most,

in an optimum solution, and therefore how many mistakes we can have made, and having quantified how much value

we have lost in each mistake, allow us to quantify the overall mistake that we can have made, at most.

And I call this the maximum value lost, compared to an optimum solution.

And that maximum value is, of course, well, there are, at most, N objects in an optimum solution.

That means we can make, at most, N mistakes, and each of those mistakes costs us less than X.

So the maximum value that we can lose overall by rounding down is smaller than N times X.

And that, of course, is the absolute error that the algorithm can make by this rounding down technique.
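To make the per-object bound concrete, here is a small sketch in Python. The rounding step X and the object values are made up for illustration; they are not from the lecture:

```python
X = 7                      # hypothetical rounding step
values = [23, 40, 15, 9]   # hypothetical object values

# round each value down to the next multiple of X
rounded = [(v // X) * X for v in values]
losses = [v - r for v, r in zip(values, rounded)]

# each individual mistake costs strictly less than X ...
assert all(loss < X for loss in losses)
# ... so with n objects, the total loss is smaller than n * X
assert sum(losses) < len(values) * X
```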

So now let's figure out the relative error here. And the relative error, since KNAPSACK is a maximization problem,

is of course the value of an optimum solution divided by the value of an approximate solution.

So let's just call the value of the optimum solution V opt, and opt stands for optimal here.

And in that case we can write the approximate solution as V opt minus N times X.

Now, this is not the precise value of the approximate solution, but it's a bound for the approximate solution.

So we know it cannot get any worse than that. But to be precise here, of course, we then have to write it like this here.

Because we know that the approximate solution will have at least this value.

Actually, it will always be bigger, because we have a strict smaller than here, so we should more precisely even write it like this.

So it's smaller than V opt divided by V opt minus N times X.

Now, we can simply write this as one plus N times X over V opt minus N times X.

We know something about V opt, because the value of an optimum solution can, at most, be the value of all objects combined,

and we call that number V. So we can rewrite this as one plus N times X over V minus N times X.

Now, let's have a closer look at X. How did we define X? Well, we said that X was equal to V over N

times one minus C, where C was some number between zero and one.

So if we plug the value of X into this equation here, what we get is one plus V times one minus C over V minus

V times one minus C. Which means that Vs here simply cancel out, and you get one plus one minus C

over one minus one plus C. Which is the same as one over C.
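Written out as one chain, the simplification just described, using the definition X = (V/n)(1 - C), is:

```latex
\[
1 + \frac{nX}{V - nX}
  = 1 + \frac{V(1 - C)}{V - V(1 - C)}
  = 1 + \frac{1 - C}{1 - 1 + C}
  = 1 + \frac{1 - C}{C}
  = \frac{1}{C}.
\]
```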

So I didn't tell you about C earlier, but what we just found out is that the approximation factor of our algorithm

is actually one over C. And you'll remember that the running time was also dependent on C.

So we already figured this one out, right? It was O of N squared over one minus C.
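As a sketch of how the whole scheme could look in code (Python): the function name, the exact DP layout over rounded values, and the way the final solution is scored are my own choices for illustration, not from the lecture.

```python
def knapsack_fptas(values, weights, capacity, c):
    """Approximate 0/1 KNAPSACK by rounding values down.

    c must satisfy 0 < c < 1; the rounding step is X = (V / n) * (1 - c),
    so the DP table has on the order of n^2 / (1 - c) cells in total.
    """
    n = len(values)
    total = sum(values)                # V: value of all objects combined
    x = total * (1.0 - c) / n          # rounding step X
    scaled = [int(v // x) for v in values]  # each value rounded down

    # min_weight[s] = least weight needed to reach rounded value s
    inf = float("inf")
    max_scaled = sum(scaled)
    min_weight = [0] + [inf] * max_scaled
    for sv, w in zip(scaled, weights):
        for s in range(max_scaled, sv - 1, -1):  # 0/1 item: iterate downwards
            if min_weight[s - sv] + w < min_weight[s]:
                min_weight[s] = min_weight[s - sv] + w

    # best rounded value that still fits into the knapsack
    best = max(s for s in range(max_scaled + 1) if min_weight[s] <= capacity)
    return best * x  # a lower bound on the value actually packed
```

On the classic three-item instance (values 60, 100, 120; weights 10, 20, 30; capacity 50) the exact optimum is 220, and the rounded solution returned here is guaranteed to lie within N times X of it.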

So a good approximation would be to set C close to one. Because then the relative error becomes almost one,

but in return (and you see that in the running time here) the running time actually becomes very large,

because if C is close to one, then you get a very small denominator here, which means that the overall running time,

since it's a fraction depending on this, gets very large.

But if you're happy with a, and I'm going to just call it, bad approximation, then C can actually be much smaller than one.

C will always be larger than zero, so it'll always be a positive number, but it could be 0.5, 0.2 or even smaller,

what will then happen is, if C is much smaller than one, so it's this very tiny fraction,

well, then actually this part here in the denominator will be very close to one.

So the running time is very good, but this fraction here will actually increase.

So in return, forgetting about the running time, you get a relative error that is much worse.

And this is exactly the principle behind the PTAS. So we're trading off the quality of the approximation factor,

which is this fraction here, one over C, against running time. More running time means we can get a better approximation factor.

A worse approximation factor, in return, will mean that the algorithm runs faster.
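As a rough illustration of this trade-off, one could plug a hypothetical number of objects into the two formulas from above (the value of n here is made up):

```python
n = 100  # hypothetical number of objects
for c in (0.5, 0.9, 0.99):
    factor = 1 / c           # approximation factor: 1 / C
    cells = n * n / (1 - c)  # running time: O(n^2 / (1 - C)) table cells
    print(f"C = {c}: factor {factor:.2f}, about {cells:.0f} table cells")
```

Moving C towards one improves the guarantee from a factor of 2 towards 1.01, but blows the table up from twenty thousand cells to a million.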

So this is already pretty amazing. Now, one last detail here: As you'll notice, the running time of the approximation algorithm

is actually polynomially dependent on the approximation parameter (more precisely, on one over one minus C), which is why this PTAS is sometimes also referred to as a fully polynomial

time approximation scheme, or FPTAS. For the majority of polynomial time approximation schemes, the running time

actually depends exponentially on the approximation factor that you want to achieve.

But this is something for a more advanced course. I still sometimes find it mind-boggling that KNAPSACK

is NP-complete, meaning that, in terms of intractability, KNAPSACK is likely to be just as hard to solve

as nastier problems such as SAT or CLIQUE. So approximation, I think, shows very well

that even within NP-completeness, there's a broad spectrum of difficulty, if you will.

So, some of the most difficult problems to approximate appear to be CLIQUE and INDEPENDENT SET.

And then in the middle of the spectrum, we have problems like VERTEX COVER.

And then we have problems which have a PTAS or even a fully polynomial time approximation scheme,

such as KNAPSACK. All of these problems here are NP-complete.

But some of them cannot be approximated within any constant factor at all.

Others have algorithms that, if you don't look closely, could almost be mistaken for being polynomial.

Of course, all of this is still under the assumption that P does not equal NP.

If P equals NP, then all of these problems here would be polynomial-time solvable anyway.

Now that we have looked at approximation, what else could you try in order to tackle NP-complete problems?

Next time, we'll basically be poking around, meaning that we'll be looking at how randomization and randomized algorithms

could maybe help us tackle these problems here as well. So, see you next time.