
Title:
1402 Sloppiness

Description:

In the last unit, we talked about how, if we encounter a problem

that is NP-complete, we can often, using very advanced algorithms,

still hope to find a solution to those problems, at least in practice.

There you learned about techniques such as intelligent search trees and preprocessing.

Now, in this unit we're going to investigate something different.

So consider a challenging problem such as measuring the length of an elephant.

Now, doing that exactly is actually not that easy because the elephant might be moving

and it might not like the ruler down here.

So finding an exact solution such as the elephant has a length of 5.107 meters

might actually be quite a difficult thing to do.

On the other hand, if you allow for a bit of sloppiness, and say,

"Well, let's just estimate it at about 5 meters,"

then you can get a solution much faster.

What we're going to look at in this unit is whether sloppiness,

or at least allowing for a bit of leeway, can actually help us solve

NP-complete problems more efficiently.

And what I mean by sloppiness is the following: Let's say we wanted to solve Shortest Tour.

What we always asked was, very demandingly, what is the shortest tour?

No excuses, no leeway, no sloppiness. What is the shortest tour?

And what would happen now if we didn't ask what is the shortest tour,

but rather asked the following: What is a pretty short tour?

So for example, a tour within 20% of the optimum.

Then we become less demanding on what the algorithm is supposed to produce.

And now, of course, the question is, will this allow us to construct faster algorithms?

Now, before we dive into this, I would like you to quickly think about this approach for a bit.

In which scenarios might a sloppy, or not exact, solution be acceptable?

So what I would like you to think about for a little bit is

in which scenarios an approximation to the best possible solution would be acceptable.

So approximation is the more formal term for sloppiness,

and, of course, just to be precise, in this unit we're

going to be dealing with optimization problems.

You can also talk about approximations to decision problems,

but that's a bit of a different scenario, and, actually,

we'll get into that when we talk about randomization.

But this is not part of this unit here.

So would we be content with approximations if the stakes are low,

so we're solving an optimization problem where nothing is really critical?

I mean, you might spend a little bit more money,

but it's not going to mess up research or anything like that.

Would it be acceptable if the algorithm kind of sounds good?

We do not have a provable guarantee, but it still appears

to yield good solutions and make sense in general.

And would using an approximation be acceptable

if we find that exact solutions are simply out of the question?

We're using the best search tree; we're using preprocessing,

but still our instances are so large, and they behave so badly

that we just cannot find an exact solution.