
Title:
1816 Strategy Solution

Overview:

And the correct answers here in my opinion are the first two.

The first one is that even if the algorithm makes an error with 90% probability, running it only a few times

almost guarantees a correct solution. As you've seen before,

these error probabilities have to be multiplied with each other if we run the algorithm independently a couple of times.

So, if we run it the first time, of course we have a 90% error chance,

but if we run it a second time, we only have a (90%)² = 81% error chance.

If we run it three times, we have (90%)³ ≈ 73%.

And once we have run this algorithm 50 times,

we have an error probability of (90%)⁵⁰, which is about 0.5%,

and by running it a couple more times, of course, we can push this figure down much further.
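This amplification argument is easy to check numerically. The sketch below (a minimal illustration, not from the lecture itself) assumes the runs are independent and that a single successful run suffices, so the probability that *all* runs fail is the per-run error probability raised to the number of runs:

```python
def overall_error(p_error: float, runs: int) -> float:
    """Probability that every one of `runs` independent runs errs.

    Assumes runs are independent and any single correct run
    gives us the right answer overall.
    """
    return p_error ** runs

# Per-run error of 90%, as in the lecture:
for k in (1, 2, 3, 50):
    print(k, overall_error(0.9, k))
# 50 runs bring the error down to roughly 0.5%.
```

Because 50 is a constant independent of the input size, the repeated algorithm is still polynomial-time, which is exactly the point made above.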

So, since we only have to run it a constant number of times to get the error very, very low,

we would still stay in polynomial time,

but we would be almost guaranteed a correct solution.

Now, I'm not saying that this is impossible, because

the theory of NP-completeness indeed does not account for randomness,

but of course it's very, very unlikely.

It's almost the same as with approximation algorithms,

where you wouldn't expect an approximation algorithm with

an approximation factor of 1.01, even though for some problems

it has not been proven that this is impossible. You still wouldn't expect it.

Now, the second choice here, I think, is also correct,

and it is basically my reasoning for why I say you shouldn't expect this.

The number of potential solutions for an NP-complete problem is exponential.

Since the algorithm only runs in polynomial time,

it can only afford to check a small part of that solution space.

That is basically the picture we just had,

where you're only able to explore small areas of the whole solution space.

And of course you have to employ some strategy.

I just think that with this strategy

it is highly unlikely to get an algorithm with this performance up here.

It just doesn't make sense, because you're still poking around a lot.

So why would you get a fixed error probability in an exponentially large space

if you can only afford to check a polynomially sized part of it?
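To see just how lopsided this is, here is a small sketch (my own illustration, with a hypothetical budget of n³ candidate checks against a solution space of size 2ⁿ) of the fraction of the space a polynomial-time algorithm could ever inspect:

```python
def checked_fraction(n: int) -> float:
    """Fraction of a 2**n solution space covered by n**3 checks.

    The n**3 budget is a hypothetical stand-in for any
    polynomial running time; 2**n models the exponential
    number of candidate solutions of an NP-complete problem.
    """
    return n ** 3 / 2 ** n

for n in (20, 50, 100):
    print(n, checked_fraction(n))
```

Already at n = 50 the covered fraction is around 10⁻¹⁰, and it keeps shrinking, which is why a fixed, input-independent error probability from such a search would be surprising.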

And finally, as I said, the theory of NP-completeness indeed

does not account for randomness. But the thing is this:

just because that is so, it doesn't mean we should expect anything to happen.

Again, I think this is the same as with approximation algorithms.

Of course, for some problems it would theoretically be possible

to have a 1.01-approximation algorithm, but it just seems too unlikely to be true.