And the answer here is that we end up with 2^n simulations,
because each time that we encounter if-better we have to split into two possibilities
and then the next time, we encounter if-better we have to split each one of those two possibilities
into two possibilities each and so we end up with an exponential growth.
And that is why I told you in the beginning that simulating a non-deterministic RAM
on a deterministic RAM is possible, but it's a very unsatisfying simulation
because we pay for it with exponential time and what that means is, for example, for vertex cover,
yes, we can solve vertex cover on a non-deterministic RAM in polynomial time.
But if we do the simulation of that non-deterministic code,
we end up with exponential time again.
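To make this concrete, here is a minimal Python sketch of that brute-force simulation. The function and variable names are my own, not from the lecture: each if-better decision becomes a binary choice, and the deterministic simulator simply tries all 2^n sequences of choices, which is exactly the exponential blow-up described above.

```python
from itertools import product

def simulate_all_branches(num_choices, run):
    """Deterministically simulate a non-deterministic program that makes
    num_choices binary if-better decisions: try every possible sequence
    of choices -- 2^num_choices runs in total -- and keep the best result."""
    best = None
    for choices in product([0, 1], repeat=num_choices):  # 2^n combinations
        result = run(choices)
        if result is not None and (best is None or result < best):
            best = result
    return best

# Hypothetical example: vertex cover on a tiny path graph 0-1-2-3.
# choices[i] == 1 means "put vertex i into the cover".
edges = [(0, 1), (1, 2), (2, 3)]

def vertex_cover_run(choices):
    cover = {v for v, bit in enumerate(choices) if bit}
    if all(u in cover or v in cover for u, v in edges):
        return len(cover)  # size of a valid cover
    return None  # this sequence of choices does not yield a cover

print(simulate_all_branches(4, vertex_cover_run))  # -> 2 (e.g. cover {1, 3})
```

Each individual run is fast (polynomial), but the number of runs is 2^n, so the overall simulation is exponential in the number of non-deterministic choices.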
Now of course the question would be, could you simulate a non-deterministic RAM more efficiently?
For example, only using a polynomial number of simulations.
So, in a way we're back where we started.
We found out that we can simulate a non-deterministic RAM,
but that's going to take us exponential time.
So the question remains, is there a better way to simulate non-determinism
ideally with polynomial time or does simulating non-determinism
on a deterministic RAM always lead to exponential time?