(Offscreen) So welcome to the talk
from Niels
about dependency resolution in
polynomial time.
(Niels) I'm Niels, I'm doing my second
talk today.
This time about something slightly
more technical.
I wanted to do dependency resolution
in deterministic polynomial time,
which is what this is all about.
I didn't get as far with this as I wanted.
I wanted to have a tool ready and have
some results, which I didn't quite finish.
So these will be my findings work in
progress notes.
There are six topics I have to cover.
A quick introduction.
Then something about the hard problem.
Then the part where it actually
turns out to be highly tractable
in practice, if you don't think too much
about it.
And then there is some installing,
upgrading, and installing again
for how to improve your solvers.
Starting with introduction.
I'm hoping mostly to debunk the myth
that dependency resolution is a very
hard problem that we cannot solve,
and that we should remove packages in the hope
that this will somehow keep the archive
from exploding, the dependency
resolvers from breaking, apt from dying,
and god knows what.
I've been working on Britney, which is a
different problem to solve, but it
involves the same techniques, or uses some
of the same techniques.
Some will be directly applicable to apt,
some will not.
There's not a one size fits all solution,
sorry.
And much less have I written it yet,
even if there were one.
So defining the problem.
When you try to install a package on your
system,
there's actually two problems
being solved.
One is the part where apt figures out that
if I have to install Eclipse,
I'll be needing a lot of Java packages
and a lot of other stuff.
So it figures out which packages are
needed for this to make sense.
The separate part of the problem,
the second problem is,
once you have this list, the packages have to
be unpacked and configured in a certain
order for this all to make sense.
They are in theory two distinct problems.
I'll be talking about the first problem,
because that's actually
dependency resolution.
The other thing is just ordering.
The ordering thing is certainly also a very
important part.
I don't want to dismiss it; it's just
that it is in fact theoretically a polynomial
time problem.
So to solve the ordering problem, you
basically compute all the actions.
Given a set of actions that need
to be done,
there are some constraints between
some of them,
like you unpack a thing before you
configure it.
Maybe you deconfigure something else
before that.
So you end up with a list of partial
ordering constraints.
From that you build a graph with the ordering.
It's fairly simple to do, in practice,
if you have the right tools for it.
Then without cycles, this is a trivial
thing where it just orders everything
and gives you: this is the order,
go fix it.
When there are cycles you will get a single
action consisting of multiple things
to do at the same time, which is really
impossible to do.
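As a rough sketch of what this ordering step looks like in code, here is a minimal example using Python's standard graphlib; the package names and constraints are invented, and real tools like apt and dpkg obviously use their own data structures.

```python
# Minimal sketch of the ordering step.  Actions are "unpack:<pkg>" and
# "configure:<pkg>"; each constraint says which actions must come first.
# The package names and constraints here are invented for illustration.
from graphlib import TopologicalSorter, CycleError

order = TopologicalSorter()
order.add("unpack:libfoo")                                    # no prerequisites
order.add("configure:libfoo", "unpack:libfoo")                # unpack before configure
order.add("configure:app", "unpack:app", "configure:libfoo")  # app needs libfoo configured

try:
    print("run in this order:", list(order.static_order()))
except CycleError as err:
    # A cycle would mean "do several things at once", which dpkg cannot do;
    # in practice you break it by making a package not need a separate
    # configure step (no postinst) and sort again.
    print("cycle detected:", err.args[1])
```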
It turns out that the way we defined this
whole thing,
if you have a package that does not have a
post install script,
it does not need to be
configured separately.
That means that it's just unpacked or
not unpacked.
That tends to break most cycles.
If you run into problems with
ordering constraints,
help us find a way to get rid of most
of the post-install scripts,
such as the ones just running ldconfig
and nothing else.
That could solve some of the cycles,
and otherwise it's just feeding it to dpkg
and it works.
If you have a cycle, it's a separate
problem fixing that.
I won't be covering that.
As I said, the runtime is polynomial.
It's something like the size of the
package graph, a regular graph,
which is fairly polynomial.
This even covers finding cycles.
Again breaking cycles is just removing
post install scripts.
And of course if you think this is too
simple,
you can always implement more features
on top of it,
such as trying to optimize for minimal
downtime of services and all that.
You can make any trivial problem very hard
very soon with that.
So, the hard problem.
First of all, the players.
We've got apt, aptitude, cupt, whatever.
We've got Britney.
We've got DOSE, edos, whatever its
name is today.
They solve the hard problem.
Or at least they should be trying to.
Notable tools not affected by it: dpkg.
Sure, dpkg figures out if you are
missing dependencies. It says,
"I cannot resolve this, and I don't know
where to fetch it, so I'm just giving up."
And that is the only thing it can do.
Which means it only verifies the solution,
which is known to be polynomial,
and it is therefore not a player in this game.
DAK rm is also doing a polynomial time
check, it just happens to be so
for other reasons.
That's basically the known players.
There are probably others.
But we're moving on to the
hard problem itself.
So the problem we actually have is,
we've got version dependencies,
we've got alternative choices,
we've got virtual packages.
All three make this problem hard.
You basically have to remove all possible
alternative choices, guesses, whatever,
to make this simple.
It just so happens that if you remove version
dependencies, among other things,
things become really fun.
It becomes simpler to solve the problem,
but upgrading gets really, really spicy.
So how do we ensure that the new dpkg
is configured before your new package,
which uses a feature of the new dpkg,
is installed?
That's sort of impossible.
It becomes simple, but we end up with a
broken dependency ??,
so it's not really an option.
Technically there's also a way to make
it simple by having multiple versions
of things, and removing negative
dependencies,
but that's not necessarily
easy to do either.
Presumably file conflicts will get in our
way or something.
We have to redefine where we place all of
the files, that's not going to be
something we're doing anytime soon.
So yeah, it's not really an option.
To give you a better understanding of the
problem, please consider this example.
If we have coreutils at some version,
depending on either this version
of libc or a newer one.
From this can we immediately conclude
that if libc 2.19 is known to be
installable,
that coreutils will also be installable?
How many people think we can immediately
conclude something from this?
How many people think we cannot
conclude anything from this?
Well, that's good.
So it turns out that with negative
dependencies and without negative
dependencies, it doesn't really matter.
With negative dependencies, libc or
whatever it depends on could
just conflict with coreutils, we're
broken, it's out.
And without negative dependencies, you
could have one of the things where
it depends on a version that's newer or
older, and no.
It's not really trivial to do.
You can't conclude something locally,
in theory.
That's something that can
become a problem.
Anyway, it is highly tractable
in practice.
Because if you do have a Breaks, a Conflicts,
or some other negative dependency,
it tends to be an upper bound.
So if the previous version of coreutils
was installable with that version,
the new version will also probably be.
Likewise, most dependencies, circular or
otherwise, tend to be upper bound.
There are cases of version ranges which
is a lower bound and upper bound
at the same time.
They exist, they are just not so
common yet.
And that's a good thing.
The number of possible solutions to
any clause
actually tends to be fairly limited.
Of course there are the exceptions.
Packages from unstable.
They might just be missing, in which case
it's really hard to solve the dependency.
You've got mutually exclusive packages:
all the sendmail stuff, the MTAs.
The version ranges I mentioned before.
And then of course the strictly equal
versions, which we see between
binaries built from the same
source package.
The redeeming feature here: same source.
Because it is actually very simple
to figure out
if they are from the same source.
So they tend to be upgraded together
anyhow.
The new versions tend to be available
at the same time, sort of thing.
The problem made hard, in a nutshell, is
this contrived example.
You have a package you want to try to install,
and it depends on any number of foos.
Each of the foos can be solved either by
picking the bar package
or the good one.
Which might be a hint as to which one we
would choose if we want to solve it.
And the bar packages depend on a
bad package that does not work
with the starting package,
and we are broken.
And the good package just doesn't have any
dependencies and is therefore good.
In the perfect case, as you saw,
you try foo, then good, then foo1, then good,
and are done.
In practice, if you don't have some way
of ordering these so that you know
one of them is better than the other,
you now have an N times M trace, where
you have to guess up and down, which
one am I doing, backtrack back and forth
and all, and it's going to be horrible.
And this contrived example can of course
be solved by some solvers that can see
the pattern, but you can always make a
more contrived pattern
that's harder to understand.
So I tried to keep it somewhat simple.
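As a rough sketch of why this pattern hurts a naive resolver, the toy model below rebuilds it with invented names (start, fooN, barN, goodN, bad) and just counts the choice combinations; nothing here is real archive data.

```python
# Toy model of the contrived example: "start" depends on foo0..fooN-1,
# every fooI can be satisfied by barI or goodI, every barI pulls in "bad",
# and "bad" conflicts with "start".  All package names are invented.
from itertools import product

N = 12
alternatives = {f"foo{i}": (f"bar{i}", f"good{i}") for i in range(N)}

def pulls_in_bad(pick):
    return pick.startswith("bar")        # barI -> Depends: bad -> conflicts with start

selections = list(product(*alternatives.values()))   # one pick per foo clause
survivors = [s for s in selections if not any(pulls_in_bad(p) for p in s)]
print(len(selections), "possible selections,", len(survivors), "without 'bad'")
# -> 4096 possible selections, 1 without 'bad'
# A resolver with no hint that goodI is the better alternative can end up
# wandering through an exponential number of these before finding that one.
```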
The good thing is, very few people write
software like this.
And that is the most redeeming feature
about the package ??.
We do have exceptions.
Aspell-dictionaries, ispell-dictionaries.
Basically if you depend on
ispell-dictionary,
which is a virtual package provided by 50
different ispell packages,
or aspell packages, and then they depend
on something else after that.
So in theory they can be a part of
creating this problem.
Fortunately, they themselves are fairly
simple once you get past them.
You also have multiarch foreign, or
multiarch allowed with
package interdependencies.
So if you have multiple architectures,
you in theory can have any number
of things satisfying the same clause.
This gives extra, unnecessary work for most
resolvers,
if you enable them to do
multiarch and cross things.
We have 12 architectures, but I suspect most
users of multiarch are somewhat limited
to maybe 3.
A possible exception being people who like
writing resolvers, trying them out,
and torture-testing them.
The real problem: if you had to do this in
the archive, you would need to do,
for example, something like write
N different awk implementations.
We have 3.
Then afterwards for example, have
M different distinct but equally valid
libc implementations.
It's not something we're likely to do
in the near future.
And you have to do this on multiple
levels.
Because you can just presolve the
essential set most of the time,
this particular thing is not very
interesting.
And the data packages can in theory
blow up,
when you have truly interchangeable data,
like we do with the aspell packages.
But they tend to either not have any
dependencies after the data package,
or they have a simple loop back into what
you came from.
So the blowup tends to be limited
to one level.
If you have enough of those, it's still
going to be a problem,
but it keeps it simple.
On to installing.
The easier case, as mentioned, is when you
have a single suite
and a single architecture.
All things in the graph collapse into
what looks like a regular graph,
and therefore become simple,
trivial.
This actually happens so much that
if you take Lintian,
it has 26 distinct dependency clauses.
If you look at each of them,
when you only have a single suite
and architecture, there's at most
one package directly solving each
clause locally.
Then further down the graph somewhere
you depend on for example, debconf,
or debconf-2.0 which is one of these
alternatives, giving rise to its ??
Lintian itself is not subject to becoming
a backtrack point.
There's no guesswork when you reach
Lintian, which of its dependencies
should I pick, which version of
it and all.
That's not there, you have to go further
down to see that.
Actually, by the time you reach the
debconf thing,
it has already been solved by
something else, as I recall.
Oh, no, wait. The dependencies of debconf
are already covered by the time you're
forced to resolve this choice,
so it's not really a big issue either.
You just pick the debconf one; cdebconf
currently depends on debconf,
so it's trivial that way.
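A rough way to spot such "no guesswork" situations is to count, per dependency clause, how many packages in the current suite and architecture could satisfy it; the sketch below does that over an invented index, with a clause list that stands in for Lintian's real dependency field.

```python
# Count candidates per dependency clause in a single suite/arch view.
# Clauses with exactly one candidate can never become backtrack points.
# The index and the clause list are invented stand-ins for illustration.

def candidates_per_clause(clauses, provides_index):
    """clauses: list of lists of alternatives; provides_index: name -> set of packages."""
    result = {}
    for clause in clauses:
        cands = set()
        for alt in clause:
            cands |= provides_index.get(alt, set())
        result[" | ".join(clause)] = cands
    return result

provides_index = {
    "perl": {"perl"},
    "debconf": {"debconf"},
    "debconf-2.0": {"debconf", "cdebconf"},   # virtual package, two providers
}
clauses = [["perl"], ["debconf", "debconf-2.0"]]
for clause, cands in candidates_per_clause(clauses, provides_index).items():
    print(f"{clause}: {len(cands)} candidate(s): {sorted(cands)}")
```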
The question is, when do we have
these conditions?
We do, a lot, actually.
If you do builds in unstable, for example:
only one suite, only one architecture,
piece of cake.
If you install packages on a pure
Jessie system with multiarch,
you have the single-suite part,
which limits a lot of it,
but you do not have the single
architecture.
That's what it is. But on pure Squeeze,
you won't have multiarch,
because it didn't work back then.
So that's even simpler.
You can also do unstable + experimental,
or stable + backports, if you fiddle a bit
with your resolver and say that it is
not allowed to pick from experimental,
unless it is explicitly requested to pick
from experimental,
and from there it only picks
what it absolutely needs to solve it.
You basically get the same result as with a
single-suite restriction,
but that requires you to actually code
something for it.
And then there's Britney, with the testing
migration thing, and that's because
she basically takes things, moves them into
a new version of testing,
and tries to test that this is still
a valid solution.
So she forces it into a single
suite currently.
So these are common cases where
it happens.
This is not everywhere it happens;
stable upgrades are, funnily
enough, still not a single suite and all that.
It works because there is only one
instance of each package per architecture.
With only one instance of a package, you only
have one version, one architecture, that
solves that particular dependency.
Whereas with multiarch you could have,
i386 and the amd64 version.
If you do an upgrade you could have the
old version and the new version
that may or may not satisfy it.
Also we have this thing where we
don't like libraries to have
alternative implementations, it sort of
breaks things horribly.
Especially when they are not actually
agreeing on the interface they provide.
There's the social aspect of it that we
don't really bother having
200 interchangeable implementations of
everything.
I think the record is something like
five or six different sendmail
implementations.
??
I don't think so, but ?? might have.
And even when you do have one of these
explosions,
the breakage actually has to hide
beneath one of these
alternative-choice explosions for
it to be a problem.
You might have aspell over here on the
right-hand side,
and you have a breakage over here, which
?? to find out.
So if the resolver gets to solve the other
part before the first explosion,
it realizes this is not working,
and bails out.
There's a couple things that we do to
make this easier.
The most interesting thing is of course,
can we still make this simple without
the single suite and single
architecture restriction.
We can to some extent.
And that's where we move to upgrading,
in deterministic polynomial time.
This is where I spent most of my
time working.
So to start with, when we do an upgrade
from stable to stable,
and here I'm taking pure stable, no
backports and all.
The generally recommended way to do it is
to replace Wheezy with Jessie in your sources,
hit apt-get update, and
upgrade afterwards,
and apt replaces all the old packages with
the new version,
if there is a new version.
We also want all the packages from Wheezy
that did not have any version in Jessie
to be removed, and there might be a new
essential package somewhere.
So long story short, upgrade if it's
present, remove if it is not present,
and then install the new essential
packages, if any.
They don't tend to change that often.
I'll be ignoring the installing part
for now.
It is of course valid and interesting,
but we'll skip it.
So let's take a simple example, somewhat
contrived, with somewhat made-up numbers.
They are not too far from reality, but
they are not exact either.
We have a system that we are
upgrading from Wheezy to Jessie.
I claim that there are 30 thousand
packages in Wheezy,
we've got 35 thousand in Jessie.
The system we are working with has
two thousand packages installed.
Mine here has something like 1200.
If you do a simple dpkg -l
and then do a line count, you can see
how many you have on your system,
plus or minus five or so, to give you an idea
of how many packages
you're actually working with.
We assume that every Wheezy package got a
new version in Jessie,
because I'm about to do a pop quiz!
So with these numbers, what is the maximum
problem size of this upgrade?
Any takers?
Is it 30?
Anyone for 35?
37?
One for 65.
One for 67.5 thousand.
And who believes that I was an asshole
and did not include the right answer?
Oh, so little faith.
I'm glad you believe me.
The right answer is 37 thousand.
The trick is, when we do the upgrade, we
replace Wheezy with Jessie,
we do an update, so all the Wheezy
packages not installed
on our system disappear.
Then you get all of the Jessie packages,
and under the assumption that
every Wheezy package got a new version in
Jessie, none of the installed versions
coincide with them.
You've got the two thousand from
Wheezy on your system,
plus the 35 thousand
packages from Jessie, so 37 thousand.
That means your average
stable-to-stable upgrade,
if you do it this way, is roughly
40% smaller than the worst case
it could've been if you kept the
old stable as well.
There's also another awesome feature with
removing the old stable repository.
It means that every time you
upgrade a package,
your effective problem size
decreased by one.
That's not always useful, it's not
always awesome,
and it's not always the thing that solves
your problem,
but it means that if you can actually make
apt upgrade a single thing
every now and then, eventually it might
be able to figure out the rest of
the upgrade path, after ??
200 packages or so.
As we upgrade, we end up moving towards
a single-suite situation,
so the more things we upgrade, the closer we
get to a single-suite situation.
Again, most ?? pure stable to stable
upgrades.
And of course the right answer would've
been 65 thousand packages
if you kept your stable.
So that would've been a possible
answer as well.
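Spelled out with the made-up numbers from the quiz:

```python
# The pop-quiz numbers from above, spelled out.
wheezy_total, jessie_total, installed = 30_000, 35_000, 2_000

swap_sources = installed + jessie_total      # replace Wheezy with Jessie, then update
keep_both    = wheezy_total + jessie_total   # keep the old stable repository around
print(swap_sources, keep_both)               # 37000 versus 65000
```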
Now upgrading.
This should be as easy as: mark new
essential packages for install,
mark everything that has a new version for
upgrade, remove stuff,
and then figure out if something is
missing, and install that.
This solves upgrading by installing, so
I'm not really interested in doing this,
because installing is hard.
We can do something smarter than
that for upgrading.
When we do upgrades in deterministic
polynomial time,
I'm not going to give you a 100% solution,
that's not going to work.
If I could, I would be very very rich.
So, I intended this to sort of find some
easy, low-hanging fruit
that can be solved cheaply, and then you can
throw a general purpose resolver at
the rest of the problem, after you've done
the simple parts.
Or you can feed your solver a partial
solution and ask it to fix it up
so it would be a slightly
smaller problem size.
It relies on two things.
One, it exploits a lot of common patterns
in how we do dependencies.
And the other thing is, if you have a
valid system state,
so if you have a system where all of your
packages are in fact installable,
which dpkg tends to enforce,
you can verify that is true
in polynomial time.
You can also verify a change to it in
polynomial time.
So here I'm going to do a bit of theory,
with a bit of English in between.
We start with a not-broken system,
that is we have a set of installable
packages that are mutually
co-installable and all that.
We call that 'I'.
Then we can add things to I, we can remove
things from this set, we can take something
in the set and replace it with something else,
basically add and remove.
That can be done in linear time.
If I take a random package, mash it in,
that's constant, maybe,
depending on your implementation.
The theory is we can compute a new
set, where we remove something,
add something else, we get a new set.
Then we can verify this new set is indeed
still a valid solution in polynomial time.
So, a constant time modification
can be verified in polynomial time.
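A minimal sketch of such a verification step, assuming a toy package model (dicts of dependency clauses and breaks rather than real apt data):

```python
# Minimal sketch of the verification step: check that a candidate set I
# satisfies every dependency clause of its members and contains no pair
# related by a negative dependency.  The package model is invented here.

def is_valid(installed, depends, breaks):
    """installed: set of names; depends: name -> list of clauses (lists of
    acceptable alternatives); breaks: name -> set of names it cannot coexist with."""
    for pkg in installed:
        for clause in depends.get(pkg, []):
            if not any(alt in installed for alt in clause):
                return False              # an unsatisfied dependency clause
        if breaks.get(pkg, set()) & installed:
            return False                  # a negative dependency inside the set
    return True
```

Re-running this after a constant-size change (swap one package, add one, remove one) is the same linear-ish scan, which is what the rest of the approach leans on.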
The issue of the day is, randomly feeding
packages in and out of the set
is not going to work very well.
So we need something smarter than
just random modifications.
Ah well, somebody might actually write
something that works with that but anyway.
So the first thing I did,
as I mentioned earlier:
we have these strictly equal
dependencies between binaries
from the same source.
So I grouped binaries from the same source.
If I was to mark one of them for upgrade,
I would immediately pull the rest of
them as well, if they were installed.
It also happens to sort out a very
common case of breaks-replaces,
when you move files between two binaries
in the same source.
Then you just try to upgrade each of these
groups, in some order.
Preferably deterministically,
but I didn't get that far.
And if the result of one of these
modifications is still valid,
which we can test fairly cheaply, we commit
it, and you just rinse and repeat
until you run out of things to test
that lead to something new.
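A sketch of that greedy loop, assuming helper functions that are not real apt or Britney APIs: apply_group_upgrade swaps one source's binaries for their new versions, and is_valid is a verification function like the earlier sketch.

```python
# Greedy upgrade pass: try to upgrade one source group at a time, keep the
# change only if the resulting set still verifies, repeat to a fixed point.
# `apply_group_upgrade` and `is_valid` are assumed helpers, not real APIs.

def greedy_upgrade(installed, groups, apply_group_upgrade, is_valid):
    """groups: iterable of frozensets of binary package names sharing a source."""
    pending = [g for g in groups if g & installed]
    progress = True
    while progress:                       # rinse and repeat
        progress = False
        for group in list(pending):
            candidate = apply_group_upgrade(installed, group)
            if candidate is not None and is_valid(candidate):
                installed = candidate     # commit the cheap, verified step
                pending.remove(group)
                progress = True
    return installed
```

Each round costs at most one verification per remaining group, and there are at most as many rounds as groups, which is roughly where the I² · (I + E) bound mentioned later comes from.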
How well this works largely depends
on what you have installed,
unsurprisingly.
So if you have very few packages,
it may work better than if you have more.
And sometimes it works better when you
have more, and that's really exciting.
So I did an example Wheezy installation
based on what I have installed
on my machine.
Not quite, but close, well related.
So libc for example was immediately
upgradable by this procedure.
As well as some Java package,
worked fine.
man-db, eclipse, xterm, then I had to do a
couple packages before that,
which were all upgradable.
I think libc was primarily ?? and maybe
some libc linux thing.
But there is actually a set of packages
you can upgrade with this.
This is of course not tested on all
configurations.
In this particular configuration, I could
upgrade dpkg like this,
but I don't expect you to be able
to do that in all cases.
In fact I find it highly unlikely
because in Wheezy to Jessie,
dpkg has tons of breaks for
all sorts of things,
so if you have the right thing there,
you break something else,
and that has to be upgraded
at the same time.
And that is sure to build
a loop eventually.
So basically what we are exploiting here
is the greater-than-equal version,
which is the common way of doing things.
We rebuild a package, it gets new
dependencies on higher
versions of things.
That means the foo in stable ??.
Right, so everything depending on foo is
equally happy with the version
in Jessie as well, because it's a lower
bound, and Jessie has a newer version.
So that works very well.
This is the common case for libraries
without ABI transitions.
Apparently including libc.
It enables you to upgrade from the inside
out, from the core of the onion outwards,
if you think of the graph as an onion.
The algorithm is fairly dumb,
unsurprisingly.
It cannot solve Perl, it cannot solve Python,
because they tend to involve
pulling in a new package, so you have to
install that, and all that.
Perl it could technically solve if it
would merge groups together
into larger groups. But anyway, the runtime
of this is something like I² · (I + E),
which is upper bounded by
I to the power of 4.
So it's fairly cheap, it is fairly polynomial.
It's very trivial to do.
We can do better than that as mentioned,
if we can have it figure out that
dpkg needs a new version of libc,
we could upgrade those together.
That's not a problem on my system,
but it might be on others.
Then there's the part where dpkg is
breaking all of this stuff,
so you might have to upgrade it
at the same time, or before dpkg.
Then you end up with some sort of big loop or
tree structure of groups you have to
merge together or upgrade together,
and it should just work.
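One way to handle the "must be upgraded together" case is to merge the source groups with a small union-find; the pairs of packages forced to move together are assumed input here, not derived from real Breaks data.

```python
# Merge upgrade groups that have to move in one step (e.g. dpkg plus a
# package it Breaks at the old version).  The forced pairs are assumed input.

def merge_groups(groups, must_move_together):
    """groups: dict source -> set of binaries; must_move_together: pairs of binaries."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for source, binaries in groups.items():
        for b in binaries:
            union(b, source)                  # tie binaries to their source group
    for a, b in must_move_together:
        union(a, b)                           # tie groups that must upgrade together

    merged = {}
    for source, binaries in groups.items():
        merged.setdefault(find(source), set()).update(binaries)
    return list(merged.values())
```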
There's also the part where we have to
handle renamed packages.
This comes into ?? basically.
They come with an ABI bump, which will be
very relevant for stretch due to GCC 5.
Basically, if you want to put a sane
restriction on this, you could do something
like: we have a clause that is not
satisfied, but the thing we need for it
is a package introduced in the new source,
then we pull that in.
If it has the same dependencies, we've
already solved it, that sort of thing.
There's also the part where people rename
the package from foo to foo replacement
or something like that.
Again, if it is from the same source, we
might do some magic here,
some heuristics here.
And also at some point you will end up
needing to install stuff
that is actually required for
Wheezy to Jessie,
because it pulls in a new init system.
If it has trivial no-guess solutions,
by which I mean the first of the
alternative choices is a real package,
and there is only one of them,
you could solve this automatically
in a deterministic way.
And otherwise, give up and try
something else.
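A sketch of that "no-guess" rule under the same toy model as before; the exact condition (first alternative is a real package and the only real candidate) is my reading of the description above, so treat it as an assumption.

```python
# "No-guess" rule: an unsatisfied clause is only resolved automatically when
# its first alternative is a real (non-virtual) package and it is the only
# real candidate; anything else is left for a full resolver to decide.

def no_guess_choice(clause, real_packages, installed):
    """Return the single forced candidate, or None if a guess would be needed."""
    if any(alt in installed for alt in clause):
        return None                          # already satisfied, nothing to add
    real = [alt for alt in clause if alt in real_packages]
    if len(real) == 1 and clause[0] == real[0]:
        return real[0]                       # deterministic, no guessing involved
    return None                              # alternatives or virtuals: give up
```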
So these are the basic algorithm and the
basic ideas I've put down
and have been fiddling a bit with,
and didn't get very far with so far.
Installing, part 2.
So, after what we've learned with
upgrading: with a single suite and a single
architecture it is trivial.
Upgrading we can usually ?? it,
reduce the problem size to some extent.
We can do better on the
installing part as well.
We can look at these
aspell packages again,
because they are one of the big problems.
We have a couple packages that appear
to be almost identical,
and then we also have packages that
are clearly superior to others.
Packages that are identical
show up like this.
So if we ?? long enough at the aspell
package for Russian and the one
for Ukrainian, something
very special happens.
You will notice that the primary difference
between the fields I've selected is
the package name and the version.
Exciting, because that means that if I
can install aspell-uk,
and that's the only such thing on the system,
then the same solution is valid
for the Russian one.
So ?? they are different, but semantically
they differ only by name and version.
Sometimes you can have version
dependencies that are always
satisfied, ?? for example, and then that's
the same game actually.
This is a simplified view because in
theory you could have a package
that refuses to work with the Ukrainian
one, but not the Russian one,
and vice versa, so they actually become
distinct solutions.
But the general use of aspell dictionaries
tends to be any one of them,
and I really don't care
which one you pick.
So we find those by saying they have
the same effective dependency clauses.
We can also look at the negative
dependencies, that's fairly important too;
they have to have the same there.
Here we need to remember that we do not
think of negative dependencies
as directed, because they're not.
So it really doesn't matter if foo breaks
bar or bar breaks foo,
the important part is they
don't work together.
Then we have to assert that they satisfy
the same clauses.
Both of them are a valid solution to the
same clauses as the other one.
And this becomes interesting when you have
dependency ranges,
which are not truly dependency ranges,
but two clauses that say:
I need foo greater than
version 1, and I need it strictly
less than version 2. Then one candidate
satisfies both of them,
and the other one does not, when you
are doing upgrades for example.
It has to satisfy the same
clauses as well.
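Hunting for such semantically identical packages can be sketched as computing a name-and-version-free signature and grouping on it; the record layout below is invented, and real input would come from the Packages files.

```python
# Group packages that differ only by name and version: same effective
# dependency clauses, same (undirected) negative dependencies, and the
# same set of clauses they can satisfy.  The record layout is invented.
from collections import defaultdict

def signature(pkg):
    return (
        frozenset(frozenset(clause) for clause in pkg["depends"]),
        frozenset(pkg["breaks"]),       # undirected: foo-vs-bar equals bar-vs-foo
        frozenset(pkg["satisfies"]),    # clauses this package is a candidate for
    )

def equivalence_classes(packages):
    classes = defaultdict(list)
    for pkg in packages:
        classes[signature(pkg)].append(pkg["name"])
    return [names for names in classes.values() if len(names) > 1]

# aspell-ru and aspell-uk would land in the same class here, so a solution
# containing one of them stays valid if you swap in the other.
```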
A little trick that I discovered
a little too late, but it's fun.
These things can generally be done
in polynomial time, that ballpark.
I haven't really computed
the actual results.
But we can go further than that because
equivalent or identical packages
are easy to find, but there is
something better than that.
We can also have the case where one of the
packages is clearly better than the other.
For example, it has fewer of the same
effective dependencies.
So I need less to solve this package.
It has fewer of the same negative
dependencies, fewer things to
not work with this,
this is also good.
And finally it solves at least as many
dependency clauses as the other one.
This is typically something you find
in upgrade scenarios.
So your version and the new version do not
have any different dependencies,
for example, and don't have any new
conflict relations, but the new version
might solve more ?? dependencies.
It might have more packages it can satisfy.
In that case, you can just unconditionally
take the new version,
because if a solution works
for the new version,
it may or may not work for the old one,
but if the solution works
with the old one,
it definitely works with the new one.
So that's sort of a freebie.
You're just solving for one of them, if
that doesn't work,
you know you don't have to try the other,
that's the important part.
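The one-way case can be sketched with the same invented record layout: b dominates a when it needs no more, forbids no more, and satisfies at least as much, so the resolver only ever has to try b.

```python
# One-way substitution: if b needs no more (dependencies are a subset),
# forbids no more (negative dependencies are a subset) and satisfies at
# least as many clauses, any solution using a can be rewritten to use b.

def dominates(b, a):
    return (
        {frozenset(c) for c in b["depends"]} <= {frozenset(c) for c in a["depends"]}
        and set(b["breaks"]) <= set(a["breaks"])
        and set(b["satisfies"]) >= set(a["satisfies"])
    )
```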
And the point with these is basically
that identical packages are just
two-way substitutable.
This being a one-way substitution instead.
I haven't figured out how to find these,
in general, more effectively than
I do the identical ones.
The identical ones are easy to find,
but I would like to get to a point where
we can find these superior packages
trivially, because they are more
useful in general, or there might
be more of them as well.
This is as far as I got, including writing
slides five minutes before the talk.
So are there any questions?
There's a mic there,
or there's a runner.
[question] Thanks for your interesting
talk. I have two questions in fact.
The first question concerns your problem
of finding equivalent packages.
Are you aware of the ?? tool.
[Niels] I have read through the article
about it.
The thing I remember about it was that it
said somewhere that
this should blow up exponentially,
but it doesn't.
[question] Well, these are all theoretically
hard problems, of course.
[Niels] [laughing] Yes.
[question] So this tool in fact
tries to classify packages with respect to
which other packages
they are co-installable with.
[Niels] Yes.
[question] And this works.
One of the steps is to group together
packages which behave the same
with respect to co-installability
with other packages.
So this would be your notion of ??,
if I understood it correctly.
[Niels] It sounds like it.
[question] At least it is very similar,
so maybe we can do this offline later,
but this is definitely something you
should look at I think.
And my other question is about this
deterministic ptime installability.
Of course you are well aware of the fact
that this is a result which holds under
some restrictions of the problem,
of course,
as you don't plan to solve P = NP. So can
you, for instance... you already know
that it has this good complexity when
you don't have any alternatives,
as you've already said, these are explicit,
not implicit,
and also when you don't have any conflicts,
either implicit or explicit,
you seem to have a different class.
Can you clarify precisely under which
conditions on the dependency graph
you get a ptime complexity of the problem?
[Niels] So actually the negative
dependencies are not an issue,
if you take a package and the entire set of
transitive dependencies below it
can always be solved by looking
elsewhere in the set.
So you have a set of transitive
dependencies, and you can always
solve it by following a clause where
there's only one option,
or there's a package you've already picked
earlier.
Then locally this becomes a simple regular
graph, well you can massage it into one.
I don't think it happens often enough for
us to rely on it 100%,
I think there may be cases, or corners of
the graph where it happens locally,
and then when you leave that part,
go further down the stack,
or outside that part, you get back
to the original problem,
but you will have parts of it that
are locally polynomial,
which will be interesting to find.