

Continuing with our review,

the next topics that we covered concern
bifurcation diagrams.

They're a way to see how the behavior of
a dynamical system changes as a parameter

is changed.

I think it's best to think of them as
being built up one parameter value

at a time.

So, for each parameter value, make a phase
line if it's a differential equation,

or a final-state diagram for an
iterated function.

And you get a collection of these, and
then you glue these together to make a

bifurcation diagram.
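That construction can be sketched directly in code. Here's a minimal Python version for an iterated function, using the logistic map as the example; the transient and sample counts are illustrative choices, not the course's code:

```python
def logistic(x, r):
    """The logistic map, x -> r*x*(1 - x)."""
    return r * x * (1 - x)

def final_states(f, r, x0=0.5, n_transient=1000, n_keep=100):
    """One 'column' of the diagram: iterate f at a fixed parameter r,
    discard the transient, and keep the values the orbit settles into."""
    x = x0
    for _ in range(n_transient):
        x = f(x, r)
    states = []
    for _ in range(n_keep):
        x = f(x, r)
        states.append(x)
    return states

# Build the diagram one parameter value at a time, then "glue" the
# columns together: each r gets a vertical strip of final states.
rs = [2.5 + 1.5 * i / 599 for i in range(600)]
diagram = [(r, final_states(logistic, r)) for r in rs]
```

Plotting each r against its strip of final states reproduces the familiar picture.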

So, here's one of the first bifurcation
diagrams we looked at.

This is the logistic equation with
harvest. The equation is down here.

And so, h is the parameter that
I'm changing.

h is here; it goes from 0 to 100 to 200
and so on.

And so, a way to interpret this is
suppose you want to know what's going on

at h = 100. Well, I would try to focus
right on that value, and I can see "aha,"

it looks to me like there is an attracting
fixed point here, and a repelling

fixed point there. So there are two fixed
points: one of them attracting and one of

them repelling, or repulsive.

And what's interesting about this is that
this is the stable fixed point, so this
would be the stable population.

The story I told involved fish in a lake
or an ocean, and h is the fishing rate:
how many fish you catch every year.

And as you increase h, the sort of
steady-state population of the

fish decreases; that makes sense.

But what's surprising is that when
you're here, and you make a tiny increase

in the fishing rate, the steady-state
population crashes and in fact disappears.

The population crashes.

So, you have a small change in h leading
to a very large qualitative change in the

fish behavior.

So, this is an example of a bifurcation
that occurs right here.

It's a sudden qualitative change in the
system's behavior as a parameter

is varied slowly and continuously.
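This saddle-node picture is easy to check numerically. A small sketch, assuming the harvest model dP/dt = r·P·(1 − P/K) − h with illustrative values r = 1 and K = 1000 (not necessarily the lecture's parameters):

```python
import math

def fixed_points(r, K, h):
    """Fixed points of dP/dt = r*P*(1 - P/K) - h: set the right-hand
    side to zero and solve the quadratic in P."""
    disc = 1 - 4 * h / (r * K)
    if disc < 0:
        return []  # harvest too high: no fixed points at all
    s = math.sqrt(disc)
    # smaller root is the repelling fixed point, larger is attracting
    return [K * (1 - s) / 2, K * (1 + s) / 2]

# With r = 1, K = 1000, the two fixed points move toward each other as
# h grows, collide at h = r*K/4 = 250, and vanish: the bifurcation.
```

Just below the critical harvest rate the two fixed points have nearly merged; just above it, they're gone and the population crashes.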

So, we looked at bifurcation diagrams for
differential equations and we saw the

surprising discontinuous behavior, then we
looked at bifurcation diagrams for the

logistic equation, and we saw bifurcations,
here, from period 2 to period 4, but what was

really interesting about this was that
there's this incredible structure to this

and we zoomed in and it looked really
cool.

There are period 3 windows, all sorts of
complicated behavior in here.

So, there are many values for which the
system is chaotic.

The system goes from period
to period in a certain order.

And this has a self-similar structure:
it's very complicated but there is some

regularity to this set of behavior for the
logistic equation.

So, then we looked at the period doubling
route to chaos a little bit more closely.

And in particular I defined this ratio,
delta.

It tells us how many times larger branch n
is than branch n+1.

So, delta is how much larger or longer
this is than that.

That would be delta 1. How much longer,
how many times longer is this length

than that?
That would be delta 2.
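To make this concrete, here is a small Python sketch that computes the first few deltas from published approximate period-doubling values of the logistic map (the r values below are standard approximations, not computed here):

```python
# Approximate parameter values where the logistic map's period doubles:
# period 1->2, 2->4, 4->8, 8->16, 16->32.
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

def deltas(r):
    """delta_n: how many times longer branch n is than branch n+1,
    i.e. (r_n - r_{n-1}) / (r_{n+1} - r_n)."""
    return [(r[i] - r[i - 1]) / (r[i + 1] - r[i])
            for i in range(1, len(r) - 1)]
```

The successive ratios approach the universal value 4.669...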

And we looked at the bifurcation diagrams
for some different functions,

and I didn't prove it, but we discussed
how this quantity, delta, this ratio

of these lengths in the bifurcation
diagram is universal.

And that means it has the same value for
all functions

provided, a little bit of fine print,
they map an interval to itself and have a

single quadratic maximum.

So, this value, which is believed to be
irrational and perhaps even

transcendental, is known as Feigenbaum's
constant, after one of the people who made

this discovery of universality.

This is an amazing mathematical fact
and points to some similarities among

a broad class of mathematical systems.

To me, what's even more amazing is that
this has physical consequences.

Physical systems show the same
universality.

So, the period doubling route to chaos
is observed in physical systems.

I talked about a dripping faucet and
convection rolls in fluid, and one can

measure delta for these systems.
It's not an easy experiment to do,

but it can be done.

And the results are consistent with this
universal value, 4.669.

And so, what this tells us is that somehow
these simple one-dimensional equations,

we started with the logistic equation,
an obviously made-up story about

rabbits on an island, nevertheless
produce a number, a prediction

that you can go out in the real physical
world and conduct an experiment

with something much more complicated
and get that same number.

So, this is I think one of the most
surprising and interesting results

in dynamical systems.

So, then we moved from one-dimensional
differential equations to two-dimensional

differential equations.

So now, rather than just keeping track
of temperature or population, we're

going to keep track of two populations,
say R for rabbits and F for foxes.

And we would have now a system of two
coupled differential equations:

the fate of the rabbits depends on
rabbits and foxes, and the fate of the

foxes depends on foxes and rabbits.

So, they're coupled, they're
linked together.

And one can solve these using Euler's
method or things like it,

very much, almost identically, to how one
would for one-dimensional differential equations.

And you get two solutions:

you get a rabbit solution and a fox
solution.

And in this case, this is the
Lotka-Volterra equation,

they both oscillate.
We have cycles in both rabbits and foxes.
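A minimal Euler's-method sketch of such a coupled system. This uses the standard Lotka-Volterra form with illustrative parameter values (not necessarily the course's):

```python
def lotka_volterra_euler(a=1.0, b=0.5, c=0.75, d=0.25,
                         R0=2.0, F0=1.0, dt=0.001, steps=20000):
    """Euler's method for the coupled pair
         dR/dt =  a*R - b*R*F   (rabbits)
         dF/dt = -c*F + d*R*F   (foxes)
    Each step updates both populations from the current R and F."""
    R, F = R0, F0
    Rs, Fs = [R], [F]
    for _ in range(steps):
        dR = a * R - b * R * F
        dF = -c * F + d * R * F
        R, F = R + dR * dt, F + dF * dt
        Rs.append(R)
        Fs.append(F)
    return Rs, Fs
```

Plotting Rs and Fs against time gives the two oscillating solutions; plotting Rs against Fs gives the cycle in the phase plane.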

But then, we could plot R against F.

So, we lose time information, but it will
show us how the rabbits and the foxes

are related.

And if we do that, we get a picture that
looks like this.

Just a reminder that this curve goes in
this direction.

And so, the foxes and rabbits are
cycling around.

The rabbit population increases,
then the fox population increases.

Rabbits decrease because the foxes
are eating them.

Then the foxes decrease because
they're sad and hungry because

there aren't rabbits around, and so on.

So, this is similar to the phase line
for one-dimensional equations,

but it's called a phase plane because it
lives on a plane.

And this shows how R and F are related.

The phase plane, and more generally phase
space, is one of the key geometric

constructions, one of the analytical tools
used to visualize the behavior of
dynamical systems.

So, an important result is that there
can be no chaos, no aperiodic

solutions in 2D differential equations.

So, curves cannot cross in phase space.

The equations are deterministic, and that
means that at every point in space, and

remember this is phase space, so a
point in space gives the rabbit and fox

populations, there's a unique direction
associated with the motion.

dR/dt and dF/dt give a direction.
They tell you how the rabbits are

changing, how the foxes are changing.

If two trajectories ever cross, like they
do where my knuckles are meeting,

then that would be a nondeterministic
dynamical system.

There would be two possible trajectories
coming from one point.

So, the fact that two curves can't cross
in these systems limits the behavior.

They sort of paint themselves in as
they trace a curve out in phase space.

So there can be stable and unstable fixed
points and orbits can tend toward infinity

of course, and there can also be limit
cycles: attracting, cyclic behavior, and we

saw an example of that.

But the main thing is that there can't be
aperiodic orbits.

And that result is known as the
PoincaréBendixson theorem.

It's about a century old.

And it's not immediately obvious;
it takes some proof.

Like I said, that's maybe why it's a
theorem and not just an obvious statement.

One could imagine, and people in the
forums have been trying to imagine

space-filling curves that somehow never
repeat but also never leave a bounded area.

But the PoincaréBendixson theorem says
that those solutions somehow

aren't possible.

So, the main result is that
two-dimensional differential equations

cannot be chaotic.

That's not the case for three-dimensional
differential equations, however.

So, here are the Lorenz equations.

Now, again it's a dynamical system, it's
a rule that tells how something changes

in time.

Here that something is x, y, and z, and
I forget what parameter values I chose

for sigma, rho, and beta.

And we can get three solutions:
x, y, and z.

And these are all curves plotted as a
function of time.

But we could plot these in phase space,
x, y, and z together.

And for that system, if we do that, we get
some complicated structure that

loops around itself but never repeats.

It looks like the lines cross, but
they don't.

There's actually a space between them.

It looks like they cross because we're
using a two-dimensional surface to
plot something three-dimensional.
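For reference, the classic chaotic parameter choice is sigma = 10, rho = 28, beta = 8/3, and a bare-bones Euler integration is enough to trace the attractor (the step size and counts here are illustrative):

```python
def lorenz_euler(sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                 x=1.0, y=1.0, z=1.0, dt=0.001, steps=50000):
    """Euler integration of the Lorenz equations:
         dx/dt = sigma*(y - x)
         dy/dt = x*(rho - z) - y
         dz/dt = x*y - beta*z
    Returns the trajectory as a list of (x, y, z) points."""
    traj = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        traj.append((x, y, z))
    return traj
```

Plotting the (x, y, z) points of the trajectory in phase space traces out the butterfly-shaped attractor.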

Alright, so just a little bit more about
phase space.

Determinism means that curves in phase
space cannot intersect.

But because the space is three-dimensional,
curves can go over or under each other.

And that means that there is a lot more
interesting behavior that's possible.

A trajectory can weave around and under
and through itself

in some very complicated ways.

What that means, in turn, is that
three-dimensional differential equations

can be chaotic.

You can get bounded, aperiodic orbits,
and they have sensitive dependence as well.

And then we saw that chaotic trajectories
in phase space are particularly

interesting and fun.

They often get pulled to these things
called strange attractors.

So here's the Lorenz attractor, at the
famous parameter values for the Lorenz equations.

Strange attractors.
What are strange attractors?

Well, they're attractors, and what that
means is that nearby orbits get

pulled into it.

So, if I have a lot of initial conditions,
they all are going to get

pulled onto that attractor.

So, in that sense, it's stable.
If you're on that attractor and somebody

bumps you off a little bit, you'd get
pulled right back towards it.

That's what it means to be stable.

So, it's a stable structure
in phase space.

But the motion on the attractor is not
periodic the way most of the attractors

we've seen are, and it's not a fixed point.

The motion on the attractor
is chaotic.

So, once you're on the attractor, orbits
are aperiodic and have

sensitive dependence on
initial conditions.

So, it's an attractor on which the motion
is chaotic: a strange attractor.

Then we looked at this a little bit more
geometrically and I argued that the key

ingredients to make a strange attractor,
or to make chaos of any sort, actually,

are stretching and folding.

So, you need some stretching to pull
nearby orbits apart.

The analogy I discussed was
kneading dough.

So, when you knead dough, you stretch it.

That pulls things apart, and then you fold
it back on itself.

So, the folding keeps orbits bounded.

It takes far apart orbits and moves them
closer together.

But stretching pulls nearby orbits apart,
and that's what leads to

the butterfly effect,
or sensitive dependence.

Now, stretching and folding, it may be
relatively easy to picture in

three-dimensional space, either a space
of actual dough on a bread board

or a phase space.

But it occurs in one-dimensional maps
as well: the logistic equation stretches

and folds.
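You can watch this stretching happen numerically. A small sketch, using the logistic map at r = 4 and two orbits that start 10^-10 apart:

```python
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.2, 0.2 + 1e-10   # two orbits, a hair's width apart
gap = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gap.append(abs(x - y))
# Stretching roughly doubles the gap each iteration, until the fold
# (which keeps every value inside [0, 1]) caps it; by then the two
# orbits are effectively unrelated: sensitive dependence.
```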

And this can explain how one-dimensional
maps, iterated functions, can capture

some of the features of these
higher-dimensional systems.

And it begins to explain, also, how these
higher-dimensional systems,

convection rolls, dripping faucets,
can be captured by one-dimensional

functions like the logistic equation and
this universal constant, 4.669.

So, in any event, stretching and folding
are the key ingredients for a chaotic

dynamical system.

So, strange attractors once more.
They're these complex structures that

arise from simple dynamical systems.

A reminder that we looked at three
examples. First, the Hénon map and its

attractor, which is a two-dimensional,
discrete iterated function.

And then, two different sets of coupled
differential equations in three dimensions:

the famous Lorenz equations, and also
the slightly less famous but equally

beautiful Rössler equations.
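As a sketch of the simplest of the three, here is the Hénon map at its classic parameter values a = 1.4, b = 0.3; iterating from the origin and discarding a transient leaves points on the attractor:

```python
def henon(x, y, a=1.4, b=0.3):
    """One step of the Hénon map: (x, y) -> (1 - a*x^2 + y, b*x)."""
    return 1 - a * x * x + y, b * x

x, y = 0.0, 0.0
points = []
for i in range(10000):
    x, y = henon(x, y)
    if i >= 100:           # discard the transient
        points.append((x, y))
```

A scatter plot of `points` shows the familiar curved, banded attractor.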

Again, the motion on the attractor is
chaotic, but all orbits get pulled

to the attractor.

So, strange attractors combine elements
of order and disorder.

That's one of the key themes
of the course.

The motion on the attractor is locally
unstable.

Nearby orbits are getting pulled apart,
but globally it's stable.

One has these stable structures; the same
Lorenz attractor appears every time.

If you're on the attractor and you get pushed
off it, you get pulled right back in.

Alright, and the last topic we covered in
unit 9 was pattern formation.

So, we've seen throughout the course in
the first 8 units that dynamical systems

are capable of chaos.

That was one of the main results.

Unpredictable, aperiodic behavior.

But there's a lot more to dynamical
systems than chaos.

They can produce patterns, structure,
organization, complexity, and so on.

And we looked at just one example
of a pattern-forming system.

There are many, many to choose from.

But we looked at reaction-diffusion
systems.

So there, we have two chemicals that
react and diffuse.

And diffusion, that's just the random
spreading out of molecules in space,

diffusion tends to smooth out
differences; it makes everything

as boring and bland as possible.

But, if we have two different chemicals
that react in a certain way,

it's possible to get stable spatial
structures even in the presence

of diffusion.

Here are the equations; I described
them in the last unit.

This is deterministic, just like the
dynamical systems we've studied before.

And it's spatially extended, because now
U and V are functions, not just of t, but

of x and y.

So, these become partial differential
equations.

Crucially, the rule is local.

So, the value of U or the value of V,
those are chemical concentrations,

depend on some function of the current
value at that location, and on the

Laplacian, a derivative, at that location.

So, we have a local rule in that the
chemical concentration here doesn't know

directly what the chemical concentration
is over there; it's just doing its own thing

at its own location.

Nevertheless, it produces these
large-scale structures.
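The structure of such a local rule is easy to see in code. This sketch uses the Gray-Scott model, a standard reaction-diffusion system (the lecture's exact equations and parameters aren't reproduced here; these values are common illustrative choices):

```python
import numpy as np

def gray_scott_step(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott model. The rule is
    local: each cell uses only its own values and its neighbors',
    via the discrete Laplacian."""
    def lap(A):  # 5-point Laplacian with periodic boundaries
        return (np.roll(A, 1, 0) + np.roll(A, -1, 0) +
                np.roll(A, 1, 1) + np.roll(A, -1, 1) - 4 * A)
    UVV = U * V * V
    U2 = U + dt * (Du * lap(U) - UVV + F * (1 - U))
    V2 = V + dt * (Dv * lap(V) + UVV - (F + k) * V)
    return U2, V2

# Start nearly uniform, with a small square of V seeded in the middle.
n = 64
U = np.ones((n, n))
V = np.zeros((n, n))
V[28:36, 28:36] = 0.5
for _ in range(100):
    U, V = gray_scott_step(U, V)
```

Run long enough (tens of thousands of steps) and imaged as a grayscale array, V develops stable spots of the kind described above.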

So, just one quick example.

We experimented with reaction-diffusion
equations at the

Experimentarium Digitale site.

Here's an example that we saw emerging
from random initial conditions,

these stable spots appear.

And then we also looked at a video from
Stephen Morris at Toronto where two fluids

are poured into this petri dish, and like
magic, these patterns start to emerge

out of them.

So, the Belousov-Zhabotinsky reaction is
another example of a reaction-diffusion system.

So, pattern formation is a giant subject.

It could probably be a course in and of
itself.

The main point I want to make is that
there's more to dynamical systems

than just chaos or unpredictability
or irregularity.

Simple, spatially extended dynamical
systems with local rules are capable

of producing stable, global patterns and
structures.

So, there's a lot more to the study of
chaos than chaos.

Simple dynamical systems can produce
complexity and all sorts of interesting

emergent structures and phenomena.