
Title:
12. Evolutionary stability: social convention, aggression, and cycles

Description:
Game Theory (ECON 159)
We apply the idea of evolutionary stability to consider the evolution of social conventions. Then we consider games that involve aggressive (Hawk) and passive (Dove) strategies, finding that sometimes, evolutionary populations are mixed. We discuss how such games can help us to predict how behavior might vary across settings. Finally, we consider a game in which there is no evolutionary stable population and discuss an example from nature.
00:00  Chapter 1. Monomorphic and Polymorphic Populations Theory: Definition
30:50  Chapter 2. Monomorphic and Polymorphic Populations Theory: Hawk vs. Dove
50:00  Chapter 3. Monomorphic and Polymorphic Populations Theory: Discussion
55:39  Chapter 4. Monomorphic and Polymorphic Populations Theory: Identification and Testability
Complete course materials are available at the Open Yale Courses website: http://open.yale.edu/courses
This course was recorded in Fall 2007.

All right, so this was the
definition we saw at the end

last time.
I've tried to write it a bit

larger, it just repeats the
second of those definitions.

This is the definition that
connects the notion of

evolutionary stability,
for now in pure strategies,

with Nash Equilibrium.
Basically it says this,

to check whether a strategy is
evolutionarily stable in pure

strategies,
first check whether

(Ŝ,Ŝ) is a symmetric
Nash Equilibrium.

And, if it is,
if it's a strict Nash

Equilibrium we're done.
And if it's not strict,

that means there's at least
another strategy that would tie

with Ŝ against Ŝ,
then compare how Ŝ

does against this mutation with
how the mutation does against

itself.
And if Ŝ does better against the mutation than the mutation does against itself, then we're okay.
One virtue of this definition

is it's very easy to check,
so let's try an example to see

that and also to get us back
into gear and reminding

ourselves what we're doing a
bit.

So in this example, sort of a trivial game but still, the game looks like this.
And suppose we're asked the

question what is evolutionarily
stable in this game?

So no prizes for finding the
symmetric Nash Equilibrium in

this game.
Shout it out.

What's the symmetric Nash
Equilibrium in this game?

(A,A).
So (A,A) is a symmetric Nash

Equilibrium.
That's easy to check.

So really the only candidate
for an evolutionarily stable

strategy here is A.
So the second thing you would

check is: is (A,A) a strict Nash
Equilibrium?

So what does strict Nash
Equilibrium mean?

It means that if you deviate
you do strictly worse.

So is (A,A) strict Nash?
Well is it strict Nash or not?

It's not.
So if you deviate to B,

you notice very quickly that
U(A,A) is equal to U(B,A).

Is that right?
It's just tie,

both of these get 1.
So it's not a strict Nash

Equilibrium so we have to check
our third rule.

What's our third rule?
So we need to check how A does against the possible deviation, which is B here; how does that compare with B against itself?

So U(A,B) the payoff to A
against B is 1 and the payoff to

B against itself is 0:
1 is indeed bigger than 0,

so we're okay and A is in fact
evolutionarily stable.

So: just a very,
very simple example to

illustrate how quick it is to
check this idea.
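As an aside (not from the lecture itself), the two-step check just illustrated is mechanical enough to script. Here is a minimal sketch, assuming the payoffs on the board are U(A,A) = U(A,B) = U(B,A) = 1 and U(B,B) = 0.

```python
# Minimal sketch of the pure-strategy ESS test: s is ESS if (s, s) is a
# symmetric Nash equilibrium and, whenever a mutant m ties against s,
# s does strictly better against m than m does against itself.
def is_pure_ess(u, s, strategies):
    """u[(a, b)] is the payoff to a player using a against an opponent using b."""
    for m in strategies:
        if m == s:
            continue
        if u[(m, s)] > u[(s, s)]:      # (s, s) is not even Nash
            return False
        if u[(m, s)] == u[(s, s)] and u[(s, m)] <= u[(m, m)]:
            return False               # non-strict, and rule (b) fails
    return True

# Assumed payoffs for the trivial example: A ties with B against A,
# but A gets 1 against B while B gets only 0 against itself.
u = {("A", "A"): 1, ("A", "B"): 1, ("B", "A"): 1, ("B", "B"): 0}
print(is_pure_ess(u, "A", ["A", "B"]))  # True: A is evolutionarily stable
print(is_pure_ess(u, "B", ["A", "B"]))  # False
```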

I want to spend all of today
going over more interesting

examples, so we get the payoff of having invested in this last time.
To start off with,

let's get rid of this rather
trivial example.

I want to think about evolution
as it's often applied in the

social sciences.
So one thing you might talk

about in the social sciences is
the evolution of a social

convention.
Sometimes you'll read a lot in

sociology or political science
about the evolution of

institutions or social
conventions and things like

this, maybe also in anthropology, and I want to see

a trivial example of this to see
how this might work and to see

if we can learn anything.
The trivial example I'm going

to think about is driving on the
left or the right;

on the left or right side of
the road.

So this is a very simple social
convention.

I think we all can agree this
is a social convention and let's

have a look at the payoffs in
this little game.

So you could imagine people
drive on the left or drive on

the right, and if you drive on
the left and the other people

are driving on the right,
you don't do great and nor do

they;
and if you drive on the right

and they're driving on the left
you don't do great,

and nor do they.
If you both drive on the right,

you do fine.
But if you both drive on the

left you do a little better,
because you look a little bit

more sophisticated.
So this is our little game.

We can see that this,
this could be an evolutionary

game.
So you could imagine this

emerging: you can imagine a
government coming in and

imposing a law saying everyone
has to drive on the left or

everyone has to drive on the
right,

but you could also imagine at
the beginning of roads in

different societies in different
parts of the world,

people just started doing
something and then they settled

down to one convention or
another.

You can see how evolutionary
stability's going to play a role

here.
Well perhaps we should just

work it through and see what
happens.

So what are the potential
evolutionarily stable things

here?
What are the potentially

evolutionarily stable things?
Let's get some mikes up.

What's liable to be
evolutionarily stable in this

setting?
Anyone?

Student: Left,
left and right,

right are both candidates.
Professor Ben Polak:

Good, so our obvious candidates
here are left,

left and right,
right.

These are the two candidates.
More formally,

left is a candidate and right
is a candidate,

but you're right to say that
left, left and right,

right are both Nash Equilibria
in this game.

What's more,
not only are they Nash

Equilibria but what kind of Nash
Equilibria are they?

They're strict Nash Equilibria.
The fact they're strict,

both are strict,
so indeed left is

evolutionarily stable and right
is evolutionarily stable.

Let's just talk through the
intuition for that and make it

kind of clear.
So suppose you're in a society

in which everybody drives on the
left, so this is England and

suppose a mutation occurs and
the mutation you could think of

as an American tourist.
An American tourist is dropped

into English society,
hasn't read the guidebook

carefully,
starts driving on the right,

and what happens to the
tourist?

They die out pretty rapidly.
Conversely, if you drop an

unobservant Brit into America
and they drive on the left,

they're going to get squashed
pretty quickly,

so it's kind of clear that everyone driving on the left and everyone driving on the right are each perfectly good evolutionarily stable social conventions.
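The lecture only describes these payoffs qualitatively, so here is an illustrative sketch with assumed numbers: mismatches pay 0, Right-Right pays 1, Left-Left pays 2 ("you look a little more sophisticated"). Since strict symmetric Nash implies ESS, checking strictness is enough.

```python
# Sketch with assumed payoff numbers (the lecture gives them only in words).
def strict_symmetric_nash(u, s, strategies):
    """(s, s) is strict Nash iff every other strategy does strictly worse against s."""
    return all(u[(m, s)] < u[(s, s)] for m in strategies if m != s)

u = {("L", "L"): 2, ("L", "R"): 0, ("R", "L"): 0, ("R", "R"): 1}
# Strict symmetric Nash implies ESS, so both conventions are stable.
print(strict_symmetric_nash(u, "L", ["L", "R"]))  # True
print(strict_symmetric_nash(u, "R", ["L", "R"]))  # True
```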

But despite that simplicity,
there's kind of a useful lesson

here.
So the first lesson here is you

can have multiple evolutionarily
stable settings,

you could have multiple social
conventions that are

evolutionarily stable.
We shouldn't be surprised

particularly if we think there's
some evolutionary type force,

some sort of random dynamic going on that's generating stable social conventions,

we should not be surprised to
see different social conventions

in different parts of the world.
In fact we do,

we do see parts of the world
like England and Japan,

and Australia where they drive
on the left and we see parts of

the world like France and
America where they drive on the

right,
so we can have multiple

evolutionarily stable
conventions.

There's another lesson here,
which is you could imagine a

society settling down to a
social convention down here.

You could imagine ending up at
a social convention of right,

right.
What do we know about the

social convention of everyone
driving on the right?

We know it's worse than the
social convention of everyone

driving on the left,
at least in my version.

So what do we regard,
what can we see here?

We can see that they're not
necessarily efficient.

These need not be equally good.
So it's hard to resist saying

that American society's driving
habits, if we think about the

alternatives to evolution,
are a good example of

unintelligent design.
So when you're talking about

this in a less formal way,
in your anthropology or

political science,
or sociology classes you want

to have in the back of your
mind, what we mean by

evolutionary stability and that
it doesn't necessarily mean that

we're going to arrive at
efficiency.

In some sense this is exactly
analogous to what we discussed

when we looked at games like
this with rational players

playing if you think about
coordination games.

These are essentially
coordination games.

That was a fairly
straightforward example.

To leave it up there let me,
there's another board here

somewhere, here we go,
let's look at yet another

example, slightly more
interesting.


So I'm going to spend most of today just looking at

examples and talking about them.
So here's another example of a

game we might imagine and once
again it's a two-by-two game,

a nice and simple game,
and here the payoffs are as

follows.
So down the diagonal we have

0,0, 0,0 and off the diagonal we
have 2,1 and 1,2.

So what is this game
essentially?

It's very, very similar to a
game we've seen already in

class, anybody?
This is essentially Battle of

the Sexes.
I've taken the Battle of the

Sexes Game, The Dating Game,
I've just twiddled it around to

make it symmetric,
so this is a symmetric version

of Battle of the Sexes or our
dating game.

You can think of this game also
in the context of driving

around.
So you could imagine that one

version of this game,
you sometimes hear this game

referred to as chicken.
What's the game chicken when

cars are involved?
Anybody?

What's the game chicken when
cars are involved?

It's probably a good thing that
people don't know this.

No one knows this?
Okay, all right somebody knows

it.
Can I get the mic way back

there?
Yeah shout it out.

Right, so you can imagine,
here we are on a road that

perhaps isn't big enough to have
driving on the left and driving on the

right.
These two cars face each other,

they drive towards each other,
both of them going in the

middle of the road,
and the loser is the person who

swerves first.
If you think of A as being the

aggressive strategy of not
swerving and B as being the less

aggressive,
more benevolent strategy,

if you like,
of swerving, so the best thing

for you is for you to be
aggressive and the other person

to swerve,
and then at least they remain

alive.
Conversely, if you're

benevolent and they swerve,
you remain alive but they win

and unfortunately now,
if you're both aggressive you

get nothing, and the way we've
written this game,

if you're both benevolent you
get nothing.

You can imagine making some
more negatives here.

So this is a game that seems
kind of important in nature,

not just in teenage male
behavior but in animal behavior,

since we're talking about
aggression and nonaggression.

So what's evolutionarily stable
in this game?

Well remember our starting
point is what?

Our starting point is to look
for symmetric Nash Equilibria.

So are there any symmetric Nash
Equilibria in this game and if

so what are they?
There are some Nash Equilibria

in pure strategies,
there are some Nash Equilibria

in this game,
for example,

(A,B) is a Nash Equilibrium and
(B,A) is a Nash Equilibrium,

but unfortunately they're not
symmetric,

and so far we're focusing on
games that are symmetric,

this is random matching.
There's no asymmetry in the

rows, in the row and column player, although in the handout (not the handout, the reading packet I made for you) they do also look at some asymmetric versions of games,
but for now we're just looking

at symmetry.
So neither (A,B) nor (B,A) will

serve our purpose because
they're not symmetric Nash

Equilibria.
In fact, there is no symmetric

pure strategy Nash Equilibrium
in this game.

So it can't be that if this was
a species, if this was an animal

that came in,
that had two possible

strategies, aggression or
passivity, it can't be the case

in this particular game that you
end up with 100% aggression out

there (100% aggressive genes out there) or 100% unaggressive

genes out there.
In either case,

if you had 100% aggressive
genes out there,

then it would be doing very,
very badly and you get an

invasion of passive genes and if
you had 100% passive genes out

there,
you'd get an invasion of

aggressive genes.
You can't have a pure ESS,

a pure evolutionarily stable
gene mix out there.

So what does that suggest?
It suggests we should start

looking at mixed strategies.
There is a mixed strategy,

there is a symmetric mixed
strategy Nash Equilibrium in the

game and we could go through and
work it out.

We all know now (probably you've been laboring through the homework assignments) so you all
know how to find a mixed

strategy Nash Equilibrium in
this game,

you have to set the mix to make
the other player indifferent

between her two strategies.
But this is a game we've seen

already, it's essentially Battle
of the Sexes,

so you probably remember from a
week ago what that equilibrium

mix is.
Can anyone remember what the

equilibrium mix in Battle of the
Sexes is?
It's (2/3,1/3); it turns out that (2/3,1/3) is a

Nash Equilibrium here.
So if you go back to your notes

a week ago you'll find something
very much like that was a Nash

Equilibrium in the original
version of Battle of the Sexes.

A week ago it would have been
(2/3,1/3), (1/3,2/3) because

things weren't symmetric.
Now I've made things symmetric,

it's just (2/3,1/3) for both
players.

You can check it out.
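For review, the indifference calculation behind that mix can be written out explicitly; this sketch is mine, using the payoffs on the board (0 down the diagonal, 2 and 1 off it).

```python
from fractions import Fraction

# Row payoffs in the symmetric Battle of the Sexes / chicken game:
# u(A,A)=0, u(A,B)=2, u(B,A)=1, u(B,B)=0.
uAA, uAB, uBA, uBB = 0, 2, 1, 0

# If the opponent plays A with probability p, indifference requires
#   p*uAA + (1-p)*uAB = p*uBA + (1-p)*uBB,
# which rearranges to p = (uAB - uBB) / ((uAB - uBB) + (uBA - uAA)).
p = Fraction(uAB - uBB, (uAB - uBB) + (uBA - uAA))
print(p)  # 2/3: the equilibrium mix is (2/3, 1/3)
```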
So what's this telling us?

It's telling us that there's at
least an equilibrium in this

game in which 2/3 of the genes
are aggressive and 1/3 of the

genes are unaggressive.
But does that mean anything?

What could that mean?
In terms of biology what could

that mean?
Well, so far,

we've been looking at
evolutionarily stable pure

strategies and the
evolutionarily stable pure

strategies correspond to
evolutionarily stable situations

in nature which are what are
called monomorphic.

Monomorphic means one shape or
one type out there,

but you can also have
situations in nature where there

are actually stable mixed types
and they're called polymorphic.

It's probably not hyphenated,
it's probably just one word.

So you can have a monomorphic
population, that's what we've

focused on so far,
but you could also have a mixed

population.
Now for this to be a mixed

population we better change the
definition accordingly.

We'll come back and talk about
what it means in a second a bit

more, but first we better make
sure we have an okay definition

for it.
So what I'm going to do is I'm

going to drag this definition
down and do something you're not

meant to do usually in teaching,
I'm going to use the eraser to

correct my definition.
So here I have my pure strategy

definition and let me just
change it into a definition that

will allow for these polymorphic
populations.

So I'm going to change this
into a P, I'm going to change

this pure into mixed,
and everywhere you see an

Ŝ I'm going to put a P̂, and over here too, and everywhere you see an S' I'm going to put a P'.

The reason I'm doing this this
way is I want to emphasize that

there's nothing new here,
I'm just writing down the same

definitions we had before,
except I'm now allowing for the

idea of populations being mixed,
and I'm also,

just to note in passing,
I'm also allowing for the

possibility that a mutation
might be mixed.

Did I catch them all?
So I've just gone through the

definition you have and I've
switched everything from pure to

mixed.


So in our example does the mix
2/3,1/3 satisfy the definition

above?
Let's go through carefully.

So (2/3,1/3) is a Nash
Equilibrium so we've satisfied

part A, it's a symmetric Nash
Equilibrium, so we're okay

there.
Is this equilibrium a strict

equilibrium?
Is this mixed population 2/3

aggressive and 1/3 unaggressive,
or this mixed strategy,

is it a strict equilibrium?
How do deviations do against it?

Anybody?
Go ahead.

Student: Equally well.
Professor Ben Polak:

Equally well,
thank you.

So it can't be a strict Nash
Equilibrium, because if we

deviated to A we'd do as well as
we were doing in the mix,

or if we deviated to B,
we'd do as well as we're doing

in the mix.
Another way of saying it is,

an A mutation does exactly as
well against this mix as the mix

does against itself,
and a B mutation does exactly

as well.
In fact, that's how we

constructed the equilibrium in
the first place.

We chose a P that made you
indifferent between A and B,

so in a mixed Nash Equilibrium
it can't be strict since it is

mixed.
In a mixed Nash Equilibrium,

a genuinely mixed Nash
Equilibrium by definition,

you're indifferent between the
strategies in the mix.

So to show that this is in fact
evolutionarily stable we'd have

to show rule B,
so we need to check (let's give ourselves some room here) how

the payoff of this strategy,
let's call it P,

how P does against all possible
deviations and compare that with

how those deviations do against
themselves.

We have to make this comparison: how does this mix do against

all other possible mixes versus
how those mixes do against

themselves?
We'd have to do this,

unfortunately,
we'd have to check this for all

possible (and now we have to be careful) all possible mixed

mutations P''.
So that would take us a while,

it's actually possible to do.
If you do enough math,

it isn't so hard to do.
So rather than prove that to

you, let me give you a heuristic
argument why this is the case.

I'm going to try and convince
you without actually proving it

that this is indeed the case.
So here we are with this

population, exactly 2/3 of the
population is aggressive and 1/3

of the population is passive.
Suppose there is a mutation,

P'', that is more aggressive
than P, it's a relatively

aggressive mutation.
For example,

this mutation may be 100%
aggressive, or at least it may

be very, very highly
aggressive,

maybe 90% aggressive or
something like that.

Now I want to argue that that
aggressive mutation is going to

die out and I'm going to argue
it by thinking about this rule.

So I want to argue that the
reason this very aggressive

mutation dies out is because the
aggressive mutation does very

badly against itself.
Is that right?

If you have a very aggressive
mutant, the very aggressive

mutants do very,
very badly against themselves,

they get 0.
And that's going to cause them

to die out.
What about the other extreme?

What about a deviation that's
very passive?

So a very nice mutation,
a very passive type,

for example,
it could be 100% B or you know

99% B or 98% B,
how will that do?

Well it turns out,
in this game again,

it doesn't do very well against
itself and in addition,

the original mix,
the mixed P that is more

aggressive than this very
passive mutation does very well

against the mutation.
So the mix that's in there,

the mix that's in the
population already is relatively

aggressive compared to this very
passive mutation and so the

incumbent,
the relatively aggressive

incumbents are doing very,
very well on average against

the mutation,
and hence once again,

this inequality holds.
So just heuristically,

without proving it,
more aggressive mutations are

going to lose out here because
they do very badly against

themselves,
and more passive mutations are

going to do badly because they
make life easy for P which is

more aggressive.
So it wasn't a proof but it was

a heuristic argument and it
turns out indeed to be the case.
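The heuristic can also be eyeballed numerically. Here is a small replicator-dynamics sketch (my own illustration, not a proof), in which a type's population share grows in proportion to its relative fitness; perturbed populations return to the 2/3-1/3 mix.

```python
# Discrete replicator dynamics for the 2/3-1/3 game: x is the share of
# aggressive (A) types; payoffs as on the board (0 on the diagonal,
# A gets 2 against B, B gets 1 against A).
def step(x):
    f_a = (1 - x) * 2              # expected payoff to an A type
    f_b = x * 1                    # expected payoff to a B type
    f_bar = x * f_a + (1 - x) * f_b
    return x * f_a / f_bar         # share grows with relative fitness

for x0 in (0.10, 0.50, 0.95):      # interior, perturbed starting shares
    x = x0
    for _ in range(50):
        x = step(x)
    print(round(x, 4))             # each run settles at 2/3
```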

So in this particular game,
a game you could imagine in

nature, a game involving
aggression and passivity within

this species,
it turns out that in this

example the only equilibrium is
a mixed equilibrium with 2/3

aggressive and 1/3 unaggressive.
And this raises the question:

what does it mean?
What does it mean to have a mix

in nature?
So it could mean two different

things.
It could mean that the gene

itself is randomizing.
It could mean that the strategy

played by the particular ant,
squirrel, lion,

or spider is actually to
randomize, right,

that's possible.
But there's another thing it

could mean that's probably a
little bit more important.

What's the other thing it could
mean?

It could mean that in the
stable mix, the evolutionarily

stable population,
for this particular spider say,

it could be that there are
actually two types surviving

stably in these proportions.
If you go back to what we said

about mixed strategies a week
ago, we said one of the possible

interpretations of mixed
strategies is not that people

are necessarily randomizing,
but that you see a mix of

different strategies in society.
Again in nature,

one of the possible
interpretations here,

the polymorphic population
interpretation,

is that, rather than just have
all of the species look and act

alike, it could be there's a
stable mix of behaviors and/or

appearances in this species.
So let me try and convince you

that that's not an uninteresting
idea.

So again, with apologies I'm
not a biologist,

I've spent a bit of time on the
web this weekend trying to come

up with good examples for you
and the example I really wanted

to come up with,
I couldn't find on the web,

which makes me think maybe it's
apocryphal, but I'll tell you

the story anyway.
It's not entirely apocryphal,

it may just be that my version
of it's apocryphal.

So this particular example I
have in mind,

is to do with elephant seals,
and I think even if it isn't

true of elephant seals,
it's definitely true of certain

types of fish,
except that elephant seals make

a better story.
So imagine that these elephant

seals: it turns out that there
are two possible mating

strategies for male elephant
seals.

By the way, do you all know
what elephant seals are?

They're these big... people are
looking blankly at me.

You all have some rough image
in your mind of an elephant

seal?
You've all seen enough nature

shows at night?
Yes, no, yes?

Okay, so there are two male
mating strategies for the male

elephant seal.
One successful male mating

strategy is to be the head,
the dominant,

or a dominant elephant male,
male elephant seal,

and have as it were,
a harem of many female elephant

seals with which the male mates.

For the males in the room don't
get too happy,

these are elephant seals,
they're not you guys.

So one possible successful
strategy is to be a successful

bull elephant seal and have
many, many, many potentially

wives, so to be a polygamist.
Presumably to do that well a

good idea, a thing that would go
well with that strategy is to be

huge.
So you could imagine the

successful male elephant seal
being an enormous animal.

It looks like a sort of a
linebacker in football and

basically fights off all other
big elephant seals that show up.

But it turns out,
I think I'm right in saying if

I did my research correctly,
this is true among northern

elephant seals but not true
among southern elephant seals,

so the Arctic not the
Antarctic, but someone's going

to correct me.
Once it's on the web I'm going

to get floods of emails saying
that I've got this wrong.

But never mind.
So it turns out that this is

not quite evolutionarily stable.
Why is this not evolutionarily

stable?
What's the alternative male

strategy that can successfully
invade the large bull harem

keeper elephant seal?
Any guesses?

Anyone looking for a successful
career as an elephant seal,

as a male elephant seal?
Say that again?

Student: [Inaudible]
Professor Ben Polak:

Good, so good thank you.
Did people catch that?

Good, an alternative strategy
is to be a male elephant seal

who looks remarkably like a
female elephant seal.

Instead of looking like a
linebacker, they look like a

wide receiver.
I'm offending somebody in the

football team.
You get the idea, right?

What do they do?
They sneak in among these large

numbers of female elephant seals
and they just mate with a few of

them.
So they look like a female

seal, and they can hide among
the female elephant seals in the

harem,
and they mate with a few of

them and provided this is
successful enough it'll be

evolutionarily stable for the
female elephant seal to want to

mate with that too.
Now I forget if actually this

is exactly right,
but I certainly did

enough research over the weekend
to know it's right at least in

some species,
and the nicest part of this

story is that at least some
biologists have a nice technical

name for this strategy that was
well described by our friend at

the back,
and the name for this strategy

is SLF, and since we're on film
I'm going to tell you what the S

and the L are,
but you'll have to guess the

rest.
So this is sneaky,

this is little,
and you can guess what that is.

So this turns out to be
actually quite a common

occurrence.
It's been observed in a number

of different species,
perhaps not with the full added

color I just gave to it.
So having convinced you that

polymorphic populations can be
interesting, let's go back to a

case,
a more subtle case,

of aggression and
nonaggression,

because that seems to be one of
the most important things we can

think of in animal behavior.
So let's go back and look at a

harder example of this,
where we started.

So as these examples get
harder, they also get more

interesting.
So that's why I want to get a

little bit harder.


So the chicken game,
the Battle of the Sexes game,

is not a particularly
interesting version of

aggression and nonaggression.
Let's look at a more general

version of aggression versus
nonaggression,

and let's look at a game that's
been studied a lot by biologists

and a little bit by economists
called Hawk-Dove.

And again, just to stress,
we're talking about within

species competition here,
so I don't mean hawks versus

doves.
I mean thinking of hawk as

being an aggressive strategy and
dove as being a dovish,

a passive strategy.
So here's the game and now

we're going to look at more
general payoffs than we did

before.
So this is the hawk strategy,

this is the dove strategy (hawk and dove) and the payoffs are as

follows.
V + C; sorry, start again.
(V−C)/2 and (V−C)/2, and here we get V/2 and V/2, and here we get V and 0, and here we get 0 and V.

So this is a generalization,
a more interesting version of

the game we saw already.
Let's just talk about it a

little bit.
So the idea here is there's

some potential battle that can
occur among these two animals

and the prize in the battle is
V.

So V is the victor's spoils and
we're going to assume that V is

positive.
And unfortunately,

if the animals fight (so if the hawk meets another hawk and they fight one another) then there
are costs of fighting.

So the costs of fighting are C,
and again, we'll assume that

they're positive.
So this is the cost of fighting.

This more general format is
going to allow us to do two

things.
We're going to look and ask

what is going to be
evolutionarily stable,

including mixtures now.
And we're also going to be

allowed to ask,
able to ask,

what happens,
what will happen to the

evolutionarily stable mix as we
change the prize or as we change

the cost of fighting?
Seems a more interesting,

a richer game.
Okay, so let's start off by

asking could we have an
evolutionarily stable population

of doves?
So is D an evolutionarily

stable strategy?
I'll start using the term ESS

now.
So ESS means an evolutionarily stable strategy.
Is D an ESS?

So in this game,
could it be the case that we

end up with a population of
doves?

Seems a nice thing to imagine,
but is it going to occur in

this game in nature?
What do people think?

How do we go about checking
that?

What's the first step?
First step is to ask,

is (D, D) a Nash Equilibrium?
If it's evolutionarily stable,

in particular,
(D,D) would have to be a Nash

Equilibrium.
That's going to make it pretty

easy to check,
so is (D, D) a Nash Equilibrium

in this game?
It's not, but why not?

Because if you had a mutation,
I keep being tempted to say

deviation, but you want to think
of it as mutation.

If I had a mutation of hawks,
the hawk mutation against the

doves is getting V,
whereas, dove against dove is

only getting V/2,
so it's not Nash.


So we can't have an
evolutionarily stable population

of doves, and the reason is
there will be a hawk mutation,

an aggressive type will get in
there and grow,

much like we had last week when
we dropped Rahul into the

classroom in Prisoner's Dilemma
and he grew,

or his type grew.
So second question:

is hawk an evolutionarily
stable strategy?

So how do we check this?
What we have to look at once again, and the first question to ask, is: is (H,H) a Nash Equilibrium?

So is it a Nash Equilibrium?
Well I claim it depends.

I claim it's a Nash Equilibrium
provided (V−C)/2 is at least as

large as 0.
Is that right?

It's a Nash Equilibriumit's a
symmetric Nash Equilibrium

provided hawk against hawk
does better, or does at least as

well as, dove against hawk.
So the answer is yes,

if (V−C)/2 is at least as big
as 0.

So now we have to think fairly
carefully, because there's two

cases.
So case one is the easy case

which is when V is strictly
bigger than C.

If V is strictly bigger than C,
then (V−C)/2 is strictly

positive, is that right?
In which case,

what kind of a Nash Equilibrium
is this?

It's strict right?
So if V is bigger than C then

(H, H) is a strict Nash
Equilibrium.

The second case is if V is
equal to C, then (V−C)/2 is

actually equal to 0,
which is the same as saying

that the payoff of H against H
is equal to the payoff of dove

against hawk.
That correct?

So in that case what do we have
to check?

Well I've deleted it now,
it'll have to come from your

notes, what do I have to check
in the case in which there's a

tie like that?


What do I have to check?
I have to check: in this case I

need to check how hawk does
against dove,

because dove will be the
mutation.

I need to compare that with the
payoff of dove against dove.

So how does hawk do against
dove?

What's the payoff of hawk
against dove?

Anybody?
Payoff of hawk against dove.

It shouldn't be that hard.
It's on the board:

hawk against dove.
Shout it out.

V, thank you.
So this is V and how about the

payoff of dove against dove?
V/2, so which is bigger V or

V/2?
V is bigger because it's

positive, so this is bigger so
we're okay.

So what have we shown?
We've shown,

let's just draw it over here,
we've shown,

that if V is at least as big as
C, then H is an evolutionarily

stable strategy.
So in this game,

in this setting in nature,
if the size of the prize to

winning the fight is bigger than
the cost that would occur if

there is a fight,
then it can occur that all the

animals in this species are
going to fight in an

evolutionarily stable setting.
Let me say it again,

if it turns out in this setting
in nature that the prize to

winning the fight is bigger,
or at least as big as,

the cost of fighting,
then it will turn out that it

will be evolutionarily stable for all
the animals to fight.

The only surviving genes will
be the aggressive genes.
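To see both cases at once, one can plug the general Hawk-Dove payoffs into the same pure-strategy test from the start of the lecture; the specific numbers below are my own illustrations.

```python
# Hedged check (illustrative numbers) of the cases just argued,
# using the two-part pure-strategy ESS test from earlier.
def is_pure_ess(u, s, strategies):
    for m in strategies:
        if m == s:
            continue
        if u[(m, s)] > u[(s, s)]:          # (s, s) is not even Nash
            return False
        if u[(m, s)] == u[(s, s)] and u[(s, m)] <= u[(m, m)]:
            return False                   # tie, and rule (b) fails
    return True

def hawk_dove(V, C):
    """Payoff table from the board: (V-C)/2 for H vs H, V/2 for D vs D."""
    return {("H", "H"): (V - C) / 2, ("H", "D"): V,
            ("D", "H"): 0, ("D", "D"): V / 2}

print(is_pure_ess(hawk_dove(4, 2), "H", ["H", "D"]))  # V > C: strict Nash, True
print(is_pure_ess(hawk_dove(2, 2), "H", ["H", "D"]))  # V = C: tie case, still True
print(is_pure_ess(hawk_dove(2, 4), "H", ["H", "D"]))  # V < C: False
```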

What does that mean?
So what typically do we think

of as the payoffs to fight and
the costs of fighting?

Just to put this in a
biological context.

The fight could be about what?
It could belet's go back to

where we started fromit could
be males fighting for the right

to mate with females.
This could be pretty important

for genetic fitness.
It could be females fighting

over the right to mate with
males.

It could also be fighting over,
for example,

food or shelter.
So if the prize is large and

the cost of fighting is small,
you're going to see fights in

nature.
But we're not done yet,

why are we not done?
Because we've only considered

the case when V is bigger than
C.

So we also need to consider the
case when C is bigger than V.

This is the case where the costs
of fighting are high relative to

the prize in the particular
setting we're looking at.

So again let's go back to the
example, suppose the cost of

fighting could be that the
animal could lose a leg or even

its life,
and the prize is just today's

meal, and perhaps there are
other meals out there.

Then we expect something
different to occur.

However, we've already
concluded that even in this

setting it cannot be the case
that we have only doves in the
population.
We've shown that even in the

case where the costs of fighting
are high relative to the prizes,

it cannot be evolutionarily
stable only to have dove genes

around;
passive genes around.

So in this case,
it must be the case that if

anything is evolutionarily
stable it's going to be what?

It's going to be a mix.
So in this case,

we know that H is not ESS and
we know that D is not ESS.

So what about a mix?


What about some mix P?
We could actually put the P in
here.
We can imagine looking for a
mix (P, 1-P) that will be stable.
Now how do we go about finding

a possible mixed population that
has some chance or some hope of

being evolutionarily stable?
So here we are, we're biologists.

We're about to set up an
experiment.

We're either about to run an
experiment, or about to go out

and do some field work out
there.

And you want to set things up.
And we're asking the question:

what's the mix we expect to
see?

What's the first exercise we
should do here?

Well if it has any hope to be
evolutionarily stable,

what does it have to be?
It has to be a symmetric Nash

Equilibrium.
So the first step is,

step one: find a symmetric mixed
Nash Equilibrium in which people
will be playing (P, 1-P).
It's symmetric,

so both sides will be playing
this.

So this is good review for the
exam on Wednesday.

How do I go about finding a
mixed equilibrium here?

Shouldn't be too many blank
faces.

This is likely to come up on
the exam on Wednesday.

Let's get some cold calling
going on here.

How do I find the mixed strategy?
Just find anybody.

How do we find, how do I find a
mixed strategy equilibrium?

Student: Just use the
other player's payoff.

Professor Ben Polak: I
use the other player's payoffs

and what do I do with the other
person's payoffs?

Student: You set them
equal.

Professor Ben Polak: Set
them equal, okay.

So here it's a symmetric game.
Really, there's only one
population out there.
So all I need,

I need the payoff of hawk
against P, or (P,
1-P), I need this to be equal
to the payoff of dove against

this P.
So the payoff of hawk is going

to be what?
Just reading from up
there, let's use our pointer, so
hawk P of the time will meet
another hawk and get this
payoff.
So P of the time they'll get a
payoff of (V-C)/2 and 1-P of the
time they'll meet a dove and get
a payoff of V.
And dove against this same mix

(P, 1-P): P of the time they'll
meet a hawk and get nothing and
1-P of the time they'll meet
another dove and get V/2.

Everyone happy with the way I
did that?

That should be pretty familiar
territory to everybody by now,

is that right?
So I'm going to set these two

things equal to each other since
they must be equal,

if this is in fact a mixed
strategy equilibrium.

And then I'm going to play
around with the algebra.

So as to save time I did it at
home, so trust me on this, this
is an implication with the word
trust on top of it, trust me
that I got the algebra right or
check me at home.

This is going to turn out to
imply that P equals V/C.

So it turns out that there is
in fact a mixed Nash

Equilibrium.
There's a mixed Nash

Equilibrium which is of the
following form,

V/C and 1-V/C played by both
players.
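The indifference condition we just set up, and the claim that it solves to P = V/C, can be checked with exact arithmetic. A minimal sketch, using Python's `fractions` and example values of V and C that I have chosen, not ones from the lecture:

```python
from fractions import Fraction

def hawk_share(V, C):
    """Symmetric mixed equilibrium of Hawk-Dove when C > V > 0.

    Indifference: P*(V - C)/2 + (1 - P)*V = P*0 + (1 - P)*(V/2),
    which rearranges to P = V/C.
    """
    V, C = Fraction(V), Fraction(C)
    P = V / C
    hawk_payoff = P * (V - C) / 2 + (1 - P) * V
    dove_payoff = (1 - P) * V / 2
    assert hawk_payoff == dove_payoff   # both strategies earn the same
    return P

P = hawk_share(V=2, C=3)
print(P)        # 2/3 of the population are hawks
print(P * P)    # (V/C)^2: fraction of meetings that are actual fights
```

The last line anticipates the remark later in the lecture that the proportion of fights we observe is essentially V/C squared, since a fight needs two hawks to meet.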

Is this a strict Nash
Equilibrium?

I've found the Nash
Equilibrium, is it strict?

Everyone should be shouting it
out.

Is it strict?
It can't be strict because it's

mixed right?
By definition it's not.

It can't be strict because we
know that deviating to H,

or, for that matter,
deviating to D yields the same

payoff.
So it can't be a strict Nash

Equilibrium.
So we need to check something.

So since it's not strict, we
need to check whether U of P
against P' is bigger than U of
P' against itself, and we need
to check this for all possible
mutations P'.

Again, that would take a little
bit of time to do in class,

so just trust me on it and once
again,

let me give you the heuristic
argument I gave to you before.

It's essentially the same
argument.

So the heuristic argument I
gave to you before was:

imagine a P',
a mutation, that is more

aggressive than our candidate
equilibrium.

If it's more aggressive then
it's going to do very,

very badly against itself
because C is bigger than V in

this case.
So it's actually going to get

negative payoffs against itself.
Since it gets negative payoffs

against itself,
it turns out that will cause it

to die out.
Conversely, imagine a mutation

that's relatively soft,
that's relatively dovish,

this mutation is very good for
the incumbents because the

incumbents essentially beat up
on it or score very highly on

it.
So once again,

the more dovish mutation will
die out.

So again, that isn't a proof
but trust the argument.

We need to show this but it
does in fact turn out to be the

case.
So what have we shown here?

It didn't prove the last bit,
but what we've argued is that,

in the case in which the cost
of fighting in nature are bigger

than the prizes of winning the
fight,

it is not the case that we end
up with 100% doves.

So we don't end up with no
fights, for example:

no fights is not what we would
expect to observe in nature.

And we don't end up with 100%
fights: 100% fights is not what

you expect to see in nature.
What we end up with is a

mixture of hawks and doves,
such that V/C is the proportion

of hawks.
So the fights that occur are

essentially V/C squared.
We can actually observe those

fights in nature.
What lessons can we draw from

this?
So biology lessons.


So we used a lot of what we've
learned in the last day or so to

figure out what the ESS was
there.

We kind of did the nerdy part.
Now let's try and draw some

lessons from it.


So the first thing we know is,
I've hidden what we care about
here, so we know that if V
is smaller than C,

then the evolutionarily stable
mixed population has V/C hawks.

So let's just see how much of
this makes sense.

So as V goes up,
as the prizes go up, if you
took the same species and put
them into a setting in which the
prizes tended to be larger, what
would we expect to see?

Would we expect to see the
proportion of hawks up or down?

Up right: as V goes up,
we see more hawks.

What else do we see?
Not so surprisingly, as C goes
up, we look at settings where
the cost of fighting is
higher, we tend to see more
doves.

Now this is in ESS,
so more hawks in the

evolutionarily stable mix.
And more doves in the

evolutionarily stable mix.
It's possible,

of course, that the species in
question can recognize these two

different situations and be
coded differently,

to behave differently in these
two different situations but

that's beyond the class for now.
Perhaps a more interesting

observation is about the
payoffs.

Let's look at the actual
genetic fitness of the species

overall.
So in this mix what is the

payoff?
Well how are we going to figure

out what is the payoff?
So the payoff in this mix,

we can actually construct by
looking at the payoff to dove.

It doesn't really matter
whether you look at the payoff

to dove or the payoff to hawk.
Let's look at the payoff to

dove.
So the payoff was what?

It was, if you were dove,
then 1-V/C of the time,
you met another dove and in
that instance you get a payoff
of V/2, and it must be that the
payoff to being a hawk is the
same since they're mixing.
This is the payoff.

So what's happening to this
payoff as we increase the cost

of fighting?
What happens as C rises?

So just to note,
what happens as C goes up?

So you might think naively,
you might think that if you're

in a setting,
be it a social evolutionary

setting or a biological,
a nature evolutionary setting,

you might think that as the
cost of fighting goes up for you

guys in society,
or for the ants,

antelopes, or lions we're
talking about,

you might think the payoffs in
society go down.

Costs of fighting go up,
more limbs get lost and so on,

sounds like that's going to be
bad for the overall genetic

fitness of the species.
But in fact we don't find that.

What happens as C goes up?
The payoff goes up.

As C goes up,
the payoff goes up.

As we take C bigger,
V/C gets smaller,
which means 1-V/C is bigger.
Everyone see that?

Just look at that term
(1-V/C)(V/2),

it's actually increasing in C.
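It is easy to confirm numerically that the equilibrium payoff (1 - V/C)(V/2) rises with C. A quick sketch, with a value of V and a grid of C values that are purely illustrative:

```python
# Equilibrium payoff per animal in the stable mix: (1 - V/C) * (V/2).
def equilibrium_payoff(V, C):
    return (1 - V / C) * (V / 2)

V = 2.0
for C in (3.0, 4.0, 6.0, 10.0):
    print(C, equilibrium_payoff(V, C))
# The payoff column rises as C rises: fewer fights occur, and that
# more than compensates for each fight being costlier.
```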
So how does that work?

As the costs of fighting go up,
it's true that if you do fight,

you're more likely to lose a
finger,

or a limb, or a claw,
or whatever those things are

called, or a foot or whatever it
is you're likely to lose.

But the number of fights that
actually occur in this

evolutionarily stable mix goes
down and it goes down

sufficiently much to compensate
you for that.

Kind of a remarkable thing.
So these animals that actually

are going to lose a lot through
fighting are actually going to

do rather well overall because
of that mix effect.

I feel like it's one of those
strategic effects.

Now of course that raises a
question, which is what would

happen if a particular part of
this species evolved that had

lower costs of fighting?
It could regrow a leg.

Sounds like that would do
pretty well, and that would be

bad news for the species as a
whole.

Third thing we can observe here
is what's sometimes called

identification.


So what does identification
mean here?

It means that by observing the
data in nature,

by going out and filming these
animals behaving for hours and

hours,
or changing their setting in a

lab and seeing how they
interact, or changing their

setting in the field and seeing
how they interact,

we can actually observe
something, namely the proportion

of fights.
Perhaps we can do better now

and actually look at their
genetics directly since science

has evolved, and we can actually
back out the V and the C.

By looking at the proportion of
hawk genes out there or hawkish

behavior out there,
we can actually identify what

must be the ratio of V to C.
We can tell what the ratio V/C

is from looking at data.
We started off with a little

matrix, I've just written in V's
and C's, I didn't put any

numbers in there.
We can't tell what V is,

we can't tell what C is,
but we can tell what the ratio

is by looking at real world
data.

So if you spend enough hours in
front of the TV watching nature

shows you could back this out.
Not literally,

you need to actually do some
serious work.

So this is a useful input of
theory into empirical science.

You want theory to be able to
set up the science so you can

back out the unknowns in the
theory: and that's called

identification,
not just in biology but in

Economics as well.
Now there's one other thing

you'd like theory to be other
than identifiable.

You'd like theory to be
testable.

You'd like theory to make
predictions that were kind of

outside of the sample you
started with.

Is what I'm saying familiar to
everyone in the room?

This is a very familiar idea
I'm hoping to everybody,

slightly philosophical, but a
very familiar idea.

You have a new theory.
It's one thing for that theory

to explain the existing facts,
but you'd like it to predict

new facts.
Why?

Because it's a little easy, it
might be a little bit too easy

to reverse engineer a model or
theory to fit existing facts,

but if it has to deal with new
facts, that's kind of exciting,

that's a real test.
Does that make sense?

So you might ask about this
theory, you might say,

well it's just a whole bunch of
'just so stories'.

Does anyone know what I mean by
a 'just so story'?

It's children's stories,
written by Kipling,

and people sometimes accuse a
lot of evolutionary theory as

being 'just so stories',
because you know what the fact

is already, and you reverse
engineer it, you come up with a
story afterwards.
That doesn't sound like good

science.
So you'd like this theory,

this theory that matches Game
Theory with evolution,

to predict something that we
hadn't seen before.

And then we can go out and look
for it, and see if it's there,

and that's exactly what we now
have.

So our last example is a
slightly more complicated game

again.


The slightly more complicated
game has three strategies and

the strategies are called,
I'll tell you what the

strategies are called in a
second actually,

I'll give you the payoffs first
of all.

So once again this is a game
about different forms of

aggression and we'll look at
other interpretations in a

second.
Once again, V is going to be

the prize for winning,
0 is going to be the prize for

losing, and 1 is if it's a tie.
I'm just short sighted enough

that I can't read my own
writing.

So I hope I've got this right,
there we go.

So this is the game,
and we're going to assume that

the prize V is somewhere between
1 and 2.

So V you can think of as
winning, 0 is losing,

and 1 is if it's a tie.
Does anyone recognize what this

game essentially is?
It's essentially rock,

paper, scissors.
And it turns out that when

biologists play rock,
paper, scissors they give it a

different name,
they call it "scratch,

bite, and trample."
Scratch, bite,

and trample is essentially the
tactics of the Australian

football team.
So scratch, bite,

and trample are the three
strategies, and it's a little

bit like rock,
paper, and scissors.

How do we change it?
First, we added 1 to all the
payoffs to make sure there are
no negatives in there, and second
we added a little bit more than
1 to winning.
Sorry, a little bit less than 1
to winning: if we had added 1 to
everything then V would have
been 2, but we kept V somewhere
between 1 and 2.

So this is certainly a game,
you can imagine in nature,

there's three possible
strategies for this species and

the payoff matrix happens to
look like this.

So where's my prediction going
to come from?

Well since this is rock,
paper, scissors we know that

there's really only one hope for
an evolutionarily stable

strategy.
Since it's essentially rock,

paper, scissors what would,
if there is an evolutionarily

stable strategy or
evolutionarily stable mix,

what must it be?
(1/3,1/3,1/3).

So the only hope for an ESS is
(1/3,1/3,1/3), let's put that in
here.
So (1/3,1/3,1/3) and you can

check at home that that indeed
is a mixed-strategy equilibrium.

And the question is:
is this evolutionarily stable?

So we know it's a Nash
Equilibrium that I've given you,

and we know it's not a strict
Nash Equilibrium.

Everyone okay with that?
It can't be a strict Nash

Equilibrium because it's mixed.
So if this is an ESS it must be
the case, we'd have to check
that, let's call this P again
like we've been doing, we have
to check that the payoff from P
against any other P' would have
to be bigger than the payoff
from P' against itself.
We need that to be the case.

So let P' be scratch.
So let's compare these things.

So U of P against scratch is
what?

Well you're playing against
scratch, you are 1/3 scratch,

1/3 bite, 1/3 trample.
So 1/3 of the time you get

1, 1/3 of the time you get
nothing, and 1/3 of the time you

get V.
Is that right?

So your payoff is (1+V)/3.
How would we do if we're

scratch against scratch?
The payoff of scratch against

scratch is what?
No prizes for this,

what's the payoff of scratch
against scratch?

1.
Which is bigger, (1+V)/3 or 1?

Well look, V is less than 2 so
(1+V)/3 is less than 1.

So 1 is bigger.
So in this game the only hope

for an evolutionarily stable mix
was a (1/3,1/3,1/3) and it isn't

stable.
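The failed comparison can be scripted as well. A sketch of that check, assuming the payoff convention on the board (win V, lose 0, tie 1, with 1 < V < 2); the particular value of V is my own example:

```python
from fractions import Fraction

def mix_fails_ess_test(V):
    """Test the candidate mix (1/3, 1/3, 1/3) against the mutant 'scratch'."""
    V = Fraction(V)
    third = Fraction(1, 3)
    # The mix against scratch: a tie, a loss, and a win, each 1/3 of the time.
    u_mix_vs_scratch = third * 1 + third * 0 + third * V   # (1 + V)/3
    # Scratch against scratch always ties.
    u_scratch_vs_scratch = Fraction(1)
    # ESS needs the incumbent mix to beat the mutant here; with V < 2 it loses.
    return u_mix_vs_scratch < u_scratch_vs_scratch

print(mix_fails_ess_test(Fraction(3, 2)))   # True: the mix is not ESS
```

Since (1/3,1/3,1/3) was the only candidate, this one comparison is enough to rule out any evolutionarily stable mix in this game.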
So here's an example.

In this example there is no
evolutionarily stable strategy.

There's no evolutionarily
stable mix.

Then the obvious question is,
what does that mean in nature?

Can we find a setting that
looks like rock,

paper, scissors in nature in
which nothing is ES?

If nothing is ES what's going
to happen?

We're going to see a kind of
cycling around.

You're going to see a lot of
the scratch strategy,

followed by a lot of the
trample strategy,

followed by a lot of the bite
strategy and so on.

So it turns out that's exactly
what you see when you look at

these, the example I've left you
on the web.

There's an article in Nature in
the mid-90s that looked at a
certain type of lizard.
And these lizards come in three

colors.
I forget what the colors are,

I know I wrote it down.
One is orange;

one is yellow;
and one is blue.

And these lizards have three
strategies.

The orange lizard is like our
big elephant bull:

it likes to keep a harem of
many female lizards, or a large
territory with many female
lizards in it, to mate with.

But that can be invaded by our
SLF strategy which turns out to

be the yellow lizard.
The yellow lizard can invade

and just mate with a few of
these female lizards.

But when there are too many of
these sneaky yellow lizards,

then it turns out that they can
be invaded by a blue lizard:

and the blue lizard has much
smaller territories,

it's almost monogamous.
So what happens in nature is

you get a cycle,
orange invaded by yellow,

invaded by blue:
harem keeper invaded by sneaky,

invaded by monogamous, invaded
by harem keeper again.

Indeed, the population does
cycle around exactly as

predicted by the model,
so here's an example of

evolutionary theory,
via Game Theory,

making a prediction that we can
actually go off and test and

find.
This, for biologists,

was like finding a black hole,
it's a really cool thing.

We'll leave evolution here:
midterm on Wednesday,

we'll come back to do something
totally different next week.

See you on Wednesday.