
Title:
18. Imperfect information: information sets and subgame perfection

Description:
Game Theory (ECON 159)
We consider games that have both simultaneous and sequential components, combining ideas from before and after the midterm. We represent what a player does not know within a game using an information set: a collection of nodes among which the player cannot distinguish. This lets us define games of imperfect information; and also lets us formally define subgames. We then extend our definition of a strategy to imperfect information games, and use this to construct the normal form (the payoff matrix) of such games. A key idea here is that it is information, not time per se, that matters. We show that not all Nash equilibria of such games are equally plausible: some are inconsistent with backward induction; some involve non-Nash behavior in some (unreached) subgames. To deal with this, we introduce a more refined equilibrium notion, called subgame perfection.
00:00  Chapter 1. Games of Imperfect Information: Information Sets
18:56  Chapter 2. Games of Imperfect Information: Translating a Game from Matrix Form to Tree Form and Vice Versa
35:11  Chapter 3. Games of Imperfect Information: Finding Nash Equilibria
49:59  Chapter 4. Games of Imperfect Information: Subgames
01:10:17  Chapter 5. Games of Imperfect Information: Subgame Perfect Equilibria
Complete course materials are available at the Open Yale Courses website: http://open.yale.edu/courses
This course was recorded in Fall 2007.

Professor Ben Polak:
So today we have a lot of

stuff to get through,
but it's all going to be fairly

formal.
We're not going to have time to

play a game today.
So it's going to be a day where

we have to learn some new ideas.
So the reason we need to go

through some new formal ideas
today is we've kind of exhausted

what we can do with the ideas
we've gathered so far.

So, just to bring us up to date
with where we are:

in the first half of the
semester, so before the

midterm, we looked at
simultaneous move games.

And one way to think about
those simultaneous move games

were games where,
when I make my choice,

I don't know what you've done,
and, when you make your choice,

you don't know what I've done.
Since the midterm we've been

looking at simple examples of
sequential move

games (sequential move games
under perfect information) in

which I typically do know what
you did when I get to make my

choice.
And you know I'm going to know

what you did when I get to make
my choice.

What I want to be able to do
moving forward is I want to be

able to look at strategic
situations that combine those

two settings.
I want to be able to analyze

games which involve both
sequential moves and

simultaneous move games.
In particular,

I want to see how we can extend
the technique we've been

focusing on for the last few
weeks which is backward

induction.
I want us to see how we can

extend the notion of backward
induction to cope with games

where some parts are sequential
and some parts are simultaneous.

So we're going to look at a lot
of examples and we're going to

introduce some new ideas,
and I'm going to try and walk

you through that today.
So that's our goal.

Let's start with an example.
So here's a very simple game in

which Player 1 moves first,
and has three choices.

Let's call them up,
middle, and down.

And then Player 2 moves,
and Player 2 has two choices

from each of these nodes,
and we'll call the choices

suggestively,
up and down; up and down.

And here we'll just call them
left and right.

The payoffs are as follows,
(4,0), (0,4),

(0,4), (4,0),
(1,2), (0,0).

So this is just a standard game
of perfect information,

much like all the games we've
seen since the midterm.

In fact, it's a relatively easy
one.

So we know how to solve this
game.

We solve this game using what?
Using backward induction,

and that isn't so hard here.
We know that if Player 2 finds

herself up here,
she will choose 4 rather than

0;
if she finds herself here,

she'll choose 4 rather than 0;
and if she finds herself here,

she'll choose 2 rather than 0.
So Player 1 won't want to go up

here because he'll get 0,
and he won't want to go into

the middle because he'll get 0,
and he won't want to, but if he

goes down Player 2 will choose
left and Player 1 will get 1.

So Player 1 will choose down.
So backward induction predicts

that Player 1 chooses down and
Player 2 responds by choosing

left.
Just staring at this a second,

notice that the reason in this
game (taking a step back from

backward induction a second) the
reason Player 1 did not want to

choose either up or middle was
because that move was going to

be observed by Player 2,
and in either case Player 2 was

going to crush Player 1.
So if Player 1 went up,

Player 2 was playing this sort
of "strictly competitive game"

with Player 1 and Player 2 could
pick a choice that gave Player 2

a payoff of 4 and Player 1 a payoff of 0.
Conversely, if Player 1 chose

middle, Player 2 could crush
Player 1 by choosing up,

which gave, once again,
Player 2 4 and Player 1 0.

So there was a good reason here
to avoid going into the part of

the game following up or middle,
and the reason was 2 has a huge

second-mover advantage in those
parts of the game.

Is that clear to everybody?
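Incidentally, the backward-induction argument above is mechanical enough to sketch in code. This is just an illustration; the tree encoding and the function name are mine, not the lecture's:

```python
# Backward induction on the perfect-information tree from the lecture.
# Each of Player 2's decision nodes maps her actions to payoff pairs (p1, p2).
tree = {
    "up":     {"up": (4, 0), "down": (0, 4)},
    "middle": {"up": (0, 4), "down": (4, 0)},
    "down":   {"left": (1, 2), "right": (0, 0)},
}

def solve(tree):
    # Step 1: at each node, Player 2 picks the action maximizing her payoff.
    replies = {m: max(opts, key=lambda a: opts[a][1]) for m, opts in tree.items()}
    # Step 2: Player 1 picks the move maximizing his payoff given those replies.
    best1 = max(tree, key=lambda m: tree[m][replies[m]][0])
    return best1, replies[best1], tree[best1][replies[best1]]

print(solve(tree))  # Player 1 chooses down; Player 2 answers left; payoffs (1, 2)
```

Running this reproduces the prediction above: down, answered by left.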
So I now want to consider a

similar, but importantly
different game,

so I'm going to draw the game
again,

but before I draw it,
let me say what I'm going to

do.
So I want to introduce a new

idea, and the new idea is going
to be that Player 2 will not be

able to distinguish between up
or middle.

So let's just say it again.
So if Player 1 chooses down,

Player 2 will observe that,
just as we've done before in

our standard perfect-information
games,

but if Player 1 chooses either
up or middle,

I want to capture the idea that
Player 2 doesn't know which of

those two choices was made.
That's clearly going to change

the game a lot and the first
question is, how do we represent

that idea in a tree?
So let me try and show a good

way to represent that in a tree.
So the game has the same

structure to it.
Player 1 is again choosing

between up, middle,
or down.

And Player 2 once again is
choosing here,

up or down, up or down,
and here left or right.

And the payoffs haven't changed.
They're still (4,0),

(0,4), (4,0),
(0,4), (1,2) and (0,0).

So that's exactly the same as
you have in your notes already,

but now I want to adapt this
tree to show how we indicate

that Player 2 cannot
distinguish (cannot observe)

whether 1 chose up or middle,
but can observe if Player 1 has

chosen down.
The way we do that very simply,

we draw a dotted line between
the two nodes of Player 2

between which 2 cannot
distinguish.

So this idea here,
what this dotted line indicates

is that these two nodes are set
in the same information set.

So our new idea here is the
idea of an information set.


The payoffs are meant to be the
same on the left as on the

right.
So the idea here is that Player

2 cannot distinguish these two
nodes.

Player 2 knows that she's in
this information set:

she knows that Player 1 must
have chosen either up or middle,

she knows that Player 1 did not
choose down, but she doesn't

know whether she's really here
or here.

Now what happens in this game?
This game is a very different

game.
Why is it a different game?

Well let's try and apply that
loose intuition we talked about

before.
We said previously,

in the old game,
that if Player 1 chose up,

2 knew that Player 1 had chosen
up,

and observed that by choosing
down Player 2 could crush 1.

And if Player 1 chose middle,
Player 2 could observe that

Player 1 had chosen middle and
this time by choosing up could

crush 1.
The problem is that now in this

new game Player 2 doesn't know
whether she's here,

in which case she would want to
choose down, or here,

in which case she'd want to
choose up.

So Player 2's choice is not so
obvious anymore.

That simple backward induction
argument has disappeared.

Moreover, Player 1 knows that
Player 2 will not be able to

distinguish between up and middle,
so it isn't necessarily the

case that Player 1 will want to
choose down anymore.

It's still true that if Player
1 did choose down that Player 2

would be able to observe that
and will choose left,

so that part of the argument's
the same.

What do we think is going to
happen here?

Well we don't know,
but let me give a suggestion

what might happen here.
Player 1 might say,

hey I could randomize between
up and middle.

I could choose half the time up
and half the time middle.

If I choose half the time up
and half the time middle,

Player 2 isn't going to
know, in general isn't going to

know what I've done.
It isn't quite clear what

Player 2's going to do and since
I'm randomizing between up or

middle whatever Player 2's going
to do,

I'm going to get half the time
4 and half the time 0 for an

expected value of 2.
So to say it again,

so Player 1 might decide in
this game to randomize

fifty-fifty between up and
middle,

knowing that half the time
therefore he will get 4 and half

the time he'll get 0 for an
expected value of 2,

which notice is better than he
got by choosing down.
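That expected-value calculation can be checked in a couple of lines. A minimal sketch, with the payoff encoding my own:

```python
# Player 1 mixes fifty-fifty between up and middle. At her information set,
# Player 2 picks one action ("up" or "down") that applies at both her nodes.
# Player 1's payoffs at the four relevant terminal nodes:
p1_payoff = {
    ("up", "up"): 4, ("up", "down"): 0,
    ("middle", "up"): 0, ("middle", "down"): 4,
}
for reply in ("up", "down"):
    ev = 0.5 * p1_payoff[("up", reply)] + 0.5 * p1_payoff[("middle", reply)]
    print(reply, ev)  # expected value is 2 either way, beating the 1 from down
```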

So this change in this game,
change in the information in

this game, not only led to a
different game but led to a very

different outcome.
So here 1 might,

for example,
might randomize between up and

middle, and over here we know
exactly what 1 does,

1 chooses down.
So we get very different

outcomes because of this change
in information in the game,

and the theme of today is that
information is going to matter.

The way we're going to model
information is by thinking about

these information sets.
And as we go through today,

I want to start giving you some
formal definitions.

So this is the idea,
now let's look at the formal

definition.
There's going to be a lot of

writing today,
so I hope you brought a notepad

with some room on it.
So the first formal definition

of the day comes off that last
example.

The formal definition is the
idea that I want to capture:

I want to capture the idea that
players down the tree may not

know exactly what was done up
the tree.

And the formal definition is
going to go through the idea of

an information set.


So an information set of Player
i (in this case it involves Player

2, but more generally Player
i) is a collection, or a set if

you like, of Player i's nodes
between which (I guess it can be

more than two, so let's say
among which) i cannot
distinguish.
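To make the definition concrete, here is one minimal way an information set could be represented in code; the class and field names are my own illustration, not standard notation:

```python
from dataclasses import dataclass

@dataclass
class InformationSet:
    player: int    # whose information set it is
    nodes: list    # the nodes this player cannot distinguish among
    actions: list  # the same actions must be available at every node in the set

# Player 2 cannot tell whether Player 1 chose up or middle:
h = InformationSet(player=2,
                   nodes=["after_up", "after_middle"],
                   actions=["up", "down"])
print(len(h.nodes))  # 2: more than one node, so information is imperfect here
```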


Now it's going to turn out
that, by clever use of

information sets,
we're going to be able to use

our technology,
our technology of drawing

trees, to capture all sorts of
interesting and increasingly

complicated information
settings.

In this particular game,
it's the case that Player 1

knew that Player 2 was not going
to be able to distinguish

between up or middle in this
tree,

and Player 1 knew that Player 2
would be able to distinguish in

the left hand tree.
We can even use information

sets in a more elaborate tree to
capture the idea that Player 1

may not know what Player 2's
going to know.

But I won't do that now:
I'll leave that later,

and you'll see some examples of
that on the homework.

So we have our formal
definition.

This is going to be the first
of our big tools of the day,

but let me just put down a few
things that we have to be

careful about:
a couple of rules.

So these information sets have
to obey certain rules and

certain things are not allowed.
So in particular the following

is not allowed.
Here's a tree in which Player 1

moves first and Player 2 does
not observe Player 1's move.

So these two nodes are Player
2's nodes.

They're in the same information
set, which means Player 2 is not

meant to be able to distinguish
between these two nodes.

And suppose however the tree
looked like this.

Okay, so I claim that this is
crazy.

We couldn't allow this.
It wouldn't make any sense to

allow this.
Can anyone see why?

Why is this not really a
sensible tree?

Everyone see that?
Why is that not a sensible tree?

Student: If Player 2
knows that he has three choices

then he'll know he's at the top
node.

Professor Ben Polak:
Exactly, in this tree you

haven't got the payoffs in,
but if Player 2 observes that

she has three choices,
she knows she must be at the

top node.
If she observes she has two

choices she must be at the
bottom node.

So in this tree,
it was supposed to be the case

that 2 didn't know whether she
was here or here,

but merely by observing how
many choices she has,

she could infer whether she was
at the top node or the bottom

node.
So that can't make any sense.

So this is not allowed,
so we'll put a red cross

through that one.
Now the second thing that's not

allowed is a little bit more
subtle, and actually is an

interesting thing.
This is just kind of

bookkeeping, but the second
thing is more interesting.

So let's have a look at it.
Here's a more interesting tree.

Player 1 moves first.
Player 2 observes that move,

and Player 2 moves second.
And then at the bottom of this

Player 1 may have another chance
to move again.

So again I haven't put payoffs
in here.

Player 1 moves first;
Player 2 moves second;

and, if Player 2 chooses down
here or up there,

then Player 1 gets to move
again.

Now I claim again that this is
not a sensible tree.

It's not a sensible arrangement
of information sets.

Can anyone see why this isn't
sensible?

Why is this not sensible?
Steven?

Student: Player 1 knows
what node he's at based on the

first choice that he made.
Professor Ben Polak:

Exactly, so to get to the upper
node here for Player 1,

Player 1 must have chosen up
before, and to get to the lower

node here, Player 1 must have
played down before.

So provided that Player 1
remembers his or her own move,

she knows where she is.
Is that right?

So provided that Player 1 can
recall what she herself did

earlier on in the tree she
should be able to distinguish

these things.
So we're going to rule this

out, but I just want to make a
remark here.

There's an assumption in ruling
it out and the assumption is

we're assuming perfect recall or
perfect memory.


And people don't always, in the
real world, players don't always

have perfect recall.
There are two reasons, and

we're always going to assume
this, but let me just make a

remark.
There are two reasons why

people might not have perfect
recall.

One reason is,
like me, they're getting old.

They simply can't remember what
they did yesterday.

So while I'm driving home I
know roughly how many traffic

lights I have to go through
before I turn right,

but I sometimes forget which
traffic light I'm at and I turn

right too early or too late.
That doesn't happen to you

guys, but it happens to me as
I'm getting a bit senile,

so old age would rule out
perfect recall.

A more important example,
perhaps, is if players of games

are themselves institutions.
It's sometimes useful,

and we've often talked about it
in this class,

to imagine a player of a game
being a firm or a country,

or some kind of institution in
which the actual decisions may

be being taken by different
actual people within the firm,

institution, or country.
This assumption of perfect

recall is saying that the
players within the institution

knew what the other players
within that same institution

were doing.
If we're modeling General

Motors as one player,
this assumption is assuming

that the chief financial officer
and the chief executive officer

at GM are observing each other's
actions: are on the same page.

The left hand knows what the
right hand's doing.

We are typically going to
assume that, but just to make

the point: it is an assumption,
and it's quite interesting to

see what happens if you relax
it.

So with that in mind,
we can move to our next

definition,
and this is something I've

referred to early on in the
class, but I want to be formal

now.
Now we can be formal.

We've talked earlier on in this
class about the idea of perfect

information.
So, for example,

when we talked about Zermelo's
theorem, we talked about games

of perfect information.
We said informally what this

was: a game of perfect
information is a game where each

player in the game can observe
all previous moves.

That was our informal
definition, but we can now give

a formal definition very simply.
Perfect information is a

setting where (games of perfect
information are games where)

all information sets in
the tree contain just one node.
I want to be clear here.

What we're saying here is,
if we have a tree in which

every information set is a
singleton,

we basically don't have to
bother with any dotted lines:

that's a game of perfect
information.

And that shouldn't be a
surprise to anybody here because

that's exactly how we drew trees
since the midterm.

Is that right?
Of course, the novelty is we're

now going to be allowed to look
at games of imperfect

information.
The reason we're doing this is

because it will be
interesting, as in the example

we've just seen, to think about
games where information is not

perfect.
So what is the definition of

imperfect information?
Imperfect information's formal

definition is "not perfect
information."


We've defined what perfect
information is.

Imperfect information is the
rest.

In the real world there's a lot
of games that turn out to have

imperfect information.
There's lots of strategic

situations where I'm going to be
able to observe some things that

you've done,
but other things,

I won't know quite what you've
done.

Okay, let's go straight to an
example.

So I don't think we really need
to keep that definition very

focal, so let's get rid of that
board.

Let's do an example.
We'll be doing many examples

today.
So this example is going to be

a tree in which Player 1 moves
first.

Player 2 cannot observe this
move;

and sometimes rather than
labeling both of these nodes

with a 2, I'll just put a 2 on
the information set,

just to indicate that both of
these nodes belong to Player 2.

So Player 2 moves second.
And we'll call Player 1's move

up or down;
and we'll call Player 2's move

left or right,
kind of suggestively.


So what's the information set
here?

The information set is
indicating the fact that Player

2 cannot observe whether Player
1 moved up or down.

Player 2 cannot observe whether
Player 1 chose up or down.

Now, why does that matter?
I haven't put the payoffs in

yet but I will in a second.
It matters because had this

game been a game of perfect
information, had this

information set (had there been
two information sets here,

this dotted line not been
there), then Player 2 could have

chosen separately whether to
choose left or right at this

node,
or left or right at

this node.
But since Player 2 doesn't know

whether she's at the upper node
or the lower node (she doesn't

know whether Player 1 chose up
or down), she really only has one

choice to make here.
She's either choosing left at

both nodes or she's choosing
right at both nodes.

And just to pull it back to our
first example in the class,

we saw the same feature there.
When we moved from a game of

perfect information to a game of
imperfect information we reduced

the choices available for Player
2.

Here Player 2 could choose
separately up or down,

at these two different nodes.
But here Player 2 only makes

one choice that has to apply to
both nodes because Player 2

cannot distinguish those two
nodes.

So let's have a look and see,
once we put some payoffs on,

what it does in this particular
game.

So here's some payoffs:
(2,2), (-1,3),

(3,-1) and (0,0).
So once again Player 2 cannot

separately choose at the upper
node or the lower node,

she's either choosing left or
she's choosing right.

But it turns out that this game
is a little easier than the game

we started with.
Why is it easier than the game

we started with?
It's easier than the game that

we started with because,
from Player 2's point of view,

whether she thinks she's up
here or whether she thinks she's

down here, she has the same best
choice in either case.

If she thinks she's at the
upper node then by choosing left

she'll get 2 and right she'll
get 3, so right seems better.

If she thinks she's at the
lower node then choosing left

gets -1 and right gets 0,
so once again right is better.

So in fact, in this particular
game, regardless of whether

Player 2 thinks that Player 1
chose up,

and hence she's at the upper
node, or whether Player 2 thinks

that Player 1 chose down,
and hence she's at the lower

node, Player 2 is going to make
the same choice in this game,

namely right.
So notice that this particular

game actually solves out rather
like backward induction.

Even though Player 2's choice
is a little bit more

complicated (she doesn't know
where she is), it's actually

clear what Player 2 will do at
this information set.

Now if we push this forward a
little bit harder,

we can see why.
Player 1 in this game has two

strategies up or down.
And Player 2 has two

strategies, she either chooses
left or right.

Notice that she only has two
strategies because she has to

choose the same thing at these
two nodes, she doesn't know

where she is.
Okay, so let's draw up the

matrix for this game and see if
it looks familiar.

So Player 1 is choosing between
up or down.

And Player 2 is choosing
between left or right.

And the payoffs are as follows:
(up, left) is (2,2);

(up, right) is (-1,3);
(down, left) is (3,-1);

and (down, right) is (0,0).
So what game is this?

It wasn't meant to be a trick
question.

Somebody waved their arm in the
air.

What game is this?
Student: Prisoners'

Dilemma.
Professor Ben Polak:

This is Prisoners' Dilemma.
This is an old friend of ours.

This is Prisoner's Dilemma,
a game we saw the very first

day.
But notice what have we seen

here?
This is Prisoner's Dilemma that

we've seen many,
many times, that's almost

unbearably familiar to most of
you.

Now here's Prisoner's Dilemma
as represented the way in which

we talked about games before the
midterm.

But here is the same game.
This is also Prisoner's

Dilemma, but now I've drawn it
in a tree.

Here I drew it in a matrix,
and here I drew it in a tree.

Now that we have information
sets we can represent all the

games that we've studied before
the midterm.

All the games that were
simultaneous move games we can

study using trees by building
information sets.

What's the key observation here?
It doesn't really matter

whether Player 1 moves first or
Player 2 moves first.

It doesn't really matter what's
happening temporally in this

game.
What matters is information.

When Player 1 makes her move
she doesn't know what Player 2's

going to do.
She doesn't know what 2 is

doing.
And when Player 2 makes her

move she doesn't know what 1 is
doing: and that's a simultaneous

move game, even if time is
passing.

The key is information,
not time.

Now, on the way here I snuck
something in and I should just

tell you what I snuck in.
I snuck in what a strategy is.

I went from a tree,
or an extensive form game,

to a normal form game,
and we've already done that a

couple of times before in the
class.

We did it with the entry game,
for example,

about a week ago.
But there all we did was we

defined what a strategy was in a
game of perfect information.

And just to remind you,
a strategy in a game of perfect

information is a complete plan
of action.

It tells the player in question
what they should do at each of

their nodes.
But now we have to be a bit

more careful.
Once we move to imperfect

information, we can't have a

strategy tell you what to do at
each of your nodes,

because you yourself can't
distinguish between those nodes.

So we need to adapt our
definition of a strategy to make

it appropriate for these more
complicated games.

So let's just adapt it in the
obvious way.

Definition, I'll just define
pure strategies for now.

A pure strategy of Player i is
a complete plan of action, so

this is the same as before.
But what does it mean to be a

complete plan of action?
It can't tell me what to do at

every single node.
That can't be the right

definition because I can't
distinguish nodes.

So all that it can be doing is
telling me what to do at each

information set.
So it specifies what Player i

should do ("should" perhaps is the

wrong word, let's just say
"will do") at each of i's
information sets.


So if you go back about a week
you'll see almost exactly the

same definition of a strategy,
but the previous definition

told i what to do at each node
and this one just tells i what

to do at each information set.
What I'm doing is tidying up

the previous definition so we
can apply it to the more

interesting games we're going to
look at from now on.

So now we have the definition
of a strategy,

we can carry on the idea we've
just seen here.

So what's the idea here?
Any game you give me in the

form of a tree,
I can rewrite the game in the

form of a matrix.
So let's see some other

examples of that idea.


A lot of new ideas today,
but some of them are just

tidying up and kind of
bookkeeping, and some of them

are more interesting.
So let's start with a tree.

Let's make it a slightly more
interesting tree than the one

we've seen before.
Actually that's too

interesting, let's go a little
bit slower.

Let's have Player 1 have two
choices, and Player 2 have three

choices.
So here's a simple tree,

and let's put some payoffs in,
but let me just put some

letters in for payoffs rather
than put in numbers.

So we'll call these actions up
and down.

And we'll call this action
left, middle,

and right;
and left, middle, and right.

And we'll call the payoffs
(A1,A2), (B1,B2),

(C1,C2), (D1,D2),
(E1,E2), and (F1,F2),

so just to keep track of it.
I want to show you how we take

this tree and turn it into a
matrix.

So how do we turn it into a
matrix?

Well we look and say how many
strategies has Player 1 got and

how many strategies has Player 2
got?

So Player 1 here just has two
strategies, up or down.

And Player 2 has three
strategies, either left,

middle or right.
Again they can't choose

separately at these two nodes,
so they really just have three

choices: left,
middle, or right.

Leave a space here in your
notebook, leave a space to the

right here, and let's draw the
matrix for this tree down here.

So here's my matrix.
Player 2 is choosing left,

middle, or right and Player 1
is choosing up or down.

The payoffs go in the obvious
way.

So (A1,A2), (B1,B2),
(C1,C2), (D1,D2),

(E1,E2), and (F1,F2).
So everyone understand that was

just a simple exercise to show
we can go from an extensive

form, a tree,
to the normal form,

a matrix?
That was easy, right?

However, there's an interesting
thing here.

It isn't obvious that,
if I just gave you the matrix,

it isn't obvious that this is
the tree from which it came.

Let me draw another tree that I
claim corresponds to that same

matrix.
Here's another tree.

So this other tree instead of
having Player 1 move first,

it's going to have Player 2
move first.

Player 2 better have three
choices and we better call them

left, middle,
and right.

And it better be the case that
Player 1 is in one big

information set and Player 1
only has two choices,

which we'll call up and down
because that's what this matrix

is telling us.
It's telling us Player 2 had

three choices and Player 1 had
two choices.

So that's true in the matrix
I've drawn.

And let's be a little bit
careful where the payoffs are.

So (left, up),
that's easy:

that's going to be (A1,A2).
(Left, down) is going to be

(D1,D2).
(Middle, up) is going to be

(B1,B2).
(Middle, down) is going to be

(E1,E2).
(Right, up) is going to be

(C1,C2) and (right,down) is
going to be (F1,F2).

So I have to be a little bit
careful where I put in the

payoffs, but I think that's
right what I just did.

Notice that what I did here:
I started from this tree.

It was an easy operation to
construct the matrix,

so easy that it was kind of
boring,

and it's not that hard to see
that I can go the other way and

construct this other tree from
the matrix.

This is also a tree in which
Player 2 has three strategies

and Player 1,
this is all Player 1,

has two strategies.
So what, what have we learned

from this?
Well let's look at this more

carefully.
This tree is a tree in which

Player 1 moved first and Player
2 didn't observe Player 1's

choice.
Is that right?

This is a tree in which Player
2 moved first and Player 1

didn't observe 2's choice.
What are we noticing here?

They're really the same game.
There's no difference between

these two games.
They're really the same game.

It doesn't matter whether it's
Player 1 moving first and Player

2 who's unable to observe 1's
choice,

or whether it's Player 2 who is
moving first and Player 1 who is

unable to observe 2's choice.
All that matters is that

neither player could observe the
other person's choice before

they got to move,
they both correspond to exactly

the same game.
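Both trees flatten to one normal form, which is easy to see if you encode the payoffs once, keyed only by the pair of choices; a quick sketch with my own encoding of the letter payoffs:

```python
# One dictionary, keyed by (Player 1's action, Player 2's action), describes
# both trees: which player happened to move first never enters the encoding.
rows, cols = ["up", "down"], ["left", "middle", "right"]
labels = [[("A1", "A2"), ("B1", "B2"), ("C1", "C2")],
          [("D1", "D2"), ("E1", "E2"), ("F1", "F2")]]
payoffs = {(r, c): labels[i][j]
           for i, r in enumerate(rows) for j, c in enumerate(cols)}

# Reading off (down, middle) gives (E1, E2), just as in both trees:
print(payoffs[("down", "middle")])
```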
So what's the message here?

The message is something we've
talked about before in the

class, but I'm trying to be a
bit more formal about it.

The message is that what
matters is information,

not time.
Clearly, time isn't an

irrelevant thing.
I couldn't know something that

hasn't happened yet,
so time is going to have

effects in information,
but ultimately what matters is

information.
What do I know and when did I

know it?
So the key idea that we're

trying to capture with these
information sets,

just to repeat,
is what did the player know and

when did they know it,
that famous expression from the

Watergate hearings.
Okay, let's look at a more

interesting example,
and see if we can actually talk

about what's going to happen in
these games.


So by the end of today I want
to have enough machinery so we

can actually start analyzing
these games and predicting

what's going to happen.


So as we go on we'll get more
complicated, so let's get a

little bit more complicated now.
Once again, here's a game in

which Player 1 is going to have
two choices and we'll call those

choices Up or Down.
It's getting to be a familiar

theme.
Once again, Player 2 is going

to move next and now,
this time, just to keep things

simple,
we'll have Player 2 just have

two choices left or right,
or left or right.

But now I want to make things
more interesting,

let's have Player 1 move again.
So if Up, right happens then

Player 1 gets to move again,
in which case Player 1 is going

to choose up or down.
I'll use a [little]

u and a [little]
d to distinguish it from the

ones further to the left of the
tree.

So this is a very simple tree,
Player 1 moves first.

Player 2 moves second (I forgot
to put a 2 in here).

And then if (Up,
right) has occurred then Player

1 gets to move again.
Let's put some payoffs in.

So let's have this be (4,2),
(0,0), (1,4),

(0,0) again and (2,4).
Let's just carry on analyzing

this game using exactly the
methods we've been talking about

in the class today so far.
So the first thing I'm going to

do is I want to turn this into a
matrix.

The first thing to do on that
route is to try and figure out

how many strategies does Player
1 have, and how many strategies

does Player 2 have.
Before we even do that let's

try and figure out how many
information sets they have.

So I claim that Player 2 just
has the one information set,

is that right?
But Player 1 has two

information sets.
This information set at the

beginning of the game and then
potentially this second

information further down the
tree.

A strategy must tell you, a
strategy must tell the player

what to do at each of their
information sets.

So the strategies for Player 1
are what?

Well one strategy is Up and
then up again,

another strategy is Up and then
right,

another strategy is Down and
then up, and a fourth strategy

is Down and then right.
Notice something which we've

seen already in this class
before: there's a little bit of

a redundancy here.
These two Down strategies force

the game into a part of the tree
where this node will not arise.

To put it less grandly,
if Player 1 chooses Down,

she knows that she won't have
to make a choice of up or down

later on.


Yes.
Sorry, thank you.

Let me start again since that
was the wrong notation.

So Player 1's choices are Up
and then up;

Up and then down;
Down and then up;

and Down and then down.
Thanks Jake, sorry.

Now, why are there four
strategies?

It's a bit of a surprise
perhaps because if Player 1

chooses Down then she knows she
will never have to make a choice

at her second information set.
Nevertheless,

when we write down a strategy
we have to write down an

instruction for every single
information set.

So we include both of those
strategies.

Strategies for Player 2 here
are a little bit easier.

Strategies for Player 2 are
just left or right.

With that in mind,
let's draw up the matrix.

So Player 1 here has four
strategies and they are (Up,

up), (Up, down),
(Down, up), and (Down,

down).
Player 2 has two strategies,

and they are left or right.
Everyone okay so far?

We're just basically
transferring things across,

and then we have to transfer
the payoffs across so (Up,

up) followed by left is going
to be (4,2).

(Up, up) followed by right is
going to be (0,0).

So Up, right, up is (0,0).
((Up, down),
left) is the same as Up,
left, so it's going to be
(4,2).
((Up, down),

right) is going to be Up,
right, down so it's going to be

(1,4).
((Down, up),

left) is the same as saying
Down, left so it's going to be

(0,0).
((Down, up),

right) is going to be (2,4).
((Down, down),

left) is once again going to be
(0,0) and ((Down,

down), right) is once again
going to be (2,4).

Does everyone see how I got the
payoffs?

I just used those strategies to
tell me which way I'm going

through the tree.
If I combine them,

it gives me an entire path and
gets me to an end node.
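As a quick check on that transfer, here is a minimal Python sketch (the labels and the tuple encoding of paths are illustrative, not from the lecture) that follows each strategy profile through the tree to its end node:

```python
# Terminal payoffs of the tree, keyed by the path that reaches them.
terminal = {
    ("Up", "left"): (4, 2),
    ("Up", "right", "u"): (0, 0),
    ("Up", "right", "d"): (1, 4),
    ("Down", "left"): (0, 0),
    ("Down", "right"): (2, 4),
}

def outcome(s1, s2):
    """s1 = (Player 1's first move, second move); s2 = Player 2's move."""
    first, second = s1
    if first == "Down":
        return terminal[("Down", s2)]         # second node is never reached
    if s2 == "left":
        return terminal[("Up", "left")]       # game ends before the second node
    return terminal[("Up", "right", second)]  # Player 1 moves again

p1_strategies = [("Up", "u"), ("Up", "d"), ("Down", "u"), ("Down", "d")]
matrix = {(s1, s2): outcome(s1, s2)
          for s1 in p1_strategies for s2 in ("left", "right")}
for cell, pay in matrix.items():
    print(cell, pay)
```

The bottom two rows come out identical, which is exactly the redundancy discussed above.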

You can see this redundancy we
talked about.

We pointed out that these
things are kind of the same

thing, and you can see in the
matrix that the bottom four

squares of the matrix have
repetition.

This row is the same as that
row.

Everyone happy with that?
Okay, we have a matrix.

Let's analyze it by finding the
Nash equilibria in this game.

So to find the Nash equilibria
in this game,

we're going to find best
responses.

So let's just start by asking
what is the best response to

left?
So if Player 2 chooses left,

Player 1's best response is
either (Up, up) or (Up,

down).
If Player 2 chooses right then

Player 1's best response is
either (Down,

up) or (Down,
down).

Everyone okay so far?
If Player 1 chooses (Up,

up) then Player 2 is going to
choose left.

If Player 1 chooses (Up,
down) then Player 2's best

response is to choose right.
If Player 1 chooses (Down,

up) then Player 2's best
response is to choose right.

And if Player 1 chooses (Down,
down) then Player 2's best

response is to choose right.
So this is kind of slow and I

just want to be careful.
I'm going slow for a reason.

We're going to gradually get
harder, and I want to be a

little bit careful.
I can see people are looking a

little sleepy around the room.
I know it's kind of lunchtime.

If you see your neighbor
getting sleepy give them a good

sharp elbow because I think this
isn't a good time to fall

asleep.
Sometimes I'm worried you're

going to miss something,
and it's only going to get

harder and you're going to miss
things.

Alright, so what are the Nash
equilibria in this game?

We know how to do that.
The Nash equilibria must be

(Up, up) followed by left.
Make sure I get them all.

(Down, up) followed by right;
and (Down, down) followed by

right.
So we found these three Nash
equilibria.
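That best-response search is easy to brute-force; a small sketch along the same lines (same illustrative labels as before):

```python
# The 4x2 matrix from the board: rows are Player 1's strategies,
# columns are Player 2's, entries are (Player 1's payoff, Player 2's payoff).
matrix = {
    (("Up", "u"), "left"): (4, 2),   (("Up", "u"), "right"): (0, 0),
    (("Up", "d"), "left"): (4, 2),   (("Up", "d"), "right"): (1, 4),
    (("Down", "u"), "left"): (0, 0), (("Down", "u"), "right"): (2, 4),
    (("Down", "d"), "left"): (0, 0), (("Down", "d"), "right"): (2, 4),
}
rows = [("Up", "u"), ("Up", "d"), ("Down", "u"), ("Down", "d")]
cols = ["left", "right"]

def is_nash(r, c):
    u1, u2 = matrix[(r, c)]
    # A profile is Nash if neither player gains by a unilateral deviation.
    return (all(matrix[(r2, c)][0] <= u1 for r2 in rows)
            and all(matrix[(r, c2)][1] <= u2 for c2 in cols))

equilibria = [(r, c) for r in rows for c in cols if is_nash(r, c)]
print(equilibria)
```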
Okay, so that wasn't such a big

deal.
I've got three equilibria in

this game.
And suppose I'd simply given you
this game in the first half of
the semester: I hadn't shown you
the tree, you'd never seen this
tree, I'd just given you this
matrix and said find the Nash
equilibria in this game, and
that had been a question on the
midterm. We'd have stopped here.

We'd have said:
okay, I found these Nash

equilibria.
Maybe you'd have gone on and

found mixed ones I don't know
but essentially we'd be done at

this point.
Let's say it again.

If we'd started as we would
have done before the midterm

with me giving you a payoff
matrix and asking you to find

the Nash equilibria,
then at this point we'd be done.

We'd have found the three Nash
equilibria, at least the three

pure-strategy Nash equilibria.
The problem is, if we go back to
the tree, to the dynamic game
(the game that has some action
going on in it) and actually
look at this game,

it's not clear that all of
these Nash equilibria are really

equally plausible.
Can anyone see what might be a

bit implausible about some of
these Nash equilibria?

What's implausible about them?
Any taker on this?

Well, let's look at this game
again.

This game is a little bit
complicated.

It's not clear what one should
do here perhaps,

and perhaps it's not clear what
Player 2 should do here,

because after all,
Player 2 doesn't know where he

is and he doesn't know whether
Player 1,

if Player 1 gets to move again
is going to choose up or down

but: what's the but?
Can we get a mike on Patrick?

Student: So if you look
at it backwards,

you can cross out Player 1's
second choice.

He's always going to choose
down so that's (1,4) at that

node.
So then you know Player 2 is

always going to choose right
because his payoff is always 4.

So then Player 1's not going to
have, I mean, Player 1 knows
which to choose then.
He's going to choose down.

Professor Ben Polak:
Good.

So let's just walk through what
Patrick just said.

That's very good.
So if we just analyze this game

the way we've been taught to
analyze trees,

essentially using backward
induction,

we first of all observe that if
Player 1 gets to move again here

she'll know where she is,
and she'll know she's choosing

between 1 and 0.
She's going to choose down.

Is that right?
She's going to choose down.

But knowing this,
Player 2, even though Player 2

doesn't know where he is,
Player 2 actually has a pretty

easy choice to make.
He knows that if he chooses

left, he either gets 2 or 0,
but if he chooses right he gets

4.
4 is bigger than 2.

4 is bigger than 0.
So Player 2 is actually going

to choose right.
And given that,

given that Player 2 is going to
choose right,

Player 1 is essentially
choosing between 1,

if she chooses up,
which would be followed by

right and down;
and 2, which is what happens
if she chooses down followed by
right.

So this game we can essentially
analyze through backward

induction.
It's not quite backward
induction because we had to add
in this little piece about
Player 2 not knowing where he
was, but it turned out that, no
matter where he was, he had a
dominant strategy, or at least a
better strategy once he figures
out that Player 1 is going to
choose down.
Is that right?
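Patrick's rollback can be checked mechanically; a minimal sketch (payoffs hard-coded from the tree above, not a general algorithm):

```python
# Step 1: at Player 1's second node, d pays her 1 and u pays 0.
u1_second = {"u": (0, 0), "d": (1, 4)}
choice = max(u1_second, key=lambda a: u1_second[a][0])

# Step 2: Player 2 can't tell Up from Down, but given Player 1 will
# play d, right beats left at BOTH nodes in his information set.
after_up = {"left": (4, 2), "right": u1_second[choice]}
after_down = {"left": (0, 0), "right": (2, 4)}
right_is_better = (after_up["right"][1] > after_up["left"][1]      # 4 > 2
                   and after_down["right"][1] > after_down["left"][1])  # 4 > 0

# Step 3: given right, Player 1 compares 1 (from Up) with 2 (from Down).
first_move = "Up" if after_up["right"][0] > after_down["right"][0] else "Down"
print(choice, right_is_better, first_move)
```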

If we go back and look at these
Nash equilibria,

the prediction that we just
got, which is what?

Down for Player 1,
right for Player 2 and then

down again for Player 1.
That strategy is this one.

So one of these Nash equilibria
corresponds to our sensible

analysis of this tree,
but the other two do not.

These two Nash equilibria are
inconsistent with backward

induction.
They're perfectly good Nash

equilibria, if we'd given you
this matrix at the midterm

you'd have thought that it's
just fine.

But it turns out both of these
Nash equilibria involve Player 1
choosing up at her second
information set, which we know
Player 1 is not going to do if
that node is reached.
And one of these Nash
equilibria involves Player 2
choosing left, which, in fact,
he's only choosing because he
thinks Player 1 is going to
choose up, which, in fact,
we just argued Player 1 is not
going to do.
The people at the back,

there's a little bit too much
volume bouncing off the wall

there, so just keep it down in
the balcony.

Thank you.
So these two Nash equilibria,

they're perfectly good Nash
equilibria of the game,

but they don't make any sense.
They're completely inconsistent

with the way we've learned to
talk about games.

Now we've seen this before.
We saw it in the entry game.

This is a much more
complicated, much more

interesting example,
but we saw in the entry game

when there was one entrant
entering into a market,

that in that game there were
actually two Nash equilibria and

one of them we argued was
incredible.

Here it's a bit more
complicated, but nevertheless,

these two equilibria seem like
bogus equilibria or phony

equilibria,
or equilibria we wouldn't

really believe in.
The reason we don't believe in

them is that they don't
correspond to backward induction

and our common sense intuition
is about backward induction.

So we need some new notion,
the aim of the class has been

what?
We want to be able to model

games that have both sequential
moves and simultaneous moves,

and we want to be able to look
at the games and use our

techniques from both halves of
the class.

We want to be able to use the
idea of Nash equilibrium from

the first half of the class,
and we want to be able to use

the idea of backward induction
from the second half of the

class.
But what we're learning here is

that Nash equilibrium,
if we just take the notion of

Nash equilibrium and plunk it
down on these sequential move

games,
it will produce equilibria that

don't make any sense.
So we need a more refined

notion of equilibrium,
a better notion of equilibrium

than just Nash equilibrium to
deal with these settings where

we have both simultaneity and
sequential moves.

We have both some perfect
information and some imperfect

information.
That was one example.

Let me give you a second
example if that example wasn't

yet convincing.
Let me leave that example up.


So far we've seen that Nash
equilibrium gets us into trouble

in these games and we've seen it
got us into trouble because it

basically conflicted with our
backward induction intuitions.

Now I'm going to show you a
different game and we're going

to see again that Nash
equilibrium is going to get us
into trouble.
This is going to be a
three-player game.
We will get more complicated as

we go along.
So another example,

this time with three players.
So as the examples get harder I

need you to be more alert to see
if you can follow them.

So this is a more complicated
tree.

Here's a tree in which Player 1
moves first and chooses between

A or B and if Player 1 chooses A
the game is over,

she gets 1, and the other two
players get nothing.

If she chooses B then Players 2
and 3 get to play a little game

down here in which 2 moves first
in this little subgame and 3

moves second,
and the payoffs in this

subgame are as follows.
Again, using Player 1's payoff

first.
So there's (0,1,1), (0,0,2),
(0,0,0) and (2,1,1).
So this is quite a complicated

game, it's got three players for
a start, so it's going to be a

little bit hard to draw it up in
a matrix,

but nevertheless,
let me try and do that.

So I claim that we can model
this game as follows.

It's a game in which Player 1
is choosing which matrix,

let's call this Matrix A and
Matrix B.

Player 1 is choosing the
matrix, Player 2 is choosing,

let's call them up and down:
Player 2 is choosing up or

down.
And Player 3 is choosing left

or right.
Notice in this game Players 2

and 3 actually can observe the
choice of A or B to start with.

So let's try and put in the
payoffs in the correct places.

It's not always easy to do,
but let's try.

So A is easy,
if Player 1 chooses A,

then the payoffs in this matrix
are somewhat trivial.

Because if Player 1 chooses A,
whatever anyone else does the
payoff is (1,0,0).


Somewhat uninteresting matrix
over there.

But if Player 1 chooses B then
life gets more interesting.

Then if Player 2 chooses up and
Player 3 chooses left,

we end up here so that's
(0,1,1).

If Player 2 (this is 2 and this
is 3) chooses up and Player 3
chooses right then we're at
(0,0,2), so this is (0,0,2)
going in here.

If Player 2 chooses down and
Player 3 chooses left then we're
at (0,0,0).
Everyone okay with that?
If Player 2 chooses down and
Player 3 chooses right then
we're down here, which is
(2,1,1).
Okay so here's a little game.

Player 1 is choosing the
matrix, Player 2 is choosing the

row in the matrix,
albeit trivially on the left

hand side and Player 3 is
choosing the column in the

matrix,
again, albeit trivially on the

left hand side.
We don't really care about this

picture very much.
Okay, so now what?

Well, once again we could look
for Nash equilibria in this

game.
It turns out there are lots of

Nash equilibria in this game.
Let me just show you one Nash

equilibrium and then we'll talk
about it.

So I claim that there are lots
of Nash equilibria,

and one of them is the Nash
equilibrium (A,

up, left).


So let's just see where that is
in the tree first of all.

So Player 1 chose A,
Player 2 chose up and Player 3
chose left, but it followed A so
we end up here, we end up at
(1,0,0).
So (A, up, left) is this box in

the tree.
Now, let's just check that that

actually is a Nash equilibrium.
So we all know how to do this

from the first half of the
class.

So to check that that's a Nash
equilibrium we better check that

no individual player can do
better by deviating.

So let's start with Player 1.
If Player 1 deviates holding

Player 2 and 3 fixed,
then Player 1 will be switching

the matrix from Matrix A to
Matrix B.

Is that correct?
So we'll move from this box in

the left hand matrix to the
equivalent box in the right hand

matrix.
Player 1's payoff will go from

1 to 0, so Player 1 doesn't want
to deviate.

Everyone happy with that?
Player 1 doesn't want to

deviate here.
How about Player 2?

If Player 2 deviates,
holding Players 1 and 3 fixed,

then Player 2 is going to
switch rows in this matrix,

so we'll move from this box to
this box.

Player 2 was making 0 before.
She's still making 0,

so she has no incentive to
deviate.

And the same argument applies
for Player 3 because she will be

choosing the column holding the
row in the matrix fixed,

so once again she gets 0 in
either case.

So everyone happy with that?
So that actually is a Nash
equilibrium, and again, if this
had been on the midterm I could
have set this up, I could have
given you these matrices or the
story behind them, and I could
have asked you whether this was
a Nash equilibrium, and the
answer would have been yes.
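The deviation check we just did by hand can be written out directly; a small sketch (profile tuples are ordered Player 1, Player 2, Player 3; only the four profiles the check touches are listed, since any profile in which Player 1 plays A pays (1,0,0) regardless of what Players 2 and 3 do):

```python
payoffs = {
    ("A", "up", "left"): (1, 0, 0),    # the candidate equilibrium
    ("B", "up", "left"): (0, 1, 1),    # Player 1 deviates to B
    ("A", "down", "left"): (1, 0, 0),  # Player 2 deviates to down
    ("A", "up", "right"): (1, 0, 0),   # Player 3 deviates to right
}
profile = ("A", "up", "left")
alternatives = {0: "B", 1: "down", 2: "right"}  # each player's other action

def gains(i, alt):
    # Does player i do strictly better by switching to alt, all else fixed?
    dev = list(profile)
    dev[i] = alt
    return payoffs[tuple(dev)][i] > payoffs[profile][i]

assert not any(gains(i, alt) for i, alt in alternatives.items())
print("no player gains by deviating from (A, up, left)")
```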

But I claim that once again
this is just not a believable

Nash equilibrium.
It is a Nash equilibrium,

formally it's a Nash
equilibrium, but it's not a

plausible prediction for how
this game is going to be played.

Why is it not a plausible
prediction for how this game's

going to played?
Anyone see?

Stare at the tree a bit.
So with just the information up
here, the pre-midterm
information, it's fine.

But knowing about the actual
structure of this game I claim

this makes no sense at all.
Why does it make no sense?

Well, notice that if Player 1
were to switch her action from

the prescribed action A to
action B, then we'd be here.

Notice that the tree from here
on in looks like a little game,

is that right?
The tree from here on looks

like a little game.
So this thing here,

let's put it in green,
is a little game within the

game.
It's a subgame.

This subgame really only
involves two players.

The two players that it
involves are Player 2 and Player

3.
Player 1 is done,

Player 1 has put us into this
game, but now this little

subgame, it's a little subgame
involving just Player 2 and

Player 3.
So we can analyze this little

subgame.
If we analyze this little

subgame, what will it give us?
What will we find?

So let's look at this subgame.
So look at the green subgame,

the game that would have
happened had Player 1 chosen B.

This is a subgame involving
just Players 2 and 3.

So why don't I just forget
Player 1.

We know, I mean, Player 1 is
part of the game, he's getting
payoffs, but Player 1 has made
his move; he's not really
involved any more.

So let's just look at this game
as a game involving Players 2

and 3, and let's look at the
matrix for Players 2 and 3.

So it actually corresponds to
the matrix above,

it's a matrix in which Player 2
is choosing up and down.

There it is up and down and
simultaneously Player 3 is

choosing left or right.
There it is left or right at

this information set and the
payoffs are (1,1), (0,2), (0,0)
and (1,1).


So this is, I claim,
a representation of this little

green game.
Perhaps we should put this in

green as well.
This thing corresponds to that

thing.
Everyone okay with that?

So if Player 1 had chosen B
rather than A,

then we'd be involved in a
little game, a game within a

game, or a subgame involving
just Players 2 and 3.

And we can analyze that game.
That's a straightforward game.

Here it is.
And what would we do with that

game?
We'd look for the Nash

equilibria in that game.
So let's look for the Nash

equilibrium of this game.
So what do we notice about this

game?
So if Player 3 chooses left

then Player 2 would rather
choose up.

If Player 3 chooses right then
Player 2 should choose down.

If Player 2 chooses up then
Player 3 would rather choose

right, because 2 is bigger than
1.

And if Player 2 were to choose
down then Player 3 would choose
right again, because 1 is bigger
than 0.

So in fact, in this little
subgame, actually,

Player 3 has a dominant
strategy.

If it turned out that we got
involved in this little

subgame, Player 3 has a
dominant strategy which is to

play right,
and moreover,

this subgame has just one Nash
equilibrium.

If I'd given you this subgame
on its own, it's clear that the

Nash equilibrium of this
subgame (this game within a
game) is (down, right).

So what's that telling us?
It's telling us that if Players
2 and 3 ever get called upon to
play in this game (and that only
happens when Player 1 chooses
B),

we know from when we were young
or at least from before the

midterm, we know that they're
going to play Nash equilibrium

in that subgame.
And the Nash equilibrium in the

subgame is going to have Player
3 choosing right and Player 2

choosing down.
But the equilibrium we talked
about before, (A, U, L), doesn't
involve Player 2 choosing down.

In fact she chose up.
And it doesn't involve Player 3

choosing right.
In fact she chose left.

So let's sum up.
We found an equilibrium of this

game, this equilibrium of this
game was (A, U,

L) but I claim this is not a
plausible equilibrium.

It's not a plausible
equilibrium because it predicts

that if we actually were to play
the game within the game,

we wouldn't play equilibrium.
Let me say it again,

in the whole game (A,
U, L) is an equilibrium.

But I claim it's a silly
equilibrium because it involves

the prediction that,
if in fact we ever got into the

game within the game,
we would no longer play

equilibrium,
and that doesn't seem right.

If we're going to believe in
equilibrium, we should be

consistent and believe in
equilibrium throughout.

So this brings us to a new idea
and the new idea is going to

have two parts to it.
The first part is kind of on

the board already.
It's something we talked about

informally.
It's the notion of a subgame.

What's a subgame?
It's a game within a game.

I've been using that informally
but we need to start thinking

about, more formally,
what it means.


So I talked about it informally.
I said that green object is the

game that would be played were
Player 1 to choose B.

We talked about other subgames
in this class.

We talked about the subgame
that would happen in the entry

game if one of those rival pizza
companies moved in,

in the Miami market or
something, the game within a

game.
When we talked about the Tour

de France we talked about there
being a game within a game that

is about when you break away.
But now I want to be formal

about this notion of a game
within a game and introduce some

nomenclature.
So the formal definition is

this.
A subgame is a part of a game,
informally a part that looks
like a game within the tree,
and it has three properties.


It satisfies the following
three properties.

So, one: since it looks like a
game itself, the subgame must
start from a particular point.
It starts from a single node.

Let's just look at the example.
In the example we just looked

at, the subgame started from
this node here.


Second, it comprises all
successors to that node.


So in our example,
here's our subgame.

Here's our green subgame.
Here's the node it starts from.

Here are all the nodes that are
successors of that node:

these are the children,
and these are the

grandchildren.
If you have this grandparent

node you have to have all of his
children, and all of his

grandchildren.
So it comprises all the

successors of that node.
And finally, and this is
important, it does not break up
any information sets.


So a subgame,
informally, it's just a game

within the game.
But slightly more formally,

I can't put one node that's
part of an information set into

this subgame unless I'm going
to put all the nodes that are

part of that information set
into the subgame.
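Those three properties can be checked mechanically; here is a sketch on a toy copy of the three-player tree (node names like "b" and "c1" are made up for illustration; properties one and two hold by construction, because we take a single start node together with all of its successors):

```python
# children maps each node to its successors; info_sets lists the
# information sets of the decision nodes.
children = {
    "root": ["a_end", "b"],  # Player 1: A ends the game, B continues
    "b": ["c1", "c2"],       # Player 2: up or down
    "c1": ["t1", "t2"],      # Player 3: left or right...
    "c2": ["t3", "t4"],      # ...without being able to tell c1 from c2
}
info_sets = [{"root"}, {"b"}, {"c1", "c2"}]

def descendants(node):
    out = set()
    for child in children.get(node, []):
        out |= {child} | descendants(child)
    return out

def is_subgame(start):
    # Properties 1 and 2 by construction: one start node plus all
    # its successors.  Property 3: never split an information set.
    nodes = {start} | descendants(start)
    return all(s <= nodes or not (s & nodes) for s in info_sets)

print(is_subgame("b"), is_subgame("c1"), is_subgame("root"))  # True False True
```

Starting from "c1" fails because it would pull in only half of Player 3's information set; the whole game always passes.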

Let's have a look at some
examples.

We've got one example up there,
the entry game looked something

like this.
So what are the subgames here,

no secrets here,
this is a subgame.

There's actually another
subgame, can anyone see what

the other subgame is?
The whole game is a subgame.

The whole game is itself a
subgame, somewhat trivially.

So this particular game,
which is the schematic of the

entry game, it has actually two
subgames but only one proper

subgame.
Here's a more complicated

example.


This is actually going to be
quite a complicated example just

to make life interesting.
So 1 is going to move,

then 2 is going to move,
and then 1 is going to move

again.
This is all one big information

set for Player 1.
And 1 is going to move like

this.
So again, without payoffs,

this is a little tree and the
key point here is this is an

information set.
Let's look, let's stare at this
tree for a second and figure out
what are and aren't subgames.
So first of all,

this was a subgame and this
was a subgame.

What about this thing here?
Is that a subgame?

It's not a subgame.
What rule does it break?

It's breaking up an information
set, so that's no good because

of rule three.
What about this thing?

That doesn't break up an
information set.

I've got the whole information
set in there.

Is that any good?
No, that's no good because it
doesn't start from a single
node; it violates rule 1.
If we do this,

we look at this piece.
That piece there,

that's also no good.
Why is that no good?

Again it breaks up an
information set,

so this is no good again
because of rule 3.

So you can practice at home
drawing trees and trying to

identify what are and what are
not subgames.


So with the definition of a
subgame now formal (it's
basically just formalizing
something we've talked about
before, which is the idea of a
game within the game) I want to
introduce what's going to be our
new solution concept.

And this is going to be the
solution concept we're going to

use essentially almost until the
final.

Definition, so just remember
what our task is.

Our task is to come up with a
solution concept that picks up

the idea from the first half of
the semester,

namely Nash equilibrium,
but does so in a way that

respects what we've learned in
the second half of the semester,

namely that games have
sequential elements and people

move by backward induction.
So, in particular,

what we want to rule out are
those Nash equilibria that

instruct players down the tree
to play in subgames according

to strategies that are not Nash
equilibria.

Say it again,
we want to rule out those Nash

equilibria that instruct people
way down the tree to play

according to something which is
not a Nash equilibrium.

We want our new notion to say:
wherever you find yourself in a

tree, play Nash equilibrium and
that's exactly what the

definition's going to say.
So a Nash equilibrium
(S1*, S2*, …, SM*) is a subgame
perfect equilibrium (that's an
SPE) if it induces a Nash
equilibrium in every subgame of
the game.
So to be a subgame perfect

equilibrium, it has to itself be
a Nash equilibrium of course,

but it also has to instruct
players to play a Nash

equilibrium in every subgame.
Let's take that immediately

back to our examples.
In this example,

we know that this is a
subgame.

We know that,
in this subgame,

there is only one Nash
equilibrium;

and that Nash equilibrium
involves Player 2 choosing down

and Player 3 choosing right.
So we know that Player 2 is

going to choose down according
to that equilibrium,

and Player 3 is going to choose
right according to that

equilibrium.
So if we now look for an
equilibrium of the whole game,
let's go back to Player 1's
choice: Player 1, if they choose
A, will get 1; if they choose B
then they know that this Nash
equilibrium will be played, so
they'll get 2.
They prefer 2 to 1,

so the subgame perfect
equilibrium here is Player 1

chooses B, Player 2 chooses
down, and Player 3 chooses

right.
This is an equilibrium of the

game and it induces an
equilibrium in the subgame.

So in that example,
the subgame perfect

equilibrium is found by first of
all looking in the subgame;

find the equilibrium in the
subgame;

and then go back and look at
the equilibrium in the whole

game.
The equilibrium that we end up

with, it is a Nash equilibrium
in the whole game,

but more importantly,
it induces a Nash equilibrium

in the subgame.
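Those two passes can be put together in a short sketch (the subgame payoffs follow the little matrix above, with right giving Player 3 strictly more in both rows, which is what makes it dominant):

```python
# Payoffs (Player 2, Player 3) in the subgame after B.
sub = {("up", "left"): (1, 1), ("up", "right"): (0, 2),
       ("down", "left"): (0, 0), ("down", "right"): (1, 1)}
rows, cols = ("up", "down"), ("left", "right")

# Pass 1: find the Nash equilibria of the subgame by brute force.
sub_ne = [(r, c) for r in rows for c in cols
          if all(sub[(r2, c)][0] <= sub[(r, c)][0] for r2 in rows)
          and all(sub[(r, c2)][1] <= sub[(r, c)][1] for c2 in cols)]

# Pass 2: Player 1 compares A (payoff 1) against B followed by the
# subgame equilibrium, where Player 1's payoff is 2.
p1_value = {"A": 1, "B": 2}
spe = (max(p1_value, key=p1_value.get), *sub_ne[0])
print(spe)  # ('B', 'down', 'right')
```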
Let's just go back to our other

example then I'll stop.
So our other example was here.

Here was our other example.
And we claimed,

hang on everybody,
we claimed that the good

equilibrium here,
the one we believed in was

(Down, down, right).
Where are the subgames in this

game?
Where are the subgames in this

tree?
Anybody?

So I claim there's only one
real subgame here and that's

this piece.
This is a subgame.

What's the Nash equilibrium of
this somewhat trivial subgame?

The Nash equilibrium of this
somewhat trivial subgame is

that Player 1 must choose down.
So for a Nash equilibrium to be
a subgame perfect equilibrium
(here are the three Nash
equilibria: one, two, three),
it's got to instruct Player 1 to
choose down in the trivial
subgame, and here it is.
This is our subgame perfect

equilibrium in this game.
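The same filtering can be sketched in a few lines: of the three Nash equilibria found earlier, keep only those that prescribe the optimal choice at the one proper subgame, Player 1's second decision node (labels are illustrative, as before):

```python
# The three Nash equilibria of the matrix, as found earlier.
nash = [(("Up", "u"), "left"), (("Down", "u"), "right"), (("Down", "d"), "right")]

# In the trivial subgame, d pays Player 1 1 and u pays 0.
node_payoff = {"u": 0, "d": 1}
best_in_subgame = max(node_payoff, key=node_payoff.get)

subgame_perfect = [(s1, s2) for (s1, s2) in nash if s1[1] == best_in_subgame]
print(subgame_perfect)  # [(('Down', 'd'), 'right')]
```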
Now I know today was a lot of

formal stuff,
a lot of new ideas,

and when we come back on Monday
we'll first of all give you a

game that refreshes these ideas
and then we'll go straight to

applications.
So trust me:

there will be applications.
It will be useful.

See you on Monday.
There's a homework to come in,

there's another on the web
going out.
