18. Imperfect information: information sets and sub-game perfection


Professor Ben Polak: So today we have a lot of stuff to get through, but it's all going to be fairly formal. We're not going to have time to play a game today. So it's going to be a day where we have to learn some new ideas. The reason we need to go through some new formal ideas today is that we've kind of exhausted what we can do with the ideas we've gathered so far.

So, just to bring us up to date with where we are: in the first half of the semester--so before the mid-term--we looked at simultaneous move games. One way to think about those simultaneous move games is as games where, when I make my choice, I don't know what you've done, and, when you make your choice, you don't know what I've done. Since the mid-term we've been looking at simple examples of sequential move games--sequential move games under perfect information--in which I typically do know what you did when I get to make my choice. And you know I'm going to know what you did when I get to make my choice.

What I want to be able to do moving forward is to look at strategic situations that combine those two settings. I want to be able to analyze games which involve both sequential and simultaneous moves. In particular, I want to see how we can extend the technique we've been focusing on for the last few weeks, which is backward induction. I want us to see how we can extend the notion of backward induction to cope with games where some parts are sequential and some parts are simultaneous. So we're going to look at a lot of examples and we're going to introduce some new ideas, and I'm going to try and walk you through that today. So that's our goal.
Let's start with an example. Here's a very simple game in which Player 1 moves first and has three choices. Let's call them up, middle, and down. Then Player 2 moves, and Player 2 has two choices from each of these nodes. We'll call the choices suggestively up and down--up and down--and here we'll just call them left and right. The payoffs are as follows: (4,0), (0,4), (0,4), (4,0), (1,2), (0,0). So this is just a standard game of perfect information, much like all the games we've seen since the mid-term. In fact, it's a relatively easy one.

So we know how to solve this game. We solve this game using what? Using backward induction, and that isn't so hard here. We know that if Player 2 finds herself up here, she will choose 4 rather than 0; if she finds herself here, she'll choose 4 rather than 0; and if she finds herself here, she'll choose 2 rather than 1. So Player 1 won't want to go up here because he'll get 0, and he won't want to go into the middle because he'll get 0, but if he goes down, Player 2 will choose left and Player 1 will get 1. So Player 1 will choose down. So backward induction predicts that Player 1 chooses down and Player 2 responds by choosing left.
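That backward induction argument can be checked mechanically. Here is a minimal sketch in Python; the tree encoding and function names are my own, not the lecture's:

```python
# Backward induction on the example tree.
# Player 1 picks up/middle/down; Player 2 observes and replies.
# Payoff pairs are (Player 1's payoff, Player 2's payoff), as in the lecture.
SUBGAMES = {
    "up":     {"up": (4, 0), "down": (0, 4)},
    "middle": {"up": (0, 4), "down": (4, 0)},
    "down":   {"left": (1, 2), "right": (0, 0)},
}

def solve():
    # At each of 2's nodes, 2 picks the reply maximizing her own payoff.
    replies = {m: max(SUBGAMES[m], key=lambda a: SUBGAMES[m][a][1])
               for m in SUBGAMES}
    # Player 1 anticipates those replies and maximizes his own payoff.
    best = max(SUBGAMES, key=lambda m: SUBGAMES[m][replies[m]][0])
    return best, replies[best], SUBGAMES[best][replies[best]]

print(solve())  # ('down', 'left', (1, 2))
```

This just automates the two passes of the argument above: solve Player 2's nodes first, then let Player 1 choose among the resulting outcomes.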
Just staring at this a second--taking a step back from backward induction--notice that the reason Player 1 did not want to choose either up or middle was that the move was going to be observed by Player 2, and in either case Player 2 was going to crush Player 1. If Player 1 went up, Player 2 was playing this sort of "strictly competitive game" with Player 1, and Player 2 could pick a choice that gave 2 4 and 1 0. Conversely, if Player 1 chose middle, Player 2 could crush Player 1 by choosing up, which gave, once again, Player 2 4 and Player 1 0. So there was a good reason here to avoid going into the part of the game following up or middle, and the reason was that 2 has a huge second-mover advantage in those parts of the game.
Is that clear to everybody? So I now want to consider a similar, but importantly different, game. I'm going to draw the game again, but before I draw it, let me say what I'm going to do. I want to introduce a new idea, and the new idea is going to be that Player 2 will not be able to distinguish between up and middle. So let's just say it again. If Player 1 chooses down, Player 2 will observe that, just as we've done before in our standard perfect-information games; but if Player 1 chooses either up or middle, I want to capture the idea that Player 2 doesn't know which of those two choices was made. That's clearly going to change the game a lot, and the first question is: how do we represent that idea in a tree? So let me try and show a good way to represent that in a tree.

The game has the same structure to it. Player 1 is again choosing among up, middle, or down. And Player 2 once again is choosing here, up or down, up or down, and here left or right. And the payoffs haven't changed. They're still (4,0), (0,4), (4,0), (0,4), (1,2) and (0,0). So that's exactly the same as you have in your notes already, but now I want to adapt this tree to show how we indicate that Player 2 cannot distinguish--cannot observe--whether 1 chose up or middle, but can observe if Player 1 has chosen down. The way we do that is very simple: we draw a dotted line between the two nodes of Player 2 between which 2 cannot distinguish. What this dotted line indicates is that these two nodes are in the same information set. So our new idea here is the idea of an information set. The payoffs are meant to be the same on the left as on the right.

So the idea here is that Player 2 cannot distinguish these two nodes. Player 2 knows that she's in this information set: she knows that Player 1 must have chosen either up or middle, she knows that Player 1 did not choose down, but she doesn't know whether she's really here or here.
Now what happens in this game? This game is a very different game. Why is it a different game? Well, let's try and apply that loose intuition we talked about before. We said previously, in the old game, that if Player 1 chose up, 2 knew that Player 1 had chosen up, and we observed that by choosing down Player 2 could crush 1. And if Player 1 chose middle, Player 2 could observe that Player 1 had chosen middle, and this time by choosing up could crush 1. The problem is that now, in this new game, Player 2 doesn't know whether she's here, in which case she would want to choose down, or here, in which case she'd want to choose up. So Player 2's choice is not so obvious anymore. That simple backward induction argument has disappeared.

Moreover, Player 1 knows that Player 2 will not be able to distinguish between up and middle, so it isn't necessarily the case that Player 1 will want to choose down anymore. It's still true that if Player 1 did choose down, Player 2 would be able to observe that and will choose left, so that part of the argument's the same.

What do we think is going to happen here? Well, we don't know, but let me give a suggestion of what might happen. Player 1 might say: hey, I could randomize between up and middle. I could choose half the time up and half the time middle. If I choose half the time up and half the time middle, Player 2 isn't going to know--in general--isn't going to know what I've done. It isn't quite clear what Player 2 is going to do, and since I'm randomizing between up and middle, whatever Player 2 is going to do, I'm going to get 4 half the time and 0 half the time, for an expected value of 2. So to say it again: Player 1 might decide in this game to randomize fifty-fifty between up and middle, knowing that half the time he will therefore get 4 and half the time he'll get 0, for an expected value of 2, which notice is better than he got by choosing down.
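The arithmetic behind that claim is a one-line expected value, and it comes out the same whichever single response Player 2 settles on. A quick check (variable names are mine):

```python
# Player 1 mixes fifty-fifty between up and middle. Whatever single
# response Player 2 picks at her information set, Player 1 gets 4 at
# one node and 0 at the other.
payoff_if_2_plays_up = 0.5 * 0 + 0.5 * 4    # up is crushed, middle wins
payoff_if_2_plays_down = 0.5 * 4 + 0.5 * 0  # up wins, middle is crushed
assert payoff_if_2_plays_up == payoff_if_2_plays_down == 2.0
print(payoff_if_2_plays_down)  # 2.0 -- beats the payoff of 1 from down
```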
So this change in the game--this change in the information in the game--not only led to a different game but led to a very different outcome. Here 1 might, for example, randomize between up and middle, and over here we know exactly what 1 does: 1 chooses down. So we get very different outcomes because of this change in information in the game, and the theme of today is that information is going to matter. The way we're going to model information is by thinking about these information sets. And as we go through today, I want to start giving you some formal definitions.

So that's the idea; now let's look at the formal definition. There's going to be a lot of writing today, so I hope you brought a notepad with some room on it.
So the first formal definition of the day comes off that last example. The idea I want to capture is that players down the tree may not know exactly what was done up the tree, and the formal definition is going to go through the idea of an information set. An information set of Player i--in this case it involves Player 2, but more generally Player i--is a collection, or a set if you like, of Player i's nodes between which--I guess it can be more than two, so let's say among which--i cannot distinguish.

Now it's going to turn out that, by clever use of information sets, we're going to be able to use our technology of drawing trees to capture all sorts of interesting and increasingly complicated information settings.
In this particular game, it's the case that Player 1 knew that Player 2 was not going to be able to distinguish between up and middle in this tree, and Player 1 knew that Player 2 would be able to distinguish in the left-hand tree. We can even use information sets in a more elaborate tree to capture the idea that Player 1 may not know what Player 2 is going to know. But I won't do that now: I'll leave that for later, and you'll see some examples of that on the homework.

So we have our formal definition. This is going to be the first of our big tools of the day, but let me just put down a few things that we have to be careful about: a couple of rules.
So these information sets have to obey certain rules, and certain things are not allowed. In particular, the following is not allowed. Here's a tree in which Player 1 moves first and Player 2 does not observe Player 1's move. These two nodes are Player 2's nodes. They're in the same information set, which means Player 2 is not meant to be able to distinguish between these two nodes. And suppose, however, the tree looked like this. Okay, so I claim that this is crazy. We couldn't allow this; it wouldn't make any sense to allow this. Can anyone see why? Why is this not really a sensible tree? Everyone see that? Why is that not a sensible tree?

Student: If Player 2 knows that he has three choices then he'll know he's at the top node.

Professor Ben Polak: Exactly. In this tree I haven't got the payoffs in, but if Player 2 observes that she has three choices, she knows she must be at the top node. If she observes she has two choices, she must be at the bottom node. So in this tree, it was supposed to be the case that 2 didn't know whether she was here or here, but merely by observing how many choices she has, she could infer whether she was at the top node or the bottom node. So that can't make any sense. So this is not allowed; we'll put a red cross through that one.
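This rule has a direct mechanical reading: every node inside one information set must offer the same set of actions, or the player could tell the nodes apart just by counting her choices. A sketch of that check (the encoding and names are mine):

```python
def valid_information_set(actions_at_node):
    """Nodes a player cannot distinguish must offer identical action
    sets; otherwise counting her own choices would reveal where she is."""
    action_sets = [frozenset(a) for a in actions_at_node]
    return all(s == action_sets[0] for s in action_sets)

# The crossed-out tree: three choices at one node, two at the other.
print(valid_information_set([{"l", "m", "r"}, {"l", "r"}]))        # False
# A legitimate information set, as in the up/middle example.
print(valid_information_set([{"up", "down"}, {"up", "down"}]))     # True
```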
Now the second thing that's not allowed is a little bit more subtle, and actually is an interesting thing. The first rule is just kind of bookkeeping, but the second thing is more interesting. So let's have a look at it. Here's a more interesting tree. Player 1 moves first. Player 2 observes that move and moves second. And then, at the bottom of this, Player 1 may have another chance to move again. Again, I haven't put payoffs in here. Player 1 moves first; Player 2 moves second; and, if Player 2 chooses down here or up there, then Player 1 gets to move again. Now I claim, again, that this is not a sensible tree. It's not a sensible arrangement of information sets. Can anyone see why this isn't sensible? Why is this not sensible? Steven?

Student: Player 1 knows what node he's at based on the first choice that he made.

Professor Ben Polak: Exactly. To get to the upper node here, Player 1 must have chosen up before, and to get to the lower node here, Player 1 must have played down before. So provided that Player 1 remembers his or her own move, she knows where she is. Is that right? So provided that Player 1 can recall what she herself did earlier on in the tree, she should be able to distinguish these things. So we're going to rule this out, but I just want to make a remark here.
There's an assumption in ruling it out, and the assumption is perfect recall, or perfect memory. In the real world, players don't always have perfect recall. We're always going to assume it, but let me just make a remark: there are two reasons why people might not have perfect recall. One reason is that, like me, they're getting old. They simply can't remember what they did yesterday. So while I'm driving home I know roughly how many traffic lights I have to go through before I turn right, but I sometimes forget which traffic light I'm at, and I turn right too early or too late. That doesn't happen to you guys, but it happens to me as I'm getting a bit senile, so old age would rule out perfect recall.

A more important example, perhaps, is when the players of games are themselves institutions. It's sometimes useful, and we've often talked about it in this class, to imagine a player of a game being a firm or a country, or some kind of institution in which the actual decisions may be being taken by different actual people within the firm, institution, or country. This assumption of perfect recall is saying that the people within the institution know what the other people within that same institution are doing. If we're modeling General Motors as one player, this assumption says that the chief financial officer and the chief executive officer at GM are observing each other's actions: they are on the same page. The left hand knows what the right hand is doing. We are typically going to assume that, but just to make the point: it is an assumption, and it's quite interesting to see what happens if you relax it.
So with that in mind, we can move to our next definition. This is something I've referred to early on in the class, but now I want to be formal. We've talked earlier in this class about the idea of perfect information. For example, when we talked about Zermelo's theorem, we talked about games of perfect information. We said informally what this was: a game of perfect information is a game where each player in the game can observe all previous moves. That was our informal definition, but we can now give a formal definition very simply. Games of perfect information are games where all information sets in the tree contain just one node.

I want to be clear here. What we're saying is: if we have a tree in which every information set is a singleton, so that we basically don't have to bother with any dotted lines, that's a game of perfect information. And that shouldn't be a surprise to anybody here, because that's exactly how we've drawn trees since the mid-term. Is that right? Of course, the novelty is that we're now going to be allowed to look at games of imperfect information. The reason we're doing this is that it will be interesting--as in the example we've just seen--to think about games where information is not perfect. So what is the definition of imperfect information? Imperfect information's formal definition is "not perfect information." We've defined what perfect information is; imperfect information is the rest. In the real world there are a lot of games that turn out to have imperfect information. There are lots of strategic situations where I'm going to be able to observe some things that you've done, but other things, I won't know quite what you've done.
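Given that definition, testing whether a tree has perfect information reduces to checking that every information set is a singleton. A sketch, using a made-up encoding of information sets as lists of node names:

```python
def is_perfect_information(information_sets):
    """A game has perfect information iff every information set
    contains exactly one node (no dotted lines needed)."""
    return all(len(s) == 1 for s in information_sets)

# First example: all of Player 2's nodes are distinguishable.
print(is_perfect_information([["a"], ["b"], ["c"]]))   # True
# Second example: 2 cannot tell the up-node from the middle-node.
print(is_perfect_information([["a", "b"], ["c"]]))     # False
```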
Okay, let's go straight to an example. I don't think we really need to keep that definition very focal, so let's get rid of that board. Let's do an example; we'll be doing many examples today. This example is a tree in which Player 1 moves first. Player 2 cannot observe this move; and sometimes, rather than labeling both of these nodes with a 2, I'll just put a 2 on the information set, to indicate that both of these nodes belong to Player 2. So Player 2 moves second. We'll call Player 1's moves up and down, and we'll call Player 2's moves left and right, kind of suggestively.

So what's the information set here? The information set is indicating the fact that Player 2 cannot observe whether Player 1 moved up or down. Now, why does that matter? I haven't put the payoffs in yet, but I will in a second. It matters because, had this game been a game of perfect information--had there been two separate information sets here, with this dotted line not there--then Player 2 could have chosen separately whether to choose left or right at this node, and left or right at that node. But since Player 2 doesn't know whether she's at the upper node or the lower node--she doesn't know whether Player 1 chose up or down--she really only has one choice to make here. She's either choosing left at both nodes or she's choosing right at both nodes.

And just to pull it back to our first example in the class, we saw the same feature there. When we moved from a game of perfect information to a game of imperfect information, we reduced the choices available to Player 2. There, Player 2 could choose separately at these two different nodes; here, Player 2 only makes one choice that has to apply to both nodes, because Player 2 cannot distinguish those two nodes.
So let's have a look and see, once we put some payoffs on, what it does in this particular game. Here are some payoffs: (2,2), (-1,3), (3,-1) and (0,0). Once again, Player 2 cannot choose separately at the upper node and the lower node; she's either choosing left or she's choosing right. But it turns out that this game is a little easier than the game we started with. Why is it easier? Because, from Player 2's point of view, whether she thinks she's up here or whether she thinks she's down here, she has the same best choice in either case. If she thinks she's at the upper node, then by choosing left she'll get 2 and by choosing right she'll get 3, so right seems better. If she thinks she's at the lower node, then choosing left gets -1 and right gets 0, so once again right is better. So in fact, in this particular game, regardless of whether Player 2 thinks that Player 1 chose up, and hence she's at the upper node, or whether Player 2 thinks that Player 1 chose down, and hence she's at the lower node, Player 2 is going to make the same choice, namely right. So notice that this particular game actually solves out rather like backward induction. Even though Player 2's choice is a little bit more complicated--she doesn't know where she is--it's actually clear what Player 2 will do at this information set.
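Player 2's reasoning can be written out explicitly: compare left against right at each node she might be at, and observe that the same answer comes back both times. A sketch with the lecture's payoffs (the encoding and function name are mine):

```python
# Player 2's payoffs at each node she might occupy: the second
# coordinate of the lecture's payoff pairs (2,2), (-1,3), (3,-1), (0,0).
P2 = {"upper": {"left": 2, "right": 3},
      "lower": {"left": -1, "right": 0}}

def best_at_each_node(payoffs):
    # Pick the payoff-maximizing action node by node.
    return {node: max(acts, key=acts.get) for node, acts in payoffs.items()}

print(best_at_each_node(P2))  # {'upper': 'right', 'lower': 'right'}
# Same best reply at both nodes, so Player 2's choice is clear even
# though she does not know which node she is at.
```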
Now, if we push this a little bit harder, we can see why. Player 1 in this game has two strategies, up or down. And Player 2 has two strategies: she either chooses left or right. Notice that she only has two strategies because she has to choose the same thing at these two nodes; she doesn't know where she is. Okay, so let's draw up the matrix for this game and see if it looks familiar. Player 1 is choosing between up and down, and Player 2 is choosing between left and right. The payoffs are as follows: (up, left) is (2,2); (up, right) is (-1,3); (down, left) is (3,-1); and (down, right) is (0,0). So what game is this? It wasn't meant to be a trick question. Somebody waved their arm in the air. What game is this?

Student: Prisoners' Dilemma.

Professor Ben Polak: This is Prisoners' Dilemma. This is an old friend of ours, a game we saw the very first day. But notice what we have seen here. This is the Prisoners' Dilemma that we've seen many, many times, that's almost unbearably familiar to most of you.
Now, here is the Prisoners' Dilemma represented the way we talked about games before the mid-term, and here is the same game. This is also the Prisoners' Dilemma, but now I've drawn it in a tree. Here I drew it in a matrix, and here I drew it in a tree. Now that we have information sets, we can represent all the games that we studied before the mid-term. All the games that were simultaneous move games we can study using trees, by building information sets.

What's the key observation here? It doesn't really matter whether Player 1 moves first or Player 2 moves first. It doesn't really matter what's happening temporally in this game. What matters is information. When Player 1 makes her move she doesn't know what Player 2 is doing, and when Player 2 makes her move she doesn't know what 1 is doing: and that's a simultaneous move game, even if time is passing. The key is information, not time.
Now, on the way here I snuck something in, and I should just tell you what I snuck in: I snuck in what a strategy is. I went from a tree, or an extensive form game, to a normal form game, and we've already done that a couple of times before in the class. We did it with the entry game, for example, about a week ago. But there, all we did was define what a strategy was in a game of perfect information. Just to remind you: a strategy in a game of perfect information is a complete plan of action. It tells the player in question what they should do at each of their nodes. But now we have to be a bit more careful. Once we move to imperfect information, we can't have a strategy tell you what to do at each of your nodes, because you yourself can't distinguish between those nodes. So we need to adapt our definition of a strategy to make it appropriate for these more complicated games. Let's adapt it in the obvious way.

Definition--I'll just define pure strategies for now. A pure strategy of Player i is a complete plan of action: so far this is the same as before. But what does it mean to be a complete plan of action? It can't tell me what to do at every single node; that can't be the right definition, because I can't distinguish nodes. So all that it can be doing is telling me what to do at each information set. So it specifies what Player i should do--"should" perhaps is the wrong word, let's just say will do--at each of i's information sets. If you go back about a week, you'll see almost exactly the same definition of a strategy, but the previous definition told i what to do at each node, and this one tells i what to do at each information set. What I'm doing is tidying up the previous definition so we can apply it to the more interesting games we're going to look at from now on.
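The amended definition has a direct computational reading: a pure strategy picks one action per information set, so a player's pure strategy set is the Cartesian product of her action sets across her information sets. A sketch (the encoding and names are mine):

```python
from itertools import product

def pure_strategies(info_sets):
    """info_sets maps each of Player i's information sets to its list
    of available actions; a pure strategy picks one action per set."""
    names = list(info_sets)
    return [dict(zip(names, choice))
            for choice in product(*(info_sets[n] for n in names))]

# Player 2 in the Prisoners' Dilemma tree: a single information set
# covering both of her nodes, hence only two pure strategies.
print(pure_strategies({"after-1-moves": ["left", "right"]}))
# [{'after-1-moves': 'left'}, {'after-1-moves': 'right'}]
```

With separate singleton information sets at her two nodes, the same function would return four strategies instead of two, which is exactly the reduction in choices discussed above.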
So now that we have the definition of a strategy, we can carry on with the idea we've just seen here. What's the idea? Any game you give me in the form of a tree, I can rewrite in the form of a matrix. So let's see some other examples of that idea. A lot of new ideas today, but some of them are just tidying up and kind of bookkeeping, and some of them are more interesting.

So let's start with a tree. Let's make it a slightly more interesting tree than the one we've seen before. Actually, that's too interesting; let's go a little bit slower. Let's have Player 1 have two choices, and Player 2 have three choices. So here's a simple tree, and let's put some payoffs in, but let me just put in some letters for payoffs rather than numbers. We'll call these actions up and down, and we'll call these actions left, middle, and right; and left, middle, and right. And we'll call the payoffs (A1,A2), (B1,B2), (C1,C2), (D1,D2), (E1,E2), and (F1,F2), just to keep track of them.

I want to show you how we take this tree and turn it into a matrix. How do we turn it into a matrix? Well, we look and ask how many strategies Player 1 has and how many strategies Player 2 has. Player 1 here just has two strategies, up or down. And Player 2 has three strategies: left, middle, or right. Again, she can't choose separately at these two nodes, so she really just has three choices: left, middle, or right. Leave a space here in your notebook--leave a space to the right--and let's draw the matrix for this tree down here. So here's my matrix. Player 2 is choosing left, middle, or right, and Player 1 is choosing up or down. The payoffs go in in the obvious way: (A1,A2), (B1,B2), (C1,C2), (D1,D2), (E1,E2), and (F1,F2). So everyone understand? That was just a simple exercise to show we can go from an extensive form, a tree, to the normal form, a matrix. That was easy, right?
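The exercise of reading the matrix off the tree can be automated: for each pair of pure strategies, follow the play through the tree and record the outcome. A minimal sketch for this particular tree, using single letters to stand for the payoff pairs (A1,A2) through (F1,F2) (the encoding is mine):

```python
# Outcomes indexed by (Player 1's action, Player 2's action). Player 2
# must play the same action after up as after down, so one column per
# action suffices, exactly as in the lecture's matrix.
TREE = {("up", a): p for a, p in
        zip(["left", "middle", "right"], ["A", "B", "C"])}
TREE.update({("down", a): p for a, p in
             zip(["left", "middle", "right"], ["D", "E", "F"])})

def to_matrix(rows, cols, outcome):
    # Row player's strategies index rows, column player's index columns.
    return [[outcome[(r, c)] for c in cols] for r in rows]

matrix = to_matrix(["up", "down"], ["left", "middle", "right"], TREE)
print(matrix)  # [['A', 'B', 'C'], ['D', 'E', 'F']]
```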
However, there's an interesting thing here. If I just gave you the matrix, it isn't obvious that this is the tree from which it came. Let me draw another tree that I claim corresponds to that same matrix. Here's another tree. Instead of having Player 1 move first, it's going to have Player 2 move first. Player 2 had better have three choices, and we had better call them left, middle, and right. And it had better be the case that Player 1 is in one big information set and Player 1 only has two choices, which we'll call up and down, because that's what the matrix is telling us. It's telling us Player 2 had three choices and Player 1 had two choices. And that's true of the tree I've drawn.

Let's be a little bit careful where the payoffs are. So (left, up), that's easy: that's going to be (A1,A2). (Left, down) is going to be (D1,D2). (Middle, up) is going to be (B1,B2). (Middle, down) is going to be (E1,E2). (Right, up) is going to be (C1,C2), and (right, down) is going to be (F1,F2). So I have to be a little bit careful where I put in the payoffs, but I think what I just did is right.

Notice what I did here: I started from this tree. It was an easy operation to construct the matrix--so easy that it was kind of boring--and it's not that hard to see that I can go the other way and construct this other tree from the matrix. This is also a tree in which Player 2 has three strategies and Player 1--this is all Player 1--has two strategies.
    So what, what have we learned
  526. from this?
    Well let's look at this more
  527. carefully.
    This tree is a tree in which
  528. Player 1 moved first and Player
    2 didn't observe Player 1's
  529. choice.
    Is that right?
  530. This is a tree in which Player
    2 moved first and Player 1
  531. didn't observe 2's choice.
    What are we noticing here?
  532. They're really the same game.
    There's no difference between
  533. these two games.
    They're really the same game.
  534. It doesn't matter whether it's
    Player 1 moving first and Player
  535. 2 who's unable to observe 1's
    choice,
  536. or whether it's Player 2 who is
    moving first and Player 1 who is
  537. unable to observe 2's choice.
    All that matters is that
  538. neither player could observe the
    other person's choice before
  539. they got to move,
    they both correspond to exactly
  540. the same game.
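That equivalence can be checked mechanically. Here is a small Python sketch (the symbolic payoffs "A" through "F" stand in for the pairs on the board; the labels are mine) that builds the normal form from each tree and confirms they coincide:

```python
# Tree 1: Player 1 moves first (Up/Down); Player 2 cannot observe the move.
tree1 = {("Up", "left"): "A", ("Up", "middle"): "B", ("Up", "right"): "C",
         ("Down", "left"): "D", ("Down", "middle"): "E", ("Down", "right"): "F"}

# Tree 2: Player 2 moves first, so outcomes are indexed (2's move, 1's move).
tree2 = {("left", "up"): "A", ("middle", "up"): "B", ("right", "up"): "C",
         ("left", "down"): "D", ("middle", "down"): "E", ("right", "down"): "F"}

# Re-index tree2 with Player 1's move first, matching tree1's matrix layout.
tree2_as_matrix = {(s1.capitalize(), s2): v for (s2, s1), v in tree2.items()}

assert tree1 == tree2_as_matrix   # the two trees give the same matrix
```

The assertion passes: once each outcome is indexed by (1's move, 2's move), the order of play has disappeared entirely.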
    So what's the message here?
  541. The message is something we've
    talked about before in the
  542. class, but I'm trying to be a
    bit more formal about it.
  543. The message is that what
    matters is information,
  544. not time.
    Clearly, time isn't an
  545. irrelevant thing.
    I couldn't know something that
  546. hasn't happened yet,
    so time is going to have
  547. effects in information,
    but ultimately what matters is
  548. information.
    What do I know and when did I
  549. know it?
    So the key idea that we're
  550. trying to capture with these
    information sets,
  551. just to repeat,
    is what did the player know and
  552. when did they know it,
    that famous expression from the
553. Watergate hearings.
    Okay, let's look at a more
  554. interesting example,
    and see if we can actually talk
  555. about what's going to happen in
    these games.
  556. So by the end of today I want
    to have enough machinery so we
  557. can actually start analyzing
    these games and predicting
  558. what's going to happen.
  559. So as we go on we'll get more
    complicated, so let's get a
  560. little bit more complicated now.
    Once again, here's a game in
  561. which Player 1 is going to have
    two choices and we'll call those
  562. choices Up or Down.
    It's getting to be a familiar
  563. theme.
    Once again, Player 2 is going
  564. to move next and now,
    this time, just to keep things
  565. simple,
    we'll have Player 2 just have
  566. two choices left or right,
    or left or right.
  567. But now I want to make things
    more interesting,
  568. let's have Player 1 move again.
    So if Up, right happens then
  569. Player 1 gets to move again,
    in which case Player 1 is going
  570. to choose up or down.
    I'll use a [little]
  571. u and a [little]
    d to distinguish it from the
  572. ones further to the left of the
    tree.
  573. So this is a very simple tree,
    Player 1 moves first.
  574. Player 2 moves second--I forgot
    to put a 2 in here.
  575. And then if (Up,
    right) has occurred then Player
  576. 1 gets to move again.
    Let's put some payoffs in.
  577. So let's have this be (4,2),
    (0,0), (1,4),
  578. (0,0) again and (2,4).
    Let's just carry on analyzing
  579. this game using exactly the
    methods we've been talking about
  580. in the class today so far.
    So the first thing I'm going to
  581. do is I want to turn this into a
    matrix.
  582. The first thing to do on that
    route is to try and figure out
  583. how many strategies does Player
    1 have, and how many strategies
  584. does Player 2 have.
    Before we even do that let's
  585. try and figure out how many
    information sets they have.
  586. So I claim that Player 2 just
    has the one information set,
  587. is that right?
    But Player 1 has two
  588. information sets.
    This information set at the
  589. beginning of the game and then
    potentially this second
  590. information further down the
    tree.
  591. A strategy must tell you--a
    strategy must tell the player
  592. what to do at each of their
    information sets.
  593. So the strategies for Player 1
    are what?
  594. Well one strategy is Up and
    then up again,
  595. another strategy is Up and then
    right,
  596. another strategy is Down and
    then up, and a fourth strategy
  597. is Down and then right.
    Notice something which we've
  598. seen already in this class
    before: there's a little bit of
  599. a redundancy here.
    These two Down strategies force
  600. the game into a part of the tree
    where this node will not arise.
  601. To put it less grandly,
    if Player 1 chooses Down,
  602. she knows that she won't have
    to make a choice of up or down
  603. later on.
  604. Yes.
    Sorry--thank you.
  605. Let me start again since that
    was the wrong notation.
  606. So Player 1's choices are Up
    and then up;
  607. Up and then down;
    Down and then up;
  608. and Down and then down.
    Thanks Jake, sorry.
  609. Now, why are there four
    strategies?
  610. It's a bit of a surprise
    perhaps because if Player 1
  611. chooses Down then she knows she
    will never have to make a choice
  612. at her second information set.
    Nevertheless,
  613. when we write down a strategy
    we have to write down an
  614. instruction for every single
    information set.
  615. So we include both of those
    strategies.
  616. Strategies for Player 2 here
    are a little bit easier.
  617. Strategies for Player 2 are
    just left or right.
  618. With that in mind,
    let's draw up the matrix.
  619. So Player 1 here has four
    strategies and they are (Up,
  620. up), (Up, down),
    (Down, up), and (Down,
  621. down).
    Player 2 has two strategies,
  622. and they are left or right.
    Everyone okay so far?
  623. We're just basically
    transferring things across,
  624. and then we have to transfer
    the payoffs across so (Up,
  625. up) followed by left is going
    to be (4,2).
  626. (Up, up) followed by right is
    going to be (0,0).
  627. So Up, right, up is (0,0).
    ((Up, down),
  628. left) is the same as Up,
    left, down so it's going to be
  629. (4,2).
    ((Up, down),
  630. right) is going to be Up,
    right, down so it's going to be
  631. (1,4).
    ((Down, up),
  632. left) is the same as saying
    Down, left so it's going to be
  633. (0,0).
    ((Down, up),
  634. right) is going to be (2,4).
    ((Down, down),
  635. left) is once again going to be
    (0,0) and ((Down,
  636. down), right) is once again
    going to be (2,4).
  637. Does everyone see how I got the
    payoffs?
  638. I just used those strategies to
    tell me which way I'm going
  639. through the tree.
    If I combine them,
  640. it gives me an entire path and
    gets me to an end node.
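The payoff-transfer step can be sketched in code. This uses my own encoding of the tree (writing 1's second-node moves as "u" and "d"); each strategy pair traces a path to an end node, which gives the matrix entry:

```python
# End nodes of the tree, keyed by the path of moves that reaches them.
leaves = {
    ("Up", "left"): (4, 2),          # Player 2 ends the game with left
    ("Up", "right", "u"): (0, 0),    # Player 1 moves again after (Up, right)
    ("Up", "right", "d"): (1, 4),
    ("Down", "left"): (0, 0),
    ("Down", "right"): (2, 4),
}

def play(s1, s2):
    first, second = s1               # 1's plan: first move, then later move
    if (first, s2) in leaves:        # the game ended before 1's second node
        return leaves[(first, s2)]
    return leaves[(first, s2, second)]

rows = [("Up", "u"), ("Up", "d"), ("Down", "u"), ("Down", "d")]
matrix = {(s1, s2): play(s1, s2) for s1 in rows for s2 in ("left", "right")}

# The redundancy: both Down strategies produce identical rows.
assert matrix[(("Down", "u"), "left")] == matrix[(("Down", "d"), "left")]
assert matrix[(("Down", "u"), "right")] == matrix[(("Down", "d"), "right")]
```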
  641. You can see this redundancy we
    talked about.
  642. We pointed out that these
    things are kind of the same
  643. thing, and you can see in the
    matrix that the bottom four
  644. squares of the matrix have
    repetition.
  645. This row is the same as that
    row.
  646. Everyone happy with that?
    Okay, we have a matrix.
  647. Let's analyze it by finding the
    Nash equilibria in this game.
  648. So to find the Nash equilibria
    in this game,
  649. we're going to find best
    responses.
  650. So let's just start by asking
    what is the best response to
  651. left?
    So if Player 2 chooses left,
  652. Player 1's best response is
    either (Up, up) or (Up,
  653. down).
    If Player 2 chooses right then
  654. Player 1's best response is
    either (Down,
  655. up) or (Down,
    down).
  656. Everyone okay so far?
    If Player 1 chooses (Up,
  657. up) then Player 2 is going to
    choose left.
  658. If Player 1 chooses (Up,
    down) then Player 2's best
  659. response is to choose right.
    If Player 1 chooses (Down,
  660. up) then Player 2's best
    response is to choose right.
  661. And if Player 1 chooses (Down,
    down) then Player 2's best
  662. response is to choose right.
    So this is kind of slow and I
  663. just want to be careful.
    I'm going slow for a reason.
  664. We're going to gradually get
    harder, and I want to be a
  665. little bit careful.
    I can see people are looking a
  666. little sleepy around the room.
    I know it's kind of lunchtime.
  667. If you see your neighbor
    getting sleepy give them a good
  668. sharp elbow because I think this
    isn't a good time to fall
  669. asleep.
    Sometimes I'm worried you're
  670. going to miss something,
    and it's only going to get
  671. harder and you're going to miss
    things.
  672. Alright, so what are the Nash
    equilibria in this game?
  673. We know how to do that.
    The Nash equilibria must be
  674. (Up, up) followed by left.
    Make sure I get them all.
  675. (Down, up) followed by right;
    and (Down, down) followed by
  676. right.
    I want these three Nash
  677. equilibria.
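Those best responses can be checked by brute force. A sketch, with the matrix hard-coded from the board and labels as in the lecture:

```python
rows = [("Up", "up"), ("Up", "down"), ("Down", "up"), ("Down", "down")]
cols = ["left", "right"]
payoffs = {
    (rows[0], "left"): (4, 2), (rows[0], "right"): (0, 0),
    (rows[1], "left"): (4, 2), (rows[1], "right"): (1, 4),
    (rows[2], "left"): (0, 0), (rows[2], "right"): (2, 4),
    (rows[3], "left"): (0, 0), (rows[3], "right"): (2, 4),
}

def is_nash(r, c):
    """(r, c) is Nash if neither player gains by deviating unilaterally."""
    u1, u2 = payoffs[(r, c)]
    best_for_1 = all(payoffs[(r2, c)][0] <= u1 for r2 in rows)
    best_for_2 = all(payoffs[(r, c2)][1] <= u2 for c2 in cols)
    return best_for_1 and best_for_2

equilibria = [(r, c) for r in rows for c in cols if is_nash(r, c)]
assert equilibria == [(("Up", "up"), "left"),
                      (("Down", "up"), "right"),
                      (("Down", "down"), "right")]
```

The enumeration recovers exactly the three pure-strategy equilibria found on the board.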
    Okay, so that wasn't such a big
  678. deal.
    I've got three equilibria in
  679. this game.
    And if I'd simply given you
  680. this game in the first half of
    the semester,
  681. I hadn't shown you the
    tree--you've never seen this
  682. tree--I just gave you this game
    and said find the Nash
  683. equilibria in this game,
    and that had been a question on
  684. the mid-term,
    we'd have stopped here.
  685. We'd have said:
    okay, I found these Nash
  686. equilibria.
    Maybe you'd have gone on and
  687. found mixed ones I don't know
    but essentially we'd be done at
  688. this point.
    Let's say again.
  689. If we'd started as we would
    have done before the mid-term
  690. with me giving you a payoff
    matrix and asking you to find
  691. the Nash equilibria,
    then at this point we'd be done.
  692. We'd have found the three Nash
    equilibria, at least the three
  693. pure-strategy Nash equilibria.
    The problem is if we go back to
  694. the tree, to the dynamic
    game--the game that has some
  695. action going on in it--and
    actually look at this game,
  696. it's not clear that all of
    these Nash equilibria are really
  697. equally plausible.
    Can anyone see what might be a
  698. bit implausible about some of
    these Nash equilibria?
  699. What's implausible about them?
Any takers on this?
  700. Well, let's look at this game
    again.
  701. This game is a little bit
    complicated.
  702. It's not clear what one should
    do here perhaps,
  703. and perhaps it's not clear what
    Player 2 should do here,
  704. because after all,
    Player 2 doesn't know where he
  705. is and he doesn't know whether
    Player 1,
  706. if Player 1 gets to move again
is going to choose up or down.
707. But: what's the but?
    Can we get a mike on Patrick?
  708. Student: So if you look
    at it backwards,
  709. you can cross out Player 1's
    second choice.
  710. He's always going to choose
    down so that's (1,4) at that
  711. node.
    So then you know Player 2 is
  712. always going to choose right
    because his payoff is always 4.
  713. So then Player 1's not going to
    have--I mean Player 1 knows
  714. which to choose then.
    He's going to choose down.
  715. Professor Ben Polak:
    Good.
  716. So let's just walk through what
    Patrick just said.
  717. That's very good.
    So if we just analyze this game
  718. the way we've been taught to
    analyze trees,
  719. essentially using backward
    induction,
  720. we first of all observe that if
    Player 1 gets to move again here
  721. she'll know where she is,
    and she'll know she's choosing
  722. between 1 and 0.
    She's going to choose down.
  723. Is that right?
    She's going to choose down.
  724. But knowing this,
    Player 2, even though Player 2
  725. doesn't know where he is,
    Player 2 actually has a pretty
  726. easy choice to make.
    He knows that if he chooses
  727. left, he either gets 2 or 0,
    but if he chooses right he gets
  728. 4.
    4 is bigger than 2.
  729. 4 is bigger than 0.
    So Player 2 is actually going
  730. to choose right.
    And given that,
  731. given that Player 2 is going to
    choose right,
  732. Player 1 is essentially
    choosing between 1,
  733. if she chooses up,
    which would be followed by
  734. right and down;
    and 2, which is what happened
  735. if she chooses down followed by
    right.
  736. So this game we can essentially
    analyze through backward
  737. induction.
    It's not quite backward
  738. induction because we had to add
    in this little piece about 2 not
  739. knowing where she was,
    but it turned out,
  740. no matter where she was,
    she had a dominant strategy,
  741. she had a better strategy once
    she figures out that Player 1 is
  742. going to choose down.
    Is that right?
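Patrick's argument can be written out as a short sketch, stepping backward through the tree with the payoffs above:

```python
# Step 1: at 1's second node, u pays Player 1 zero and d pays her one.
second_node = max([("u", (0, 0)), ("d", (1, 4))], key=lambda m: m[1][0])
assert second_node[0] == "d"

# Step 2: anticipating d, Player 2 compares left (2 or 0) with right (4),
# whichever node he is actually at.
after_up = {"left": (4, 2), "right": second_node[1]}   # right leads to (1,4)
after_down = {"left": (0, 0), "right": (2, 4)}
assert after_up["right"][1] > after_up["left"][1]      # 4 > 2
assert after_down["right"][1] > after_down["left"][1]  # 4 > 0
p2 = "right"

# Step 3: Player 1 compares Up (payoff 1, via right then d) with Down (2).
root = max([("Up", after_up[p2]), ("Down", after_down[p2])],
           key=lambda m: m[1][0])
assert root == ("Down", (2, 4))
```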
  743. If we go back and look at these
    Nash equilibria,
  744. the prediction that we just
    got, which is what?
  745. Down for Player 1,
    right for Player 2 and then
  746. down again for Player 1.
    That strategy is this one.
  747. So one of these Nash equilibria
    corresponds to our sensible
  748. analysis of this tree,
    but the other two do not.
  749. These two Nash equilibria are
    inconsistent with backward
  750. induction.
    They're perfectly good Nash
  751. equilibria, if we'd given you
    this matrix at the mid-term
  752. you'd have thought that it's
    just fine.
  753. But it turns out both of these
    Nash equilibria involve Player 1
  754. choosing a strategy up,
    that we know that Player 1 is
  755. not going to do if reached.
    And one of these Nash
  756. equilibria involves Player 2
    choosing a strategy left,
  757. that, in fact,
    she's only choosing because she
  758. thinks Player 1 is going to
    choose up,
  759. which, in fact,
    we just argued Player 1 is not
  760. going to do.
    The people at the back,
  761. there's a little bit too much
    volume bouncing off the wall
  762. there, so just keep it down in
    the balcony.
  763. Thank you.
    So these two Nash equilibria,
  764. they're perfectly good Nash
    equilibria to the game,
  765. but they don't make any sense.
    They're completely inconsistent
  766. with the way we've learned to
    talk about games.
  767. Now we've seen this before.
    We saw it in the entry game.
  768. This is a much more
    complicated, much more
  769. interesting example,
    but we saw in the entry game
  770. when there was one entrant
    entering into a market,
  771. that in that game there were
    actually two Nash equilibria and
  772. one of them we argued was
    incredible.
  773. Here it's a bit more
    complicated, but nevertheless,
  774. these two equilibria seem like
    bogus equilibria or phony
  775. equilibria,
    or equilibria we wouldn't
  776. really believe in.
    The reason we don't believe in
  777. them is that they don't
    correspond to backward induction
  778. and our common sense intuition
    is about backward induction.
  779. So we need some new notion,
    the aim of the class has been
  780. what?
    We want to be able to model
  781. games that have both sequential
    moves and simultaneous moves,
  782. and we want to be able to look
    at the games and use our
  783. techniques from both halves of
    the class.
  784. We want to be able to use the
    idea of Nash equilibrium from
  785. the first half of the class,
    and we want to be able to use
  786. the idea of backward induction
    from the second half of the
  787. class.
    But what we're learning here is
  788. that Nash equilibrium,
    if we just take the notion of
  789. Nash equilibrium and plunk it
    down on these sequential move
  790. games,
    it will produce equilibria that
  791. don't make any sense.
    So we need a more refined
  792. notion of equilibrium,
    a better notion of equilibrium
  793. than just Nash equilibrium to
    deal with these settings where
  794. we have both simultaneity and
    sequential moves.
  795. We have both some perfect
    information and some imperfect
  796. information.
    That was one example.
  797. Let me give you a second
    example if that example wasn't
  798. yet convincing.
    Let me leave that example up.
  799. So far we've seen that Nash
    equilibrium gets us into trouble
  800. in these games and we've seen it
    got us into trouble because it
  801. basically conflicted with our
    backward induction intuitions.
  802. Now I'm going to show you a
    different game and we're going
  803. to see again that Nash
803. equilibrium is going to get us
  804. into trouble.
This is going to be a
  805. three-player game.
    We will get more complicated as
  806. we go along.
    So another example,
  807. this time with three players.
    So as the examples get harder I
  808. need you to be more alert to see
    if you can follow them.
  809. So this is a more complicated
    tree.
  810. Here's a tree in which Player 1
    moves first and chooses between
  811. A or B and if Player 1 chooses A
    the game is over,
  812. she gets 1, and the other two
    players get nothing.
  813. If she chooses B then Players 2
    and 3 get to play a little game
  814. down here in which 2 moves first
    in this little sub-game and 3
  815. moves second,
    and the payoffs in this
  816. sub-game are as follows.
    Again, using Player 1's payoff
  817. first.
    So there's (0,1,
  818. 1), (0,0, 2),
    (0,0, -1) and (2,1,
  819. 0).
    So this is quite a complicated
  820. game, it's got three players for
    a start, so it's going to be a
  821. little bit hard to draw it up in
    a matrix,
  822. but nevertheless,
    let me try and do that.
  823. So I claim that we can model
    this game as follows.
  824. It's a game in which Player 1
    is choosing which matrix,
  825. let's call this Matrix A and
    Matrix B.
  826. Player 1 is choosing the
    matrix, Player 2 is choosing,
  827. let's call them up and down:
    Player 2 is choosing up or
  828. down.
    And Player 3 is choosing left
  829. or right.
    Notice in this game Players 2
  830. and 3 actually can observe the
    choice of A or B to start with.
  831. So let's try and put in the
    payoffs in the correct places.
  832. It's not always easy to do,
    but let's try.
  833. So A is easy,
    if Player 1 chooses A,
  834. then the payoffs in this matrix
    are somewhat trivial.
  835. Because if Player 1 chooses A,
    whatever anyone else does the
836. payoff is (1,0,0).
  837. Somewhat uninteresting matrix
    over there.
  838. But if Player 1 chooses B then
    life gets more interesting.
  839. Then if Player 2 chooses up and
    Player 3 chooses left,
  840. we end up here so that's
    (0,1,1).
  841. If Player 2 chooses--this is 2
    and this is 3--if Player 2
  842. chooses up and Player 3 chooses
    right then we're at (0,0,2),
  843. so this is (0,0,2) going in
    here.
  844. If Player 2 chooses down and
    Player 3 chooses left then we're
  845. at (0,0,-1).
    Everyone okay with that?
846. If Player 2 chooses down and Player 3 chooses right then
we're down here which is (2,1,
  847. 0).
    Okay so here's a little game.
  848. Player 1 is choosing the
    matrix, Player 2 is choosing the
  849. row in the matrix,
    albeit trivially on the left
  850. hand side and Player 3 is
    choosing the column in the
  851. matrix,
    again, albeit trivially on the
  852. left hand side.
    We don't really care about this
  853. picture very much.
    Okay, so now what?
  854. Well, once again we could look
    for Nash equilibria in this
  855. game.
    It turns out there are lots of
  856. Nash equilibria in this game.
    Let me just show you one Nash
  857. equilibrium and then we'll talk
    about it.
  858. So I claim that there are lots
    of Nash equilibria,
  859. and one of them is the Nash
    equilibrium (A,
  860. up, left).
  861. So let's just see where that is
    in the tree first of all.
  862. So Player 1 chose A,
Player 2 chose up and Player 3 chose left,
  863. but it followed A so we end up
    here, we end up at (1,0,
  864. 0).
    So (A, up, left) is this box in
  865. the tree.
    Now, let's just check that that
  866. actually is a Nash equilibrium.
    So we all know how to do this
  867. from the first half of the
    class.
  868. So to check that that's a Nash
    equilibrium we better check that
  869. no individual player can do
    better by deviating.
  870. So let's start with Player 1.
    If Player 1 deviates holding
  871. Player 2 and 3 fixed,
    then Player 1 will be switching
  872. the matrix from Matrix A to
    Matrix B.
  873. Is that correct?
    So we'll move from this box in
  874. the left hand matrix to the
    equivalent box in the right hand
  875. matrix.
    Player 1's payoff will go from
  876. 1 to 0, so Player 1 doesn't want
    to deviate.
  877. Everyone happy with that?
    Player 1 doesn't want to
  878. deviate here.
    How about Player 2?
  879. If Player 2 deviates,
    holding Players 1 and 3 fixed,
  880. then Player 1 is going to
    switch rows in this matrix,
  881. so we'll move from this box to
    this box.
  882. Player 2 was making 0 before.
    She's still making 0,
  883. so she has no incentive to
    deviate.
  884. And the same argument applies
    for Player 3 because she will be
  885. choosing the column holding the
    row in the matrix fixed,
  886. so once again she gets 0 in
    either case.
  887. So everyone happy with that?
    So that actually is a Nash
  888. equilibria, and again,
    if this had been on the
  889. mid-term I could have set this
    up,
  890. I could have given you these
    matrices or the story behind
  891. them, and you'd have found--I
    could have asked you whether
  892. this was a Nash equilibrium and
    the answer would have been yes.
  893. But I claim that once again
    this is just not a believable
  894. Nash equilibrium.
    It is a Nash equilibrium,
  895. formally it's a Nash
    equilibrium, but it's not a
  896. plausible prediction for how
    this game is going to be played.
  897. Why is it not a plausible
    prediction for how this game's
  898. going to played?
    Anyone see?
  899. Stare at the tree a bit.
    So in the information here,
  900. the pre-mid-term information,
    it's fine.
  901. But knowing about the actual
    structure of this game I claim
  902. this makes no sense at all.
    Why does it make no sense?
  903. Well, notice that if Player 1
    were to switch her action from
  904. the prescribed action A to
    action B, then we'd be here.
  905. Notice that the tree from here
    on in looks like a little game,
  906. is that right?
    The tree from here on looks
  907. like a little game.
    So this thing here,
  908. let's put it in green,
    is a little game within the
  909. game.
    It's a sub-game.
  910. This sub-game really only
    involves two players.
  911. The two players that it
    involves are Player 2 and Player
  912. 3.
    Player 1 is done,
  913. Player 1 has put us into this
    game, but now this little
  914. sub-game, it's a little sub-game
    involving just Player 2 and
  915. Player 3.
    So we can analyze this little
  916. sub-game.
    If we analyze this little
  917. sub-game, what will it give us?
    What will we find?
  918. So let's look at this sub-game.
    So look at the green sub-game,
  919. the game that would have
    happened had Player 1 chosen B.
  920. This is a sub-game involving
    just Players 2 and 3.
  921. So why don't I just forget
    Player 1.
  922. We know--I mean Player 1 is
    part of the game,
  923. he's getting payoffs,
    but Player 1 has made their
  924. move, they're not really
    involved any more.
  925. So let's just look at this game
    as a game involving Players 2
  926. and 3, and let's look at the
    matrix for Players 2 and 3.
  927. So it actually corresponds to
    the matrix above,
  928. it's a matrix in which Player 2
    is choosing up and down.
  929. There it is up and down and
    simultaneously Player 3 is
  930. choosing left or right.
    There it is left or right at
  931. this information set and the
    payoffs are (1,1),
  932. (0,2), (0, -1) and (1,0).
  933. So this is, I claim,
    a representation of this little
  934. green game.
    Perhaps we should put this in
  935. green as well.
    This thing corresponds to that
  936. thing.
    Everyone okay with that?
  937. So if Player 1 had chosen B
    rather than A,
  938. then we'd be involved in a
    little game, a game within a
  939. game, or a sub-game involving
    just Players 2 or 3.
  940. And we can analyze that game.
    That's a straightforward game.
  941. Here it is.
    And what would we do with that
  942. game?
    We'd look for the Nash
  943. equilibria in that game.
    So let's look for the Nash
  944. equilibrium of this game.
    So what do we notice about this
  945. game?
    So if Player 3 chooses left
  946. then Player 2 would rather
    choose up.
  947. If Player 3 chooses right then
    Player 2 should choose down.
  948. If Player 2 chooses up then
    Player 3 would rather choose
  949. right, because 2 is bigger than
    1.
  950. And if Player 2 were to choose
    down than Player 3 would choose
  951. right again, because 0 is bigger
    than -1.
  952. So in fact, in this little
    sub-game, actually,
  953. Player 3 has a dominant
    strategy.
  954. If it turned out that we got
    involved in this little
  955. sub-game, Player 3 has a
    dominant strategy which is to
  956. play right,
    and moreover,
  957. this sub-game has just one Nash
    equilibrium.
  958. If I'd given you this sub-game
    on its own, it's clear that the
  959. Nash equilibrium of this
    sub-game--this game within a
  960. game--is (down,
    right).
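Both the dominance claim and the uniqueness claim check out in a quick sketch of the sub-game between Players 2 and 3:

```python
# Payoffs to (Player 2, Player 3) in the green sub-game, from the board.
sub = {("up", "left"): (1, 1), ("up", "right"): (0, 2),
       ("down", "left"): (0, -1), ("down", "right"): (1, 0)}

# right strictly dominates left for Player 3:
assert all(sub[(r, "right")][1] > sub[(r, "left")][1] for r in ("up", "down"))

def is_nash(r, c):
    return (all(sub[(r2, c)][0] <= sub[(r, c)][0] for r2 in ("up", "down"))
            and all(sub[(r, c2)][1] <= sub[(r, c)][1] for c2 in ("left", "right")))

eqs = [(r, c) for r in ("up", "down") for c in ("left", "right") if is_nash(r, c)]
assert eqs == [("down", "right")]   # the unique pure Nash equilibrium
```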
  961. So what's that telling us?
    It's telling us if Player 2 and
  962. 3 ever get called upon to play
    in this game--and that only
  963. happens when Player 1 chooses
    B--if Player 2 and 3 ever get
  964. called upon to play in this
    game,
  965. we know from when we were young
    or at least from before the
  966. mid-term, we know that they're
    going to play Nash equilibrium
  967. in that sub-game.
    And the Nash equilibrium in the
  968. sub-game is going to have Player
    3 choosing right and Player 2
  969. choosing down.
    But the equilibrium we talked
  970. about, this equilibrium we
    argued before about,
  971. (A, U, L)--the equilibrium we
    talked about before,
  972. (A, U, L), doesn't involve
    Player 2 choosing down.
  973. In fact she chose up.
    And it doesn't involve Player 3
  974. choosing right.
    In fact she chose left.
  975. So let's sum up.
    We found an equilibrium of this
  976. game, this equilibrium of this
    game was (A, U,
  977. L) but I claim this is not a
    plausible equilibrium.
  978. It's not a plausible
    equilibrium because it predicts
  979. that if we actually were to play
    the game within the game,
  980. we wouldn't play equilibrium.
    Let me say it again,
  981. in the whole game (A,
    U, L) is an equilibrium.
  982. But I claim it's a silly
    equilibrium because it involves
  983. the prediction that,
    if in fact we ever got into the
  984. game within the game,
    we would no longer play
  985. equilibrium,
    and that doesn't seem right.
  986. If we're going to believe in
    equilibrium, we should be
  987. consistent and believe in
    equilibrium throughout.
  988. So this brings us to a new idea
    and the new idea is going to
  989. have two parts to it.
    The first part is kind of on
  990. the board already.
    It's something we talked about
  991. informally.
    It's the notion of a sub-game.
  992. What's a sub-game?
    It's a game within a game.
  993. I've been using that informally
    but we need to start thinking
  994. about, more formally,
    what it means.
  995. So I talked about it informally.
    I said that green object is the
  996. game that would be played were
    Player 1 to choose B.
  997. We talked about other sub-games
    in this class.
  998. We talked about the sub-game
    that would happen in the entry
  999. game if one of those rival pizza
    companies moved in,
  1000. in the Miami market or
    something, the game within a
  1001. game.
    When we talked about the Tour
  1002. de France we talked about there
    being a game within a game that
  1003. is about when you break away.
    But now I want to be formal
  1004. about this notion of a game
    within a game and introduce some
  1005. nomenclature.
    So the formal definition is
  1006. this.
    A sub-game is a part of a game,
  1007. informally that looks like a
    game within the tree,
  1008. and it has three properties.
  1009. It satisfies the following
    three properties.
1010. So, one: since it looks like a game
itself, the sub-game must start
  1011. from a particular point.
    So it starts--the sub-game must
  1012. start--it starts from a single
    node.
  1013. Let's just look at the example.
    In the example we just looked
  1014. at, the sub-game started from
    this node here.
  1015. Second, it comprises all
    successors to that node.
  1016. So in our example,
    here's our sub-game.
  1017. Here's our green sub-game.
    Here's the node it starts from.
  1018. Here are all the nodes that are
    successors of that node:
  1019. these are the children,
    and these are the
  1020. grandchildren.
    If you have this grandparent
  1021. node you have to have all of his
    children, and all of his
  1022. grandchildren.
    So it comprises all the
  1023. successors of that node.
    And finally--this is
  1024. important--it does not break up
    any information sets.
  1025. So a sub-game,
    informally, it's just a game
  1026. within the game.
    But slightly more formally,
  1027. I can't put one node that's
    part of an information set into
  1028. this sub-game unless I'm going
    to put all the nodes that are
  1029. part of that information set
    into the sub-game.
  1030. Let's have a look at some
    examples.
  1031. We've got one example up there,
    the entry game looked something
  1032. like this.
    So what are the sub-games here,
  1033. no secrets here,
    this is a sub-game.
  1034. There's actually another
    sub-game, can anyone see what
  1035. the other sub-game is?
    The whole game is a sub-game.
  1036. The whole game is itself a
    sub-game, somewhat trivially.
  1037. So this particular game,
    which is the schematic of the
  1038. entry game, it has actually two
    sub-games but only one proper
  1039. sub-game.
    Here's a more complicated
  1040. example.
  1041. This is actually going to be
    quite a complicated example just
  1042. to make life interesting.
    So 1 is going to move,
  1043. then 2 is going to move,
    and then 1 is going to move
  1044. again.
    This is all one big information
  1045. set for Player 1.
    And 1 is going to move like
  1046. this.
    So again, without payoffs,
  1047. this is a little tree and the
    key point here is this is an
  1048. information set.
    Let's look-- let's stare--at
  1049. this tree a second and figure
    out what are and aren't
  1050. sub-games.
    So first of all,
  1051. this was a sub-game and this
    was a sub-game.
  1052. What about this thing here?
    Is that a sub-game?
  1053. It's not a sub-game.
    What rule does it break?
  1054. It's breaking up an information
    set, so that's no good because
  1055. of rule three.
    What about this thing?
  1056. That doesn't break up an
    information set.
  1057. I've got the whole information
    set in there.
  1058. Is that any good?
    No, that's no good because it
  1059. doesn't start from a singleton
    node, so it
  1060. violates rule 1.
    If we do this,
  1061. we look at this piece.
    That piece there,
  1062. that's also no good.
    Why is that no good?
  1063. Again it breaks up an
    information set,
  1064. so this is no good again
    because of rule 3.
  1065. So you can practice at home
    drawing trees and trying to
  1066. identify what are and what are
    not sub-games.
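The three rules also lend themselves to a quick sketch in code. This is a minimal illustration, not anything from the lecture: the tree, the node names, and the `is_subgame` helper are all hypothetical, and the information sets are chosen just to trigger each rule.

```python
def descendants(tree, node):
    """All nodes reachable from `node`, including `node` itself."""
    out = {node}
    for child in tree.get(node, []):
        out |= descendants(tree, child)
    return out

def is_subgame(tree, info_sets, root):
    """Check the lecture's rules for the piece rooted at `root`.
    Rule 2 (take all successors) is automatic here, because we
    always take every descendant of the root."""
    nodes = descendants(tree, root)
    for iset in info_sets:
        # Rule 1: the starting node must be a singleton, i.e.
        # alone in its information set.
        if root in iset and len(iset) > 1:
            return False
        # Rule 3: any information set the piece touches must be
        # contained in it entirely -- no set may be broken up.
        touched = iset & nodes
        if touched and touched != iset:
            return False
    return True

# A small tree: Player 1 moves at the root, Player 2 moves at a
# and b, and the information set {d, e} straddles the two branches.
tree = {"root": ["a", "b"], "a": ["c", "d"], "b": ["e", "f"]}
info_sets = [{"root"}, {"a"}, {"b"}, {"c"}, {"d", "e"}, {"f"}]

print(is_subgame(tree, info_sets, "root"))  # True: the whole game
print(is_subgame(tree, info_sets, "a"))     # False: splits {d, e} (rule 3)
print(is_subgame(tree, info_sets, "d"))     # False: not a singleton (rule 1)
```

The straddling set {d, e} is what rules out the pieces rooted at a and at b, mirroring the rule-3 violations just discussed.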
  1067. So with the definition of a
    sub-game now formal--it's
  1068. basically just formalizing
    something we've talked about
  1069. before which is the idea of a
    game within the game--I want to
  1070. introduce our new--what's going
    to be our new solution concept.
  1071. And this is going to be the
    solution concept we're going to
  1072. use essentially until the
    final.
  1073. Definition, so just remember
    what our task is.
  1074. Our task is to come up with a
    solution concept that picks up
  1075. the idea from the first half of
    the semester,
  1076. namely Nash equilibrium,
    but does so in a way that
  1077. respects what we've learned in
    the second half of the semester,
  1078. namely that games have
    sequential elements and people
  1079. move by backward induction.
    So, in particular,
  1080. what we want to rule out are
    those Nash equilibria that
  1081. instruct players down the tree
    to play in sub-games according
  1082. to strategies that are not Nash
    equilibria.
  1083. Say it again,
    we want to rule out those Nash
  1084. equilibria that instruct people
    way down the tree to play
  1085. according to something which is
    not a Nash equilibrium.
  1086. We want our new notion to say:
    wherever you find yourself in a
  1087. tree, play Nash equilibrium and
    that's exactly what the
  1088. definition's going to say.
    So a Nash equilibrium,
  1089. S1*, S2* all the way up to SM*
    is a sub-game perfect
  1090. equilibrium--so that's an
    SPE--it's a sub-game perfect
  1091. equilibrium if it induces a Nash
    equilibrium in every sub-game of
  1092. the game.
    So to be a sub-game perfect
  1093. equilibrium, it has to itself be
    a Nash equilibrium of course,
  1094. but it also has to instruct
    players to play a Nash
  1095. equilibrium in every sub-game.
    Let's take that immediately
  1096. back to our examples.
    In this example,
  1097. we know that this is a
    sub-game.
  1098. We know that,
    in this sub-game,
  1099. there is only one Nash
    equilibrium;
  1100. and that Nash equilibrium
    involves Player 2 choosing down
  1101. and Player 3 choosing right.
    So we know that Player 2 is
  1102. going to choose down according
    to that equilibrium,
  1103. and Player 3 is going to choose
    right according to that
  1104. equilibrium.
    So if we now have to look for
  1105. an equilibrium of the whole
    game--let's go back to Player
  1106. 1's choice--Player 1 if they
    choose A will get 1,
    if they choose B then they know
    that this Nash equilibrium will
  1108. be played so they'll get 2.
    They prefer 2 to 1,
  1109. so the sub-game perfect
    equilibrium here is Player 1
  1110. chooses B, Player 2 chooses
    down, and Player 3 chooses
  1111. right.
    This is an equilibrium of the
  1112. game and it induces an
    equilibrium in the sub-game.
  1113. So in that example,
    the sub-game perfect
  1114. equilibrium is found by first of
    all looking in the sub-game;
  1115. find the equilibrium in the
    sub-game;
  1116. and then go back and look at
    the equilibrium in the whole
  1117. game.
    The equilibrium that we end up
  1118. with, it is a Nash equilibrium
    in the whole game,
  1119. but more importantly,
    it induces a Nash equilibrium
  1120. in the sub-game.
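That two-step procedure can be written as a tiny computation. A minimal sketch, assuming only the payoffs quoted above for Player 1 (1 from A, and 2 from B once the sub-game's unique Nash equilibrium is played); the variable names are hypothetical.

```python
# Step 1: solve the sub-game that follows B. Its unique Nash
# equilibrium is Player 2 choosing down and Player 3 choosing
# right, and it is worth 2 to Player 1 (numbers from the lecture).
subgame_equilibrium = {"player2": "down", "player3": "right"}
value_to_p1 = {"A": 1, "B": 2}

# Step 2: fold back to the whole game, where Player 1 compares
# 1 from A against 2 from B and picks the larger.
p1_choice = max(value_to_p1, key=value_to_p1.get)

# The sub-game perfect equilibrium assembles both steps.
spe = {"player1": p1_choice, **subgame_equilibrium}
print(spe)  # {'player1': 'B', 'player2': 'down', 'player3': 'right'}
```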
    Let's just go back to our other
  1121. example then I'll stop.
    So our other example was here.
  1122. Here was our other example.
    And we claimed,
  1123. hang on everybody,
    we claimed that the good
  1124. equilibrium here,
    the one we believed in was
  1125. (Down, down, right).
    Where are the sub-games in this
  1126. game?
    Where are the sub-games in this
  1127. tree?
    Anybody?
  1128. So I claim there's only one
    proper sub-game here and that's
  1129. this piece.
    This is a sub-game.
  1130. What's the Nash equilibrium of
    this somewhat trivial sub-game?
  1131. The Nash equilibrium of this
    somewhat trivial sub-game is
  1132. that Player 1 must choose down.
    So for a Nash equilibrium to be
  1133. a sub-game perfect
    equilibrium--here are three Nash
  1134. equilibria: one,
    two, three.
  1135. For this Nash equilibrium to be
    a sub-game perfect equilibrium,
  1136. it's got to instruct Player 1
    to choose down in the trivial
  1137. sub-game and here it is.
    This is our sub-game perfect
  1138. equilibrium in this game.
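That selection step can be sketched as a filter, writing a profile as (Player 1's first move, Player 1's move in the trivial sub-game, Player 2's move). Only (Down, down, right) is taken from the lecture; the other two profiles below are hypothetical stand-ins for the equilibria read off the board.

```python
# Three candidate Nash equilibria of the whole game; only the
# third is from the lecture, the other two are made up here.
nash_equilibria = [
    ("Up", "up", "left"),
    ("Up", "up", "right"),
    ("Down", "down", "right"),
]

# Sub-game perfection: the profile must instruct Player 1 to
# choose down in the trivial sub-game, its only Nash equilibrium.
spe = [eq for eq in nash_equilibria if eq[1] == "down"]
print(spe)  # [('Down', 'down', 'right')]
```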
    Now I know today was a lot of
  1139. formal stuff,
    a lot of new ideas,
  1140. and when we come back on Monday
    we'll first of all give you a
  1141. game that refreshes these ideas
    and then we'll go straight to
  1142. applications.
    So trust me:
  1143. there will be applications.
    It will be useful.
  1144. See you on Monday.
    There's a homework to come in,
  1145. there's another on the web
    going out.