
15. Backward induction: chess, strategies, and credible threats


Showing Revision 1 created 07/19/2012 by Amara Bot.

Professor Ben Polak: So last time we finished up by playing the game of Nim and you'll remember, I hope, that the game of Nim was a game where there were two piles of stones--we made do with lines on the board--and the winner was the person who picked up the last stone. Remember you had to pick piles. I want to use the game of Nim to make a transition here.

So what we had pointed out about the game of Nim was that it illustrated very nicely for us that games can have first mover advantages or they can have second mover advantages. A very small change in the game, essentially a very small change in where we started from, could switch a game from a game with a first mover advantage to a game with a second mover advantage.

Now today, I want to draw a slightly grander lesson out of that game. So not only was it the case that the game sometimes has first mover advantages and sometimes has second mover advantages, but moreover, we could tell when it had a first mover advantage and we could tell when it had a second mover advantage. Is that right? When we actually looked at the initial set up of those stones, we knew immediately that's a game in which Player 1 is going to win, or alternatively, we knew immediately that's a game in which Player 2 is going to win. Now it turns out that that idea is very general and actually has a name attached to it, and that name is Zermelo.
So today we'll start off by talking about a theorem due to a guy called Zermelo, and the idea of this theorem is this. We're going to look at games more general than just Nim, and we're going to ask the question, under what circumstances would you know about a game either that Player 1, the person who goes first, can force a win, or that Player 2 can force a win, or--we'll allow a third possibility--that it's going to be a tie.

So here's the theorem. Suppose there are two players in this game, like the games that we looked at last time, and suppose--I won't define this formally now--but suppose the game is a game of perfect information. So what do I mean by perfect information? I'll define this later on in the class, but for now all I mean is that whenever a player has his turn to move, that player knows exactly what has happened prior in the game. So, for example, all these sequential move games we've been looking at are games of perfect information. When I get to move I know exactly what you did yesterday, I know what I did the day before yesterday and so on. So it's a game of perfect information.

I'm going to assume that the game has a finite number of nodes. So two things here: it can't go on forever, this game, and also there's no point at which it branches in an infinite way. So there's a finite number of nodes. And we'll assume that the game has three possible outcomes. Actually there's a more general version of this theorem, but this will do for now. The three possible outcomes are either a win for Player 1, so I'll call it W_1, or a loss for Player 1, which is obviously a win for Player 2, or a tie. So the game--like Nim last time, that only had two outcomes. So here we're going to allow up to three outcomes--three or fewer outcomes, I should say.

So these are the conditions, and here's the result. So I said three; it could be three but it could also be two here, I'm just allowing for three. (One is trivial.) Under these conditions the following is true. Either Player 1 can force a win--so either it's the case that this game is a game in which, if Player 1 plays as well as they can, they're going to win the game no matter what Player 2 does. Or 1 can at least force a tie, which means Player 1 can play in such a way that they can assure themselves of a tie regardless of what Player 2 does. Or it could be a game in which 2 can force a loss on 1, so a win for 2.

So this theorem, when you first look at it, doesn't seem to say very much. You're staring at this thing and you might think, we already knew that we're looking at games that only have three possible outcomes: win, loss, or tie. So it doesn't seem so surprising if you look at this theorem; it says, well, you're going to end up with a win, or a loss, or a tie. However, that's not quite what the theorem says. The theorem says, not only are you going to end up there--we knew that already--but the games divide themselves. Games of these forms divide themselves into those games in which Player 1 has a way of winning regardless of what Player 2 does; or games in which Player 1 has a way of forcing a tie regardless of what Player 2 does; or games in which Player 2 has a way of winning regardless of what Player 1 does. So these games all have a solution.
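The trichotomy the theorem claims can be checked mechanically on small trees. Here is a minimal sketch (not from the lecture; the encoding is mine): a tree is either a terminal outcome "W"/"T"/"L", written from Player 1's point of view, or a pair (player, subtrees) for the player whose turn it is, and backward induction assigns every such finite game exactly one of the three values.

```python
# A minimal backward-induction solver for finite perfect-information games
# with outcomes labeled from Player 1's point of view.

ORDER = ["W", "T", "L"]  # Player 1's preference, best first

def solve(tree):
    """Return the game's solution: "W", "T", or "L" under best play."""
    if isinstance(tree, str):
        return tree                             # terminal node: outcome is fixed
    player, subtrees = tree
    values = [solve(s) for s in subtrees]
    if player == 1:
        return min(values, key=ORDER.index)     # 1 picks the best value for 1
    return max(values, key=ORDER.index)         # 2 picks the worst value for 1

# A game where Player 1 moves, then Player 2: best play here ends in a tie.
example = (1, [(2, ["W", "T"]), (2, ["L", "T"])])
```

The solver returns one of the three labels for every finite tree, which is exactly the theorem's claim that each such game falls into one of the three categories.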
Let's just go back to Nim to illustrate the point. So in Nim actually there's no tie, so we can forget the middle of these, and in Nim, under certain circumstances it is the case that Player 1 can force a win. Who remembers what the case was, for when Player 1 can force a win? Anybody? The people who played last time. No, yes, Ale, there's somebody here. Shout it out.

Student: Ensuring that the piles are equal for the other player.

Professor Ben Polak: All right, so if the piles start unequal, then Player 1 can actually force a win. So in Nim, if the piles are unequal at the start, then 1 can force a win. It really doesn't matter what 2 does, 2 is "toast." Or the alternative case is the piles are equal at the start, and if they're equal at the start then it's Player 1 who's "toast." Player 2 is going to--rather--force a loss on 1, so 2 can force a loss on 1, i.e., a win for Player 2. Does everyone remember that from last time? It's just before the weekend, it shouldn't be so long ago. So this theorem applies to all games of this form.
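The student's answer can also be brute-forced. A sketch (my encoding, not the lecture's): in two-pile Nim where taking the last stone wins, exhaustive search confirms that the player to move wins exactly when the piles are unequal, which is what the equalize-the-piles strategy delivers.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(a, b):
    """True if the player about to move can force a win from piles (a, b).
    Rules: remove any positive number of stones from one pile; last stone wins."""
    if a == 0 and b == 0:
        return False                     # the previous mover took the last stone
    moves = [(a - k, b) for k in range(1, a + 1)]
    moves += [(a, b - k) for k in range(1, b + 1)]
    return any(not mover_wins(x, y) for x, y in moves)

# The lecture's claim: unequal piles are a first-mover win, equal piles a loss.
```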
So what games are of this form? Let's try and think of some other examples. So one example is tic-tac-toe. Everyone know the rules of tic-tac-toe? In England we call it Noughts and Crosses, but you guys call it tic-tac-toe, is that right? Everyone know what tic-tac-toe is? Yeah? Which category is tic-tac-toe? Is it a game in which Player 1 can force a win, or is it a category in which Player 1 can only force a tie, or is it a category in which you'd rather go second, and Player 2 can force a win for Player 2, or a loss for Player 1? Which is tic-tac-toe? Let's have a show of hands here. Who thinks tic-tac-toe is a game in which Player 1 can force a win? Who thinks tic-tac-toe is a game in which Player 1 can only force a tie? Who thinks Player 2's going to win? Most of you are right. It's a game in which, if people play correctly, then it'll turn out to be a tie. So tic-tac-toe is a game that leads to a tie. Player 1 can still make a mistake, in which case they can lose. Player 2 can make a mistake, in which case they would lose, but there is a way of playing that forces a tie. So these are fairly simple games; let's talk about more complicated games.
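The show of hands can be verified by exhausting the whole game. A sketch (my own encoding): minimax over every reachable board confirms that with best play by both sides, tic-tac-toe is a tie.

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
         (0, 4, 8), (2, 4, 6)]                # diagonals

def winner(board):
    """Return "X" or "O" if that side has completed a line, else None."""
    for i, j, k in LINES:
        if board[i] != "." and board[i] == board[j] == board[k]:
            return board[i]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Best-play outcome: +1 if X wins, -1 if O wins, 0 for a tie."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0                              # full board, no line: a tie
    nxt = "O" if player == "X" else "X"
    vals = [value(board[:i] + player + board[i + 1:], nxt)
            for i, c in enumerate(board) if c == "."]
    return max(vals) if player == "X" else min(vals)
```

`value("." * 9, "X")` evaluates the empty board with X to move; the result is 0, the tie the class voted for.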
So what about the game of checkers? So the game of checkers meets these conditions. It's a two player game. You always know all the moves prior to your move. It's finite: there are some rules in checkers that prevent it going on forever. And there are two or three outcomes: I guess there's a third outcome, if you get into a cycle you could tie. So checkers fits all these descriptions, and what this theorem tells us is that checkers has a solution. I'm not sure I know what that solution is--I think actually somebody did compute it quite recently, even in the last few months; I just forgot to Google it this morning to remind myself. But what this theorem told us even before those people computed it: checkers has a solution.
Let's be a bit more ambitious. What about chess? So chess meets this description. Chess is a two player game, everybody knows all the moves before them, it's sequential, it has a finite number of moves--it's a very large number but it is finite--and it has three possible outcomes: a win, a loss, or a tie. So let's be careful: the reason it's finite is that if you cycle--I forget what it is--three times, then the game is declared a draw, declared a tie. So what's this theorem telling us? It's telling us that there is a way to solve chess. Chess has a solution. We don't know what that solution is. It could be that the solution is that Player 1, who's the player with the white pieces, can force a win. It could be that Player 1 can only force a tie, and it could even be that Player 2 can force a win. We don't know which it is, but there is a solution. There's a catch to this theorem. What's the catch? The catch is that this theorem is not going to tell us what that solution is. It doesn't tell us how to play. This theorem, in and of itself, doesn't tell us how to play chess. It just says there is a way to play chess.
So we're going to try and prove this. We don't often do proofs in class, but the reason I want to prove this particular one is because I think the proof is instructive as part of, sort of, QR at Yale. So here's another example here, and some other examples. You could think of chess as being the most dramatic example. So the reason I want to spend some time proving this today is because we're going to prove it by induction, and I'm going to sketch the proof. I'm not going to go through every possible step, but I want to give people here a feeling for what a proof by induction looks like. The reason for this is, my guess is--well, let's find out: how many of you have seen a proof by induction before? How many have not? So for those of you who haven't, I think it's a good thing in life, at one point in your life, to see a proof by induction. And for those who have, my guess is you saw it in some awful math class in high school and it just went over--it didn't necessarily go over your head, but the excitement of it doesn't catch on. This is a context where it might appeal.
So, proof by induction. We're going to prove this by induction on the maximum length of the game, and we'll call that N. We'll call N the maximum length of the game. What do I mean by this? If I write down a tree I can always look at all the paths from the beginning of the tree all the way through to the end of the tree. And I'm going to look at the path in that particular tree that has the largest length, and I'll call that the length of the game, the maximum length of the game. So we're going to do induction on the maximum length of the game.

So how do we start a proof by induction? Let's just remind ourselves, those people who have seen them before. We're going to prove that this theorem is true--the claim in the theorem is true--for the easy case when the game is only 1 move long. That's the first step, and then we're going to try and show that if it's true for all games of length <= N, whatever N is, then it must therefore be true for games of length N+1. That's the way you do a proof by induction. So let's just see how that works in practice. So let's start with the easy step, and we'll do it in some detail, more detail than you really need to in a math class.
So if N = 1, what do these games look like? Well, let's look at some examples here. So I claim it's pretty trivial if N = 1, but let's do it anyway. So the game might look like this. Player 1 is going to move--here's Player 1 moving--the game's only length one, so Player 1's the only player who ever gets to move. So the game might look like this. Let's put in a fifth branch; it might look like this. And at the end of this game, rather than putting payoffs, let me just put the outcomes. So the outcome must either be a win, or a tie, or a loss. So suppose it looks like this. Suppose it's a win, or here we could have a tie, or here we could have a win again, or here we could have a loss, and here we could have a tie.

So in this particular game I claim it's pretty clear this game has a solution. It's pretty clear what 1 should do. What should 1 do? 1 should pick one of her choices that leads to a win. To be careful I'll put 1 in here just to distinguish who it is who's actually winning and losing. So in this game I claim that this game has a pretty obvious solution: either Player 1 is going to choose this branch that leads to a win or this branch that leads to a win, and either way Player 1 is going to win. Is that obvious? It's so obvious, it's kind of painful. So I claim in this game we could actually replace this first node with what's going to happen, which is Player 1's going to win. Is that right? That was easy.
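The length-1 case just solved on the board amounts to one line of logic. A trivial sketch (labels mine): with one move left, Player 1 simply picks the best outcome on offer, so the game's solution is its best leaf.

```python
def solve_one_move(outcomes):
    """Solution of a length-1 game: Player 1 picks her best available outcome."""
    for best in ("W", "T", "L"):        # Player 1's preference order
        if best in outcomes:
            return best
    raise ValueError("a game needs at least one outcome")

# The five-branch example from the board: two wins, two ties, one loss.
```

The same function also covers the other base-case boards drawn below: no win but a tie available gives "T", and all losses give "L".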
Let's look at a different example; we'll do three. So here's another possible example, and in this one again Player 1 is going to move here, and this time the possible outcomes are a tie, or a loss, or a loss. Once again it's a one player game; this time he has three choices, and the three choices lead to a tie or he loses. So I claim once again this is simple. What Player 1 should do is what? Choose the tie, since there's no way of winning. But in that case he could actually force it. He could actually choose the tie, in which case he's going to tie. So this game has a solution called choose the tie. And again, we could replace the first node of the game with what's actually going to happen, which is a tie.
Everyone happy? There's one other possibility, I guess. The other possibility is that here Player 1 has a lot of choices, maybe four choices in this case. Once again Player 1 is going to move, but in each case--unfortunately for Player 1--in each case the outcome is that Player 1 loses. So this is a game in which Player 1 is going to lose no matter what they do. Once again it has a solution: the solution is Player 1 is toast and is going to lose.

So I claim this has really captured all the possibilities of games of length 1. I mean, you could imagine that there could be more branches on them, but basically there are these three possibilities. Either it's the case that one of those branches leads to a win, in which case the solution is Player 1 wins; or it's the case that none of the branches have wins in them, but one of them has a tie, in which case Player 1 is going to tie; or it's the case that all the branches have losses in them, in which case Player 1's going to lose. So I claim I'm done for games of length 1. Everyone okay? So that step is deceptively easy, but there it is. All right, now we do the inductive step, the big step.
What we're going to do is we're going to assume the statement of the theorem--which I've now hidden unfortunately; let me unhide it if I can, it's not going to be easy. This now visible? So now let's assume that the statement of this theorem is true for all games of length <= N. Suppose that's the case. Suppose the claim is true for all games of this type, of length <= some N. What we need to show--what we're going to try and show--is that therefore it will be true for all games of length N + 1. What we want to show is--we claim--that therefore it will be true for games of length N + 1. That's the key step in inductive proofs. All right, so how are we going to do that? Well, let's have a look at a game of length N + 1, and obviously I can't use thousands here, so let me choose from relatively small numbers, but the idea will go through.
So let's have a look at a game. So here's a game in which Player 1 moves first, Player 2 moves second. Let's suppose that if Player 2 does this, then 1 only has a little choice here; up here, then, 1 has a more complicated choice, and perhaps 2 could move again. So this is quite a complicated little game up here, and down here let's make it a little bit less complicated. Here the game looks like this and then comes to an end, so the tree looks like this. I haven't put the end--I haven't put the outcomes on it--but the tree looks like this.

So how long is this tree, first of all? Well, I claim that the longest path in this tree, from the beginning to the end, has four steps. Let's just check. So one, two, three, four: that's what I claim is the longest path. Any path going down this way only has three moves in it, so this is a game of length four. So we can apply our claim: let's assume that the theorem holds for all trees of length three or fewer, so that N + 1 is 4 for the purpose of our example. So in this example--I don't want to put it here, let's put it here--in this example N = 3, so that N + 1 = 4. Everyone happy with this in the example?

Now what I want to observe about this is the following: contained in this N + 1 game, the game of length 4, are some smaller games. You could think of them as sub-games. They're games within the game, is that right? So let's have a look. I claim, in particular--I'll draw around them in pink--there's a game here that follows Player 1 choosing Up and there's a game here that follows Player 1 choosing Down, is that right? Okay, so these are both games, and what do we have here? So this little game in here--the game in the top circle, the game that follows Player 1 moving Up--that's a game and it has a length. So this little thing is a game, and I'm going to call it a sub-game, some jargon that we'll be getting used to later on in the semester.
It's a game within the game, but it's basically just a game: it's a sub-game, and this is a sub-game following 1 choosing Up--let's put Up in here--following Up. And I claim that this game has a length--this sub-game has a length. So we knew that we started from a game of length 4; we've taken one of the moves, and let's just make sure this is actually a game of length 3. So the game started here; this would be a game of length 3 because you could go one, two, three moves, is that right? So this is a game of length 3. So this is a sub-game following 1 choosing Up and it, the sub-game, has length 3.

Down here, this is also a sub-game; it's a game that follows Player 1 choosing Down, and this little sub-game has length--now here we have to be a little bit careful--you might think that since we started from a game of length 4, after the first move we must be in a game of length 3. But actually that's not true, and if we look carefully, we'll notice that this game down here, the game starting here, actually isn't of length 3. It's of length 2. Is that right? So even though we started from a game of length 4, by going this way we put ourselves into a game of length 2. Is that right? So 1, 2 or 1, 2--whatever--it has length 2.

Okay, all right, so in this example N is 3 and N + 1 is 4, and our assumption, our induction assumption, is what? We've assumed that the claim of the theorem holds for all games of length <= N, which in this case means <= 3.
So what does that tell us? That tells us, by our assumption, that this game, which I've put a pink circle around on the top--this is a game of length 3--this game has a solution. It must have a solution because it's of length 3 or less. So by the induction hypothesis, as it's called--by the induction hypothesis--this is a technical term, but what's the induction hypothesis? It's this thing. By the assumption that we made, this game has a solution. So it's a game of length 3, and 3 <= N in this case. So this game must have a solution. Now, I don't immediately know by staring at it what it is, but let's suppose it was W, say it was W.

And the game down below, this is also a game, and it's a game of length 2, but 2 is less than 3 as well, 2 <= 3. So this game also has a solution. This game also, by the same assumption, has a solution, so its solution is, say--I don't know--maybe it's L. So what does it mean to say that these games have solutions--in this case we're going to assume W is this one and L is this one--what does it mean? It means that, just as we did with the games up there, we could put the solution at the beginning of the game. We know that if we get into this game, we're going to get the solution of this game; and we know if we get into this game, we're going to get the solution of this game. So we can replace this one with its solution, which by assumption was W, and this one by its solution, which by assumption was L. And here I want to be really careful; I need to keep track of which person it is it's a win or loss for. So this was a win for Player 1, hence it was a loss for Player 2; and this was a loss for Player 1, hence it was a win for Player 2. So these games each have some solution, and in this case I've written down the solutions as W or L. So now what could we do?
Let me just bring down this board again. I can translate this game into a very simple game. I'm going to translate it up here; so this game can be translated as follows. Player 1 moves; if he goes Up then he hits a W, and if he goes Down he hits an L. So in this particular example, Player 1 is effectively choosing between going Up and finding himself in a game which has a solution--and the solution is he wins it--or going Down and finding himself in a game which has a solution--and the solution is he's toast. But that's what? That's a one move game. So we know this has a solution, and in particular, he's going to choose Up. This one has a solution. This has a solution; it is a game of length 1.

So what did we do? Somewhat schematically, we took a game of length N + 1--in this case that was 4--and we pointed out that once Player 1 has made her first move in this game, we are in a game, or in a sub-game if you like, that has length less than 4: it could be 3, it could be 2, whatever. Whatever sub-game we're in, by assumption, that game has a solution. So effectively, Player 1 is choosing between going into a game with solution W or going into a game with solution L. And if there were 15 other sub-games here, each one would have a solution, and for each one Player 1 would be able to associate that solution with what he's going to get--what she's going to get. So I claim that, if it in fact is true that each of these sub-games of length 3 or less has a solution, then the game of length 4 must have a solution, which is what? It's the solution which is: Player 1 should pick the best sub-game.

Are people convinced by that step? People are looking slightly "deer in the headlamps" now, so let me just say it again. We started by assuming that all games of length 3 or less, or N or less, have a solution. We pointed out that any game that has length N + 1 can be thought of as an initial move by Player 1 followed by a game of length N or less--N or fewer, I should say. Each of those games of N or fewer steps has a solution, so Player 1 is just going to choose the game that has the best solution for her, and we're done. In this particular example, Player 1 is going to choose Up, and therefore the solution to this parent game is: Player 1 wins.
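The step just described--replace each sub-game by its solution, leaving Player 1 a one-move game--can be checked directly. A self-contained sketch (the tree encoding and labels are mine, not the lecture's): solving the parent via its sub-games' solutions agrees with solving it outright.

```python
ORDER = ["W", "T", "L"]                 # Player 1's preference, best first

def solve(tree):
    """Backward-induction value of a tree: leaves are "W"/"T"/"L",
    interior nodes are (player, subtrees)."""
    if isinstance(tree, str):
        return tree
    player, subtrees = tree
    vals = [solve(s) for s in subtrees]
    return min(vals, key=ORDER.index) if player == 1 else max(vals, key=ORDER.index)

def solve_via_subgames(tree):
    """Replace each sub-game by its solution, then solve the one-move game left."""
    player, subtrees = tree
    return solve((player, [solve(s) for s in subtrees]))

# Up leads to a sub-game solved as W, Down to one solved as L: parent is a win.
parent = (1, [(2, [(1, ["W", "W"]), "W"]),     # the sub-game after Up
              (2, ["L", (1, ["L", "L"])])])    # the sub-game after Down
```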
Now, the hard step, I think, in proofs by induction is realizing that you're done. So I claim we're now done. Why are we done? Well, we know that it was pretty easy that all games of length 1 have a solution. That was pretty trivial. And we've shown that if any game of length N or fewer has a solution, then games of length N + 1 have a solution. So now let's see how we can proceed. We know that games of length 1 have a solution, so let's consider games of length 2. Do games of length 2 have a solution? Well, let's set N = 1. We know that games of length 1 have a solution, and any game of length 2 can be thought of as an initial step followed by a game of length 1--but that has a solution, so therefore games of length 2 have a solution. Now let's think of games of length 3. We've shown that games of length 1 have a solution and games of length 2 have a solution. Any game of length 3 can be thought of as a game in which there's an initial step followed by either a game of length 1 or a game of length 2, each of which has a solution; so once again a game of length 3 has a solution, and so on. So proofs by induction work by building up on the length of the game, and we end up knowing that we're done. For those people who have never seen a proof by induction, don't worry, I'm not going to test you on a proof by induction on the exam; I just wanted you to see one once.
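The unrolling just described--length 1, then length 2, then length 3--can be made literal in code. A sketch (encoding mine, using the same leaf/(player, subtrees) trees as before): collect every sub-game, sort by length, and solve the shortest first, so each solution only ever draws on already-solved shorter sub-games, exactly as the induction proceeds.

```python
ORDER = ["W", "T", "L"]                 # Player 1's preference, best first

def length(tree):
    """Maximum length of a game: longest path from its start to an outcome."""
    return 0 if isinstance(tree, str) else 1 + max(length(s) for s in tree[1])

def solve_by_length(tree):
    """Solve all sub-games of length 1, then length 2, and so on, bottom up."""
    if isinstance(tree, str):
        return tree
    nodes, stack = [], [tree]
    while stack:                        # collect every sub-game in the tree
        t = stack.pop()
        if not isinstance(t, str):
            nodes.append(t)
            stack.extend(t[1])
    solved = {}                         # id(sub-game) -> its solution
    for t in sorted(nodes, key=length): # shortest sub-games first
        player, subs = t
        vals = [s if isinstance(s, str) else solved[id(s)] for s in subs]
        pick = min if player == 1 else max
        solved[id(t)] = pick(vals, key=ORDER.index)
    return solved[id(tree)]
```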
Let's all take a deep breath and try and digest this a little bit by playing a game. I'll leave it up there so you can stare at it. We'll want it down again later. Let's try and actually play a game and see if we can actually throw any light on this. So I'm going to pick out volunteers like I did last time, but first of all, let me tell you what the game is.

So once again I'm going to have rocks in this game, and once again I'm going to approximate those rocks with marks on the blackboard. So here's an example. The example is this. The game involves an array of rocks; here the array has five rows and three columns. So the rocks are not arranged as they were before, in piles, but rather in a sort of pretty array. I should have made it more even, but this is: one, two, three, four, five rows; and one, two, three columns, at least in this example. But in general there's an array of rocks, N x M. We're going to play sequentially with the players. And the way the game is going to work is this: when it's your turn to move, you have to point to one of these rocks, and whichever rock you point to I will remove. I will delete that rock and all the rocks that lie above or to the right of it: all the rocks that lie to the northeast of it. So for example, if you pointed to this rock, then I will remove this rock and also this one, this one, and this one. If the next person comes along and chooses this one, then I will remove this one, and also that one. Okay, everyone understand? All right, and the game is lost--this is important--the game is lost by the person who ends up taking the last rock. The loser is the person who ends up removing the last rock.
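These rules describe the game known as Chomp, and small boards can be solved by brute force. A sketch (encoding mine): a position is a tuple of row lengths, bottom row first; pointing at the rock in row r, column c deletes it and everything to its northeast, and whoever removes the last rock loses. Exhaustive search confirms that on the 5-by-3 board from the blackboard, the first mover can force a win.

```python
from functools import lru_cache

def point_at(rows, r, c):
    """Delete the rock at (row r, column c) and everything northeast of it:
    every row from r upward is truncated to length c."""
    return tuple(min(n, c) if i >= r else n for i, n in enumerate(rows))

@lru_cache(maxsize=None)
def mover_wins(rows):
    """True if the player about to move can avoid ever taking the last rock."""
    rows = tuple(n for n in rows if n > 0)
    for r, n in enumerate(rows):
        for c in range(n):
            after = point_at(rows, r, c)
            if any(after) and not mover_wins(after):
                return True             # leaves the opponent in a losing spot
    return False                        # every move takes the last rock or loses
```

On the single-rock board the mover must take the last rock and lose; on any larger rectangle the first mover wins, which the solver confirms for the boards small enough to search.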
  520. Could I have some volunteers?
    Can I volunteer people then?
  521. How about the guy with the
    white t-shirt with the yellow on
  522. there?
    Okay, come on up front,
  523. and who else can I volunteer.
    Everyone's looking away from
  524. me, desperately looking away
    from me.
  525. How about the guy with the Yale
    sweatshirt on,
  526. the Yale football sweatshirt
    on, okay.
  527. So come on up.
    Your name is?
  528. I should get a mike,
    let me get a mike up here,
  529. thank you.
    Great, your name is?
  530. Student: Noah.
    Professor Ben Polak:
  531. Noah and your name is?
    Student: Koran.
  532. Professor Ben Polak: Say
    it into the mike.
  533. Student: Koran.
    Professor Ben Polak:
  534. Koran okay.
    So Noah and Koran are our
  535. players.
    Everyone understand the rules?
  536. Do you understand the rules?
    Yeah?
  537. So why don't we make Noah go
    first.
  538. So Noah which rock are you
    going to remove?
  539. Remember the last rock loses.
    Student: That one.
  540. Professor Ben Polak:
    That one.
  541. Okay, so Noah chose this one.
    So I'm going to remove this one
  542. and the one above it.
    Student: So all the
  543. rocks to the north and east are
    deleted, right?
  544. Professor Ben Polak: All
    the rocks to the north and east
  545. of it are deleted.
    That means you--last rock loses.
  546. Okay good.
    Student: That one.
  547. Professor Ben Polak:
    That one okay.
  548. So now we have an L shape.
    Student: That one.
  549. Professor Ben Polak:
    That one, all right.
  550. Student: That one.
    Professor Ben Polak:
  551. That one okay,
    okay.
  552. All right so we're done.
    Okay, everyone understand how
  553. that worked?
    Round of applause for our
  554. players please,
    thank you.
  555. Let me get two more volunteers
    now that everyone has seen it.
  556. These guys had the hard job,
    they had to figure it out cold.
  557. Two more volunteers?
    Oh I don't have to pick
  558. volunteers--this is Yale,
    somebody volunteer.
  559. Are you volunteering?
    Great.
  560. I have one volunteer.
    You can pick an opponent if you
  561. like.
    There you go thank you.
  562. Your name is,
    let's just wait until the mikes
  563. are here.
    So your name is?
  564. Student: Peter.
    Professor Ben Polak:
    Peter, and your name is?
    Student: Justin.
  566. Professor Ben Polak: Let
    me put a new array up here.
  567. So let's put a new array up
    here and this time let's make
  568. it--it doesn't have to be odd.
    So let's make it this time a
  569. little bit more complicated.
    So we'll have five--let's have
  570. four rows and five columns this
    time.
  571. Okay so last time I had five
    rows and three columns,
  572. and this time I have four rows
    and five columns.
  573. Away you go,
    let me move out of the way.
  574. Why don't you stand a bit
    nearer to the board--make it a
  575. little easier for people to see
    what's going on,
  576. there you go.
    So Peter you want to go first?
  577. Student: Sure, that one.
    Professor Ben Polak:
  578. That one okay.
    So this one goes,
  579. which means all of these go as
    well.
  580. Anyone got any advice from the
    floor?
  581. Shout things out, not too loud.
  582. Student: I'll try going
    here.
  583. Professor Ben Polak:
    There.
  584. Okay we're back to our L shape
    again here.
  585. Now people can see what's going
    on all right.
  586. Student: I'll go here
    and speed it up.
  587. Professor Ben Polak: All
    right thank you.
  588. So a round of applause again.
    Thank you guys.
  589. So here's what I claim about
    this game.
  590. First, and this isn't a formal
    claim, this is a lot harder than
  591. the game we played last time,
    right?
  592. Everyone is convinced by that
    all right.
  593. So here's going to be an
    exercise, or something that's a
  594. challenge.
    I may even put it formally on
  595. the problem set this week,
    but let me just say if I do put
  596. this on the problem set this
    week,
  597. I don't expect you all to solve
    it.
  598. So if I do put it on the
    problem set, it's an optional
  599. problem.
    But here's the challenge,
  600. the challenge is,
    I know from Zermelo's theorem
  601. that no matter what N is,
    and no matter what M is,
  602. this game has a solution.
    So Zermelo's theorem tells us
  603. this game has a solution.
    Now, notice it could have a
  604. different solution depending on
    the N and the M,
  605. depending on how many rows and
    how many columns,
    is that right?
    So depending on N and M,
  607. each such game has a
    solution--is that right--which
  608. could depend on N and on M,
    on the number of rows and
  609. columns.
    So just as in Nim,
  610. it depended on how those piles
    started out.
  611. So what's going to be the
    homework assignment?
  612. Find the solution.
    Homework: what is the solution?
  613. So I claim, let me give you a
    hint, I claim it is useful to
  614. remember that there is a
    solution.
  615. It turns out to be useful to
    remember that.
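Zermelo's theorem says each N x M version of this game has a determinate winner, and for small boards brute-force backward induction can find out who it is. The sketch below is mine, not the lecture's, and it deliberately stops short of the closed-form homework answer; the board is again a set of (row, column) rock positions.

```python
from functools import lru_cache

def chomp(board, pick):
    """Remove the chosen rock and everything above or to the right of it."""
    r0, c0 = pick
    return frozenset((r, c) for (r, c) in board if not (r >= r0 and c >= c0))

@lru_cache(maxsize=None)
def first_mover_wins(board):
    """True if the player to move can force a win (taking the last rock loses)."""
    for pick in board:
        rest = chomp(board, pick)
        if rest and not first_mover_wins(rest):
            return True   # this pick leaves the opponent a losing position
    return False          # every pick loses, including taking the last rock

board = frozenset((r, c) for r in range(5) for c in range(3))
print(first_mover_wins(board))  # → True for the 5 x 3 game played in class
```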
  616. That wasn't much of a hint.
    You knew that already,
  617. but let me just emphasize that.
    Okay, so I want to do a bit of
  618. a transition now away from these
    games like chess and like
  619. checkers,
    and like this game with the
  620. rocks or Nim,
    and I want to be a little bit
  621. formal for a while.
    So one thing that we haven't
  622. done for a while,
    actually since the mid-term,
  623. is write down some formal
    definitions.
  624. So I'm going to do that now.
    I want at least one of these
  625. boards back.
  626. So as I warned very early in
    the class, there are some points
  627. of the day when we have to stop
    and do some work and the next
  628. twenty minutes or so is such a
    point.
  629. So what is this?
    This is formal stuff.
  630. The first thing I want to do is
    I want to give you a formal
  631. definition of something I've
    already mentioned today and
  632. that's the idea of perfect
    information.
  633. What we've been looking at,
    or at least since the mid-term,
  634. are games of perfect
    information.
  635. So a game of perfect
    information is one in which at
  636. every node, or at each node in
    the game the player whose turn
  637. it is to move at that node knows
    which node she is at.
  638. So it's a very simple idea.
    Every game we've seen since the
  639. mid-term has this property.
    When you're playing this game
  640. you know where you are.
    So you know which node you're
  641. at, you know what your choices
    are and what that means
  642. implicitly is she must know how
    she got there.
  643. So the whole history of the
    game is observed by everybody.
  644. When I get to move I know what
    you did yesterday,
  645. I know what I did the day
    before yesterday,
  646. and if I'm playing with a third
    person I know what they did the
  647. day before that.
    So a very simple idea but it
  648. turns out to be an idea that
    actually allows us to use things
  649. like backward induction very
    simply.
  650. Now so far all we've been doing
    is thinking about such games,
  651. games like perfect competition,
    sorry games like quantity
  652. competition between firms or
    games like the games where we
  653. were setting up incentives or
    games like Nim,
  654. and we've basically been
    solving these games by backward
  655. induction, rather informally.
    What I want to add in now is
  656. the notion of a strategy in
    these games.
  657. So when we had simultaneous
    move games, strategies were
  658. really unproblematic.
    It was obvious what strategies
  659. were.
    But when we have games which
  660. are sequential,
    that go on over a period of
  661. time,
    and information is emerging
  662. over a period of time,
    we need to be a little careful
  663. what we mean by a strategy.
    So what I want to do is I want
  664. to define a pure strategy at
    least as follows.
  665. So a pure strategy for Player i
    in a game of perfect
  666. information--so in the games
    we've been talking about,
  667. games which we can represent by
    trees is what?
  668. It's a complete plan of action.
    So in other words,
  669. what it does is it specifies
    which action I should
  670. take--let's not make it should
    take--let's say will
  671. take,
    at each of i's nodes,
  672. each of i's decision nodes.
    So when you first read this
  673. definition it seems completely
    uninteresting and unproblematic.
  674. A pure strategy for Player i
    just tells you in this game
  675. whenever you might be called
    upon to move,
  676. it tells you how you're going
    to move.
  677. So if you're going to move
    three times in the game,
  678. it tells you how you're going
    to move the first time;
  679. it tells you how you're going
    to move the second time;
  680. and it tells you how you're
    going to move the third time.
  681. So, so far none of this seems
    difficult but actually it's a
  682. bit more difficult than it
    looks.
  683. So let's have a look.
  684. What I want to do is I want to
    look at an example,
  685. and this example I'm hoping is
    going to illustrate some
  686. subtleties about this
    definition.
  687. So here's an example.
  688. In this example Player 1 moves
    first and they can choose in or
  689. out, or if you like let's call
    it up or down since that's the
  690. way we've drawn it.
    So they can choose down or
  691. up--and I'll use capital letters
    U and D.
  692. If Player 1 chooses U then Player
    2 gets to move and Player 2 can
  693. choose L or R.
    If Player 1 moves U followed by
  694. Player 2 choosing L,
    then Player 1 gets to move,
  695. in which case Player 1 could
    move up or down and I'll use
  696. little letters this time,
    u and d.
  697. Let's put some payoffs in this
    game.
  698. So the payoffs are 1,0 here,
    0,2 here, 3,1 here and 2,4
  699. here.
    So on paper--it's on the
  700. board--when you first look at
    it, this is a perfectly simple
  701. game.
    It's just like many of the
  702. games we've been looking at
    since the mid-term.
  703. It has a tree.
    We could analyze it by backward
  704. induction, and in a moment we
    will analyze it by backward
  705. induction.
    But what I want to do first is
  706. I want to say what are the
    strategies in this game?
  707. So let's start with Player 2
    since it's easier.
  708. Player 2's strategies here are
    what?
  709. Pretty simple:
    player 2 only has one decision
  710. node--here's Player 2's decision
    node--and the strategy has to
  711. tell Player 2 what he's going to
    do at that decision node.
  712. So there's only two choices,
    L or R.
  713. So Player 2's strategies are
    either L or R.
  714. Notice, however,
    already one slight subtlety
  715. here.
    Player 2 may never get to make
  716. this choice.
    So Player 2 is choosing L or R,
  717. but Player 2 may not ever get
    to make this choice.
  718. In particular,
    if 1 chose D it's really
  719. irrelevant whether Player 2 has
    chosen L or R.
  720. Nevertheless,
    the strategy has to specify
  721. what Player 2 would do were he
    called upon to make that move.
  722. Now let's look at Player 1's
    strategies.
  723. So what are Player 1's
    strategies in this game?
  724. Any takers?
    Anyone want to guess?
  725. So I claim it's tempting,
    but it turns out to be wrong,
  726. to say that Player 1 has three
    strategies here.
  727. It's tempting to say either
    Player 1 moves D,
  728. in which case we're done,
    or Player 1 moves U,
  729. in which case at some later
    date she may be called upon to
  730. choose u or d again.
    So it's very tempting to think
  731. that Player 1 has just three
    strategies here,
  732. D in which case we're done,
    or U followed by either u or d,
  733. if she's reached that point.
    But actually,
  734. if we follow this definition
    carefully, we notice that Player
  735. 1 actually has four strategies
    here.
  736. So let me say what they are and
    you'll see why it's a little
  737. odd.
    So here's one of the strategies
  738. we talked about,
    U followed by u,
  739. and here's another one we
    talked about,
  740. U followed by d,
    but I claim there are two
  741. others, D followed by u and D
    followed by d.
  742. So what's a little weird about
    this?
  743. What's weird is we know
    perfectly well that if Player 1
  744. chooses D she's not going to get
    to make the choice,
  745. the second choice u or d.
    But nevertheless the strategy
  746. has to tell us what she would do
    at every node,
  747. regardless of whether that node
    actually is reached by that
  748. strategy.
    Let me say it again,
  749. the strategy has to tell you
    what Player 1 would do at that
  750. node,
    regardless of whether that node
  751. is actually reached by playing
    that strategy.
  752. So a little weird.
    There's a bit of redundancy
  753. here.
    It's a bit redundant.
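The definition can be made concrete in code: a pure strategy assigns one action to each of the player's decision nodes, so the strategy set is just the product of the action sets. The node labels below are mine, for this example only.

```python
from itertools import product

# Player 1 has two decision nodes; Player 2 has one.
p1_nodes = {"first node": ["U", "D"], "second node": ["u", "d"]}
p2_nodes = {"only node": ["L", "R"]}

def pure_strategies(nodes):
    """One action per decision node: the product of the nodes' action sets."""
    return [dict(zip(nodes, actions)) for actions in product(*nodes.values())]

for s in pure_strategies(p1_nodes):
    print(s)
# Prints four strategies: Uu, Ud, Du, Dd. The 'redundant' Du and Dd appear
# because a strategy must specify the second choice even though D ends the game.
```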
  754. Now why?
    Well we'll see why in a second.
  755. Let's first of all consider how
    this game will be played
  756. according to backward induction.
    So how do we play this game
  757. according to backward induction?
    Where do we start?
  758. Shouldn't be a trick question,
    where do we start the game
  759. according to backward induction?
    At the end okay.
  760. So the end here is Player 1 and
    Player 1 will choose here to go
  761. d, is that right?
    Because 3 is bigger than 2.
  762. So if we roll back to Player
    2's move, Player 2 if she
  763. chooses L she knows that she'll
    be followed by Player 1 going d,
  764. in which case she'll get 1,
    but if she chooses R she'll get
  765. 2.
    So Player 2 will choose R.
  766. So Player 1 here,
    if she chooses U then Player 2
  767. will choose R and Player 1 will
    get 0, and if she chooses D
  768. she'll get 1,
    so Player 1 will choose D.
  769. So, backward induction suggests
    that Player 1 will choose D;
  770. and followed by Player 2--if
    Player 2 did get to move--she'd
  771. choose R;
    followed by Player 1--if she
  772. did get to move again--choosing
    d again.
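The rollback just walked through can be written out as a short program. The tree encoding is mine: each node names the player to move and maps actions either to a payoff pair or to the next node.

```python
tree = {
    "P1 first":  (1, {"D": (1, 0), "U": "P2"}),
    "P2":        (2, {"R": (0, 2), "L": "P1 second"}),
    "P1 second": (1, {"u": (2, 4), "d": (3, 1)}),
}

def solve(name, choices):
    """Backward induction: return the payoffs from node `name`, recording a
    choice at EVERY node visited, whether or not it is reached in play."""
    player, moves = tree[name]
    best = None
    for action, nxt in moves.items():
        payoffs = nxt if isinstance(nxt, tuple) else solve(nxt, choices)
        if best is None or payoffs[player - 1] > best[1][player - 1]:
            best = (action, payoffs)
    choices[name] = best[0]
    return best[1]

choices = {}
outcome = solve("P1 first", choices)
print(outcome, choices)
# → (1, 0) {'P1 second': 'd', 'P2': 'R', 'P1 first': 'D'}
```

Note that the plan specifies d and R at nodes that choosing D makes unreachable: exactly the redundancy in the strategy definition.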
  773. Notice that backward induction
    had to tell us what Player 2
  774. would have done had she got to
    move.
  775. Backward induction had to
    consider what Player 2 would
  776. think that Player 1 would have
    done were Player 1 to get to
  777. move again.
    So to do backward induction,
  778. Player 1 has to ask herself
    what Player 2 is going to do;
  779. and Player 2 has to therefore
    ask herself what Player 1 would
  780. have done;
    which means Player 1 has to ask
  781. herself what Player 2 thinks
    Player 1 would have done;
  782. which means we actually needed
    to say what Player 1 did over
  783. here.
    Let me say that again.
  784. So backward induction here
    tells us that Player 1 chooses D
  785. in which case the game is over.
    But to do backward induction,
  786. to think through backward
    induction, we really needed to
  787. consider not just what Player 2
    was going to do were he to get
  788. to move,
    but also what Player 2 would
  789. think Player 1 would do were
    Player 2 to get to move and were
  790. he to choose L.
    So all of these redundant moves
  791. were actually part of our
    backward induction.
  792. We need them there so we can
    think about what people are
  793. thinking.
    So backward induction tells us
  794. that the outcome of the game is
    D, but it tells in terms of
  795. strategies,
    Player 1 chooses D and,
  796. were she to get to move again,
    would choose D again.
  797. And Player 2 chooses R.
    Now let's analyze this game a
  798. totally different way.
    We now know what the strategies
  799. are in the game:
    Player 2 has two strategies,
  800. L and R;
    and Player 1 has four
  801. strategies: Uu,
    Ud, Du and Dd.
  802. So let's write up the matrix
    for this game,
  803. so here it is.
    It must be a 4 x 2 matrix,
  804. Player 2 is choosing between L
    and R and Player 1 is choosing
  805. between her four strategies Uu,
    Ud, Du and Dd and we can put
  806. all the payoffs in.
    So (Uu,L) gets us here which is
  807. (2,4).
    (Uu,R) gets us here which is
  808. (0,2).
    (Ud,L) gets us here which is
  809. (3,1).
    (Ud,R) gets us here again which
  810. is (0,2).
    And (Du,L) gets us,
  811. going right out of the game
    immediately, so we're going to
  812. go to (1,0).
    And in fact that's also true
  813. for (Du,R), and it's also true
    for (Dd,L) and it's also true
    for (Dd,R).
    So all of these four strategies
  815. at the bottom started off by
    Player 1 choosing D,
  816. in which case the game is over.
    So once again in this matrix
  817. you can kind of see the
    redundancy I was talking about.
  818. In this matrix,
    the third row,
  819. the Du row looks the same as
    the Dd row.
  820. Everyone happy with that?
    So when we've had matrices in
  821. the past how have we solved the
    game?
  822. Well we've been doing this
    backward induction stuff for a
  823. couple of weeks,
    but,
  824. prior to the mid-term--if I had
    given you this on the mid-term
  825. what would you have done?
    You'd look for a Nash
  826. Equilibrium right?
    So let's do that:
  827. let's look for Nash
    Equilibrium.
  828. So if Player 2 chooses L then
    Player 1's best response is Uu;
  829. and if Player 2 chooses R,
    then Player 1's best response
  830. is either Du or Dd.
    Conversely, if Player 1 chooses
  831. Uu then Player 2's best response
    is L.
  832. If Player 1 chooses Ud then
    Player 2's best response is R.
  833. If Player 1 chooses Du then
    Player 2 doesn't care because
  834. they're getting 0 anyway;
    and if Player 1 chooses Dd then
  835. again Player 2 doesn't care
    because they're getting 0
  836. anyway.
    So the Nash Equilibria in this
  837. game are what?
    So one of them is here,
  838. so that's one Nash Equilibrium.
    One of them is Dd followed by
  839. R, and that's a Nash Equilibrium
    we actually found by backward
  840. induction.
    The Nash Equilibrium (Dd,R)
  841. corresponds to the one we found
    by backward induction.
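The mechanical equilibrium search can be sketched directly on the 4 x 2 matrix just written down (row player is Player 1, column player is Player 2). This is my code, not the lecture's: it simply checks each cell for a profitable one-player deviation.

```python
from itertools import product

# Payoffs (Player 1, Player 2) from the tree, one cell per strategy profile.
payoffs = {
    ("Uu", "L"): (2, 4), ("Uu", "R"): (0, 2),
    ("Ud", "L"): (3, 1), ("Ud", "R"): (0, 2),
    ("Du", "L"): (1, 0), ("Du", "R"): (1, 0),
    ("Dd", "L"): (1, 0), ("Dd", "R"): (1, 0),
}
rows = ["Uu", "Ud", "Du", "Dd"]
cols = ["L", "R"]

def is_nash(r, c):
    """Neither player can gain by deviating alone from (r, c)."""
    u1, u2 = payoffs[(r, c)]
    no_row_dev = all(payoffs[(r2, c)][0] <= u1 for r2 in rows)
    no_col_dev = all(payoffs[(r, c2)][1] <= u2 for c2 in cols)
    return no_row_dev and no_col_dev

print([rc for rc in product(rows, cols) if is_nash(*rc)])
# → [('Du', 'R'), ('Dd', 'R')]
```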
  842. But there's another Nash
    Equilibrium in this game.
  843. The other Nash Equilibrium in
    this game is this one.
  844. That's also a Nash Equilibrium.
    What does it involve?
  845. It involves Du
    and R.
  846. So what happened in that
    equilibrium?
  847. Player 1 went down which made
    everything else kind of
  848. irrelevant.
    Player 2 played R and Player 1
  849. in their plan of action was
    actually going to choose u.
  850. Say it again,
    Player 1 in fact went D,
  851. Player 2 had they got to move
    would have chosen R and Player 1
  852. had they got to move at this
    second time would have chosen u.
  853. So how can that be a Nash
    Equilibrium?
  854. It doesn't correspond to
    backward induction.
  855. In particular,
    Player 1 up here is choosing a
  856. strategy that seems silly.
    They're choosing u rather than
  857. d which gets them 2 rather than
    3.
  858. So how could it possibly be
    that that's an equilibrium
  859. strategy?
    The reason is--the reason it
  860. can be an equilibrium strategy
    is it doesn't really matter what
  861. Player 1 chooses up here from
    Player 1's point of view because
  862. it's never going to be reached
    anyway.
  863. As long as Player 2 is going to
    choose R here,
  864. it really doesn't matter what
    Player 1 decides to do up there.
  865. From Player 1's point of view,
    it doesn't make a difference.
  866. It isn't reached anyway.
    So we're highlighting here a
  867. danger, and the danger is this.
    If you just mechanically find
  868. the Nash Equilibria in a game,
    you're going to find people
  869. choosing actions that,
    if they ever were called upon
  870. to make them,
    are silly.
  871. Say it again,
    if you just mechanically find
  872. the Nash Equilibria in the game,
    just as we did here,
  873. you're going to select some
    actions, in this case an action
  874. by Player 1,
    that were she called upon to
  875. take that action would be a
    silly action.
  876. The reason it's surviving our
    analysis is because in fact she
  877. isn't called upon to make it.
    Now to make that more concrete,
  878. let's take this to an economic
    example, this same idea.
  879. So what I want you to imagine
    is a market, and in this market
  880. there is a monopolist who
    controls the market,
  881. but there's an entrant who is
    thinking of entering into this
  882. market.
    So right now this market has a
  883. monopoly in it,
    and the monopolist is an
  884. incumbent--let's call him
    Inc--and an entrant who is
  885. trying to decide whether or not
    to enter the market or whether
  886. to stay out.
    If they stay out then the
  887. entrant gets nothing and the
    incumbent continues to get
  888. monopoly profits.
    So think of this as 3 million a
  889. year.
    If the entrant enters then the
  890. incumbent can do one of two
    things.
  891. The incumbent could choose to
    fight the entrant,
  892. by which I mean charge low
    prices, advertise a lot,
  893. try and drive him out of the
    market.
  894. He could play very
    competitively,
  895. in which case the entrant will
    actually make losses,
  896. and the incumbent will drive
    her profits down to zero.
  897. Conversely, the incumbent could
    choose not to fight.
  898. If they don't fight then
    they'll simply share the market
  899. and they'll both make,
    let's say Cournot profits or
  900. duopoly profits of a million
    each.
  901. So in this game,
    let's just say again,
  902. there's a market there.
    The monopolist is in the market
  903. and the monopolist is making 3
    million a year which seems
  904. pretty nice for the monopolist.
    The entrant is trying to decide
  905. whether to invade this market.
    If she invades this market,
  906. if she's fought she's in
    trouble, but if the monopolist
  907. accommodates her,
    then they just go to duopoly
  908. profits and the entrant does
    very well, gets a million
  909. dollars in profit.
    Let's have a look at this game.
  910. Analyze--first of all this
    time, let's analyze it by Nash
  911. Equilibrium.
    So I claim in this game,
  912. this is pretty simple:
    the entrant has two strategies.
  913. I'll put it straight into the
    matrix.
  914. The entrant has two strategies,
    they're either to go in or to
  915. stay out;
    and the incumbent has two
  916. strategies, they are either to
    fight the entrant or not to
  917. fight.
    The payoffs of this game are as
  918. follows--let's put them in.
    In and fight was (-1,0),
  919. in and not fight was (1,1) and
    out led to the incumbents
  920. maintaining the monopoly
    profits.
  921. Let's have a look at the Nash
    Equilibria of this game.
  922. So the first thing to do is to
    look at the best responses for
  923. the entrant.
    So if the incumbent chooses
  924. fight, then the entrant's best
    response is to stay out.
  925. If the incumbent chooses not
    fight, then the entrant's best
  926. response is to enter.
    How about for the incumbent?
  927. If the entrant chooses to come
    in then the incumbent's best
  928. response is not to fight because
    1 is bigger than 0.
  929. But if the entrant stays out it
    really doesn't matter what the
  930. incumbent does,
    they'll get 3 either way.
  931. So the Nash Equilibria here are
    either in followed by not
  932. fighting, you end up with a
    duopoly, or out followed by
  933. fighting.
    Of course the fight doesn't
  934. take place: out followed by "we
    would have fought had you
  935. entered."
    So these are the Nash
  936. Equilibria.
    Let's analyze this game by
  937. backward induction.
    From the incumbent's point of
  938. view, if the incumbent gets to
    move then is she going to choose
  939. fight or not fight?
    She's going to choose not
  940. fight, so the entrant should do
    what?
  941. Somebody?
    The entrant should enter,
  942. because if she enters she gets
    1, if she stays out she gets 0.
  943. So backward induction just
    gives us this equilibrium.
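Both analyses of the entry game fit in a few lines. A sketch of mine, with payoffs written (entrant, incumbent) as on the board:

```python
payoffs = {
    ("In",  "Fight"):     (-1, 0),
    ("In",  "Not fight"): (1, 1),
    ("Out", "Fight"):     (0, 3),
    ("Out", "Not fight"): (0, 3),
}
entrant_moves = ["In", "Out"]
incumbent_moves = ["Fight", "Not fight"]

# Nash equilibria: neither player can gain by deviating alone.
nash = [
    (e, i) for e in entrant_moves for i in incumbent_moves
    if all(payoffs[(e2, i)][0] <= payoffs[(e, i)][0] for e2 in entrant_moves)
    and all(payoffs[(e, i2)][1] <= payoffs[(e, i)][1] for i2 in incumbent_moves)
]
print(nash)  # → [('In', 'Not fight'), ('Out', 'Fight')]

# Backward induction: solve the incumbent's choice at her node first,
# then the entrant's choice given that answer.
incumbent_best = max(incumbent_moves, key=lambda i: payoffs[("In", i)][1])
entrant_best = max(entrant_moves, key=lambda e: payoffs[(e, incumbent_best)][0])
print(entrant_best, incumbent_best)  # → In Not fight
```

Backward induction keeps only one of the two Nash equilibria; the other rests on the incumbent's non-credible threat to fight.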
  944. Now the question is what do we
    think is going on with this
  945. other equilibrium?
    Let's talk it through.
  946. So here you are,
    you're about to enter the
  947. market, you leave Yale,
    you set up your business,
  948. and your business is
    challenging some monopolist or
  949. quasi-monopolist like Microsoft
  949. say.
  950. And you go out there and you're
    about to put out your new
  951. operating system.
    And you know that your new
  952. operating system will make
    plenty of money provided
  953. Microsoft doesn't retaliate and
    drop its prices by half and
  954. advertise you out of the market.
    So what does Bill Gates do?
  955. Bill Gates is the head of
    Microsoft.
  956. He says, wait a second before
    you graduate from Yale and go
  957. set up this company,
    let me just tell you I'm going
  958. to fight.
    If you enter I'm going to fight.
  959. And if you believe him,
    if you believe that he's going
  960. to fight, what should you do?
    You're not going to bother to
  961. enter his market.
    And if you believe that you're
  962. going to get fought by Bill
    Gates then you believe that if
  963. you enter you're going to make
    losses,
  964. so you choose to stay out.
    And this threat that Bill Gates
  965. makes here it doesn't cost him
    anything, provided you stay out,
  966. he doesn't actually have to
    fight at all.
  967. That's why it is in fact an
    equilibrium to imagine you
  968. staying out believing that
    you're going to get fought if
  969. you enter,
    and it is a best response for
  970. the guy to fight knowing that
    you're going to stay out.
  971. But this doesn't sound right.
    Why doesn't this sound right?
  972. It doesn't sound right because
    you really shouldn't believe the
  973. guy who says he's going to fight
    you when you know that if you
  974. did enter fighting would cost
    the incumbent money.
  975. If you did enter,
    if he fights you he gets 0,
  976. if he doesn't fight he gets a
    million dollars.
  977. So this threat that the guy is
    going to fight you,
  978. this threat is not credible.
    This is an equilibrium but it
  979. relies on believing an
    incredible threat.
  980. It's true that if you think
    about entering the market that
  981. Microsoft is in,
    you're very likely to get a
  982. little email from Bill Gates.
    (But it won't be an email
  983. because that can be taken to
    Court, but some little
  984. threatening remark in your ear
    from Bill Gates.) It's true that if
  985. you entered into the market that
    the people who build aircraft are
  986. in,
    the head of Boeing might pay
  987. you a call one day or send
    someone round,
  988. but these threats are not
    credible threats.
  989. Is that right?
    They're not credible threats
  990. because we know that if you did
    enter, it isn't in their
  991. interest to fight.
    They're making a lot of noise
  992. about it but you know that if
    you did enter,
  993. backward induction tells us
    they're not going to fight.
  994. But now we're in a slightly
    odd situation,
  995. why are we in an odd situation?
    Two things.
  996. One, we seem to be finding that
    there are Nash Equilibria in
  997. games--we found one here and one
    up here--there are Nash
  998. Equilibria that are not
    supported by backward induction.
  999. That's the first reason we're a
    little worried here,
  1000. and the second reason we're
    worried is even the economics of
  1001. this doesn't quite smell right.
    For example,
  1002. if you in fact did
    enter--sorry,
  1003. if you did in fact announce you
    were going to operate,
  1004. you were going to build a new
    operating system and got a
  1005. threatening phone call from Bill
    Gates--there might be a reason
  1006. why you might actually believe
    that call.
  1007. Why might you believe that call?
    Can we get a mike out,
  1008. I'll do it.
    So why might you believe a call
  1009. from Bill Gates saying he's
    going to beat you up--not beat
  1010. you up but he's going to charge
    low prices if you produce an
  1011. operating system?
    Student: If you look at
  1012. the future revenue streams,
    like for the next ten years and
  1013. he consistently gets 1 million
    dollars for the next ten years,
  1014. he would be better off if he
    just drove you out of the market
  1015. for the first year and then get
    3 million dollars for the next
  1016. nine years.
    Professor Ben Polak:
  1017. That wasn't quite what I was
    thinking of.
  1018. Okay, it's true if we cheat a
    bit and make one of the--Okay,
  1019. so what you're saying is I
    could get 3 forever versus 1
  1020. forever,
    assume these are both present
  1021. discounted values of future cash
    earnings.
  1022. So we've done the future cash
    flow analysis of this.
  1023. So this isn't an accounting
    mistake, not for accounting
  1024. reasons.
    There's some other reason when
  1025. Gates threatens you,
    you might want to believe it.
  1026. Let's try up here first.
    Student: He can afford
  1027. to make an example of you so no
    other people will invade.
  1028. Professor Ben Polak:
    Right, so one thing that's in
  1029. Bill Gates' mind is right now,
    he's thinking about competing
  1030. with you but down the road
    there's a lot more of you guys
  1031. coming--whatever it is,
    280 of you in this room and
  1032. there's another 280 in next
    year's class and so on.
  1033. And Bill Gates knows that each
    of you might come out and
  1034. threaten his monopoly,
    his Microsoft monopoly.
  1035. And he might think that by
    setting an example to one of you
  1036. he might be able to keep the
    others out.
  1037. And somewhere that kind of
    argument, the argument of making
  1038. an example of somebody is
    missing here:
  1039. missing completely.
    But it's got to be important,
  1040. it's got to be an important
    idea.
  1041. So we're going to come back and
    spend the first half of
  1042. Wednesday's class picking up
    just precisely that idea.