Intelligent Citizens of the World

  • 0:00 - 0:06
    Hi, welcome back. In this lecture, I want
    to talk a little bit more about how using
  • 0:06 - 0:09
    models can help make you a more
    intelligent citizen of the world. And so,
  • 0:09 - 0:13
we're gonna break this down into a set
of sub-reasons about why models
  • 0:13 - 0:17
    make you better able to engage in all the
    things that are going on in this modern,
  • 0:17 - 0:21
    complex world in which we live. Okay, so.
    When we think about models, they're
  • 0:21 - 0:25
simplifications. They're abstractions. So
there's a sense in which
  • 0:25 - 0:28
    they're wrong. There's a famous quote by
    George Box, where he says, "All models are
  • 0:28 - 0:32
    wrong." And that's true, right? They are.
    "But some are useful." And that's gonna be
  • 0:32 - 0:36
    a mantra that comes up throughout this
    course. These models are gonna be
  • 0:36 - 0:39
    abstractions, they're gonna be
simplifications, but they're gonna be useful to
  • 0:39 - 0:42
    us. They're gonna help us do things in
better ways, okay? So, in a sense,
• 0:42 - 0:46
right? And this is a big thing in this
    course. Models are the new lingua franca.
  • 0:46 - 0:50
    They're the language of not only the
    academy, you know, which I talked about
  • 0:50 - 0:53
    some in the last lecture, but they're the
    language of business. They're the language
  • 0:53 - 0:57
    of politics. They're the language of the
    nonprofit world. Wherever you go, where
  • 0:57 - 1:01
people are trying to do good, make
    money, cure disease, whatever it is that
  • 1:01 - 1:04
    they wanna do, right? You're gonna find
    that people are using models to enable
  • 1:04 - 1:08
    them to be better at what their purpose
    is. Okay? That's why they've really become
  • 1:08 - 1:12
    the new lingua franca. So, if you think
    back. Remember, I talked about this in the
  • 1:12 - 1:15
    first lecture. The whole idea of having a
    great books movement was that there was
  • 1:15 - 1:20
this set of ideas that any person should
know. So within the hundred or so great
  • 1:20 - 1:23
    books, there were thousands of ideas. And
Mortimer Adler and Robert Hutchins,
• 1:23 - 1:27
President of the University of Chicago, had
this thing that they wrote called the
• 1:27 - 1:31
Syntopicon, which was a list, right, as they put
this together. This was kind of a list of
  • 1:31 - 1:35
    sort of all the ideas that someone should
    know, that an intelligent person should
  • 1:35 - 1:39
    know. So what are those ideas? So one of
    those ideas was to tie yourself to the
  • 1:39 - 1:44
    mast. And this comes from the Odyssey, you
know, where the ship is going past the
  • 1:44 - 1:48
sirens, and he wants to hear the sirens'
beautiful song. So what he does is he
  • 1:48 - 1:52
has his crew tie him to the mast, so he
can listen
• 1:52 - 1:56
to them but pre-commit to not driving his
boat over to the sirens. At the same
• 1:56 - 2:00
time, he puts wax in the ears of his crew
    so they also won't be, you know,
  • 2:00 - 2:04
    encouraged to sort of drive the boat over
    there. Well this is an idea that recurs in
  • 2:04 - 2:08
history, when we think about Cortez
    burning his ships, right, so his men won't
  • 2:08 - 2:13
you know, retreat; they'll continue to
    advance. So this idea to tie yourself to
  • 2:13 - 2:16
the mast is a really worthwhile thing. But
here's the problem. One of my
• 2:16 - 2:20
favorite websites is a website of
opposite proverbs. So on this website,
  • 2:20 - 2:24
    it says things like he who hesitates is
    lost, a stitch in time saves nine, or
  • 2:24 - 2:28
    two heads are better than one, too many
cooks spoil the broth. So you've got
• 2:28 - 2:32
this really good advice, something
that probably made it into the Syntopicon,
  • 2:32 - 2:35
    but then you get something that says the
exact opposite. Well, how do we
  • 2:35 - 2:39
    adjudicate between those two things? The
    way we adjudicate between those two things
  • 2:39 - 2:44
    is by constructing models because models
    give us the conditions under which he who
  • 2:44 - 2:48
    hesitates is lost, and then there's the
    conditions under which a stitch in time
  • 2:48 - 2:52
saves nine. So when we talk about
diversity and prediction, we'll see why
  • 2:52 - 2:56
it's the case that two heads are better
    than one, and we'll see why it's the case
  • 2:56 - 3:00
    that too many cooks spoil the broth. So,
    ironically, what models do is they tie us
  • 3:00 - 3:04
to a mast, a mast of logic,
and by tying us to that mast of logic, we
• 3:04 - 3:09
figure out which ways of thinking, which
ideas, are useful to us. Okay?
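For a taste of how a model adjudicates between dueling proverbs, here is a minimal sketch, in Python with made-up numbers, of the diversity prediction theorem that the course returns to later: the crowd's squared error equals the average individual squared error minus the diversity of the predictions, so extra heads help exactly when they disagree.

```python
# Minimal sketch of the diversity prediction theorem (numbers are invented).
# Crowd error = average individual error - diversity of predictions.
predictions = [10.0, 16.0, 25.0]   # hypothetical individual guesses
truth = 18.0                       # hypothetical true value

crowd = sum(predictions) / len(predictions)
crowd_error = (crowd - truth) ** 2
avg_error = sum((p - truth) ** 2 for p in predictions) / len(predictions)
diversity = sum((p - crowd) ** 2 for p in predictions) / len(predictions)

# The identity holds exactly: a diverse crowd ("two heads") beats its
# average member, while zero diversity (groupthink) gives no gain.
assert abs(crowd_error - (avg_error - diversity)) < 1e-9
print(crowd_error, avg_error, diversity)  # 1.0 39.0 38.0
```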
  • 3:09 - 3:13
So, if you look at almost any discipline, whether
it's economics, and here what you see in
  • 3:13 - 3:18
    this diagram, is you see a description of
sort of, this is a utility function for
  • 3:18 - 3:22
an agent. And what that agent is doing is
trying to maximize their payoff, right?
  • 3:22 - 3:26
    So, economists use models all the time.
    Biologists use models, as well. They,
  • 3:26 - 3:30
you know, have models of
    the brain, where they have little axons
  • 3:30 - 3:34
    and dendrites going between the neurons.
    They have models of gene regulatory
  • 3:34 - 3:38
networks. They have models of species, right?
    Things like that. Sociology, we have
  • 3:38 - 3:43
    models, as well, right? So, there's models
of, sort of, how your identity affects
  • 3:43 - 3:46
    your actions, and your behaviors and
    things like that. Okay, in political
  • 3:46 - 3:50
science, we have models. Political science
    these days, this is a picture of a spatial
  • 3:50 - 3:54
    voting model. So they might say candidates
    are a little more conservative on certain
  • 3:54 - 3:57
    dimensions and voters are a little more
    conservative and you say that, well,
  • 3:57 - 4:01
    you're more likely to vote for a candidate
    who takes positions similar to yourself.
  • 4:01 - 4:05
So at the University of Michigan, where I work,
    we have something called the National
  • 4:05 - 4:08
    Election Studies that's run out of there
    where we sort of gather all this data
  • 4:08 - 4:11
    about where politicians are and where
    voters are, and that allows us to make
  • 4:11 - 4:15
    sense of who votes for whom and why. Okay?
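As a minimal sketch of that spatial voting logic (all positions here are hypothetical), each voter simply backs the candidate nearest to them on a left-right axis:

```python
# Minimal sketch of a spatial voting model (positions are hypothetical).
# Each voter votes for the ideologically nearest candidate.
candidates = {"A": -0.4, "B": 0.5}          # positions on a left-right axis
voters = [-0.9, -0.3, 0.2, 0.4, 0.7, 0.8]   # voter ideal points

votes = {name: 0 for name in candidates}
for v in voters:
    nearest = min(candidates, key=lambda c: abs(candidates[c] - v))
    votes[nearest] += 1

print(votes)  # {'A': 2, 'B': 4}
```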
    So models help us understand the decisions
  • 4:15 - 4:19
    people make. Linguistics, right? Here's
    another area, right? So you might think,
  • 4:19 - 4:22
    how can you use models in linguistics?
    Well, this little model here, you see
  • 4:22 - 4:26
things where you see V's and N's and
P's in here, if you look closely. Well, V
• 4:26 - 4:29
stands for verb, N stands for noun, and,
well, S stands for, you
  • 4:29 - 4:32
    know, subject, let's say, right? So you
    can do this: you can ask "What is the
  • 4:32 - 4:36
    structure of a language?" You can ask,
formally and mathematically, what the
  • 4:36 - 4:39
    structure of a language is, and whether
    some languages are more like other
  • 4:39 - 4:43
    languages or not, depending on how people,
    you know, set up their sentences. So in
  • 4:43 - 4:46
German, where they may put all the
adjectives at the end of the sentence,
• 4:46 - 4:50
that looks very different than, let's say,
English. All right.
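To make that concrete, here is a minimal sketch of a toy formal grammar (the rules are illustrative, not any real linguistic analysis): sentence structure written as rewrite rules, with S for sentence, NP and VP for noun and verb phrases, and N and V for nouns and verbs.

```python
# Minimal sketch of a formal grammar (toy rules, for illustration only).
import random

rules = {
    "S":  [["NP", "VP"]],                      # a sentence is NP then VP
    "NP": [["N"]],                             # a noun phrase is just a noun
    "VP": [["V", "NP"]],                       # a verb phrase is verb + NP
    "N":  [["models"], ["ideas"], ["people"]],
    "V":  [["explain"], ["shape"]],
}

def generate(symbol):
    """Expand a symbol using randomly chosen rules until only words remain."""
    if symbol not in rules:
        return [symbol]
    expansion = random.choice(rules[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate("S")))  # e.g. "people shape ideas"
```

Comparing which rule sets fit which languages is one way to ask, formally, how similar two languages are.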
  • 4:50 - 4:54
Even the law. This is a graph from one of my
former graduate students, Dan Katz. Now, he's a law
• 4:54 - 4:58
professor. He's got sort
    of a network model of which Supreme Court
  • 4:58 - 5:02
justices, you know, who they appoint, so,
if someone appoints judges from some
  • 5:02 - 5:06
    other judge. By putting that data that's
    out there in this sort of model-based
  • 5:06 - 5:10
    form, we can begin to understand how
    conservative and how liberal certain
  • 5:10 - 5:14
    judges are. All right? So, there's lots of
    ways to use models, and there's even whole
  • 5:14 - 5:18
    disciplines now, that have evolved, that
    are based entirely on models. So, game
  • 5:18 - 5:22
    theory, which is what I was really trained
    in as a graduate student, is all about
  • 5:22 - 5:26
strategic behavior. It's the
    study of strategic interactions between,
  • 5:26 - 5:30
    you know, individuals, companies, nations.
    Right? And game theory can also be applied
  • 5:30 - 5:33
    to biology, right? So there's all sorts of
stuff, right? When you go
• 5:33 - 5:37
to, you know, college,
    you'll find that there's game theory
  • 5:37 - 5:41
    models of just about anything. Right? So
    it's actually a field based entirely just
  • 5:41 - 5:46
on models. Right?
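For a flavor of what a game theory model looks like, here is a minimal sketch of the standard prisoner's dilemma (the payoffs are the usual textbook illustration): computing each player's best response shows why both defect, even though both cooperating pays more.

```python
# Minimal sketch: a prisoner's dilemma with textbook-style payoffs.
# payoffs[(row_move, col_move)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ["cooperate", "defect"]

def best_response(opponent_move, player):
    """The move that maximizes this player's payoff, given the opponent's move."""
    if player == "row":
        return max(moves, key=lambda m: payoffs[(m, opponent_move)][0])
    return max(moves, key=lambda m: payoffs[(opponent_move, m)][1])

# Defecting is each player's best response to anything, so (defect, defect)
# is the equilibrium, even though (cooperate, cooperate) pays both more.
for m in moves:
    print(m, "->", best_response(m, "row"), best_response(m, "col"))
```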
Why, right? [laugh] Why all these models, right? Why does
  • 5:46 - 5:51
    everything from linguistics, to economics
    to, you know, political science use
  • 5:51 - 5:55
    models? Well, cuz, they're better, right?
    They're just better than we are. So, let
  • 5:55 - 6:00
    me show you a graph, here. This is a graph
    from a book by Phil Tetlock. It's a
  • 6:00 - 6:06
fabulous book. And in this graph, what
he's showing is the accuracy
• 6:06 - 6:10
of some different, let me pull up a pen
here, different ways of predicting. So,
  • 6:10 - 6:14
    what you see on this axis, this
    calibration axis, right here. This is
  • 6:14 - 6:17
showing you how
    accurate a model is. And this axis is
  • 6:17 - 6:21
    saying how discriminating is it, in terms
    of how particular, how fine of predictions
  • 6:21 - 6:26
    is it making. So, instead of saying is it
    hot or cold, it might be saying it's gonna
  • 6:26 - 6:29
    be 90 degrees, or 80 degrees, or 70
    degrees. So this axis here, this up and
  • 6:29 - 6:33
down axis, is discrimination,
and this axis is how
  • 6:33 - 6:37
accurate.
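To pin down what calibration means here, a minimal sketch with made-up forecasts: bin the stated probabilities, then compare each bin's claim against how often the event actually happened.

```python
# Minimal sketch of measuring calibration (forecast data is invented).
# A well-calibrated forecaster's "90% likely" events happen about 90% of the time.
forecasts = [(0.9, 1), (0.9, 1), (0.9, 0), (0.3, 0), (0.3, 1), (0.3, 0)]

bins = {}  # stated probability -> (hits, count)
for prob, outcome in forecasts:
    hits, count = bins.get(prob, (0, 0))
    bins[prob] = (hits + outcome, count + 1)

for prob in sorted(bins):
    hits, count = bins[prob]
    print(f"said {prob:.0%}, happened {hits / count:.0%} of {count} cases")
```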
So, what you see here, down here, are hedgehogs. So, these are people
  • 6:37 - 6:41
    who use a single model. Hedgehogs are not
    very good at predicting. Right? They're
  • 6:41 - 6:45
    terrible at predicting. Up here are people
    he calls foxes. Now, foxes are people who
  • 6:45 - 6:50
    use lots of models. They have sort of lots
    of loose models in their head. And, they
  • 6:50 - 6:54
    do much better at, you know, sort of at
    calibration, a little bit better at
  • 6:54 - 6:58
    discrimination, than individuals. But way
    up here, [laugh] better than anybody, are
  • 6:58 - 7:02
    formal models. Formal models just do
    better than either foxes or hedgehogs. Now
  • 7:02 - 7:06
    [inaudible] how much data is this? Tetlock
    actually had tens of thousands of
  • 7:06 - 7:11
    predictions. So, over a 20-year period, he
    gathered predictions by people. And
  • 7:11 - 7:15
    compared how those people did to models.
    And the answer is models do much, much
  • 7:15 - 7:20
    better. Okay. All right, so. What about
    people, then, who actually make
  • 7:20 - 7:23
    predictions for a living? So, this is a
    picture of Bruce Bueno de Mesquita, who
  • 7:23 - 7:27
    makes predictions about what's gonna
happen in international relations, and
  • 7:27 - 7:30
    he's very good at it. He's so good at it
    that they put his picture on the cover of
  • 7:30 - 7:34
magazines, right? He's at Stanford and
NYU. Chair of the department at NYU, he used
• 7:34 - 7:37
to be, anyway. So, Bruce uses models.
    He's got a very elaborate model that helps
  • 7:37 - 7:40
    him figure out, based on sort of
    bargaining position and interest, what
  • 7:40 - 7:44
    different countries are gonna do. But,
    just like George Box said at the
  • 7:44 - 7:47
    beginning, he doesn't base his decision
    entirely on that model. What the model
  • 7:47 - 7:51
does is give him guidance as to what he
then thinks. So, it's a blending of what
• 7:51 - 7:55
the formal model tells him and what
experience tells him. So smart people
• 7:55 - 8:00
use models, but the models don't tell them
what to do. Okay. Another reason models
  • 8:00 - 8:05
have taken off: yeah, they are better, but
    they're also very fertile. So once you
  • 8:05 - 8:09
learn a model, you know, for one
domain, you can apply it to a whole bunch of
  • 8:09 - 8:12
    other domains, which is fascinating. So
    we're gonna learn something called
  • 8:12 - 8:16
Markov processes, which are models of
    dynamic processes. So they can be used to
  • 8:16 - 8:19
    model things like disease spread and stuff
like that, right? We're gonna
• 8:19 - 8:23
learn, though, that you can also use them,
    this is sorta surprising, to figure out
  • 8:23 - 8:27
who wrote a book. [laugh] And you say,
how does that happen? Well, that happens
• 8:27 - 8:30
because you can think of words, writing a
sentence, as a Markov process. So
  • 8:30 - 8:34
    different authors, right, use different
    sequences of words. Different patterns. So
  • 8:34 - 8:39
    therefore we can use this mathematical
    model that wasn't developed in any way for
  • 8:39 - 8:42
    this purpose to figure out who wrote what
    book, okay? Totally cool. All right.
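Here is a rough sketch of how that authorship trick can work, with toy texts standing in for real books: build each author's word-to-word transition counts from known writing, then see whose transitions best cover a disputed passage.

```python
# Minimal sketch of Markov-style authorship attribution (toy texts, not real data).
from collections import Counter

def transitions(text):
    """Count consecutive word pairs: a first-order Markov model of the text."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

known = {
    "author_a": "the ship sailed on and on the ship sailed home",
    "author_b": "models help us think and models help us decide",
}
disputed = "the ship sailed on"

models = {name: transitions(text) for name, text in known.items()}
pairs = transitions(disputed)

# Score each author by how many of the disputed text's word pairs they share.
scores = {name: sum(min(count, pairs[pair]) for pair, count in model.items())
          for name, model in models.items()}
print(max(scores, key=scores.get), scores)  # author_a {'author_a': 3, 'author_b': 0}
```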
  • 8:42 - 8:47
    Another big reason. Models really make us
    humble. The reason they make us humble is
  • 8:47 - 8:51
    we just have to lay out sort of all the
    logic and then we realize holy cow, I had
  • 8:51 - 8:55
    no idea that this was going to happen,
    right. So often when we construct the
  • 8:55 - 8:59
    model, we're going to get very different
    predictions than what we thought before,
  • 8:59 - 9:03
    right. So if you look at things, here's a
picture of the tulip price graph, right, from
• 9:03 - 9:06
the seventeenth century, when there's, you
• 9:06 - 9:10
know, this big spike in tulip prices. You
    can imagine that people thought that
  • 9:10 - 9:14
    prices were gonna continue to go up and up
    and up. Well, if you had a simple linear
  • 9:14 - 9:18
    model, you might have invested heavily in
tulips, and lost a lot of money.
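As a tiny illustration of that trap (the prices are invented), fit a straight line to a rising series and it will cheerfully extrapolate the boom right past any crash:

```python
# Minimal sketch: naive linear extrapolation of a bubble (made-up prices).
prices = [10, 12, 15, 19, 24, 30]   # steadily rising, looks like a clean trend
t = list(range(len(prices)))

# Ordinary least-squares fit of price ~ a + b * t.
n = len(prices)
mean_t = sum(t) / n
mean_p = sum(prices) / n
b = (sum((ti - mean_t) * (pi - mean_p) for ti, pi in zip(t, prices))
     / sum((ti - mean_t) ** 2 for ti in t))
a = mean_p - b * mean_t

# The line happily predicts further gains at t = 6, 7, 8; a bubble's
# actual next values can sit far below it.
print([round(a + b * ti, 1) for ti in (6, 7, 8)])  # [32.3, 36.3, 40.3]
```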
  • 9:18 - 9:21
So, one reason that models make us humble is,
we go back to the George Box quote. All
  • 9:21 - 9:25
    models are wrong, right? So, a model is
    going to be wrong. But the models are
  • 9:25 - 9:28
    humbling to us, because they sort of make
    us see the full dimensionality of a
  • 9:28 - 9:32
    problem. So, once we try and write down a
    model of any sort of system, it's a very
  • 9:32 - 9:36
    humbling exercise, because we realize how
    much we've gotta leave out to try and
  • 9:36 - 9:39
    understand what's going on. All right.
    Here's another example, right? This is the
  • 9:39 - 9:43
    Case-Shiller Home Price Index, and what
    you see is, you see prices going up and up
  • 9:43 - 9:46
and up, right? And then you see this, let
me put a pen up here, precipitous crash
  • 9:46 - 9:50
    right here, right? A lot of people had
    models that just said, look, things are
  • 9:50 - 9:53
    gonna continue this way. There were a few
    people that had models that said things go
  • 9:53 - 9:57
    down. These people, the ones whose models
    went down, they made a lotta money. These
  • 9:57 - 10:00
people who thought it was gonna go up didn't.
    So, we're always gonna see a lot of
  • 10:00 - 10:04
    diversity in models, and you're really not
    gonna know, often until after the fact,
  • 10:04 - 10:07
    which one is right. And so, one thing
    that's gonna be really important is to
  • 10:07 - 10:10
    have many models. So, let's go back to
that fox-hedgehog graph that I
• 10:10 - 10:14
showed you before. The foxes, the
    people with lots of models, did much
  • 10:14 - 10:19
better than the hedgehogs, the people with
one model. And formal models did better
• 10:19 - 10:22
than the foxes. Well, what would do
    better than formal models? Well, people
  • 10:22 - 10:26
    with lots of formal models. Right? So if
    we really want to make sense of the world
  • 10:26 - 10:30
    what we want to do is have lots of formal
models at our disposal. So what we're
  • 10:30 - 10:33
    going to do in this class is almost like,
remember the old, like, 16- or 32-crayon box of
  • 10:33 - 10:37
    Crayolas? That's sort of what we're doing
    here. Right? We're just going to pick up a
  • 10:37 - 10:41
    whole bunch of models. And we're going to
have them, right, they're fertile. We're
• 10:41 - 10:44
going to apply them across a bunch of settings.
    So when we're confronted with something
  • 10:44 - 10:48
what we can do is pull out our models, ask
    which ones are appropriate, and in doing
  • 10:48 - 10:52
so, right, be better at what we do. So the
essence of Tetlock's book, right, that's
• 10:52 - 10:56
where that graph came from with the foxes
and hedgehogs, is this. He
• 10:56 - 11:00
has a way of classifying what a random
  • 11:00 - 11:04
    choice would be. The only people who are
    better than random at predicting what's
  • 11:04 - 11:08
    gonna happen are people who use multiple
    models. And that's the kind of people that
  • 11:08 - 11:12
we wanna be. Okay, so that's sort of the
big intelligent-citizen-of-the-world
• 11:12 - 11:17
logic, right. Models are
incredibly fertile, they make us humble,
• 11:17 - 11:21
they help, you know, really clarify the
logic, and they're just better. Okay? So
  • 11:21 - 11:25
    if you wanna be out there, you know,
    helping to change the world in useful
  • 11:25 - 11:30
    ways, it's really, really helpful to have
    some understanding of models. Thank you
  • 11:30 - 11:31
    very much.