
How Will Machine Learning Impact Economics?

  • 0:00 - 0:02
    ♪ [music] ♪
  • 0:04 - 0:06
    - [narrator] Welcome
    to Nobel Conversations.
  • 0:07 - 0:10
    In this episode, Josh Angrist
    and Guido Imbens
  • 0:10 - 0:14
    sit down with Isaiah Andrews
    to discuss and disagree
  • 0:14 - 0:17
    over the role of machine learning
    in applied econometrics.
  • 0:18 - 0:20
    - [Isaiah] So, of course,
    there are a lot of topics
  • 0:20 - 0:21
    where you guys largely agree,
  • 0:21 - 0:22
    but I'd like to turn to one
  • 0:22 - 0:24
    where maybe you have
    some differences of opinion.
  • 0:24 - 0:26
    So I'd love to hear
    some of your thoughts
  • 0:26 - 0:27
    about machine learning
  • 0:27 - 0:30
    and the role that it's playing
    and is going to play in economics.
  • 0:30 - 0:33
    - [Guido] I've looked at some data --
    it's proprietary,
  • 0:33 - 0:35
    so there's
    no published paper there.
  • 0:37 - 0:38
    There was an experiment
    that was done
  • 0:38 - 0:40
    on some search algorithm.
  • 0:40 - 0:41
    And the question was...
  • 0:43 - 0:46
    it was about ranking things
    and changing the ranking.
  • 0:46 - 0:48
    It was sort of clear
  • 0:48 - 0:51
    that there was going to be
    a lot of heterogeneity there.
  • 0:51 - 0:52
    Mmm,
  • 0:52 - 0:58
    You know, if you look for say,
  • 0:58 - 1:00
    a picture of Britney Spears
  • 1:00 - 1:02
    it doesn't really matter
    where you rank it,
  • 1:02 - 1:06
    because you're going to figure out
    what you're looking for,
  • 1:06 - 1:08
    whether you put it
    in the first or second
  • 1:08 - 1:10
    or third position of the ranking.
  • 1:10 - 1:12
    But if you're looking
    for the best econometrics book,
  • 1:13 - 1:16
    if you put your book
    first or your book tenth,
  • 1:16 - 1:18
    that's going to make
    a big difference
  • 1:19 - 1:22
    in how often people
    are going to click on it.
  • 1:22 - 1:23
    And so there you go --
  • 1:23 - 1:27
    - [Josh] Why do I need
    machine learning to discover that?
  • 1:27 - 1:29
    It seems like
    I can discover that simply.
  • 1:29 - 1:32
    - [Guido] So in general, there
    were lots of possible --
  • 1:32 - 1:36
    you want to think about there
    being lots of characteristics of
  • 1:36 - 1:42
    the items, and you want to understand
    what drives the heterogeneity
  • 1:42 - 1:46
    in the effect of the ranking,
    you know. - [Josh] In some sense,
  • 1:46 - 1:48
    you're solving a marketing problem.
  • 1:48 - 1:52
    It's an effect, it's causal,
    but it has no scientific content.
  • 1:52 - 1:53
    - [Guido] But think about,
  • 1:54 - 1:57
    think about similar things
    in medical settings.
  • 1:58 - 2:01
    If you do an experiment, you
    may actually be very interested
  • 2:01 - 2:04
    in whether the treatment
    works for some groups or not.
  • 2:04 - 2:06
    And you have a lot of individual
    characteristics and you want
  • 2:06 - 2:10
    to systematically search.
    - [Josh] Yeah, I'm skeptical about that.
  • 2:10 - 2:14
    That sort of idea that there's this personal
    causal effect that I should care about,
  • 2:14 - 2:18
    and that machine learning can discover it
    in some way that's useful. So think about --
  • 2:18 - 2:21
    I've done a lot of work
    on schools, going to, say,
  • 2:21 - 2:26
    a charter school -- a publicly funded
    private school, effectively, you know,
  • 2:26 - 2:29
    that's free to structure its own
    curriculum, for context there.
  • 2:29 - 2:33
    Some types of charter schools
    generate spectacular
  • 2:33 - 2:36
    achievement gains, and in the data
    set that produces that result,
  • 2:36 - 2:38
    I have a lot of covariates.
  • 2:38 - 2:41
    So I have baseline scores,
    and I have family background,
  • 2:41 - 2:46
    the education of the parents, the sex
    of the child, the race of the child.
  • 2:46 - 2:48
    And, well, as soon as I put
  • 2:48 - 2:52
    half a dozen of those together, I
    have a very high dimensional space.
  • 2:52 - 2:55
    I'm definitely interested
    in sort of coarse
  • 2:55 - 2:59
    features of that treatment effect,
    like whether it's better for people who
  • 3:00 - 3:02
    come from lower income families.
  • 3:03 - 3:06
    I have a hard time believing
    that there's an application,
  • 3:06 - 3:10
    you know, for the very high
    dimensional version of that, where
  • 3:10 - 3:13
    I discovered that for
    non-white children who have
  • 3:14 - 3:18
    high family incomes, but baseline
    scores in the third quartile,
  • 3:18 - 3:23
    and only went to public school in the
    third grade, but not the sixth grade.
  • 3:23 - 3:26
    So that's what that high
    dimensional analysis produces.
  • 3:26 - 3:28
    This very elaborate conditional statement.
  • 3:28 - 3:31
    There's two things that are wrong
    with that, in my view. First,
  • 3:31 - 3:34
    I don't see it as -- I just can't
    imagine why it's actionable.
  • 3:35 - 3:37
    I don't know why you'd want to act on it.
  • 3:37 - 3:41
    And I know also that there's some
    alternative model that fits almost as well.
  • 3:42 - 3:43
    That flips everything,
  • 3:43 - 3:48
    right? Because machine learning doesn't
    tell me that this is really the predictor
  • 3:48 - 3:48
    that matters.
  • 3:48 - 3:52
    It just tells me that this
    is a good predictor. And so,
  • 3:53 - 3:56
    you know, I think there is
    something different about the
  • 3:56 - 3:58
    social science context. - [Guido] So I think
  • 3:58 - 4:03
    the social science applications
    you're talking about are ones where
  • 4:03 - 4:08
    I think there's not a huge amount
    of heterogeneity in the effects.
  • 4:08 - 4:14
    - [Josh] And so, what, there might be a few?
    Wouldn't that allow me to fill that space?
  • 4:15 - 4:18
    - [Guido] No, not even then. I think
    for a lot of those
  • 4:18 - 4:22
    interventions, you would expect
    that the effect is the same sign
  • 4:22 - 4:23
    for everybody.
  • 4:23 - 4:28
    There may be small differences
    in the magnitude, but it's not --
  • 4:28 - 4:32
    for a lot of these education
    interventions, they're good for everybody.
  • 4:32 - 4:32
    It's
  • 4:33 - 4:38
    not that they're bad for some
    people and good for other people, with
  • 4:38 - 4:41
    only some kind of very small
    pockets where they're bad.
  • 4:41 - 4:44
    But there may be some
    variation in the magnitude,
  • 4:44 - 4:48
    but you would need very, very big
    data sets to find those. And even
  • 4:48 - 4:51
    then, in those cases, they probably
    wouldn't be very actionable anyway.
  • 4:52 - 4:54
    But I think there's
    a lot of other settings
  • 4:54 - 4:57
    where there is much more heterogeneity.
  • 4:57 - 5:02
    - [Josh] Well, I'm open to that possibility,
    and I think the example you gave --
  • 5:02 - 5:05
    it's essentially a marketing example.
  • 5:06 - 5:08
    Now, maybe, say, that has
  • 5:08 - 5:11
    implications for
    industrial organization,
  • 5:11 - 5:14
    whether you actually need
    to worry about,
  • 5:14 - 5:18
    well, market power.
    I'd want to see that paper.
  • 5:18 - 5:21
    - [Isaiah] So the sense,
    the sense I'm getting is that
  • 5:22 - 5:24
    we still disagree on something. - [Josh] Yes.
  • 5:24 - 5:27
    We haven't converged on everything.
    - [Isaiah] I'm getting that sense.
  • 5:27 - 5:31
    - [Josh] Actually, we've diverged on this, because
    this wasn't around to argue about.
  • 5:33 - 5:38
    Is it getting a little warm here?
    - Yeah, warmed up. Warmed up is good.
  • 5:38 - 5:41
    - [Isaiah] The sense I'm getting is,
    Josh, sort of, you're not
  • 5:41 - 5:43
    saying that you're confident
    that there is no way
  • 5:43 - 5:45
    that there is an application
    where this stuff is useful.
  • 5:45 - 5:48
    You're saying you're
    unconvinced by the existing
  • 5:48 - 5:52
    applications to date. - [Josh] Fair.
    That, I'm very confident of. Yeah,
  • 5:54 - 5:55
    in this case.
  • 5:55 - 5:58
    - [Guido] I think Josh does have a point that, today,
  • 5:58 - 6:02
    even in the prediction cases, where
  • 6:02 - 6:05
    a lot of the machine learning
    methods really shine is
  • 6:05 - 6:07
    where there's just a lot of heterogeneity.
  • 6:07 - 6:11
    You don't really care much
    about the details there, right?
  • 6:11 - 6:15
    - [Josh] Yes. - [Guido] It doesn't have
    a policy angle or something --
  • 6:15 - 6:18
    kind of recognizing
    handwritten digits and stuff.
  • 6:18 - 6:24
    It does much better there than
    building some complicated model.
  • 6:24 - 6:28
    But in a lot of the social science, a
    lot of the economic applications,
  • 6:28 - 6:32
    We actually know a huge amount about the
    relationship between various variables.
  • 6:32 - 6:35
    A lot of the relationships
    are strictly monotone.
  • 6:35 - 6:39
    There -- education is going
    to increase people's earnings,
  • 6:40 - 6:44
    irrespective of the demographic,
    irrespective of the level of education
  • 6:44 - 6:48
    you already have. - [Josh] Until they get to a
    PhD. - [Guido] Yeah, there is graduate school.
  • 6:50 - 6:51
    Within a reasonable range,
  • 6:52 - 6:56
    it's not going to
    go down very much. Whereas
  • 6:56 - 7:00
    in a lot of the settings where these
    machine learning methods shine,
  • 7:00 - 7:02
    there's a lot
    of non-monotonicity,
  • 7:02 - 7:05
    kind of multi-modality,
    in these relationships,
  • 7:05 - 7:12
    and there they're going to be very
    powerful. But I still stand by that --
  • 7:12 - 7:16
    these methods are just
    going to have a huge amount to offer
  • 7:16 - 7:18
    for economists, and they're going
  • 7:18 - 7:22
    to be a big part of the future.
  • 7:23 - 7:26
    - [Isaiah] Feels like there's something interesting
    to be said about machine learning here.
  • 7:26 - 7:28
    So, here I was wondering,
    could you give some more,
  • 7:28 - 7:29
    maybe some examples
  • 7:29 - 7:32
    of the sorts of applications you're
    thinking about at the moment?
  • 7:32 - 7:34
    - [Guido] So, one area I'm thinking of is where,
  • 7:35 - 7:36
    instead of looking for average
  • 7:36 - 7:42
    causal effects, we're looking for
    individualized estimates and predictions
  • 7:42 - 7:48
    of causal effects, and there machine
    learning algorithms have been very effective,
  • 7:48 - 7:48
    too.
  • 7:48 - 7:52
    Previously, we would have done
    these things using kernel methods.
  • 7:52 - 7:54
    And theoretically they work great and
  • 7:55 - 7:57
    there are sort of some arguments that
    you formally can't do any better.
  • 7:58 - 8:00
    But in practice, they
    don't work very well and
  • 8:01 - 8:05
    random forests -- causal forest
    type things that Stefan Wager and Susan
  • 8:05 - 8:10
    Athey have been working
    on -- are used very widely.
  • 8:10 - 8:12
    They've been very effective,
    kind of, in the settings
  • 8:12 - 8:18
    to actually get causal effects
    that vary by
  • 8:18 - 8:20
    covariates. And this kind of --
  • 8:21 - 8:26
    I think this is still just the beginning
    of these methods. But in many cases,
  • 8:26 - 8:32
    these algorithms are very
    effective at searching over big spaces
  • 8:32 - 8:36
    and finding the functions that fit
  • 8:36 - 8:41
    very well, in ways that we
    couldn't really do beforehand.
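
A minimal sketch of the kind of individualized causal-effect estimation Guido describes here, using a simple "T-learner" built from off-the-shelf random forests on simulated data. This is illustrative only: it is not the Wager-Athey causal forest itself, and every name and number in it is made up.

```python
# Estimate conditional (individualized) treatment effects by fitting separate
# outcome models for treated and control units, then differencing predictions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 5000, 10
X = rng.normal(size=(n, p))             # covariates (e.g., item characteristics)
T = rng.integers(0, 2, size=n)          # randomized binary treatment
tau = np.where(X[:, 0] > 0, 1.0, 0.1)   # true effect varies with one covariate
Y = X[:, 1] + tau * T + rng.normal(size=n)

m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 1], Y[T == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 0], Y[T == 0])
cate_hat = m1.predict(X) - m0.predict(X)   # individualized effect estimates

print("mean estimate where x0 > 0:", cate_hat[X[:, 0] > 0].mean())    # close to 1.0
print("mean estimate where x0 <= 0:", cate_hat[X[:, 0] <= 0].mean())  # close to 0.1
```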
  • 8:42 - 8:45
    - [Josh] I don't know of an example where
    machine learning has generated insights
  • 8:45 - 8:48
    about a causal effect that
    I'm interested in. And I,
  • 8:48 - 8:51
    you know, know of examples where it's
    potentially very misleading.
  • 8:51 - 8:54
    So I've done some work
    with Brigham Frandsen,
  • 8:54 - 8:55
    using, for example,
  • 8:55 - 9:00
    random forests to model covariate effects
    in an instrumental variables problem.
  • 9:00 - 9:01
    Where you need,
  • 9:02 - 9:04
    you need to condition on covariates
  • 9:04 - 9:08
    and you don't particularly have strong
    feelings about the functional form for that.
  • 9:08 - 9:10
    So maybe you should,
  • 9:10 - 9:11
    you think,
  • 9:11 - 9:14
    be open to flexible curve fitting
    and that leads you down a path
  • 9:14 - 9:18
    where there's a lot of
    nonlinearities in the model and
  • 9:18 - 9:23
    that's very dangerous with IV, because
    any sort of excluded nonlinearity
  • 9:23 - 9:28
    potentially generates a spurious causal
    effect. And Brigham and I showed that
  • 9:28 - 9:32
    very powerfully, I think, in
    the case of two instruments
  • 9:33 - 9:36
    that come from a paper of mine
    with Bill Evans, where if you,
  • 9:36 - 9:38
    you know, replace it --
  • 9:38 - 9:43
    a traditional two-stage least squares
    estimator -- with some kind of random forest,
  • 9:43 - 9:48
    you get very precisely
    estimated nonsense estimates. And,
  • 9:49 - 9:51
    you know, I think that's
    a, that's a big caution.
  • 9:51 - 9:53
    And I, you know, in view of those findings
  • 9:54 - 9:57
    in an example, I care about where
    the instruments are very simple
  • 9:57 - 9:59
    and I believe that they're valid,
  • 9:59 - 10:02
    you know, I would be skeptical of that. So
  • 10:03 - 10:07
    nonlinearity and IV don't mix
    very comfortably. Now, as I said,
  • 10:07 - 10:11
    you know, in some sense that's already
    a more complicated setting. Well, it's IV.
  • 10:12 - 10:12
    Yeah,
  • 10:12 - 10:17
    but then we can work on that and iron it out.
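
A small simulation in the spirit of Josh's caution (my own construction with made-up numbers, not the actual Angrist-Frandsen or Angrist-Evans design). The instrument is valid and the true effect is 1; classic two-stage least squares recovers it, while naively plugging in-sample random-forest first-stage fits into the second stage pulls the estimate back toward the biased OLS answer, because the forest's fitted values partially memorize the endogenous part of the regressor. Cross-fitting the first stage, or using the fits as instruments rather than regressors, avoids this particular failure.

```python
# Valid instrument z, endogenous regressor x (confounded by u), true beta = 1.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 20_000
z = rng.normal(size=n)                      # excluded, valid instrument
u = rng.normal(size=n)                      # unobserved confounder
x = 0.5 * z + u + rng.normal(size=n)        # first stage
y = 1.0 * x - 2.0 * u + rng.normal(size=n)  # structural equation

def slope(xhat, x, y):
    """Second-stage slope using first-stage fits xhat: cov(xhat,y)/cov(xhat,x)."""
    return np.cov(xhat, y)[0, 1] / np.cov(xhat, x)[0, 1]

c = np.cov(x, y)
beta_ols = c[0, 1] / c[0, 0]                   # badly biased by u

xhat_lin = np.polyval(np.polyfit(z, x, 1), z)  # classic linear first stage
beta_2sls = slope(xhat_lin, x, y)              # consistent, close to 1

rf = RandomForestRegressor(n_estimators=100, random_state=0)
xhat_rf = rf.fit(z.reshape(-1, 1), x).predict(z.reshape(-1, 1))  # in-sample fits
beta_rf = slope(xhat_rf, x, y)                 # pulled back toward OLS

print(f"OLS {beta_ols:.2f} | 2SLS {beta_2sls:.2f} | RF plug-in {beta_rf:.2f}")
```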
  • 10:19 - 10:22
    At Econometrica, actually, I see a lot
    of these papers cross my desk,
  • 10:23 - 10:30
    and the motivation is often not
    clear, or in fact really lacking.
  • 10:30 - 10:35
    And they're not, they're not the
    classic type of semiparametric foundational papers.
  • 10:35 - 10:37
    So that's a big problem,
  • 10:38 - 10:42
    and a kind of related problem is that
    we have this tradition in econometrics
  • 10:43 - 10:48
    of being very focused on these formal
    asymptotic results.
  • 10:49 - 10:53
    We just have a lot of papers
    where people propose
  • 10:53 - 10:56
    a method and then establish
    the asymptotic properties
  • 10:56 - 11:02
    in a very kind of
    standardized way. - Is that bad?
  • 11:03 - 11:07
    - [Guido] Well, I think it's sort of closed
    the door for a lot of work
  • 11:07 - 11:12
    that doesn't fit into that. Whereas
    in the machine learning literature,
  • 11:12 - 11:14
    a lot of things are
    more algorithmic. People
  • 11:16 - 11:18
    had algorithms for coming
    up with predictions
  • 11:19 - 11:24
    that turn out to actually work much better
    than, say, nonparametric kernel regression.
  • 11:24 - 11:27
    For the longest time, we were doing all
    the nonparametrics in econometrics --
  • 11:27 - 11:31
    we did it using kernel regression, and
    it was great for proving theorems.
  • 11:31 - 11:35
    You could get confidence intervals and
    consistency and asymptotic normality,
  • 11:35 - 11:37
    and it was all great, but
    it wasn't very useful.
  • 11:37 - 11:41
    And the things they did in machine
    learning are just way, way better,
  • 11:41 - 11:45
    but they didn't have the proofs. - [Josh] That's
    not my beef with machine learning theory.
  • 11:45 - 11:51
    - [Guido] No, I know. I'm saying
    there, for the prediction part,
  • 11:51 - 11:54
    it does much better. - [Josh] Yeah, it's
    better curve fitting.
  • 11:55 - 11:56
    - [Guido] But it did so
  • 11:57 - 12:03
    in a way that would not have made
    those papers initially easy to get into
  • 12:03 - 12:06
    the econometrics journals, because it
    wasn't proving the type of things
  • 12:06 - 12:11
    you know, when Breiman was doing his
    regression trees, that just didn't fit in,
  • 12:12 - 12:15
    and I think he would have
    had a very hard time
  • 12:15 - 12:18
    publishing these things
    in econometrics journals.
  • 12:19 - 12:24
    So I think we limited
    ourselves too much, and
  • 12:25 - 12:28
    that left us closing things off
  • 12:28 - 12:31
    to a lot of these machine learning
    methods that are actually very useful.
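
A toy version of the comparison Guido draws between kernel regression and the newer algorithmic methods (my own construction; all numbers illustrative). A hand-rolled Nadaraya-Watson smoother with a fixed bandwidth is compared with off-the-shelf gradient boosting on a non-monotone, interactive regression function in five dimensions.

```python
# Compare Nadaraya-Watson kernel regression with gradient boosting on a
# held-out sample, for a target surface with interactions and non-monotonicity.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
d = 5
def f(X):
    return np.sin(3 * X[:, 0]) * (X[:, 1] > 0) + X[:, 2] ** 2

X_tr = rng.normal(size=(2000, d))
X_te = rng.normal(size=(1000, d))
y_tr = f(X_tr) + rng.normal(scale=0.5, size=2000)
y_te = f(X_te)

def kernel_reg(X_tr, y_tr, X_q, h=0.5):
    """Nadaraya-Watson: locally weighted average with a Gaussian product
    kernel and a fixed bandwidth h (no data-driven bandwidth, for brevity)."""
    d2 = ((X_q[:, None, :] - X_tr[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2 * h ** 2))
    return (w @ y_tr) / w.sum(axis=1)

mse_kernel = np.mean((kernel_reg(X_tr, y_tr, X_te) - y_te) ** 2)
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
mse_gbm = np.mean((gbm.predict(X_te) - y_te) ** 2)
print(f"kernel MSE: {mse_kernel:.3f}  boosting MSE: {mse_gbm:.3f}")
# The boosted trees typically fit markedly better here; in five dimensions
# the kernel smoother is already feeling the curse of dimensionality.
```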
  • 12:31 - 12:34
    Hmm. I mean, I think, in general,
  • 12:35 - 12:36
    in that literature, the computer
  • 12:36 - 12:39
    scientists have brought a huge
    number of these algorithms --
  • 12:40 - 12:44
    have proposed a huge number of these
    algorithms that are actually very useful
  • 12:44 - 12:45
    and that are
  • 12:46 - 12:49
    affecting the way we're going
    to be doing empirical work,
  • 12:50 - 12:55
    but we've not fully internalized that,
    because we're still very focused on getting
  • 12:55 - 12:58
    point estimates and
    getting standard errors
  • 12:59 - 13:01
    and getting p-values, in a way that
  • 13:02 - 13:03
    we need to move beyond
  • 13:03 - 13:04
    to fully harness
  • 13:04 - 13:11
    the benefits
    from the machine learning literature.
  • 13:11 - 13:15
    - [Isaiah] Hmm. On the one hand, I guess I very
    much take your point that sort of the tradi-
  • 13:15 - 13:19
    tional econometrics framework
    of sort of propose a method,
  • 13:19 - 13:23
    prove a limit theorem under some
    asymptotic story,
  • 13:23 - 13:27
    publish a
    paper, is constraining.
  • 13:27 - 13:30
    And that in some sense, by thinking more
  • 13:30 - 13:33
    broadly about what a methods paper could
    look like, we may -- right? In some sense --
  • 13:33 - 13:36
    Certainly the machine learning
    literature has found a bunch of things,
  • 13:36 - 13:38
    which seem to work quite
    well for a number of problems
  • 13:38 - 13:42
    and are now having substantial influence
    in economics. I guess a question.
  • 13:42 - 13:45
    I'm interested in is, how do you think about
  • 13:45 - 13:48
    the role of theory?
  • 13:48 - 13:51
    Sort of, do you think there's
    no value in the theory part of it?
  • 13:52 - 13:55
    Because I guess it's sort of a question
    that I often have, sort of seeing
  • 13:55 - 13:57
    the output from a machine learning tool --
  • 13:57 - 13:59
    and actually, a number of the
    methods that you talked about
  • 13:59 - 14:02
    actually do have inferential
    results developed for them --
  • 14:03 - 14:06
    something that I always wonder about is sort
    of uncertainty quantification, and just,
  • 14:06 - 14:08
    you know, I have my prior,
  • 14:08 - 14:11
    I come into the world with my view.
    I see the result of this thing.
  • 14:11 - 14:14
    How should I update based on it? And
    in some sense, if I'm in a world where
  • 14:15 - 14:15
    things are
  • 14:15 - 14:18
    normally distributed, I know
    how to do it; here, I don't.
  • 14:18 - 14:21
    And so I'm interested to hear
    how you think about it. - [Guido] So
  • 14:22 - 14:24
    I don't see this as sort
    of closing it off, saying, well,
  • 14:24 - 14:26
    these results
    are not interesting.
  • 14:27 - 14:28
    But there's going to be a lot of cases
  • 14:28 - 14:31
    where it's going to be incredibly hard to
    get those results and we may not be able
  • 14:31 - 14:33
    to get there and
  • 14:33 - 14:38
    we may need to do it in stages, where
    first someone says, hey, I have this
  • 14:40 - 14:45
    interesting algorithm for doing
    something, and it works well by some
  • 14:46 - 14:50
    criterion on this
    particular data set,
  • 14:51 - 14:53
    and I'm going to put it
    out there, and then
  • 14:54 - 14:58
    maybe someone will figure out a way that
    you can later actually still do inference
  • 14:58 - 14:59
    under some conditions.
  • 14:59 - 15:02
    So maybe those are not
    particularly realistic conditions;
  • 15:02 - 15:06
    then we kind of go further.
    But I think we've been
  • 15:07 - 15:11
    constraining things too much, where we
    said, you know, this is the type of thing
  • 15:12 - 15:14
    that we need to do. And I have some sense
  • 15:16 - 15:18
    that goes back to kind of
    the way Josh and I
  • 15:20 - 15:22
    thought about things for the
    local average treatment effect.
  • 15:22 - 15:25
    That wasn't quite the way people
    were thinking about these problems.
  • 15:25 - 15:29
    Before that, there was a sense --
    some of the people said, you know,
  • 15:30 - 15:32
    the way you need to do these
    things is you first say
  • 15:32 - 15:36
    what you're interested in estimating,
    and then you do the best job you can
  • 15:36 - 15:38
    in estimating that.
  • 15:38 - 15:44
    And what you guys were doing is --
    you guys were doing it backwards.
  • 15:44 - 15:47
    You were going to say,
    here, I have an estimator,
  • 15:47 - 15:50
    and now I'm going to figure out
  • 15:50 - 15:51
    what it's estimating, and then ex post
  • 15:51 - 15:54
    You're going to say why you
    think that's interesting
  • 15:54 - 15:57
    or maybe why it's not interesting --
    and that's, that's not okay.
  • 15:57 - 15:59
    You're not allowed to do it that way.
  • 15:59 - 16:04
    And I think we should just be a little
    bit more flexible in thinking about
  • 16:04 - 16:06
    how to look at
  • 16:06 - 16:11
    problems, because I think we've missed
    some things by not doing that.
  • 16:13 - 16:17
    - [Josh] So you've heard our views,
    Isaiah. You've seen that we have
  • 16:17 - 16:20
    some points of disagreement. Why
    don't you referee this dispute for us?
  • 16:22 - 16:28
    - [Isaiah] Oh, so nice of you to ask me
    a small question. So I guess, for one,
  • 16:28 - 16:33
    I very much agree with something
    that Guido said earlier, of --
  • 16:36 - 16:36
    so, what --
  • 16:36 - 16:38
    where it seems, where the,
  • 16:38 - 16:41
    the case for machine learning seems
    relatively clear, is in settings where,
  • 16:42 - 16:45
    you know, we're interested in some version
    of a nonparametric prediction problem.
  • 16:45 - 16:50
    So I'm interested in estimating a conditional
    expectation or conditional probability
  • 16:50 - 16:52
    and in the past, maybe I
    would have run a kernel --
  • 16:52 - 16:56
    I would have run a kernel regression or
    I would have run a series regression or
  • 16:56 - 16:57
    something along those lines.
  • 16:58 - 16:58
    Sort of,
  • 16:58 - 16:59
    it seems like
  • 16:59 - 17:02
    at this point we have a fairly good
    sense that in a fairly wide range
  • 17:02 - 17:06
    of applications machine learning
    methods seem to do better for
  • 17:06 - 17:07
    or, you know,
  • 17:07 - 17:09
    estimating conditional mean functions
  • 17:09 - 17:12
    or conditional probabilities or
    various other nonparametric objects
  • 17:12 - 17:17
    than more traditional nonparametric
    methods that were studied in econometrics
  • 17:17 - 17:19
    and statistics, especially
    in high dimensional settings.
  • 17:20 - 17:23
    - [Guido] So you're thinking of maybe the propensity
    score or something like that?
  • 17:23 - 17:25
    - [Isaiah] Exactly -- so nuisance functions. Yeah.
  • 17:25 - 17:29
    So things like propensity scores,
    or, I mean, even objects
  • 17:29 - 17:30
    of more direct inferential
  • 17:30 - 17:32
    interest, like conditional
    average treatment effects, right?
  • 17:32 - 17:35
    which are the difference of two
    conditional expectation functions --
  • 17:35 - 17:36
    potentially things like that.
  • 17:36 - 17:40
    Of course, even there,
    right, the theory
  • 17:40 - 17:44
    for inference, or the theory for
    sort of how to interpret,
  • 17:44 - 17:46
    how to make large-sample statements
    about some of these things, is
  • 17:46 - 17:50
    less well developed, depending on the
    machine learning estimator used.
  • 17:50 - 17:54
    And so, I think something
    that is tricky is that we
  • 17:54 - 17:56
    can have these methods, which --
  • 17:56 - 17:58
    which seem to work a lot
    better for some purposes,
  • 17:58 - 18:02
    but which we need to be a bit
    careful in how we plug them in, or how
  • 18:02 - 18:03
    we interpret the resulting statements.
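
A minimal sketch of the recipe Isaiah describes: plug machine learning estimators into the nuisance functions (the propensity score and the outcome regressions), with cross-fitting, inside a doubly robust (AIPW) estimator of an average treatment effect. Simulated data and my own construction; the standard error is the usual influence-function formula, nothing fancier.

```python
# Cross-fitted AIPW estimate of an average treatment effect, with random
# forests for the propensity score and the two outcome regressions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
n, p = 4000, 6
X = rng.normal(size=(n, p))
e = 1 / (1 + np.exp(-X[:, 0]))              # true propensity score
T = rng.binomial(1, e)
Y = X[:, 1] + 2.0 * T + rng.normal(size=n)  # true ATE = 2

psi = np.zeros(n)                           # influence-function values
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Nuisance functions fit on four folds, evaluated on the held-out fold.
    ps = (RandomForestClassifier(n_estimators=200, random_state=0)
          .fit(X[train], T[train]).predict_proba(X[test])[:, 1].clip(0.01, 0.99))
    m1 = (RandomForestRegressor(n_estimators=200, random_state=0)
          .fit(X[train][T[train] == 1], Y[train][T[train] == 1]).predict(X[test]))
    m0 = (RandomForestRegressor(n_estimators=200, random_state=0)
          .fit(X[train][T[train] == 0], Y[train][T[train] == 0]).predict(X[test]))
    t, y = T[test], Y[test]
    psi[test] = m1 - m0 + t * (y - m1) / ps - (1 - t) * (y - m0) / (1 - ps)

ate, se = psi.mean(), psi.std() / np.sqrt(n)
print(f"ATE estimate: {ate:.2f} (se {se:.2f})")   # should be close to 2
```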
  • 18:04 - 18:06
    But of course, that's a very,
    very active area right now, where
  • 18:06 - 18:10
    people are doing tons of great work.
    And so I fully expect and hope
  • 18:10 - 18:13
    to see much more going forward there.
  • 18:13 - 18:17
    So one issue with machine learning
    that always seems a danger -- or
  • 18:17 - 18:20
    that is sometimes a danger
    and has sometimes led to
  • 18:20 - 18:23
    applications that have
    made less sense -- is
  • 18:23 - 18:25
    when folks start with a method that --
  • 18:25 - 18:28
    start with a method that they're very
    excited about, rather than a question,
  • 18:29 - 18:32
    right? So sort of starting with
    a question, where here's the
  • 18:32 - 18:36
    object I'm interested in, here is
    the parameter of interest -- let me,
  • 18:37 - 18:37
    you know,
  • 18:37 - 18:40
    think about how I would
    identify that thing,
  • 18:40 - 18:42
    how I would recover that
    thing, if I had a ton of data,
  • 18:42 - 18:44
    oh, here's a conditional
    expectation function.
  • 18:44 - 18:47
    Let me plug in a machine
    learning estimator for that.
  • 18:47 - 18:49
    That seems very, very sensible.
  • 18:49 - 18:53
    Whereas, you know, if I
    regress quantity on price
  • 18:54 - 18:56
    and say that I used a
    machine learning method,
  • 18:56 - 18:59
    maybe I'm satisfied that that
    solves the endogeneity problem
  • 18:59 - 19:01
    we're usually worried
    about there; maybe I'm not.
  • 19:02 - 19:03
    But again, that's something where the,
  • 19:03 - 19:06
    the way to address it seems
    relatively clear, right?
  • 19:06 - 19:09
    It's: define your object of interest and
  • 19:09 - 19:12
    think about -- - [Guido] Is that just
    bringing in the economics?
  • 19:12 - 19:12
    - [Isaiah] Exactly.
  • 19:12 - 19:15
    And can I think about identification,
    but harness
  • 19:15 - 19:18
    the power of the machine
    learning methods precisely
  • 19:18 - 19:23
    for some of the components? - [Guido] Precisely.
    - [Isaiah] Exactly. So sort of, you know, the,
  • 19:23 - 19:26
    the question of interest is the same as
    the question of interest has always been,
  • 19:26 - 19:30
    but we now have better methods for estimating
    some pieces of this, right? The
  • 19:30 - 19:32
    the place that seems harder to, uh,
  • 19:32 - 19:33
    harder to forecast is, right,
  • 19:33 - 19:36
    obviously, there's a huge amount
    going on in the machine
  • 19:36 - 19:37
    learning literature,
  • 19:38 - 19:40
    and the sort of limited ways
  • 19:40 - 19:43
    of plugging it in that I've referenced
    so far are a limited piece of that.
  • 19:43 - 19:46
    And so I think there are all sorts of
    other interesting questions about where,
  • 19:46 - 19:47
    right sort of
  • 19:47 - 19:49
    where does this interaction
    go? What else can we learn?
  • 19:49 - 19:52
    And that's something where,
    you know, I think there's
  • 19:52 - 19:56
    a ton going on which seems very promising
    and I have no idea what the answer is.
  • 19:57 - 20:01
    - [Guido] No, no, I -- so I totally
    agree with that, but, no,
  • 20:02 - 20:04
    that's what makes it very exciting.
  • 20:04 - 20:06
    And I think there's just a
    lot of work to be done there.
  • 20:07 - 20:11
    - [Josh] All right. So Isaiah agrees
    with me there, is what I'm hearing.
  • 20:14 - 20:18
    If you'd like to watch more
    Nobel conversations, click here,
  • 20:18 - 20:20
    or if you'd like to learn
    more about econometrics,
  • 20:20 - 20:23
    check out Josh's mastering
    econometrics series.
  • 20:24 - 20:26
    If you'd like to learn more
    about Guido, Josh, and Isaiah,
  • 20:27 - 20:28
    check out the links in the description.