♪ [music] ♪
- [Narrator] Welcome to
Nobel Conversations.
In this episode,
Josh Angrist and Guido Imbens
sit down with Isaiah Andrews
to discuss how the field
of econometrics is evolving.
So Guido and Josh,
you're both pioneers
in developing tools for
empirical research in economics.
And so I'd like to explore where you feel like the field is heading, sort of economics, econometrics, the whole thing. To start, I'd be interested to hear whether you feel like the way in which the local average treatment effects framework took hold has any lessons for how new empirical methods in economics develop and spread, or how they should.

That's a good question. You go first.
Yeah, so I think the important thing is to come up with good, convincing cases where the questions are clear and where the methods apply in general. Maybe one thing, looking back at the subsequent literature: I really like the regression discontinuity literature. There were clearly a bunch of really convincing examples, and that allowed people to think more clearly and look harder at the methodological questions. Having clear applications lets you ask: do these types of assumptions seem reasonable here? What kind of things do we not like in the early papers? How can we improve things? So having clear applications motivating these literatures, I think, is very helpful.
I'm glad you mentioned regression discontinuity, Guido. I think there's a lot of complementarity between IV and RD, instrumental variables and regression discontinuity. A lot of the econometric applications of regression discontinuity are what used to be called fuzzy RD, where the treatment is not discrete or deterministic at the cutoff, there's just a change in rates or intensity. And the LATE framework helps us understand those applications and gives us a clear interpretation for, say, something like my paper with Victor Lavy, where we use Maimonides' rule, the class size cutoffs. What are you getting there? Of course, you can answer that question with a linear constant-effects model, but it turns out we're not limited to that. And RD is still very powerful and illuminating even when the correlation between the cutoff and the variable of interest, in this case class size, is partial, maybe even not that strong.
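To make that point concrete, here is a minimal simulated sketch of fuzzy RD read as IV, with made-up numbers rather than the Angrist-Lavy data or code: crossing a Maimonides-style enrollment cutoff only partially shifts class size, and the Wald/IV ratio scales the score difference across the cutoff by that partial shift.

```python
import numpy as np

# Hypothetical fuzzy-RD-as-IV sketch (made-up numbers, not the Angrist-Lavy study).
rng = np.random.default_rng(0)
n = 5_000
enrollment = rng.integers(20, 61, size=n)        # running variable
above = (enrollment > 40).astype(float)          # Maimonides-style cutoff at 40

# First stage: crossing the cutoff cuts class size by about 8 pupils,
# but the relationship is partial rather than deterministic (fuzzy RD).
class_size = 30 + 0.25 * (enrollment - 40) - 8 * above + rng.normal(0, 3, n)

# Outcome: each extra pupil lowers test scores by 0.5 points in this simulation.
score = 70 - 0.5 * class_size + rng.normal(0, 5, n)

# Wald / IV ratio: score difference across the cutoff divided by the class-size
# difference. (A real RD analysis would also control for the running variable
# or restrict attention to observations near the cutoff.)
reduced_form = score[above == 1].mean() - score[above == 0].mean()
first_stage = class_size[above == 1].mean() - class_size[above == 0].mean()
print("class-size shift at cutoff:", round(first_stage, 2))
print("IV estimate per pupil:", round(reduced_form / first_stage, 2))  # ~ -0.5
```

Even with the partial first stage, the ratio recovers the per-pupil effect, which is the sense in which the LATE framework gives fuzzy RD a clear interpretation.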
So there was definitely a kind of parallel development. It's also interesting that nobody talked about regression discontinuity designs when we were in graduate school. It was something that other social scientists were interested in, and it kind of grew up alongside the LATE framework. We've both done work on both applications and methods there, and it's been very exciting to see that develop and become so important. It's part of a general evolution, I think, towards credible identification strategies and causal effects, making econometrics more about causal questions than about models.
In terms of the future, I think one thing that LATE has helped facilitate is a move towards more creative randomized trials, where there's something of interest that it's not possible or straightforward to simply turn off or on, but you can encourage it or discourage it. So you subsidize schooling with financial aid, for example, and now we have a whole framework for interpreting that. It opens the door to randomized trials of things that maybe would not have seemed possible before. We've used that a lot in the work we do on schools in our Blueprint Labs group at MIT, where we're exploiting random assignment in very creative ways, I think.
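The framework being referred to is the standard LATE result: with a randomized encouragement $Z$ (a financial-aid offer, say) serving as an instrument for schooling $D$, then under random assignment, the exclusion restriction, and monotonicity, the Wald ratio identifies the average effect for compliers, the people whose schooling is actually moved by the offer:

$$
\frac{E[Y \mid Z=1]-E[Y \mid Z=0]}{E[D \mid Z=1]-E[D \mid Z=0]}
= E\big[\,Y(1)-Y(0)\mid D(1)>D(0)\,\big].
$$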
Related to that, do you see particular factors that make for useful research in econometrics? You've alluded to it: having a clear connection to problems that are actually coming up in empirical practice is often a good idea.

I'd say it's always a good idea.
I often find myself sitting in an econometrics theory seminar, say the Harvard-MIT seminar, and I'm thinking: what problem is this guy solving? Who has this problem? And sometimes there's an embarrassing silence if I ask, or there might be a fairly contrived scenario. I want to see where the tool is useful. There are some purely foundational tools, I do take the point; there are people working on conceptual foundations, where it becomes more like mathematical statistics. I remember an early example that I struggled to understand was the idea of stochastic equicontinuity, which one of my thesis advisors, Whitney Newey, was using to great effect, and I was trying to understand that. That's really foundational; it's not an application that's driving it, at least not immediately. But most things are not like that, and so there should be a problem. And I think it's on the seller of that sort of thing, because there's an opportunity cost to the time and attention and effort it takes to understand things. It's on the seller to say: hey, I'm solving this problem, and here's a set of results that show it's useful, and here's some insight that I get.
As you said, Josh, there's been a move in the direction of thinking more about causality in economics and in empirical work in economics. Are there any consequences of the wide spread of that view that surprised you, or anything you view as downsides of the way empirical economics has gone?
Sometimes I see somebody does IV and they get a result which seems implausibly large. That's the usual case. So it might be an extraordinarily large causal effect of some relatively minor intervention which was randomized, or for which you could make a case that there's a good design. And when I see that, I think it's very hard for me to believe that this relatively minor intervention has such a large effect. The author will sometimes resort to the local average treatment effects theorem and say, well, these compliers, they're special in some way; they just benefit extraordinarily from this intervention. And I'm reluctant to take that at face value. I think often when effects are too big, it's because the exclusion restriction is failing, so you don't really have the right endogenous variable to scale that result. And so I'm not too happy to see just a generic heterogeneity argument being used to excuse something that I think might be a deeper problem.
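A hypothetical back-of-the-envelope illustration of that concern: when the instrument has even a small direct effect on the outcome, the Wald ratio divides that violation by the first stage, so a modest first stage turns a small exclusion failure into an implausibly large apparent treatment effect.

```python
import numpy as np

# Hypothetical sketch: a small exclusion-restriction violation, scaled by a
# modest first stage, shows up as a much-too-large IV "treatment effect".
rng = np.random.default_rng(1)
n = 100_000
z = rng.integers(0, 2, size=n)                  # randomized instrument (offer)

# First stage: the offer raises take-up by only 10 percentage points.
d = (rng.uniform(size=n) < 0.30 + 0.10 * z).astype(float)

# True treatment effect is 0.2, but the offer also shifts the outcome
# directly by 0.1, so the exclusion restriction fails.
y = 0.2 * d + 0.1 * z + rng.normal(0, 1, n)

reduced_form = y[z == 1].mean() - y[z == 0].mean()   # about 0.2*0.10 + 0.1 = 0.12
first_stage = d[z == 1].mean() - d[z == 0].mean()    # about 0.10
print("Wald / IV estimate:", round(reduced_form / first_stage, 2))  # ~1.2 vs. a true 0.2
```

The estimate comes out roughly six times the true effect without any exotic complier heterogeneity, which is the kind of pattern a heterogeneity story can end up being invoked to excuse.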
I think it played somewhat of an unfortunate role in the discussions between reduced-form and structural approaches, in a way that I feel wasn't quite right. The instrumental variables assumptions are at the core structural assumptions about behavior; they were coming from economic thinking about the behavior of agents. And somehow it got pushed in a direction that I think wasn't really very helpful. The way we initially wrote things up, we were describing what was happening: there was a set of methods people were using, and we clarified what those methods were doing, in a way that I think contained a fair amount of insight. But somehow it got pushed into a corner that I think was not necessarily very helpful.

Or even just the language of reduced form versus structural I find kind of funny, in the sense that the local average treatment effect model, the potential outcomes model, is a nonparametric structural model, if you want to think about it as you sort of suggested, Guido.
So there's something a little funny about putting these two things in opposition.

Yes, well, that language of course comes from the simultaneous equations framework of the seventies that we inherited. It has the advantage that people seem to know what you mean when you use it, but it might be that people are hearing different things; different people are hearing different things.

Yeah, I think "reduced form" has become used a little bit as a pejorative, which is not really quite what it was originally intended for.
I guess something else that strikes me in thinking about the effects of the local average treatment effect framework is that often folks will appeal to a local average treatment effects intuition in settings well beyond the ones where any sort of formal result has actually been established. And I'm curious, given all the work that you did to establish LATE results in different settings, whether you have any thoughts on that.

I think there are going to be a lot of cases where the intuition does get you some distance, but it's going to be somewhat limited, and establishing formal results there may be a little tricky and may only work in special circumstances. You end up with a lot of formality that may not quite capture the intuition. Sometimes I'm somewhat uneasy with those papers, and they're not necessarily the papers I would want to write, but I do think the intuition often does capture part of the problem.
I think in some sense we were very fortunate there in the way the LATE paper got handled. I don't know if it was actually the editor, but it was made much shorter, and that allowed us to focus on very clear, crisp results, whereas there's this somewhat unfortunate tendency in the literature of papers getting longer and longer.

Well, you should be able to fix that, man.

I'm trying; it's taking some time to fix that. I think this is an example where it's very clear that having it be shorter helped.

You should actually impose that no paper can be longer than the LATE paper.

Wow, great. At least no theory paper.
Yeah, and I think, well, I'm trying very hard to get the papers to be shorter, and I think there's a lot of value there, because it's often the second part of the paper that doesn't actually get you much further in understanding things, but it does make things much harder to read. And it sort of goes back to how I think econometrics should be done: it should be reasonably close to empirical problems, they should be very clear problems, and then often the theory doesn't need to be quite so long. Yeah, I think there things have gotten a little off track.
A relatively recent change has been a seemingly big increase in demand for people with econometrics and causal-effect estimation skills in the tech sector. I'm interested in whether either of you have thoughts on how that's going to interact with the development of empirical methods, or empirical research in economics, going forward.

Sort of a meta point, which is that there's this new kind of employer, the Amazons and Ubers of the world, and I think that's great.
And I'd like to tell my students about that, especially at MIT, where we have a lot of computer science majors; that's our biggest major. I try to seduce some of those folks into economics by saying, you know, you can go work for these companies that people are very keen to work for because the work seems exciting, and the skills that you get in econometrics are as good as or better than what any competing discipline has to offer. So you should at least take some econ, take some econometrics.
I did a fun project with Uber on the labor supply of Uber drivers, and it was very exciting to be part of that. Plus, I got to drive for Uber for a while, and I thought that was fun. I did not make enough that I was tempted to give up my day job, but I enjoyed the experience.
I see a challenge to our model of graduate education here, which is that if we're training people to go work at Amazon, it's not clear why we should be paying graduate stipends for that, why the taxpayer should effectively be subsidizing it. Our graduate education in the U.S. is generously subsidized; even in private universities, ultimately there's a lot of public money there. And I think the traditional rationale for that is that we're training educators and scholars, and there's a great externality from the work that we do, either a research externality or a teaching externality. But if many of our students are going to work in the private sector, that's fine, but then maybe their employers should pay for that.

Is that so different from people working for a consulting firm?
It's not clear to me that the number of jobs in academics has changed. It's just that I feel like this is a growing sector, whereas consulting... you're right to raise that, it might be the same for consulting. But I'm placing more and more students in these businesses, so it's on my mind in a way that I've not been attentive to with consulting jobs. Consulting was always important, and there's also some movement from consulting back into research; it's a little more fluid. A lot of the work in both domains, I have to say, is not really different. But people who are working in the tech sector are doing things that are potentially of scientific interest, and mostly it's hidden. And then you really have to ask, you know, why is the government paying for this?
Yeah, although, to Guido's point, I guess there's a data question here of whether the total, sort of, say, private for-profit sector employment of econ Ph.D. program graduates has increased, or whether it's just been a substitution from finance and consulting towards tech, so it may be a reaction to something that's not really happening.
So, I've actually done some work with some of these tech companies, and I don't disagree with Josh's point that we need to think a little bit about the funding model, about who in the end is paying for the education. But from a scientific perspective, not only do these places have great data, and nowadays they tend to be very careful with it for privacy reasons, but they also have great questions. I find it very inspiring to listen to the people there and see what kind of questions they have, and often their questions also come up outside of these companies. I have a couple of papers with Raj Chetty and Susan Athey where we look at ways of combining experimental data and observational data. Raj Chetty was interested in the effect of early childhood programs on outcomes later in life, not just on test scores but on earnings and so on, and we developed methods that would help you shed light on that in some settings. And the same problems came up in these tech company settings.
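One way such a combination can work, sketched here with made-up numbers and heavily simplified relative to the papers just mentioned: the experiment only measures short-run outcomes like test scores, the observational data links test scores to adult earnings, and under a strong surrogacy-type assumption you can chain the two.

```python
import numpy as np

# Heavily simplified surrogate-style sketch (made-up numbers); the actual methods
# are far more careful about the assumptions linking the two data sources.
rng = np.random.default_rng(2)

# Observational data: adult earnings as a function of childhood test scores.
n_obs = 50_000
scores_obs = rng.normal(0, 1, n_obs)
earnings_obs = 30_000 + 5_000 * scores_obs + rng.normal(0, 8_000, n_obs)
slope, intercept = np.polyfit(scores_obs, earnings_obs, 1)

# Experimental data: the early-childhood program raises test scores by 0.3 sd,
# but long-run earnings are not observed for these children yet.
n_exp = 5_000
treated = rng.integers(0, 2, size=n_exp)
scores_exp = 0.3 * treated + rng.normal(0, 1, n_exp)

# Impute earnings from the observational relationship, then compare arms.
predicted = intercept + slope * scores_exp
effect = predicted[treated == 1].mean() - predicted[treated == 0].mean()
print("implied earnings effect:", round(effect))   # roughly 0.3 * 5,000 = 1,500
```

The key, and strong, assumption in this toy version is that the program affects earnings only through the measured short-run outcomes.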
And so from my perspective, it's the same kind of thing as talking to people doing applied work: I try to look at these specific problems and then try to come up with more general problems, formulating the problems at a higher level, so that I can think about solutions that work in a range of settings. So from that perspective, the interactions with the tech companies are just very valuable and very useful. And, you know, we do have students now spending time doing internships there and then coming back and writing more interesting theses as a result of their experiences there.
If you'd like to watch more
Nobel Conversations, click here,
or if you'd like to learn
more about econometrics,
check out Josh's Mastering
Econometrics series.
If you'd like to learn more about Guido,
Josh, and Isaiah, check out
the links in the description.