♪ [music] ♪
- [Narrator] Welcome
to Nobel Conversations.
In this episode,
Josh Angrist and Guido Imbens
sit down with Isaiah Andrews
to discuss how the field
of econometrics is evolving.
- [Isaiah] So Guido and Josh,
you're both pioneers
in developing tools for
empirical research in economics.
And so I'd like to explore
sort of where you feel like
the field is heading,
sort of economics, econometrics,
the whole thing.
To start, I'd be interested to hear
about whether you feel like
sort of the way in which
the local average treatment
effects framework sort of took hold
has any lessons for how
new empirical methods in economics
develop and spread
or how they should.
- [Josh] That's a good question.
You go first.
(laughter)
- [Guido] Yeah, so I think
the important thing
is to come up
with good convincing cases
where the questions are clear
and where kind of
the methods apply in general.
So one thing I --
Kind of looking back
at the subsequent literature,
so I really like the regression
discontinuity literature
[where there were] clearly a bunch
of really convincing examples
and that allowed people to kind of
think more clearly, look harder
at the methodological questions.
Kind of do clear applications
that then allow you
to kind of think about,
"Wow, do this type of assumption
seem reasonable here?
What kind of things do we not like
in the early papers?
How can we improve things?"
So having clear applications
motivating these literatures,
I think, is very helpful.
- [Josh] I'm glad you mentioned
the regression
discontinuity, Guido.
I think there's a lot of
complementarity between IV and RD,
Instrumental Variables and
Regression Discontinuity.
And a lot of
the econometric applications
of regression discontinuity
are what used to be called
"fuzzy" RD,
where, you know, it's not discrete
or deterministic at the cutoff,
but just the change
in rates or intensity.
And the LATE framework helps us
understand those applications
and gives us a clear interpretation
for say, something like,
in my paper with Victor Lavy,
where we use Maimonides'
rule, the class size cutoffs.
What are you getting there?
Of course, you can
answer that question
with a linear
constant effects model,
but it turns out
we're not limited to that,
and RD is still very powerful
and illuminating,
even when, you know,
the correlation between the cutoff
and the variable of interest,
in this case class size,
is partial,
maybe even not that strong.
So there was definitely a kind of,
a parallel development.
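The "fuzzy" RD logic Josh describes -- a jump in treatment intensity at the cutoff rather than a deterministic switch, scaled against the jump in outcomes -- can be made concrete with a small simulation. This is only an illustrative sketch with made-up numbers, not code or data from the Angrist-Lavy class-size study, and a serious analysis would also control for the running variable on each side of the cutoff.

```python
# Illustrative sketch of a "fuzzy" regression discontinuity read as IV.
# All numbers and variable names are made up; this is not the Angrist-Lavy data.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
enrollment = rng.uniform(20, 60, n)            # running variable
above = (enrollment >= 40).astype(float)       # indicator for crossing the cutoff

# Treatment intensity (class size) drops at the cutoff, but only partially:
class_size = 0.8 * enrollment - 8.0 * above + rng.normal(0, 3, n)

# Outcome: a true effect of -0.5 per extra student, plus a smooth enrollment trend.
score = 70 - 0.5 * class_size + 0.05 * enrollment + rng.normal(0, 5, n)

# Local Wald / IV estimate within a bandwidth around the cutoff
# (a real analysis would also control for the running variable on each side):
bw = np.abs(enrollment - 40) <= 5
hi, lo = bw & (above == 1), bw & (above == 0)
reduced_form = score[hi].mean() - score[lo].mean()
first_stage = class_size[hi].mean() - class_size[lo].mean()
print("fuzzy-RD (IV) estimate:", reduced_form / first_stage)
# roughly the true -0.5 (the smooth enrollment trend adds a little bias here)
```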
It's also interesting,
you know, nobody talked about
regression discontinuity designs
when we were in graduate school,
it was something that other social
scientists were interested in,
and that kind of grew up
alongside the LATE framework
and we've both done work on
both applications and methods there
and it's been very exciting
to see that kind of develop
and become so important.
It's part of a general evolution,
I think, towards, you know,
credible identification strategies,
causal effects...
you know, making econometrics
more about causal questions
than about models.
In terms of the future,
I think one thing that LATE
has helped facilitate
is a move towards more creative,
randomized trials, where,
you know, there's
something of interest,
it's not possible
or straightforward
to simply turn it off or on,
but you can encourage it
or discourage it.
So you subsidize schooling
with financial aid, for example.
So now we have a whole
framework for interpreting that,
and it kind of opens
the doors to randomized trials
of things that maybe would,
you know,
not have seemed possible before.
We've used that a lot in the work
we do on schools in our --
in the Blueprint Lab at MIT,
we're exploiting random assignment
in very creative ways, I think.
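The encouragement-design idea Josh mentions -- a randomized offer, say financial aid, that raises take-up without forcing it -- is interpreted through the LATE/Wald estimator. Here is a minimal sketch of that mechanics; the setting and every number are hypothetical, purely for illustration.

```python
# Illustrative sketch of a randomized encouragement design through the LATE lens.
# The financial-aid setting and every number here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
aid_offer = rng.integers(0, 2, n)          # Z: randomized encouragement

# Compliance types (no defiers): 20% always-takers, 40% compliers, 40% never-takers.
u = rng.uniform(size=n)
always_taker = u < 0.2
complier = (u >= 0.2) & (u < 0.6)
enroll = np.where(always_taker, 1, np.where(complier, aid_offer, 0))   # D: take-up

# Enrolling raises earnings by 5 for compliers (the LATE) and 2 for always-takers.
gain = np.where(complier, 5.0, 2.0)
earnings = 10 + gain * enroll + rng.normal(0, 3, n)                    # Y: outcome

# Wald / IV estimator: intention-to-treat effect divided by the first stage.
itt = earnings[aid_offer == 1].mean() - earnings[aid_offer == 0].mean()
first_stage = enroll[aid_offer == 1].mean() - enroll[aid_offer == 0].mean()
print("IV estimate:", itt / first_stage)   # about 5, the compliers' average effect
```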
- [Isaiah] Related to that, do you
see sort of particular factors
that make for useful research
in econometrics?
You've alluded to
how having a clear connection
to problems
that are actually coming up
in empirical practice
is often a good idea.
- [Josh] Isn't it always
a good idea?
I often find myself sitting in an
econometrics theory seminar,
say the Harvard MIT seminar,
and I'm thinking, "What problem
is this guy solving?
Who has this problem?"
And, you know,
sometimes there's an
embarrassing silence if I ask
or there might be
a fairly contrived scenario.
I want to see
where the tool is useful.
There are some
purely foundational tools,
I do take the point, you know,
there are people who are
working on conceptual
foundations of, you know,
it's more -- becomes more like
mathematical statistics.
I mean, I remember
an early example of that that I,
you know, I struggled to understand
was the idea
of stochastic equicontinuity,
which one of my thesis advisors,
Whitney Newey,
was using to great effect
and I was trying to understand that
and there isn't really --
It's really foundational, it's not
an application that's driving that,
at least not immediately.
But most things are not like that
and so there should be a problem.
And I think it's on the seller
of that sort of thing,
you know, because there's
an opportunity cost,
the time and attention
and effort it takes to understand things,
so, you know,
it's on the seller to say,
"Hey, I'm solving this problem
and here's a set of results
that show that it's useful,
and here's some insight
that I get."
- [Isaiah] As you said, Josh, great,
sort of there's been a move
in the direction of thinking
more about causality
in economics and empirical
work in economics,
any consequences of sort of the --
the spread of that view
that surprised you
or anything that you view
as downsides
of sort of the way
that empirical economics has gone?
- [Josh] Sometimes I see,
somebody does IV
and they get a result
which seems implausibly large.
That's the usual case.
So it might be, you know,
an extraordinarily large
causal effect of some
relatively minor intervention,
which was randomized
or for which you could make a case
that there's a good design.
And then when I see that,
and, you know, I think
it's very hard for me to believe
that this relatively
minor intervention
has such a large effect.
The author will sometimes resort
to the local average
treatment effects theorem
and say, "Well, these compliers,
you know, they're
special in some way."
And, you know,
they just benefit extraordinarily
from this intervention.
And I'm reluctant
to take that at face value.
I think, you know,
often when effects are too big,
it's because the exclusion
restriction is failing,
so you don't really have the right
endogenous variable
to scale that result.
And so I'm not too happy to see
you know, just sort of
a generic heterogeneity
argument being used
to excuse something
that I think might be
a deeper problem.
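Josh's point about implausibly large IV estimates can be seen in a one-line calculation: the IV estimate is the reduced form divided by the first stage, so any direct effect of the instrument on the outcome (a failure of the exclusion restriction) gets divided by the first stage as well, and a modest violation can masquerade as a huge causal effect. The numbers below are made up purely for illustration.

```python
# Back-of-the-envelope numbers (all made up) for how an exclusion-restriction
# violation gets magnified when the first stage is modest.
true_effect = 0.10     # effect of the endogenous variable on the outcome
first_stage = 0.05     # how much the instrument moves the endogenous variable
direct_effect = 0.02   # instrument's direct effect on the outcome (exclusion fails)

reduced_form = true_effect * first_stage + direct_effect   # what shows up in the data
iv_estimate = reduced_form / first_stage
print(iv_estimate)     # 0.5, five times the true effect of 0.1
```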
- [Guido] I think it played
somewhat of an unfortunate role
in the discussions, kind of,
between reduced form
and structural approaches,
where I feel
that wasn't quite right.
The instrumental
variables assumptions
are at the core - structural
assumptions about behavior -
they were coming from economic --
thinking about the economic
behavior of agents,
and somehow it got
pushed in a direction
that I think wasn't
really very helpful.
The way I think, initially the --
we wrote things up, it was
describing what was happening,
there were a set of methods
people were using,
we clarified what
those methods were doing
and in a way that I think
contained a fair amount of insight,
but somehow
it got pushed into a corner
that I don't think
was necessarily very helpful.
- [Isaiah] I mean,
just the language
of reduced form versus structural
I find kind of funny
in the sense that,
right, the local average treatment
effect model, right,
the potential outcomes model is
a nonparametric structural model,
if you want to think about it,
as you sort of suggested, Guido.
So there's something a little funny
about putting these
two things in opposition when --
- [Guido] Yes.
- [Josh] Well, that language,
of course, comes from
the simultaneous equations framework
that we inherited.
It has the advantage
that people seem to know
what you mean when you use it,
but it might be that
different people
are hearing different things.
- [Guido] Yeah. I think "reduced form"
has sort of become --
used in a little bit
of a pejorative way, yeah?
- [Josh] Sometimes.
- [Guido] [The word].
Which is not really quite what
it was originally intended for.
- [Isaiah] I guess something else
that strikes me in thinking about
the effects of the local average
treatment effect framework
is that often folks will appeal to
a local average treatment effects
intuition for settings well beyond
ones where any sort
of formal result
has actually been established.
And I'm curious, given all the work
that you guys did to, you know,
establish LATE results
in different settings,
I'm curious, any thoughts on that?
- [Guido] I think there's
going to be a lot of cases
where the intuition
does get you some distance,
but it's going to be
somewhat limited
and establishing
formal results there
may be a little tricky
and then maybe only work
in special circumstances,
and you end up
with a lot of formality
that may not quite
capture the intuition.
Sometimes I'm somewhat
uneasy with them
and they are not necessarily
the papers I would want to write,
but I do think something --
intuition often does capture
part of the problem.
I think, in some sense we were
kind of very fortunate there
in the way that the LATE paper
got handled at the journal,
in that, actually, the editor
made it much shorter,
and that then allowed us to kind of
focus on very clear, crisp results.
Whereas, you know, there's this
somewhat unfortunate tendency
in the econometrics literature
of having the papers
get longer and longer.
- [Josh] Well, you should
be able to fix that, man.
- [Guido] I'm trying to fix that.
But I think this is an example
where it's sort of very clear
that having it be short
is actually --
- [Josh] You should impose
that no paper
can be longer than the late paper.
- [Guido] That, wow.
That may be great.
- [Josh] At least no theory,
no theory paper.
- [Guido] Yeah,
and I think, I think...
I'm trying very hard to get
the papers to be shorter.
And I think there's a lot of value
to that, because it's often
the second part of the paper
that doesn't actually get you much
further in understanding things
and it does make things
much harder to read
and, you know,
it sort of goes back
to how I think econometrics
should be done,
you should focus on --
It should be reasonably
close to empirical problems.
They should be very clear problems.
But then often the theory
doesn't need to be quite so long.
- [Josh] Yeah.
- [Guido] I think things have gone
a little off track.
- [Isaiah] A relatively
recent change
has been a seemingly big
increase in demand
for people with sort of
econometrics,
causal effect estimation skills
in the tech sector.
I'm interested,
do either of you have thoughts
on sort of how
that's going to interact
with the development
of empirical methods,
or empirical research
in economics going forward?
- [Josh] Well, there's
sort of a meta point,
which is, there's this new
kind of employer,
the Amazons and the Ubers
and, you know, the TripAdvisor world,
and I think that's great.
And I like to tell my students
about that, you know, especially --
at MIT we have a lot of
computer science majors.
That's our biggest major.
And I try to seduce some of those
folks into economics by saying,
you know, you can go
work for these,
you know, companies that people
are very keen to work for
because the work seems exciting,
you know, that the skills
that you get in econometrics
are as good or better
than any competing
discipline has to offer.
So you should at least
take some econ, take some
econometrics, and some econ.
I did a fun project with Uber
on labor supply of Uber drivers
and it was very, very exciting
to be part of that.
Plus I got to drive
for Uber for a while
and I thought that was fun too.
I did not make enough
that I was tempted to
give up my MIT job,
but I enjoyed the experience.
I see a potential challenge to our
model of graduate education here,
which is, if we're training people
to go work at Amazon, you know,
it's not clear why, you know,
we should be paying
graduate stipends for that.
Why should the taxpayer effectively
be subsidizing that.
Our graduate education
in the US is generously subsidized,
even in private universities,
it's ultimately -- there's a lot of
public money there,
and I think the
traditional rationale for that is,
you know, we were training educators and
scholars, and there's a great externality
from the work that we do.
It's either the research externality,
or a teaching externality.
But, you know, if many of our students
are going to work in the private sector,
that's fine, but then maybe their
employers should pay for that.
- [Guido] How is that so different
from people working for a consulting firm?
- [Josh] Trust me.
It's not clear to me that the number
of jobs in academia has changed.
It's just, I feel like this is
a growing sector, whereas consulting --
you're right to raise that, it might
be the same for consulting.
But, you know, I'm placing more and
more students in these businesses.
So, it's on my mind in
a way that I've sort of,
you know, not been attentive
to consulting jobs.
You know, consulting was always there.
It's important, and I think there's also
some movement from consulting back
into research. It's a little more fluid.
A lot of the work
in both domains, I have to say,
is not really different, but,
you know, people who are working
in the tech sector are doing things
that are potentially of scientific
interest, but mostly it's hidden.
And then you really have to say, you know,
why is the government paying for this?
- [Isaiah] Yeah, although, I mean,
to Guido's point,
I guess there's a data
question here of, has the sort
of total, sort of,
private
for-profit sector employment of econ Ph.D.
program graduates increased, or has
it just been a substitution from
finance and consulting towards tech?
- [Josh] I may be reacting to something
that's not really happening.
- [Guido] So, I've actually done some work
with some of these tech companies,
so I don't disagree with Josh's
point that we need to think
a little bit about the funding model,
who in the end is paying for the
education. But from a
scientific perspective,
not only do these places
have great data -- and nowadays
they tend to be very careful
with that for privacy reasons --
but they also have great questions.
I find it very
inspiring kind of to listen to
the people there and kind of see
what kind of questions they have,
and often their questions
also come up outside of these companies.
I have a couple of
papers with Raj Chetty and Susan Athey
where we
look at ways of combining experimental data
and observational data.
Raj Chetty was interested
in what is the effect
of early childhood programs on outcomes
later in life -- not just kind of test scores,
but earnings and such -- and
we developed methods
that would help you shed
light on that
in some settings, and the same problems
came up in these
tech company settings.
And so from my perspective, it's
the same as talking to
people doing empirical work:
I try to kind of look at these
specific problems and then try to come up
with more general problems,
formulating the problems at a higher level,
so that I can think about solutions
that work in a range of settings.
And so from that perspective, the
interactions with the tech companies are
just very valuable and very useful.
You know,
we do have students now
doing internships there and
then coming back and writing
more interesting theses as a
result of their experiences there.
- [Narrator] If you'd like to watch
more Nobel Conversations,
click here,
or if you'd like to learn
more about econometrics,
check out Josh's
"Mastering Econometrics" series.
If you'd like to learn more
about Guido, Josh and Isaiah
check out the links
in the description.
♪ [music] ♪