0:00:00.100,0:00:02.350 ♪ [music] ♪ 0:00:03.700,0:00:05.700 - [narrator] Welcome[br]to Nobel Conversations. 0:00:07.000,0:00:10.128 In this episode, Josh Angrist[br]and Guido Imbens 0:00:10.128,0:00:13.700 sit down with Isaiah Andrews[br]to discuss and disagree 0:00:13.700,0:00:16.580 over the role of machine learning[br]in applied econometrics. 0:00:18.300,0:00:19.769 - [Isaiah] So, of course,[br]there are a lot of topics 0:00:19.769,0:00:21.087 where you guys largely agree, 0:00:21.087,0:00:22.313 but I'd like to turn to one 0:00:22.313,0:00:24.240 where maybe you have[br]some differences of opinion. 0:00:24.240,0:00:25.728 So I'd love to hear[br]some of your thoughts 0:00:25.728,0:00:26.883 about machine learning 0:00:26.883,0:00:29.900 and the role that it's playing[br]and is going to play in economics. 0:00:30.200,0:00:33.352 - [Guido] I've looked at some data[br]that was proprietary, 0:00:33.352,0:00:35.100 so there's[br]no published paper there. 0:00:36.719,0:00:38.159 There was an experiment[br]that was done 0:00:38.159,0:00:39.500 on some search algorithm. 0:00:39.700,0:00:41.497 And the question was... 0:00:42.901,0:00:45.600 it was about ranking things[br]and changing the ranking. 0:00:45.900,0:00:47.500 It was sort of clear 0:00:48.400,0:00:50.600 that there was going to be[br]a lot of heterogeneity there. 0:00:50.600,0:00:51.700 Mmm. 0:00:51.700,0:00:58.120 You know, if you look for, say, 0:00:58.300,0:01:00.350 a picture of Britney Spears, 0:01:00.350,0:01:02.400 it doesn't really matter[br]where you rank it, 0:01:02.400,0:01:05.500 because you're going to figure out[br]what you're looking for, 0:01:06.200,0:01:07.867 whether you put it[br]in the first or second 0:01:07.867,0:01:09.800 or third position of the ranking.
0:01:10.100,0:01:12.500 But if you're looking[br]for the best econometrics book, 0:01:13.300,0:01:16.500 if you put your book[br]first or your book tenth, 0:01:16.500,0:01:18.100 that's going to make[br]a big difference 0:01:18.600,0:01:21.829 in how often people[br]are going to click on it. 0:01:21.829,0:01:23.417 And so there you go -- 0:01:23.417,0:01:27.218 - [Josh] Why do I need[br]machine learning to discover that? 0:01:27.218,0:01:29.195 It seems like[br]I could discover it simply. 0:01:29.195,0:01:30.435 - [Guido] So in general-- 0:01:30.435,0:01:32.100 - [Josh] There were lots[br]of possible... 0:01:32.100,0:01:35.490 - What you want is to think about[br]there being lots of characteristics 0:01:35.490,0:01:37.610 of the items, 0:01:37.610,0:01:41.682 and you want to understand[br]what drives the heterogeneity 0:01:42.300,0:01:43.427 in the effect of-- 0:01:43.427,0:01:45.600 - But you're just predicting. 0:01:45.600,0:01:47.700 In some sense, you're solving[br]a marketing problem. 0:01:48.400,0:01:49.580 - [inaudible] it's a causal effect. 0:01:49.580,0:01:51.800 - It's causal, but it has[br]no scientific content. 0:01:51.800,0:01:53.300 Think about... 0:01:54.100,0:01:57.300 - No, but there are similar things[br]in medical settings. 0:01:58.000,0:02:01.300 If you do an experiment,[br]you may actually be very interested 0:02:01.300,0:02:03.900 in whether the treatment[br]works for some groups or not. 0:02:03.900,0:02:06.500 And you have a lot of individual[br]characteristics, 0:02:06.500,0:02:08.000 and you want[br]to systematically search. 0:02:08.000,0:02:09.500 - Yeah. I'm skeptical about that -- 0:02:09.500,0:02:12.603 that sort of idea that there's[br]this personal causal effect 0:02:12.603,0:02:13.900 that I should care about, 0:02:14.000,0:02:16.063 and that machine learning[br]can discover it 0:02:16.063,0:02:17.596 in some way that's useful.
0:02:17.596,0:02:21.400 So think about -- I've done[br]a lot of work on schools, 0:02:21.400,0:02:23.950 going to, say, a charter school -- 0:02:23.950,0:02:25.225 a publicly funded private school, 0:02:25.225,0:02:26.500 effectively, you know,[br]that's free to structure 0:02:26.500,0:02:29.300 its own curriculum --[br]for context there. 0:02:29.300,0:02:31.000 Some types of charter schools 0:02:31.000,0:02:32.700 generate spectacular[br]achievement gains, 0:02:32.700,0:02:36.400 and in the data set[br]that produces that result, 0:02:36.400,0:02:37.800 I have a lot of covariates. 0:02:37.800,0:02:41.353 So I have baseline scores,[br]and I have family background, 0:02:41.353,0:02:43.576 the education of the parents, 0:02:43.576,0:02:45.800 the sex of the child,[br]the race of the child. 0:02:45.800,0:02:48.300 And, well, as soon as I put[br]half a dozen of those together, 0:02:48.400,0:02:51.900 I have a very high[br]dimensional space. 0:02:52.300,0:02:53.600 I'm definitely interested[br]in sort of coarse features 0:02:53.600,0:02:54.900 of that treatment effect, 0:02:54.900,0:02:57.150 like whether it's better for people 0:02:57.150,0:02:59.400 who come from[br]lower income families. 0:03:02.600,0:03:06.000 I have a hard time believing[br]that there's an application 0:03:06.400,0:03:10.300 for the very high dimensional[br]version of that, 0:03:10.500,0:03:11.850 where I discover[br]that for non-white children 0:03:11.850,0:03:13.200 who have high family incomes 0:03:13.800,0:03:17.800 but baseline scores[br]in the third quartile, 0:03:18.300,0:03:20.650 who only went to public school[br]in the third grade 0:03:20.650,0:03:23.000 but not the sixth grade,[br]the effect is different. 0:03:23.000,0:03:25.500 That's what that high[br]dimensional analysis produces: 0:03:25.800,0:03:28.100 this very elaborate[br]conditional statement. 0:03:28.300,0:03:31.000 There are two things that are wrong[br]with that, in my view. 0:03:31.000,0:03:32.500 First, I don't see it as...
0:03:32.500,0:03:34.000 I just can't imagine[br]why it's actionable. 0:03:34.600,0:03:36.600 I don't know why[br]you'd want to act on it. 0:03:36.600,0:03:38.900 And I know also[br]that there's some alternative model 0:03:38.900,0:03:41.200 that fits almost as well 0:03:41.800,0:03:43.000 that flips everything, 0:03:43.200,0:03:45.350 because machine learning[br]doesn't tell me 0:03:45.350,0:03:47.500 that this is really[br]the predictor that matters. 0:03:48.400,0:03:52.300 It just tells me that[br]this is a good predictor. 0:03:52.800,0:03:54.350 And so, I think[br]there is something different 0:03:54.350,0:03:55.900 about the social science context. 0:03:57.940,0:03:59.545 - [Guido] I think[br]the social science applications 0:03:59.545,0:04:01.150 you're talking about 0:04:01.150,0:04:02.600 are ones where... 0:04:03.400,0:04:08.100 I think there's not a huge amount[br]of heterogeneity in the effects. 0:04:08.400,0:04:11.200 - [Josh] There might be 0:04:11.200,0:04:14.000 if you allow me[br]to fill that space. 0:04:14.600,0:04:16.350 - No... not even then. 0:04:16.350,0:04:18.100 I think for a lot[br]of those interventions, 0:04:18.300,0:04:22.000 you would expect that the effect[br]is the same sign for everybody. 0:04:23.400,0:04:27.600 There may be small differences[br]in the magnitude, but it's not... 0:04:28.200,0:04:31.700 For a lot of these education[br]interventions -- they're good for everybody. 0:04:32.900,0:04:35.250 It's not that they're bad[br]for some people 0:04:35.250,0:04:37.600 and good for other people, 0:04:37.600,0:04:39.200 with some kind[br]of very small pockets 0:04:39.200,0:04:40.800 where they're bad. 0:04:40.900,0:04:43.900 There may be some variation[br]in the magnitude, 0:04:44.000,0:04:48.200 but you would need very,[br]very big data sets to find it. 0:04:48.400,0:04:49.900 I agree that in those cases, 0:04:49.900,0:04:51.400 it probably wouldn't be[br]very actionable anyway.
0:04:51.700,0:04:53.800 But I think there are a lot[br]of other settings 0:04:54.100,0:04:56.600 where there is[br]much more heterogeneity. 0:04:57.400,0:04:59.500 - Well, I'm open[br]to that possibility, 0:04:59.500,0:05:05.550 and I think the example you gave[br]is essentially a marketing example. 0:05:06.430,0:05:10.700 - No, those have implications[br]for the organization, 0:05:10.700,0:05:13.900 for whether you need[br]to worry about the... 0:05:14.000,0:05:17.900 - Well, I need to see that paper. 0:05:18.400,0:05:21.200 - [Isaiah] So the sense I'm getting... 0:05:21.500,0:05:23.100 - We still disagree on something.[br]- Yes. 0:05:23.100,0:05:24.100 [laughter] 0:05:24.100,0:05:25.400 - We haven't converged[br]on everything. 0:05:25.400,0:05:26.050 - I'm getting that sense. 0:05:26.050,0:05:26.700 [laughter] 0:05:27.200,0:05:29.100 - Actually, we've diverged on this, 0:05:29.100,0:05:30.050 because this wasn't around[br]to argue about. 0:05:30.050,0:05:31.000 [laughter] 0:05:33.200,0:05:35.600 - Is it getting a little warm here? 0:05:35.600,0:05:38.000 - Warmed up. Warmed up is good. 0:05:38.100,0:05:40.800 - [Isaiah] The sense I'm getting is, Josh,[br]you're not saying 0:05:40.900,0:05:43.400 that you're confident[br]that there is no way 0:05:43.400,0:05:45.400 that there is an application[br]where this stuff 0:05:45.400,0:05:46.800 is useful; you're saying 0:05:46.800,0:05:48.200 you're unconvinced by[br]the existing applications to date. 0:05:48.300,0:05:51.280 - Fair enough. 0:05:51.280,0:05:53.120 I'm very confident. 0:05:53.120,0:05:54.300 [laughter] 0:05:54.300,0:05:55.300 In this case. 0:05:55.300,0:05:57.500 - [Guido] I think Josh does have a point 0:05:58.000,0:06:02.100 that even in the prediction cases, 0:06:02.300,0:06:05.000 where a lot of the machine learning[br]methods really shine 0:06:05.000,0:06:06.600 is where there's just a lot[br]of heterogeneity. 0:06:07.300,0:06:10.600 - [Josh] You don't really care much[br]about the details there, right?
0:06:10.900,0:06:15.000 It doesn't have[br]a policy angle or something. 0:06:15.200,0:06:18.100 - [Guido] Like recognizing[br]handwritten digits and stuff. 0:06:18.300,0:06:21.150 It does much better there 0:06:21.150,0:06:24.000 than building[br]some complicated model. 0:06:24.400,0:06:28.100 But in a lot of the social science,[br]a lot of the economic applications, 0:06:28.300,0:06:30.200 we actually know a huge amount[br]about the relationship 0:06:30.200,0:06:32.100 between these variables. 0:06:32.100,0:06:34.600 A lot of the relationships[br]are strictly monotone. 0:06:35.400,0:06:39.400 Education is going to increase[br]people's earnings, 0:06:39.800,0:06:41.950 irrespective of the demographic, 0:06:41.950,0:06:44.100 irrespective of the level[br]of education you already have. 0:06:44.100,0:06:45.950 - [Josh] Until they get to a Ph.D. 0:06:45.950,0:06:47.800 - Yeah, there is graduate school... 0:06:48.150,0:06:49.150 [laughter] 0:06:49.500,0:06:50.700 but over a reasonable range, 0:06:51.600,0:06:55.900 it's not going[br]to go down very much. 0:06:56.100,0:06:57.900 In a lot of the settings 0:06:57.900,0:06:59.700 where these machine learning[br]methods shine, 0:06:59.700,0:07:01.900 there's a lot of [ ] 0:07:02.100,0:07:04.900 kind of multimodality[br]in these relationships, 0:07:05.300,0:07:08.400 and there they're going to be[br]very powerful. 0:07:08.400,0:07:11.500 But I still stand by this: 0:07:11.700,0:07:16.100 these methods just have[br]a huge amount to offer 0:07:16.400,0:07:18.100 for economists, 0:07:18.200,0:07:21.700 and they're going to be[br]a big part of the future. 0:07:23.400,0:07:24.600 - [Isaiah] Feels like[br]there's something interesting 0:07:24.600,0:07:25.800 to be said about[br]machine learning here. 0:07:25.800,0:07:27.700 So, Guido, I was wondering,[br]could you give some more... 0:07:28.000,0:07:29.000 maybe some examples[br]of the sorts of applications 0:07:29.000,0:07:32.500 you're thinking about[br][ ] at the moment?
0:07:32.500,0:07:34.100 - [Guido] So, one area is where, 0:07:34.700,0:07:36.400 instead of looking[br]for average causal effects, 0:07:36.500,0:07:39.350 we're looking for[br]individualized estimates, 0:07:39.350,0:07:42.200 predictions of causal effects, 0:07:42.400,0:07:44.950 and there the machine learning[br]algorithms have been very effective. 0:07:48.300,0:07:51.500 Traditionally, we would have done[br]these things using kernel methods. 0:07:51.600,0:07:54.500 And theoretically they work great, 0:07:54.600,0:07:56.000 and there are some arguments 0:07:56.000,0:07:57.400 that, formally,[br]you can't do any better. 0:07:57.600,0:08:00.500 But in practice,[br]they don't work very well. 0:08:00.900,0:08:03.150 Random causal forest-type things 0:08:03.150,0:08:05.400 that Stefan Wager and Susan Athey[br]have been working on 0:08:05.400,0:08:09.500 have been used very widely. 0:08:09.600,0:08:12.200 They've been very effective[br]in these settings 0:08:12.400,0:08:18.100 to actually get causal effects[br]that vary by [ ]. 0:08:20.700,0:08:23.200 I think this is still just the beginning[br]of these methods. 0:08:23.200,0:08:25.700 But in many cases, 0:08:26.400,0:08:31.600 these algorithms are very effective[br]at searching over big spaces 0:08:31.800,0:08:35.600 and finding the functions[br]that fit very well, 0:08:35.900,0:08:41.100 in ways that we couldn't[br]really do beforehand. 0:08:41.500,0:08:43.400 - [Josh] I don't know of an example 0:08:43.400,0:08:45.300 where machine learning[br]has generated insights 0:08:45.300,0:08:48.100 about a causal effect[br]that I'm interested in. 0:08:48.300,0:08:49.800 And I do know of examples 0:08:49.800,0:08:51.300 where it's potentially[br]very misleading.
0:08:51.300,0:08:53.700 So I've done some work[br]with Brigham Frandsen, 0:08:54.100,0:08:55.100 using, for example, random forests[br]to model covariate effects 0:08:55.100,0:08:59.900 in an instrumental[br]variables problem 0:09:00.200,0:09:01.200 where you need[br]to condition on covariates. 0:09:04.400,0:09:06.300 And you don't particularly[br]have strong feelings 0:09:06.300,0:09:08.200 about the functional form for that, 0:09:08.200,0:09:10.000 so maybe you should... 0:09:10.900,0:09:12.700 be open to flexible curve fitting, 0:09:12.700,0:09:14.500 and that leads you down a path 0:09:14.500,0:09:18.000 where there are a lot[br]of nonlinearities in the model, 0:09:18.200,0:09:20.600 and that's very dangerous with IV, 0:09:20.600,0:09:23.000 because any sort[br]of excluded nonlinearity 0:09:23.300,0:09:25.450 potentially generates[br]a spurious causal effect, 0:09:25.450,0:09:27.600 and Brigham and I[br]showed that very powerfully, 0:09:27.900,0:09:32.200 I think, in[br]the case of two instruments 0:09:32.700,0:09:36.000 that come from a paper of mine[br]with Bill Evans, where if you, 0:09:36.500,0:09:37.600 you know, replace 0:09:38.100,0:09:42.600 a traditional two-stage least squares[br]estimator with some kind of random forest, 0:09:42.900,0:09:48.000 you get very precisely[br]estimated nonsense estimates. 0:09:49.000,0:09:51.100 You know, I think that's[br]a big caution. 0:09:51.100,0:09:53.400 And, you know, in view of those findings, 0:09:53.700,0:09:57.100 in an example I care about, where[br]the instruments are very simple 0:09:57.400,0:09:59.100 and I believe that they're valid, 0:09:59.300,0:10:01.600 you know, I would be[br]skeptical of that. So 0:10:02.900,0:10:06.800 nonlinearity and IV don't mix[br]very comfortably. Now, you know, 0:10:07.200,0:10:11.400 in some sense that's already[br]a more complicated setting. - Well, it's IV.
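[editor's note] A minimal numerical sketch of the danger Josh describes above. This is not the Angrist and Frandsen implementation; the data-generating process and all variable names are invented for illustration. With a valid instrument, linear two-stage least squares recovers the true effect, while an overfit "machine-learned" first stage, caricatured here by a learner flexible enough to memorize the sample (so its fitted values equal the endogenous regressor), collapses the second stage back to biased OLS.

```python
# Sketch: why a flexible, overfit first stage is dangerous in IV.
# Invented DGP: confounder u drives both x and y; z is a valid instrument.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
z = rng.binomial(1, 0.5, n).astype(float)   # valid binary instrument
u = rng.normal(0, 1, n)                     # unobserved confounder
x = z + u + 0.1 * rng.normal(0, 1, n)       # endogenous regressor
beta_true = 1.0
y = beta_true * x + u                       # structural equation

def ols_slope(a, b):
    """Slope from an OLS regression of b on a (with intercept)."""
    return np.cov(a, b)[0, 1] / np.var(a)

b_ols = ols_slope(x, y)   # biased upward: x is correlated with u

# Linear 2SLS (Wald estimator for a single binary instrument):
b_2sls = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # close to beta_true

# Caricature of an overfit first stage: a learner that memorizes the
# sample returns fitted values equal to x itself, so the "second stage"
# regression of y on x_hat reproduces the OLS bias.
x_hat_overfit = x.copy()
b_overfit = ols_slope(x_hat_overfit, y)

print(f"OLS slope:               {b_ols:.2f}")      # well above 1
print(f"Linear 2SLS:             {b_2sls:.2f}")     # close to 1
print(f"Overfit first stage:     {b_overfit:.2f}")  # biased, like OLS
```

This sketch only shows the overfitting channel; in the setting Josh describes, excluded nonlinearities in the first stage add a second source of bias, acting like invalid instruments.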
0:10:11.600,0:10:11.900 - Yeah, 0:10:12.500,0:10:16.700 but then we worked on that[br]and found that out. 0:10:18.600,0:10:22.300 - [Guido] Actually, a lot[br]of these papers cross my desk, 0:10:22.700,0:10:29.500 but the motivation is not clear[br]and, in fact, really lacking. 0:10:29.800,0:10:35.100 And they're not the type of[br]semi-parametric foundational papers. 0:10:35.400,0:10:37.100 So that's a big problem, 0:10:38.000,0:10:42.400 and a kind of related problem is that[br]we have this tradition in econometrics 0:10:42.600,0:10:47.500 of being very focused on these[br]formal asymptotic results. 0:10:48.800,0:10:52.600 We just have a lot of papers[br]where people propose 0:10:52.800,0:10:55.700 a method and then establish[br]the asymptotic properties 0:10:56.300,0:11:01.900 in a very kind of[br]standardized way. 0:11:02.900,0:11:07.200 Well, I think that sort of closed[br]the door for a lot of work 0:11:07.200,0:11:11.600 that doesn't fit into that, whereas[br]in the machine learning literature, 0:11:11.900,0:11:14.300 a lot of things are[br]more algorithmic. 0:11:15.700,0:11:18.500 People had algorithms for coming[br]up with predictions 0:11:18.800,0:11:23.600 that turn out to actually work much better[br]than, say, nonparametric kernel regression. 0:11:24.000,0:11:26.800 For a long time, we were doing all[br]the nonparametrics in econometrics 0:11:26.800,0:11:31.100 using kernel regression, and[br]it was great for proving theorems. 0:11:31.300,0:11:34.800 You could get confidence intervals,[br]consistency, and asymptotic normality, 0:11:34.800,0:11:37.000 and it was all great, but[br]it wasn't very useful. 0:11:37.300,0:11:40.900 And the things they did in machine[br]learning are just way, way better, 0:11:41.000,0:11:45.100 but they didn't have the proofs.[br]- [Josh] That's not my beef with machine learning theory.
0:11:45.300,0:11:51.200 - No, I know, but I'm saying,[br]for the prediction part, 0:11:51.400,0:11:54.500 it does much better. Yeah, it does[br]better curve fitting. 0:11:54.900,0:11:56.500 But it did so 0:11:57.100,0:12:02.700 in a way that would not have made[br]those papers initially easy to get into 0:12:03.000,0:12:06.300 the econometrics journals, because it[br]wasn't proving those types of things. 0:12:06.400,0:12:11.200 You know, when Breiman was doing his[br]regression trees, that just didn't fit in, 0:12:11.800,0:12:15.100 and I think he would have[br]had a very hard time 0:12:15.200,0:12:18.400 publishing these things[br]in econometrics journals. 0:12:18.900,0:12:24.400 So I think we limited[br]ourselves too much, 0:12:24.700,0:12:27.900 and that left us closed off 0:12:28.000,0:12:30.800 from a lot of these machine learning[br]methods that are actually very useful. 0:12:30.900,0:12:34.000 Hmm. I mean, I think, in general, 0:12:34.900,0:12:36.200 in that literature, the computer 0:12:36.200,0:12:39.300 scientists have proposed a huge[br]number of these algorithms 0:12:39.600,0:12:43.900 that are actually very useful 0:12:44.000,0:12:44.700 and that are 0:12:45.500,0:12:49.100 affecting the way we're going[br]to be doing empirical work, 0:12:49.800,0:12:55.100 but we've not fully internalized that,[br]because we're still very focused on getting 0:12:55.300,0:12:57.500 point estimates and[br]getting standard errors 0:12:58.600,0:13:01.200 and getting p-values in a way that 0:13:01.700,0:13:03.100 we need to move beyond 0:13:03.300,0:13:04.300 to fully harness 0:13:04.300,0:13:10.700 the benefits from[br]the machine learning literature. 0:13:10.900,0:13:15.100 - [Isaiah] Hmm. On the one hand, I guess[br]I very much take your point that the 0:13:15.200,0:13:18.600 traditional econometrics framework[br]of sort of propose a method, 0:13:18.600,0:13:22.600 prove a limit theorem under[br]some asymptotic story, 0:13:22.600,0:13:26.900 publish a paper,[br]is constraining. 0:13:26.900,0:13:29.700 And that, in some sense, by thinking more 0:13:29.700,0:13:33.200 broadly about what a methods paper could[br]look like, we may do better. In some sense, 0:13:33.200,0:13:35.900 certainly the machine learning[br]literature has found a bunch of things 0:13:35.900,0:13:38.300 which seem to work quite[br]well for a number of problems 0:13:38.300,0:13:42.400 and are now having substantial influence[br]in economics. I guess a question 0:13:42.400,0:13:44.800 I'm interested in is, how do you think 0:13:45.200,0:13:47.600 about the goal of theory? 0:13:47.900,0:13:51.200 Sort of, do you think there's[br]no value in the theory part of it? 0:13:51.600,0:13:54.800 Because I guess it's sort of a question[br]that I often have when seeing 0:13:54.800,0:13:56.900 the output from a machine learning tool -- 0:13:56.900,0:13:59.400 though, actually, a number of the[br]methods that you talked about 0:13:59.400,0:14:01.800 do have inferential results[br]developed for them -- 0:14:02.600,0:14:06.400 something that I always wonder about[br]is uncertainty quantification. Just, 0:14:06.500,0:14:08.000 you know, I have my prior, 0:14:08.000,0:14:11.000 I come into the world with my view.[br]I see the result of this thing. 0:14:11.000,0:14:14.500 How should I update based on it? And[br]in some sense, if I'm in a world where 0:14:14.600,0:14:15.100 things are 0:14:15.200,0:14:18.200 normally distributed, I know[br]how to do it. Here, I don't. 0:14:18.200,0:14:21.400 And so I'm interested to hear[br]how you think about it.
0:14:21.500,0:14:24.300 - [Guido] So, I don't see this[br]as saying, well, 0:14:24.400,0:14:26.500 these results[br]are not interesting, 0:14:26.600,0:14:27.700 but there are going to be a lot of cases 0:14:28.000,0:14:31.200 where it's going to be incredibly hard to[br]get those results, and we may not be able 0:14:31.200,0:14:33.200 to get there. 0:14:33.400,0:14:37.700 And we may need to do it in stages, where[br]first someone says, hey, I have this 0:14:39.600,0:14:44.800 interesting algorithm for doing[br]something, and it works well by some 0:14:45.600,0:14:49.900 criterion on this[br]particular data set, 0:14:51.000,0:14:53.400 and I put it[br]out there, and 0:14:53.700,0:14:58.000 maybe someone will figure out a way that[br]you can later actually still do inference 0:14:58.000,0:14:59.100 under some conditions. 0:14:59.100,0:15:02.100 And maybe those are not[br]particularly realistic conditions; 0:15:02.100,0:15:05.500 then we kind of go further.[br]But I think we've been 0:15:06.700,0:15:11.400 constraining things too much, where we said,[br]you know, these are the types of things 0:15:12.100,0:15:14.400 that we need to do. And in some sense, 0:15:15.700,0:15:18.200 that goes back to[br]the way Josh and I 0:15:19.700,0:15:21.900 thought about things for the[br]local average treatment effect. 0:15:21.900,0:15:24.600 That wasn't quite the way people[br]were thinking about these problems 0:15:24.600,0:15:29.200 before. There was a sense[br]that some of the people said, you know, 0:15:29.500,0:15:31.900 the way you need to do these[br]things is you first say 0:15:32.200,0:15:36.300 what you're interested in estimating,[br]and then you do the best job you can 0:15:36.500,0:15:37.700 in estimating that. 0:15:38.100,0:15:44.200 And what you guys are[br]doing is doing it backwards. 0:15:44.300,0:15:46.700 You're going to say,[br]here,
I have an estimator, 0:15:47.300,0:15:49.600 and now I'm going to figure out 0:15:49.800,0:15:51.400 what it is estimating, and then, ex post, 0:15:51.400,0:15:53.900 you're going to say why you[br]think that's interesting, 0:15:53.900,0:15:56.600 or maybe why it's not interesting.[br]And that's not okay -- 0:15:56.600,0:15:58.600 you're not allowed to do it that way. 0:15:59.000,0:16:04.100 And I think we should just be a little[br]bit more flexible in thinking about 0:16:04.300,0:16:06.300 how to look at 0:16:06.400,0:16:11.300 problems, because I think we've missed[br]some things by not doing that. 0:16:13.000,0:16:16.600 - [Josh] So you've heard our views,[br]Isaiah. You've seen that we have 0:16:17.000,0:16:20.400 some points of disagreement. Why[br]don't you referee this dispute for us? 0:16:22.500,0:16:28.100 - [Isaiah] Oh, so nice of you to ask me[br]a small question. So I guess, for one, 0:16:28.200,0:16:33.200 I very much agree with something[br]that Guido said earlier. 0:16:36.000,0:16:41.400 Where the case for machine learning seems[br]relatively clear is in settings where, 0:16:41.500,0:16:45.100 you know, we're interested in some version[br]of a nonparametric prediction problem. 0:16:45.100,0:16:49.700 So I'm interested in estimating a conditional[br]expectation or conditional probability, 0:16:50.000,0:16:52.100 and in the past, maybe 0:16:52.100,0:16:55.800 I would have run a kernel regression or[br]I would have run a series regression or 0:16:56.100,0:16:57.400 something along those lines.
0:16:57.700,0:16:58.700 Sort of, it seems like, 0:16:58.700,0:17:02.000 at this point, we have a fairly good[br]sense that in a fairly wide range 0:17:02.000,0:17:06.300 of applications, machine learning[br]methods seem to do better for, 0:17:06.400,0:17:06.800 you know, 0:17:06.800,0:17:08.800 estimating conditional mean functions 0:17:08.800,0:17:12.000 or conditional probabilities or[br]various other nonparametric objects 0:17:12.400,0:17:16.600 than the more traditional nonparametric[br]methods that were studied in econometrics 0:17:16.600,0:17:19.100 and statistics, especially[br]in high-dimensional settings. 0:17:19.500,0:17:23.100 - [Guido] So you're thinking of maybe[br]the propensity score or something like that? 0:17:23.100,0:17:25.300 - Exactly -- so nuisance functions, yeah. 0:17:25.300,0:17:28.900 So things like propensity scores,[br]or, I mean, even objects 0:17:28.900,0:17:30.100 of more direct inferential 0:17:30.200,0:17:32.400 interest, like conditional[br]average treatment effects, right? 0:17:32.400,0:17:35.100 Which are the difference of two[br]conditional expectation functions -- 0:17:35.100,0:17:36.300 potentially things like that. 0:17:36.500,0:17:40.400 Of course, even there,[br]right, the theory 0:17:40.500,0:17:43.700 for inference, or the theory for[br]sort of how to interpret, 0:17:43.700,0:17:45.900 how to make large-sample statements[br]about some of these things, is 0:17:46.000,0:17:50.100 less well-developed depending on the[br]machine learning estimator used. 0:17:50.100,0:17:53.800 And so I think something[br]that is tricky is that we 0:17:53.900,0:17:55.700 can have these methods 0:17:55.700,0:17:58.000 which seem to work a lot[br]better for some purposes, 0:17:58.000,0:18:01.600 but which we need to be a bit[br]careful in how we plug them in or how 0:18:01.600,0:18:03.300 we interpret the resulting statements.
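[editor's note] A toy version of the "plug in an estimator for the conditional expectation" idea Isaiah describes: in a randomized setting, fit the two conditional mean functions nonparametrically and average their difference to estimate the average treatment effect. All data and numbers are invented, and a crude binned (regressogram) fit stands in for an actual machine learning regressor; this is a sketch of the plug-in pattern, not a method from the conversation.

```python
# Sketch: plug nonparametric estimates of the conditional means
#   mu1(x) = E[Y | X = x, D = 1]  and  mu0(x) = E[Y | X = x, D = 0]
# into the estimand  ATE = E[mu1(X) - mu0(X)].
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
x = rng.uniform(0, 1, n)                 # covariate
d = rng.binomial(1, 0.5, n)              # randomized treatment
tau = 1.0 + x                            # heterogeneous effect, mean 1.5
y = np.sin(3 * x) + d * tau + 0.5 * rng.normal(0, 1, n)

def fit_binned_mean(x_tr, y_tr, n_bins=50):
    """Return a predictor: the mean of y_tr within each bin of x_tr.
    (A stand-in for any flexible regression method.)"""
    edges = np.linspace(0, 1, n_bins + 1)
    idx = np.clip(np.digitize(x_tr, edges) - 1, 0, n_bins - 1)
    bin_means = np.array([y_tr[idx == b].mean() for b in range(n_bins)])
    return lambda x_new: bin_means[
        np.clip(np.digitize(x_new, edges) - 1, 0, n_bins - 1)
    ]

mu1 = fit_binned_mean(x[d == 1], y[d == 1])   # treated-arm fit
mu0 = fit_binned_mean(x[d == 0], y[d == 0])   # control-arm fit

ate_hat = np.mean(mu1(x) - mu0(x))            # plug-in ATE estimate
print(f"estimated ATE: {ate_hat:.2f}  (truth: 1.50)")
```

In observational (non-randomized) data, the same plug-in pattern would additionally need propensity-score adjustment, and, as Isaiah notes, valid inference after plugging in a flexible learner typically requires extra care (for example, sample splitting).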
0:18:03.600,0:18:06.200 But, of course, that's a very,[br]very active area right now, where 0:18:06.400,0:18:10.400 people are doing tons of great work.[br]And so I fully expect and hope 0:18:10.400,0:18:12.800 to see much more going forward there. 0:18:13.000,0:18:17.300 So, one issue with machine learning[br]that always seems a danger -- or 0:18:17.400,0:18:20.300 that is sometimes a danger[br]and has sometimes led to 0:18:20.500,0:18:22.600 applications that have[br]made less sense -- is 0:18:22.800,0:18:25.100 when folks 0:18:25.300,0:18:28.500 start with a method that they're very[br]excited about, rather than a question. 0:18:28.900,0:18:32.100 Right? So, sort of starting with[br]a question, where here's the 0:18:32.500,0:18:36.200 object I'm interested in, here is[br]the parameter of interest -- let me, 0:18:36.700,0:18:37.100 you know, 0:18:37.300,0:18:39.500 think about how I would[br]identify that thing, 0:18:39.500,0:18:41.800 how I would recover that[br]thing if I had a ton of data. 0:18:41.900,0:18:44.000 Oh, here's a conditional[br]expectation function; 0:18:44.000,0:18:47.100 let me plug in a machine[br]learning estimator for that. 0:18:47.200,0:18:48.800 That seems very, very sensible. 0:18:49.000,0:18:53.100 Whereas, you know, if I[br]regress quantity on price 0:18:53.700,0:18:56.000 and say that I used a[br]machine learning method, 0:18:56.300,0:18:58.900 maybe I'm satisfied that that[br]solves the endogeneity problem 0:18:58.900,0:19:01.200 we're usually worried[br]about there -- maybe I'm not. 0:19:01.500,0:19:03.200 But, again, that's something where 0:19:03.400,0:19:06.300 the way to address it seems[br]relatively clear, right? 0:19:06.500,0:19:09.000 It's: define your object of interest and 0:19:09.200,0:19:11.600 think about... - [Guido] Is that just[br]bringing in the economics? 0:19:11.700,0:19:12.200 - Exactly.
0:19:12.200,0:19:15.400 And can I think about it[br]and identify it, but harness 0:19:15.400,0:19:18.300 the power of the machine[br]learning methods 0:19:18.500,0:19:22.800 for some of the components?[br]- Precisely. - Exactly. So, sort of, you know, 0:19:22.900,0:19:25.600 the question of interest is the same as[br]the question of interest has always been, 0:19:25.600,0:19:29.500 but we now have better methods for[br]estimating some pieces of it, right? 0:19:29.900,0:19:31.600 The place that seems 0:19:31.900,0:19:33.400 harder to forecast is -- right, 0:19:33.400,0:19:36.300 obviously, there's a huge amount[br]going on in the machine 0:19:36.400,0:19:37.400 learning literature, 0:19:37.500,0:19:39.700 and the sort of limited ways 0:19:39.700,0:19:42.900 of plugging it in that I've referenced[br]so far are a limited piece of that. 0:19:43.000,0:19:46.100 And so I think there are all sorts of[br]other interesting questions about, 0:19:46.300,0:19:46.900 right, sort of, 0:19:47.100,0:19:49.300 where does this interaction[br]go? What else can we learn? 0:19:49.300,0:19:52.000 And that's something where,[br]you know, I think there's 0:19:52.200,0:19:56.400 a ton going on which seems very promising,[br]and I have no idea what the answer is. 0:19:57.000,0:20:01.200 - [Guido] No, I totally[br]agree with that, 0:20:01.800,0:20:03.500 and that makes it very exciting. 0:20:03.800,0:20:06.100 And I think there's just a lot[br]of work to be done there. 0:20:06.600,0:20:11.400 - [Josh] All right. So Isaiah agrees[br]with me there, it's safe to say. 0:20:14.500,0:20:17.700 - [narrator] If you'd like to watch more[br]Nobel Conversations, click here, 0:20:18.000,0:20:20.400 or if you'd like to learn[br]more about econometrics, 0:20:20.500,0:20:23.100 check out Josh's Mastering[br]Econometrics series.
0:20:23.600,0:20:26.500 If you'd like to learn more[br]about Guido, Josh, and Isaiah, 0:20:26.700,0:20:28.200 check out the links in the description.