- [Narrator] Welcome to Nobel Conversations. In this episode, Josh Angrist and Guido Imbens sit down with Isaiah Andrews to discuss, and disagree over, the role of machine learning in applied econometrics.

- [Isaiah] So, of course, there are a lot of topics where you guys largely agree, but I'd like to turn to one where maybe you have some differences of opinion. I'd love to hear some of your thoughts about machine learning and the role that it's playing, and is going to play, in economics.

- [Guido] I've looked at some data that's proprietary, so there's no published paper there. There was an experiment that was done on some search algorithm, and the question was about ranking things and changing the ranking. It was clear that there was going to be a lot of heterogeneity there. If you look for, say, a picture of Britney Spears, it doesn't really matter where you rank it, because you're going to figure out what you're looking for, whether you put it in the first, second, or third position of the ranking.
- [Guido] But if you're looking for the best econometrics book, whether you put your book first or your book tenth is going to make a big difference to how often people click on it. And so there --

- [Josh] Why do I need machine learning to discover that? It seems like I can discover it simply.

- [Guido] In general, there were lots of possibilities. You want to think about there being lots of characteristics of the items, and you want to understand what drives the heterogeneity in the effect of the ranking.

- [Josh] In some sense you're solving a marketing problem there. The effect is causal, but it has no scientific content.

- [Guido] Think about similar things in medical settings. If you do an experiment, you may actually be very interested in whether the treatment works for some groups or not. You have a lot of individual characteristics, and you want to search systematically.

- [Josh] Yeah, I'm skeptical about that -- the idea that there's this personal causal effect that I should care about, and that machine learning can discover it in some way that's useful. Think about -- I've done a lot of work on schools, say a charter school: effectively a publicly funded private school that's free to structure its own curriculum, for context there.
- [Josh] Some types of charter schools generate spectacular achievement gains, and in the data set that produces that result, I have a lot of covariates: baseline scores, family background, the education of the parents, the sex of the child, the race of the child. As soon as I put half a dozen of those together, I have a very high-dimensional space. I'm definitely interested in coarse features of that treatment effect, like whether it's better for children who come from lower-income families. But I have a hard time believing there's an application for the very high-dimensional version of that, where I discover that for non-white children who have high family incomes, but baseline scores in the third quartile, and who went to public school in the third grade but not the sixth grade -- that's what the high-dimensional analysis produces: a very elaborate conditional statement. There are two things wrong with that, in my view. First, I just can't imagine why it's actionable; I don't know why you'd want to act on it. And second, I know that there's some alternative model that fits almost as well and that flips everything, right?
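Josh's worry -- that a high-dimensional subgroup search produces elaborate conditional statements that don't reflect any real structure -- can be illustrated with a small synthetic sketch. Everything below is made up for illustration (it is not the charter-school data): a deep regression tree is asked to explain a "unit-level effect" that is in fact pure noise, so any subgroups it finds are artifacts that vanish out of sample.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, p = 2000, 10

# Ten synthetic covariates standing in for baseline scores, family
# background, etc. (purely illustrative).
X = rng.normal(size=(n, p))

# A "unit-level effect" that is pure noise: the true effect is the
# same for everyone, so there is no genuine heterogeneity to find.
effect = rng.normal(size=n)

X_tr, X_te, e_tr, e_te = train_test_split(X, effect, random_state=0)

# A deep tree happily "discovers" elaborate conditional subgroups...
tree = DecisionTreeRegressor(random_state=0).fit(X_tr, e_tr)
print(f"in-sample R^2:     {tree.score(X_tr, e_tr):.2f}")

# ...but those subgroups carry no information out of sample.
print(f"out-of-sample R^2: {tree.score(X_te, e_te):.2f}")
```

The in-sample fit looks spectacular while the held-out fit is no better than guessing the mean -- exactly the sense in which an equally well-fitting alternative model could "flip everything."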
- [Josh] Because machine learning doesn't tell me that this is really *the* predictor; it just tells me that this is a good predictor. And so I think there is something different about the social science context.

- [Guido] I think the social science applications you're talking about are ones where there's not a huge amount of heterogeneity in the effects.

- [Josh] Well, there might be, if you allow me to fill that space.

- [Guido] No, not even then. For a lot of those interventions, you would expect the effect to have the same sign for everybody. There may be small differences in the magnitude, but a lot of these education interventions are good for everybody. It's not that they're bad for some people and good for other people, with maybe some very small pockets where they're bad. There may be some variation in the magnitude, but you would need very, very big data sets to find it, and in those cases it probably wouldn't be very actionable anyway. But I think there are a lot of other settings where there is much more heterogeneity.

- [Josh] Well, I'm open to that possibility, but the example you gave is essentially a marketing example. Maybe that has implications for how an organization
- [Josh] operates, whether it needs to worry about, well, market power. I'd want to see that paper.

- [Isaiah] So the sense I'm getting is that we still disagree on something.

- [Josh] Yes. We haven't converged on everything.

- [Guido] Actually, we've diverged on this, because this wasn't around to argue about before.

- [Isaiah] Is it getting a little warm here?

- [Josh] Yeah, warmed up. Warmed up is good.

- [Isaiah] The sense I'm getting is, Josh, you're not saying you're confident that there is no application where this stuff is useful. You're saying you're unconvinced by the existing applications to date.

- [Josh] Fair. Of that I'm very confident.

- [Guido] In this case, I think Josh does have a point: where a lot of the machine learning methods really shine is where there's just a lot of heterogeneity and you don't really care much about the details. There's no policy angle or anything like that -- it's things like recognizing handwritten digits, where these methods do much better than building some complicated model. But in a lot of the social science applications, a lot of the economic applications, we actually know a huge amount about the relationships between various variables. A lot of the relationships are strictly monotone.
- [Guido] Education is going to increase people's earnings, irrespective of the demographics, irrespective of the level of education you already have.

- [Josh] Until they get to a PhD.

- [Guido] Yeah, well, there is graduate school. Over a reasonable range, it's not going to go down very much. Whereas in a lot of the settings where these machine learning methods shine, there's a lot of nonlinearity, a kind of multimodality, in the relationships, and there they're going to be very powerful. But I still stand by the view that these methods have a huge amount to offer for economists, and that they're going to be a big part of the future.

- [Isaiah] It feels like there's something interesting to be said about machine learning here. Could you give some more examples of the sorts of applications you're thinking about?

- [Guido] At the moment, the areas where, instead of looking for average causal effects, we're looking for individualized estimates and predictions of causal effects -- there, machine learning algorithms have been very effective. Previously we would have done these things using kernel methods, and theoretically they work great; there are even arguments that, formally, you can't do any better.
- [Guido] But in practice they don't work very well. Random forests -- the random causal forest type methods that Stefan Wager and Susan Athey have been working on -- are now used very widely. They've been very effective in these settings for actually getting causal effects that vary by covariates. I think this is still just the beginning of these methods, but in many cases these algorithms are very effective at searching over big spaces and finding the functions that fit very well, in ways we couldn't really do beforehand.

- [Josh] I don't know of an example where machine learning has generated insights about a causal effect that I'm interested in, and I know of examples where it's potentially very misleading. I've done some work with Brigham Frandsen using, for example, random forests to model covariate effects in an instrumental variables problem, where you need to condition on covariates and you don't have particularly strong feelings about the functional form, so maybe you should be open to flexible curve fitting. That leads you down a path where there are a lot of nonlinearities in the model, and that's very dangerous with IV, because any sort of excluded nonlinearity potentially generates a spurious causal effect.
- [Josh] Brigham and I showed that very powerfully, I think, in the case of two instruments that come from a paper of mine with Bill Evans. If you replace the traditional two-stage least squares estimator with some kind of random forest, you get very precisely estimated nonsense estimates. I think that's a big caution. In view of those findings, in an example I care about, where the instruments are very simple and I believe they're valid, I would be skeptical of that. Nonlinearity and IV don't mix very comfortably.

- [Guido] In some sense that's already a more complicated setting.

- [Josh] Well, it's IV.

- [Guido] Yeah, but the way we work on these things -- as editor I actually had a lot of these papers cross my desk, and the motivation is often not clear, really lacking, in these kind of semiparametric foundational papers. So that's a big problem. A related problem is that we have this tradition in econometrics of being very focused on formal asymptotic results. We just have a lot of papers where people propose a method and then establish its asymptotic properties in a very standardized way.

- [Isaiah] Is that bad?

- [Guido] Well, I think it has closed the door for a lot of work
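Josh's caution -- that a flexible, nonlinear first stage can manufacture a precisely estimated but spurious IV result -- can be sketched with a stylized simulation. This is not the Angrist-Frandsen analysis or the Angrist-Evans instruments; the data, functional forms, and forest settings below are all invented for illustration. The instrument is pure noise and the true effect is zero, yet a random-forest first stage picks up an excluded nonlinearity in the covariate, which then acts as a spurious instrument.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 4000

w = rng.uniform(-2, 2, size=n)       # observed covariate
z = rng.normal(size=n)               # "instrument": pure noise, unrelated to everything
u = rng.normal(size=n)               # structural error
x = w**2 + u + rng.normal(size=n)    # endogenous regressor: nonlinear in w, correlated with u
y = 0.0 * x + w**2 + u               # true causal effect of x on y is ZERO

# Flexible "first stage": a forest of x on (z, w) picks up the
# nonlinearity in w even though z itself is irrelevant.
fs = RandomForestRegressor(n_estimators=200, min_samples_leaf=25, random_state=0)
xhat = fs.fit(np.column_stack([z, w]), x).predict(np.column_stack([z, w]))

# IV with xhat as the instrument, controlling for w only linearly:
# the excluded nonlinearity (w^2) does the identifying, spuriously.
Xmat = np.column_stack([np.ones(n), x, w])     # regressors
Zmat = np.column_stack([np.ones(n), xhat, w])  # instruments
beta = np.linalg.solve(Zmat.T @ Xmat, Zmat.T @ y)
print(f"IV estimate of the effect of x: {beta[1]:.2f}  (truth: 0)")
```

A linear first stage here would reveal that z is useless (a weak-instrument problem you can see), whereas the forest-based estimate looks strong and lands far from the true zero.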
- [Guido] that doesn't fit that mold. In the machine learning literature, a lot of things are more algorithmic. People had algorithms for coming up with predictions that turned out to actually work much better than, say, nonparametric kernel regression. For the longest time, we did all the nonparametrics in econometrics using kernel regression, and it was great for proving theorems -- you could get confidence intervals and consistency and asymptotic normality, and it was all great -- but it wasn't very useful. The things they did in machine learning were just way, way better, but they didn't have the properties.

- [Josh] That's not my beef with machine learning, the theory.

- [Guido] No, I know. I'm saying that for the prediction part, it does much better.

- [Josh] Yeah, it's better curve fitting.

- [Guido] But it did so in a way that would not have made those papers initially easy to get into the econometrics journals, because they weren't proving the type of things we expected. When Breiman was doing his regression trees, that just didn't fit in, and I think he would have had a very hard time publishing those things in econometrics journals. So I think we limited ourselves too much, and that closed things off for a lot of these machine learning methods that are actually very useful.

- [Isaiah] Hmm.
- [Guido] I mean, in general, the computer scientists have proposed a huge number of these algorithms that are actually very useful and that are affecting the way we're going to be doing empirical work. But we've not fully internalized that, because we're still very focused on getting point estimates and getting standard errors and getting p-values, in a way that we need to move beyond in order to fully harness the benefits from the machine learning literature.

- [Isaiah] Hmm. On the one hand, I very much take your point that the traditional econometrics framework of "propose a method, prove a limit theorem under some asymptotic story, publish a paper" is constraining, and that by thinking more broadly about what a methods paper could look like, we may gain something. Certainly the machine learning literature has found a bunch of things which seem to work quite well for a number of problems and which are now having substantial influence in economics. I guess a question I'm interested in is: how do you think about the role of theory? Do you think there's no value in the theory part of it?
- [Isaiah] Because a question I often have, seeing the output from a machine learning tool -- and actually a number of the methods you talked about do have inferential results developed for them -- is about uncertainty quantification. I have my prior, I come into the world with my view, I see the result of this thing: how should I update based on it? In a world where things are normally distributed, I know how to do that; here I don't. So I'm interested to hear how you think about it.

- [Guido] I don't see this as saying those results are closed off or not interesting. But there are going to be a lot of cases where it's going to be incredibly hard to get those results, and we may not be able to get there. We may need to do it in stages, where first someone says, "Hey, I have this interesting algorithm for doing something, and it works well, by some criterion, on this particular data set," and puts it out there, and then maybe someone will later figure out a way that you can actually still do inference under some conditions.
- [Guido] And maybe those are not particularly realistic conditions, and then we go a bit further. But I think we've been constraining things too much, where we said, "This is the type of thing that we need to do." In some sense, that goes back to the way Josh and I thought about things for the local average treatment effect. That wasn't quite the way people were thinking about these problems before. There was a sense among some people that the way you need to do these things is to first say what you're interested in estimating, and then do the best job you can in estimating that. And what you guys were doing was doing it backwards: "You say, 'Here I have an estimator, and now I'm going to figure out what it's estimating,' and then ex post you say why you think that's interesting, or maybe why it's not interesting -- and that's not okay; you're not allowed to do it that way." I think we should just be a little bit more flexible in thinking about how to look at problems, because I think we've missed some things by not doing that.

- [Josh] So you've heard our views, Isaiah. You've seen that we have some points of disagreement. Why don't you referee this dispute for us?
- [Isaiah] Oh, it's so nice of you to ask me a small question. So I guess, for one, I very much agree with something Guido said earlier: the case for machine learning seems relatively clear in settings where we're interested in some version of a nonparametric prediction problem. So I'm interested in estimating a conditional expectation or a conditional probability, and in the past maybe I would have run a kernel regression, or a series regression, or something along those lines. It seems like at this point we have a fairly good sense that, in a fairly wide range of applications, machine learning methods seem to do better than the more traditional nonparametric methods studied in econometrics and statistics for estimating conditional mean functions or conditional probabilities or various other nonparametric objects, especially in high-dimensional settings.

- [Guido] So you're thinking of, maybe, the propensity score or something like that?

- [Isaiah] Exactly -- nuisance functions, things like propensity scores, or even objects of more direct interest, like conditional average treatment effects, right?
- [Isaiah] Which are the difference of two conditional expectation functions, potentially -- things like that. Of course, even there, the theory for inference -- for how to interpret these estimates and make large-sample statements about them -- is less well developed, depending on the machine learning estimator used. So I think something that is tricky is that we can have these methods, which seem to work a lot better for some purposes, but where we need to be a bit careful in how we plug them in and how we interpret the resulting statements. But of course, that's a very, very active area right now, where people are doing tons of great work, so I fully expect, and hope, to see much more going forward. One issue with machine learning that always seems a danger -- or that is sometimes a danger, and has sometimes led to applications that made less sense -- is when folks start with a method they're very excited about rather than with a question. Whereas starting with a question -- here's the object I'm interested in, here's the parameter of interest --
- [Isaiah] and then thinking about how I would identify that thing, how I would recover it if I had a ton of data -- "Oh, here's a conditional expectation function; let me plug in a machine learning estimator for that" -- seems very, very sensible. Whereas if I regress quantity on price and say that I used a machine learning method, maybe I'm satisfied that that solves the endogeneity problem we're usually worried about there -- and maybe I'm not. But again, that's something where the way to address it seems relatively clear: find your object of interest.

- [Josh] Is that just bringing in the economics?

- [Isaiah] Exactly -- thinking about identification, but harnessing the power of the machine learning methods for some of the components.

- [Josh] Precisely.

- [Isaiah] Exactly. So the question of interest is the same as the question of interest has always been, but we now have better methods for estimating some pieces of it. The place that seems harder to forecast is this: obviously there's a huge amount going on in the machine learning literature, and the limited ways of plugging it in that I've referenced so far are a limited piece of that.
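Isaiah's recipe -- define the object first (here, a conditional average treatment effect, the difference of two conditional expectation functions), then plug in machine learning estimators for the pieces -- can be sketched as follows. This is a deliberately simplified stand-in for the causal-forest methods mentioned earlier: synthetic data, a randomized treatment, and one generic forest per treatment arm (sometimes called a "T-learner"). All names and parameter choices are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 6000
w = rng.uniform(-2, 2, size=(n, 2))   # covariates
d = rng.integers(0, 2, size=n)        # randomized binary treatment

tau = 1.0 + w[:, 0]                   # true CATE: varies with the first covariate
y = w[:, 1] ** 2 + tau * d + rng.normal(size=n)

# CATE(w) = E[y | w, d=1] - E[y | w, d=0]: estimate each conditional
# expectation with its own forest, then take the difference.
m1 = RandomForestRegressor(n_estimators=300, min_samples_leaf=50, random_state=0)
m0 = RandomForestRegressor(n_estimators=300, min_samples_leaf=50, random_state=0)
m1.fit(w[d == 1], y[d == 1])
m0.fit(w[d == 0], y[d == 0])

cate_hat = m1.predict(w) - m0.predict(w)
corr = np.corrcoef(cate_hat, tau)[0, 1]
print(f"corr(estimated CATE, true CATE) = {corr:.2f}")
```

The economics fixes the estimand; the forests only supply the two conditional means. Honest confidence intervals for such plug-in estimates are exactly the delicate part discussed above, and typically require sample splitting or the specialized causal-forest theory.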
- [Isaiah] And so I think there are all sorts of other interesting questions about where this interaction goes and what else we can learn. That's something where I think there's a ton going on which seems very promising, and I have no idea what the answer is.

- [Guido] No, I totally agree with that, but that's what makes it very exciting. I think there's just a lot of work to be done there.

- [Josh] All right. So Isaiah agrees with me there, is what I heard.

- [Narrator] If you'd like to watch more Nobel Conversations, click here. Or, if you'd like to learn more about econometrics, check out Josh's Mastering Econometrics series. If you'd like to learn more about Guido, Josh, and Isaiah, check out the links in the description.