Hi, welcome back. In this lecture, I want to talk a little more about how using models can help make you a more intelligent citizen of the world. We're going to break this down into a set of sub-reasons why models make you better able to engage with all the things going on in this modern, complex world in which we live.

Okay, so. When we think about models, they're simplifications. They're abstractions. So in a sense, they're wrong. There's a famous quote by George Box where he says, "All models are wrong." And that's true, right? They are. "But some are useful." And that's going to be a mantra that comes up throughout this course. These models are going to be abstractions, they're going to be simplifications, but they're going to be useful to us. They're going to help us do things in better ways.

So, in a sense, and this is a big theme in this course, models are the new lingua franca. They're the language not only of the academy, which I talked about some in the last lecture, but of business, of politics, of the nonprofit world. Wherever you go, wherever people are trying to do good, make money, cure disease, whatever it is they want to do, you're going to find that people are using models to be better at whatever their purpose is. That's why models have really become the new lingua franca.

So, think back. Remember, I talked about this in the first lecture: the whole idea of the great books movement was that there was a set of ideas that any person should know. Within the hundred or so great books, there were thousands of ideas. Mortimer Adler and Robert Hutchins, president of the University of Chicago, put together something called the Syntopicon, which was a list of all the ideas that an intelligent person should know.

So what are those ideas? One of them was to tie yourself to the mast. This comes from the Odyssey: the ship is going to sail past the sirens, and Odysseus wants to hear the sirens' beautiful song. So what he does is have his crew tie him to the mast.
He ties himself to the mast so he can listen to the sirens but pre-commits to not steering his boat over to them. At the same time, he puts wax in his crew's ears so they won't be tempted to steer the boat over there either. This is an idea that recurs in history; think of Cortés burning his ships so his men won't retreat and will continue to advance. So this idea of tying yourself to the mast is a really worthwhile one.

But here's the problem. One of my favorite websites is a site of opposite proverbs. On it you see things like "he who hesitates is lost," "a stitch in time saves nine," "two heads are better than one," "too many cooks spoil the broth." So you get a piece of really good advice, something that probably made it into the Syntopicon, and then you get something that says the exact opposite. How do we adjudicate between those two things? The way we adjudicate between them is by constructing models, because models give us the conditions under which he who hesitates is lost, and the conditions under which a stitch in time saves nine. When we talk about diversity and prediction, we'll see why it's the case that two heads are better than one, and we'll also see why too many cooks spoil the broth. So, ironically, what models do is tie us to a mast, a mast of logic, and by tying us to that mast of logic, we figure out which ways of thinking, which ideas, are useful to us.

So, look at almost any discipline. Take economics: what you see in this diagram is a utility function for an agent, and what that agent is doing is trying to maximize their payoff. Economists use models all the time.
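To make "maximize their payoff" concrete, here is a minimal sketch of utility maximization in Python. The Cobb-Douglas utility form, the prices, and the budget are my own illustrative assumptions, not anything specified in the lecture.

```python
# Toy illustration of utility maximization (illustrative assumptions, not from the lecture):
# an agent with Cobb-Douglas utility u(x, y) = x**a * y**(1 - a)
# picks the best affordable bundle by searching along the budget line.

def utility(x, y, a=0.5):
    """Cobb-Douglas utility -- an assumed functional form, for illustration only."""
    return (x ** a) * (y ** (1 - a))

def best_bundle(income=100.0, px=2.0, py=5.0, steps=1000):
    """Grid-search bundles satisfying px*x + py*y <= income for the highest utility."""
    best = (0.0, 0.0, 0.0)                     # (utility, x, y)
    for i in range(steps + 1):
        x = (income / px) * i / steps          # fraction of income spent on x
        y = (income - px * x) / py             # the remainder goes to y
        u = utility(x, y)
        if u > best[0]:
            best = (u, x, y)
    return best

if __name__ == "__main__":
    u, x, y = best_bundle()
    print(f"best bundle: x={x:.1f}, y={y:.1f}, utility={u:.2f}")
```

For Cobb-Douglas with a = 0.5, the analytic answer is to split income evenly between the goods, so the search should land near x = 25, y = 10 here. The point is simply that "maximize the payoff" becomes something you can compute once the model is written down.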
Biologists use models as well: models of the brain, with axons and dendrites running between neurons; models of gene regulatory networks; models of species, things like that. In sociology we have models too, models of how your identity affects your actions and your behaviors.

In political science we have models. This is a picture of a spatial voting model. The idea is that candidates are a little more conservative on certain dimensions, voters are a little more conservative on others, and you're more likely to vote for a candidate who takes positions similar to your own. In my work at the University of Michigan, we have something called the National Election Studies, which is run out of there, where we gather all this data about where politicians are and where voters are, and that allows us to make sense of who votes for whom and why. So models help us understand the decisions people make.

Linguistics is another area. You might think, how can you use models in linguistics? Well, in this little model here, if you look closely, you see V's and N's and P's. V stands for verb, N stands for noun, and S stands for, let's say, subject. So you can ask, formally and mathematically, what the structure of a language is, and whether some languages are more like other languages or not, depending on how people set up their sentences. German, where they may put all the adjectives at the end of the sentence, looks very different from, let's say, English.

Even the law. This is a graph from one of my former graduate students, Dan Katz, who is now a law professor. He's got a network model of Supreme Court justices, based on which judges they hire from. By putting that data into this model-based form, we can begin to understand how conservative and how liberal certain judges are.

So there are lots of ways to use models, and there are even whole disciplines now that are based entirely on models. Game theory, which is what I was really trained in as a graduate student, is all about strategic behavior. It's the study of strategic interactions between individuals, companies, nations. Game theory can also be applied to biology. When you go to college, you'll find that there are game theory models of just about anything. It's a field based entirely on models.
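Since game theory comes up here as the study of strategic interaction, here is a minimal sketch of what a formal game looks like: two players, payoff matrices, and a check for pure-strategy Nash equilibria, outcomes where neither player gains by switching unilaterally. The payoffs form a standard prisoner's dilemma and are purely illustrative, not an example taken from the lecture.

```python
# Minimal sketch of a two-player game and a pure-strategy Nash equilibrium check.
# Payoffs below are a standard prisoner's dilemma, used purely as an illustration.

# payoffs[(row_action, col_action)] = (row_player_payoff, col_player_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(row_action, col_action):
    """True if neither player can gain by unilaterally switching their own action."""
    row_payoff, col_payoff = payoffs[(row_action, col_action)]
    row_ok = all(payoffs[(a, col_action)][0] <= row_payoff for a in actions)
    col_ok = all(payoffs[(row_action, a)][1] <= col_payoff for a in actions)
    return row_ok and col_ok

if __name__ == "__main__":
    equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
    print(equilibria)   # -> [('defect', 'defect')] for these payoffs
```

The same best-response check works for any two-player payoff table, which is part of why game-theoretic models travel so easily across individuals, companies, nations, and even biology.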
So why all these models? Why does everything from linguistics to economics to political science use models? Well, because they're better. They're just better than we are. Let me show you a graph. This is a graph from a book by Phil Tetlock, a fabulous book, and what he's showing is the accuracy of some different ways of predicting. One axis, the calibration axis, shows you how accurate the predictions are. The other axis, the up-and-down axis, shows how discriminating they are, that is, how fine-grained the predictions are. Instead of just saying whether it will be hot or cold, a discriminating prediction might say it's going to be 90 degrees, or 80 degrees, or 70 degrees.

Down here are the hedgehogs. These are people who use a single model, and hedgehogs are not very good at predicting; they're terrible at it. Up here are the people he calls foxes. Foxes are people who use lots of models; they have lots of loose models in their heads, and they do much better on calibration, and a little better on discrimination, than the hedgehogs. But way up here, better than anybody, are formal models. Formal models just do better than either foxes or hedgehogs. How much data is this? Tetlock actually had tens of thousands of predictions. Over a 20-year period he gathered predictions by people and compared how those people did to models. And the answer is, models do much, much better.
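Since Tetlock's graph is organized around calibration, here is a minimal sketch of how calibration of probabilistic forecasts is commonly measured: bin forecasts by stated probability and compare each bin's average prediction to the observed frequency. The toy forecasts and outcomes are invented, and this is a generic illustration of the idea, not Tetlock's exact scoring procedure.

```python
# Minimal sketch of measuring forecast calibration.
# Forecasts are probabilities that an event happens; outcomes are 0/1.
# The data below is invented purely for illustration.

from collections import defaultdict

def calibration_table(forecasts, outcomes, n_bins=5):
    """Group forecasts into probability bins and compare predicted vs observed rates."""
    bins = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        b = min(int(p * n_bins), n_bins - 1)   # which probability bin this forecast falls in
        bins[b].append((p, y))
    table = []
    for b in sorted(bins):
        pairs = bins[b]
        mean_pred = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(y for _, y in pairs) / len(pairs)
        table.append((mean_pred, freq, len(pairs)))
    return table  # a well-calibrated forecaster has mean_pred close to freq in every bin

if __name__ == "__main__":
    forecasts = [0.1, 0.2, 0.7, 0.8, 0.9, 0.3, 0.6, 0.4]
    outcomes  = [0,   0,   1,   1,   1,   1,   0,   0]
    for mean_pred, freq, n in calibration_table(forecasts, outcomes):
        print(f"predicted {mean_pred:.2f} vs observed {freq:.2f} (n={n})")
```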
All right, so what about people who actually make predictions for a living? This is a picture of Bruce Bueno de Mesquita, who makes predictions about what's going to happen in international relations, and he's very good at it. He's so good at it that they put his picture on the cover of magazines. He's at Stanford and NYU; he chaired the department at NYU, used to anyway. Bruce uses models. He's got a very elaborate model that helps him figure out, based on bargaining positions and interests, what different countries are going to do. But just as George Box said at the beginning, he doesn't base his decision entirely on that model. What the model does is give him guidance about what he then thinks. So it's a blending of what the formal model tells him and what experience tells him. Smart people use models, but the models don't tell them what to do.

Okay. Another reason models have taken off: yes, they're better, but they're also very fertile. Once you learn a model for one domain, you can apply it to a whole bunch of other domains, which is fascinating. So we're going to learn something called Markov processes, which are models of dynamic processes. They can be used to model things like the spread of disease. But we're also going to learn, and this is sort of surprising, that you can use them to figure out who wrote a book. How does that happen? Well, it happens because you can think of writing a sentence, word after word, as a Markov process. Different authors use different sequences of words, different patterns. So we can use this mathematical model, which wasn't developed in any way for this purpose, to figure out who wrote which book. Totally cool.
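As a minimal sketch of that Markov-chain authorship idea: estimate each author's word-to-word transition probabilities from a sample of their writing, then attribute an unknown text to whichever author's transition model makes it more likely. The tiny text samples and the smoothing constant are my own illustrative choices, not anything from the lecture or from a real attribution study.

```python
# Minimal sketch of Markov-chain authorship attribution.
# Each author gets a first-order word-transition model estimated from a sample;
# the unknown text goes to whichever model gives it the higher likelihood.
# The tiny "corpora" here are invented for illustration.

import math
from collections import Counter, defaultdict

def transitions(text):
    """Count word -> next-word transitions in a text."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        counts[w][nxt] += 1
    return counts

def log_likelihood(text, counts, vocab_size, alpha=1.0):
    """Log-probability of the text under a transition model, with add-alpha smoothing."""
    words = text.lower().split()
    total = 0.0
    for w, nxt in zip(words, words[1:]):
        row = counts[w]
        total += math.log((row[nxt] + alpha) / (sum(row.values()) + alpha * vocab_size))
    return total

if __name__ == "__main__":
    sample_a = "the sea was dark and the sea was cold and the night was long"
    sample_b = "models help us think and models help us predict and decide"
    unknown  = "the night was dark and cold"

    vocab = set((sample_a + " " + sample_b + " " + unknown).lower().split())
    score_a = log_likelihood(unknown, transitions(sample_a), len(vocab))
    score_b = log_likelihood(unknown, transitions(sample_b), len(vocab))
    print("attributed to:", "A" if score_a > score_b else "B")
```

With real corpora you would use far more text and often character or part-of-speech transitions, but the core move is exactly this: a model built for dynamic processes repurposed to compare authors' word patterns.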
All right, another big reason: models really make us humble. The reason they make us humble is that we have to lay out all of the logic, and then we realize, holy cow, I had no idea that this was going to happen. Often, when we construct the model, we get very different predictions than what we thought before. So here's a picture of the tulip price graph from the seventeenth century, when there was a big spike in tulip prices. You can imagine that people thought prices were going to continue to go up and up and up. Well, if you had a simple linear model, you might have invested heavily in tulips and lost a lot of money. So one reason models make us humble goes back to the George Box quote: all models are wrong. A model is going to be wrong. But models are humbling to us because they make us see the full dimensionality of a problem. Once we try to write down a model of any sort of system, it's a very humbling exercise, because we realize how much we have to leave out to try to understand what's going on.

Here's another example. This is the Case-Shiller Home Price Index, and what you see is prices going up and up and up, and then this precipitous crash right here. A lot of people had models that just said, look, things are going to continue this way. There were a few people whose models said things would go down, and those people, the ones whose models went down, made a lot of money. The people who thought it was going to keep going up didn't. So we're always going to see a lot of diversity in models, and you're really not going to know, often until after the fact, which one was right. And so one thing that's going to be really important is to have many models.

Let's go back to that fox-and-hedgehog graph I showed you before. The foxes, the people with lots of models, did much better than the hedgehogs, the people with a single model. And formal models did better than the foxes. Well, what would do better than formal models? People with lots of formal models. So if we really want to make sense of the world, what we want is lots of formal models at our disposal. What we're going to do in this class is almost like the old 16- or 32-crayon box of Crayolas. That's sort of what we're doing here: we're going to pick up a whole bunch of models, and because they're fertile, we can apply them across a bunch of settings. So when we're confronted with something, we can pull out our models, ask which ones are appropriate, and in doing so be better at what we do.

That's the essence of Tetlock's book, where that graph of the foxes and hedgehogs came from: he has a way of classifying what a random choice would be, and the only people who are even better than random at predicting what's going to happen are people who use multiple models. And that's the kind of people we want to be.

Okay, so that's the big "intelligent citizen of the world" logic. Models are incredibly fertile, they make us humble, they really help clarify the logic, and they're just better. So if you want to be out there helping to change the world in useful ways, it's really, really helpful to have some understanding of models. Thank you very much.