My name is Dan Cohen, and I am an academic, as he said. And what that means is that I argue. It's an important part of my life, and I like to argue. And I'm not just an academic, I'm a philosopher, so I like to think that I'm actually pretty good at arguing. But I also like to think a lot about arguing.

And thinking about arguing, I've come across some puzzles. One of the puzzles is that as I've been thinking about arguing over the years, and it's been decades now, I've gotten better at arguing. But the more that I argue and the better I get at arguing, the more that I lose. And that's a puzzle. The other puzzle is that I'm actually okay with that. Why is it that I'm okay with losing, and why is it that I think good arguers are actually better at losing?

Well, there are some other puzzles. One is: why do we argue? Who benefits from arguments? When I talk about arguments here, I mean what we might call academic arguments or cognitive arguments, where something cognitive is at stake: Is this proposition true? Is this theory a good theory? Is this a viable interpretation of the data or the text? And so on.
I'm not really interested in arguments about whose turn it is to do the dishes or who has to take out the garbage. Yes, we have those arguments too. I tend to win those arguments, because I know the tricks. But those aren't the important arguments. I'm interested in academic arguments today, and here are the things that puzzle me.

First, what do good arguers win when they win an argument? What do I win if I convince you that utilitarianism isn't really the right framework for thinking about ethical theories? What do we win when we win an argument? Even before that, what does it matter to me whether you think that Kant's theory works, or that Mill is the right ethicist to follow? It's no skin off my back whether you think functionalism is a viable theory of mind.

So why do we even try to argue? Why do we try to convince other people to believe things they don't want to believe? And is that even a nice thing to do? Is that a nice way to treat another human being, trying to make them think something they don't want to think?

Well, my answer is going to make reference to three models for arguments.
The first model, let's call it the dialectical model, is that we think of arguments as war, and you know what that's like: there's a lot of screaming and shouting and winning and losing. That's not really a very helpful model for arguing, but it's a pretty common and entrenched one.

But there's a second model for arguing: arguments as proofs. Think of a mathematician's argument. Here's my argument. Does it work? Is it any good? Are the premises warranted? Are the inferences valid? Does the conclusion follow from the premises? No opposition, no adversariality, not necessarily any arguing in the adversarial sense.

But there's a third model to keep in mind that I think is going to be very helpful, and that is arguments as performances, arguments before an audience. We can think of a politician trying to present a position, trying to convince the audience of something. But there's another twist on this model that I think is really important, namely that when we argue before an audience, sometimes the audience has a more participatory role in the argument; that is, arguments are also made before juries, who pass judgment and decide the case.
Let's call this the rhetorical model, where you have to tailor your argument to the audience at hand. You know, presenting a sound, well-argued, tight argument in English before a francophone audience just isn't going to work.

So we have these models: argument as war, argument as proof, and argument as performance. Of those three, argument as war is the dominant one. It dominates how we talk about arguments, it dominates how we think about arguments, and because of that, it shapes how we argue, our actual conduct in arguments. Now, when we talk about arguments, yes, we talk in a very militaristic language. We want strong arguments, arguments that have a lot of punch, arguments that are right on target. We want to have our defenses up and our strategies all in order. We want killer arguments. That's the kind of argument we want.

It is the dominant way of thinking about arguments. When I was talking about arguments, that's probably what you thought of: the adversarial model. But the war metaphor, the war paradigm or model for thinking about arguments, has, I think, deforming effects on how we argue.
First, it elevates tactics over substance. You can take a class in logic or argumentation and learn all about the subterfuges that people use to try to win arguments, the false steps. It magnifies the us-versus-them aspect of it. It makes arguments adversarial; it's polarizing. And the only foreseeable outcomes are triumph, glorious triumph, or abject, ignominious defeat.

I think those are deforming effects, and worst of all, it seems to prevent things like negotiation or deliberation or compromise or collaboration. Think about that one. Have you ever entered an argument thinking, "Let's see if we can hash something out rather than fight it out. What can we work out together?" I think the argument-as-war metaphor inhibits those other kinds of resolutions to argumentation.

And finally, and this is really the worst thing, arguments don't seem to get us anywhere. They're dead ends. They are roundabouts or traffic jams or gridlock in conversation. We don't get anywhere.
Oh, and one more thing. As an educator, this is the one that really bothers me: if argument is war, then there's an implicit equation of learning with losing. Let me explain what I mean.

Suppose you and I have an argument. You believe a proposition, P, and I don't. And I say, "Well, why do you believe P?" And you give me your reasons. And I object and say, "Well, what about ...?" And you answer my objection. And I have a question: "Well, what do you mean? How does it apply over here?" And you answer my question. Now, suppose at the end of the day, I've objected, I've questioned, I've raised all sorts of counter-considerations, and in every case you've responded to my satisfaction. And so at the end of the day, I say, "You know what? I guess you're right. P."

So I have a new belief. And it's not just any belief; it's a well-articulated, examined, battle-tested belief. Great cognitive gain. Okay, who won that argument? Well, the war metaphor seems to force us into saying you won, even though I'm the only one who made any cognitive gain.
What did you gain cognitively from convincing me? Sure, you got some pleasure out of it. Maybe your ego was stroked, maybe you got some professional status in the field: this guy's a good arguer. But cognitively, just from a cognitive point of view, who was the winner? The war metaphor forces us into thinking that you're the winner and I lost, even though I gained. And there's something wrong with that picture. That's the picture I really want to change, if we can.

So how can we find ways to make arguments yield something positive? What we need are new exit strategies for arguments. But we're not going to have new exit strategies for arguments until we have new entry approaches to arguments. We need to think of new kinds of arguments.

In order to do that, well, I don't know how to do that. That's the bad news. The argument-as-war metaphor is just a monster. It has taken up habitation in our minds, and there's no magic bullet that's going to kill it, no magic wand that's going to make it disappear. I don't have an answer. But I have some suggestions, and here's my suggestion.
If we want to think of new kinds of arguments, what we need to do is think of new kinds of arguers. So try this. Think of all the roles that people play in arguments. There's the proponent and the opponent in an adversarial, dialectical argument. There's the audience in rhetorical arguments. There's the reasoner in arguments as proofs. All these different roles.

Now, can you imagine an argument in which you are the arguer, but you're also in the audience, watching yourself argue? Can you imagine yourself watching yourself argue, losing the argument, and yet still, at the end of the argument, saying, "Wow, that was a good argument"? Can you do that? I think you can. And I think, if you can imagine that kind of argument, where the loser says to the winner, and the audience and the jury can say, "Yeah, that was a good argument," then you have imagined a good argument. And more than that, I think you've imagined a good arguer, an arguer worthy of the kind of arguer you should try to be.

Now, I lose a lot of arguments.
It takes practice to become a good arguer, in the sense of being able to benefit from losing, but fortunately, I've had many, many colleagues who have been willing to step up and provide that practice for me.

Thank you.

(Applause)