My name is Dan Cohen and I am an academic, as he said. And what that means is that I argue. It's an important part of my life, and I like to argue. And I'm not just an academic, I'm a philosopher, so I like to think that I'm actually pretty good at arguing. But I also like to think a lot about arguing.

And in thinking about arguing, I've come across some puzzles. One of the puzzles is that, as I've been thinking about arguing over the years -- and it's been decades now -- I've gotten better at arguing. But the more that I argue and the better I get at arguing, the more that I lose. And that's a puzzle. And the other puzzle is that I'm actually okay with that. Why is it that I'm okay with losing, and why is it that I think good arguers are actually better at losing?

Well, there are some other puzzles. One is: why do we argue? Who benefits from arguments? When I think about arguments, I'm talking about -- let's call them academic arguments or cognitive arguments -- arguments where something cognitive is at stake: Is this proposition true? Is this theory a good theory? Is this a viable interpretation of the data or the text? And so on. I'm not really interested in arguments about whose turn it is to do the dishes or who has to take out the garbage. Yeah, we have those arguments, too. I tend to win those arguments, because I know the tricks. But those aren't the important arguments. I'm interested in academic arguments, and here are the things that puzzle me.

First, what do good arguers win when they win an argument? What do I win if I convince you that utilitarianism isn't really the right framework for thinking about ethical theories? What do we win when we win an argument? Even before that, what does it matter to me whether you have this idea that Kant's theory works or that Mill is the right ethicist to follow? It's no skin off my back whether you think functionalism is a viable theory of mind. So why do we even try to argue? Why do we try to convince other people to believe things they don't want to believe? And is that even a nice thing to do? Is that a nice way to treat another human being, trying to make them think something they don't want to think?

Well, my answer is going to make reference to three models for arguments.
The first model -- let's call it the dialectical model -- is that we think of arguments as war; you know what that's like: a lot of screaming and shouting, and winning and losing. That's not a very helpful model for arguing, but it's a pretty common and entrenched one.

But there's a second model for arguing: arguments as proofs. Think of a mathematician's argument. Here's my argument. Does it work? Is it any good? Are the premises warranted? Are the inferences valid? Does the conclusion follow from the premises? No opposition, no adversariality -- not necessarily any arguing in the adversarial sense.

But there's a third model to keep in mind that I think is going to be very helpful, and that is arguments as performances, arguments in front of an audience. We can think of a politician trying to present a position, trying to convince the audience of something. But there's another twist on this model that I really think is important; namely, that when we argue before an audience, sometimes the audience has a more participatory role in the argument; that is, arguments are also performances in front of juries, who make a judgment and decide the case. Let's call this the rhetorical model, where you have to tailor your argument to the audience at hand. You know, presenting a sound, well-argued, tight argument in English before a francophone audience just isn't going to work.

So we have these models -- argument as war, argument as proof and argument as performance. Of those three, argument as war is the dominant one. It dominates how we talk about arguments, it dominates how we think about arguments, and because of that, it shapes how we argue, our actual conduct in arguments.

Now, when we talk about arguments, we talk in a very militaristic language. We want strong arguments, arguments that have a lot of punch, arguments that are right on target. We want to have our defenses up and our strategies all in order. We want killer arguments. That's the kind of argument we want. It is the dominant way of thinking about arguments. When I'm talking about arguments, that's probably what you thought of: the adversarial model. But the war metaphor -- the war paradigm or model for thinking about arguments -- has, I think, deforming effects on how we argue.
First, it elevates tactics over substance. You can take a class in logic, in argumentation; you learn all about the subterfuges that people use to try and win arguments -- the false steps. It magnifies the us-versus-them aspect of it. It makes it adversarial; it's polarizing. And the only foreseeable outcomes are triumph -- glorious triumph -- or abject, ignominious defeat.

I think those are deforming effects, and worst of all, it seems to prevent things like negotiation or deliberation or compromise or collaboration. Think about that one -- have you ever entered an argument thinking, "Let's see if we can hash something out, rather than fight it out. What can we work out together?" I think the argument-as-war metaphor inhibits those other kinds of resolutions to argumentation.

And finally -- this is really the worst thing -- arguments don't seem to get us anywhere; they're dead ends. They are like roundabouts or traffic jams or gridlock in conversation. We don't get anywhere.

And one more thing. As an educator, this is the one that really bothers me: if argument is war, then there's an implicit equation of learning with losing. Let me explain what I mean.

Suppose you and I have an argument. You believe a proposition, P, and I don't. And I say, "Well, why do you believe P?" And you give me your reasons. And I object and say, "Well, what about ...?" And you answer my objection. And I have a question: "Well, what do you mean? How does it apply over here?" And you answer my question.

Now, suppose at the end of the day, I've objected, I've questioned, I've raised all sorts of counter-considerations, and in every case you've responded to my satisfaction. And so at the end of the day, I say, "You know what? I guess you're right: P."

So, I have a new belief. And it's not just any belief; it's well-articulated, examined -- it's a battle-tested belief. Great cognitive gain. OK, who won that argument?

Well, the war metaphor seems to force us into saying you won, even though I'm the only one who made any cognitive gain. What did you gain, cognitively, from convincing me? Sure, you got some pleasure out of it, maybe your ego got stroked, maybe you get some professional status in the field -- "This guy's a good arguer."
But just from a cognitive point of view, who was the winner? The war metaphor forces us into thinking that you're the winner and I lost, even though I gained. And there's something wrong with that picture. And that's the picture I really want to change, if we can.

So, how can we find ways to make arguments yield something positive? What we need is new exit strategies for arguments. But we're not going to have new exit strategies for arguments until we have new entry approaches to arguments. We need to think of new kinds of arguments.

In order to do that, well -- I don't know how to do that. That's the bad news. The argument-as-war metaphor is just ... it's a monster. It's just taken up habitation in our mind, and there's no magic bullet that's going to kill it. There's no magic wand that's going to make it disappear. I don't have an answer. But I have some suggestions. Here's my suggestion: if we want to think of new kinds of arguments, what we need to do is think of new kinds of arguers.

So try this: think of all the roles that people play in arguments. There's the proponent and the opponent in an adversarial, dialectical argument. There's the audience in rhetorical arguments. There's the reasoner in arguments as proofs. All these different roles.

Now, can you imagine an argument in which you are the arguer, but you're also in the audience, watching yourself argue? Can you imagine yourself watching yourself argue, losing the argument, and yet still, at the end of the argument, saying, "Wow, that was a good argument!" Can you do that? I think you can. And I think if you can imagine that kind of argument, where the loser says to the winner -- and the audience and the jury can agree -- "Yeah, that was a good argument," then you have imagined a good argument. And more than that, I think you've imagined a good arguer, an arguer who's worthy of the kind of arguer you should try to be.

Now, I lose a lot of arguments. It takes practice to become a good arguer, in the sense of being able to benefit from losing, but fortunately, I've had many, many colleagues who have been willing to step up and provide that practice for me.

Thank you.

(Applause)