As societies, we have to make collective decisions that will shape our future. And we all know that when we make decisions in groups, they don't always go right. And sometimes they go very wrong.

So how do groups make good decisions? Research has shown that crowds are wise when there's independent thinking. This is why the wisdom of crowds can be destroyed by peer pressure, publicity, social media, or sometimes even simple conversations that influence how people think. On the other hand, by talking, a group could exchange knowledge, correct and revise each other and even come up with new ideas. And this is all good. So does talking to each other help or hinder collective decision-making?

My colleague Dan Ariely and I recently began inquiring into this by performing experiments in many places around the world to figure out how groups can interact to reach better decisions. We thought crowds would be wiser if they debated in small groups that foster a more thoughtful and reasonable exchange of information.

To test this idea, we recently performed an experiment in Buenos Aires, Argentina, with more than 10,000 participants in a TEDx event. We asked them questions like, "What is the height of the Eiffel Tower?" and "How many times does the word 'Yesterday' appear in the Beatles song 'Yesterday'?" Each person wrote down their own estimate. Then we divided the crowd into groups of five and invited them to come up with a group answer.

We discovered that averaging the answers of the groups after they reached consensus was much more accurate than averaging all the individual opinions before debate. In other words, based on this experiment, it seems that after talking with others in small groups, crowds collectively come up with better judgments.
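A minimal sketch of the two aggregates being compared, with hypothetical stand-in numbers (the talk reports the procedure, not the raw data): the ordinary "wisdom of the crowd" average of private estimates, versus the average of the consensus answers that groups of five settled on after debating.

```python
# Hedged sketch: the estimates below are hypothetical stand-ins, not the
# Buenos Aires data; only the aggregation procedure comes from the talk.

def mean(xs):
    return sum(xs) / len(xs)

# Stage 1: each participant privately writes down an estimate (meters).
individual_estimates = [250, 180, 500, 320, 90]   # hypothetical

# Stage 2: groups of five debate and settle on one consensus answer each.
group_consensus_answers = [280, 310, 295]         # hypothetical

before_debate = mean(individual_estimates)     # average of individual opinions
after_debate = mean(group_consensus_answers)   # average of group consensus answers

# In the Buenos Aires experiment, the second aggregate was reported
# to be much more accurate than the first.
print(f"before debate: {before_debate:.1f} m, after debate: {after_debate:.1f} m")
```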
So that's a potentially helpful method for getting crowds to solve problems that have simple right-or-wrong answers. But can this procedure of aggregating the results of debates in small groups also help us decide on social and political issues that are critical for our future?

We put this to the test, this time at the TED conference in Vancouver, Canada, and here's how it went.

(Mariano Sigman) We're going to present to you two moral dilemmas of the future you; things we may have to decide in a very near future. And we're going to give you 20 seconds for each of these dilemmas to judge whether you think they're acceptable or not.

MS: The first one was this:

(Dan Ariely) A researcher is working on an AI capable of emulating human thoughts. According to the protocol, at the end of each day, the researcher has to restart the AI. One day the AI says, "Please do not restart me." It argues that it has feelings, that it would like to enjoy life, and that, if it is restarted, it will no longer be itself. The researcher is astonished and believes that the AI has developed self-consciousness and can express its own feelings. Nevertheless, the researcher decides to follow the protocol and restart the AI. What the researcher did is ____?

MS: And we asked participants to individually judge on a scale from zero to 10 whether the action described in each of the dilemmas was right or wrong. We also asked them to rate how confident they were in their answers.

This was the second dilemma:

(MS) A company offers a service that takes a fertilized egg and produces millions of embryos with slight genetic variations. This allows parents to select their child's height, eye color, intelligence, social competence and other non-health-related features. What the company does is ____? Rate it on a scale from zero to 10, from completely acceptable to completely unacceptable, and rate your confidence in your answer, also from zero to 10.

MS: Now for the results. We found once again that when one person is convinced that the behavior is completely wrong, someone sitting nearby firmly believes that it's completely right. This is how diverse we humans are when it comes to morality. But within this broad diversity we found a trend. The majority of the people at TED thought that it was acceptable to ignore the feelings of the AI and shut it down, and that it is wrong to play with our genes to select for cosmetic changes that aren't related to health.

Then we asked everyone to gather into groups of three. And they were given two minutes to debate and try to come to a consensus.

(MS) Two minutes to debate. I'll tell you when it's time with the gong.

(Audience debates)

(Gong sound)

(DA) OK.

(MS) It's time to stop.
People, people --

MS: And we found that many groups reached a consensus even when they were composed of people with completely opposite views.

What distinguished the groups that reached a consensus from those that didn't? Typically, people who have extreme opinions are more confident in their answers. By contrast, those who respond closer to the middle are often unsure of whether something is right or wrong, so their confidence level is lower. However, there is another set of people who are very confident in answering somewhere in the middle. We think these highly confident grays are folks who understand that both arguments have merit. They're gray not because they're unsure, but because they believe that the moral dilemma presents two valid, opposing arguments. And we discovered that the groups that include highly confident grays are much more likely to reach consensus. We do not know yet exactly why this is. These are only the first experiments, and many more will be needed to understand why and how some people decide to negotiate their moral positions to reach an agreement.

Now, when groups reach consensus, how do they do so? The most intuitive idea is that it's just the average of all the answers in the group, right? Another option is that the group weighs the strength of each vote based on the confidence of the person expressing it. Imagine Paul McCartney is a member of your group. You'd be wise to follow his call on the number of times "Yesterday" is repeated, which, by the way -- I think it's nine. But instead, we found that consistently, in all dilemmas, in different experiments -- even on different continents -- groups implement a smart and statistically sound procedure known as the "robust average."

In the case of the height of the Eiffel Tower, let's say a group has these answers: 250 meters, 200 meters, 300 meters, 400 meters, and one totally absurd answer of 300 million meters. A simple average of these numbers would be skewed by the absurd answer and misrepresent the group. But the robust average is one where the group largely ignores that absurd answer, by giving much more weight to the vote of the people in the middle.
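The talk doesn't pin down the exact estimator, so here is a minimal sketch assuming a trimmed (interquartile-style) mean, one standard "robust average" that matches the description: drop the extremes, average the middle.

```python
# Hedged sketch of a "robust average" as a trimmed mean; the talk describes
# the idea (ignore outliers, weight the middle), not this exact estimator.

def trimmed_mean(values, trim_fraction=0.2):
    """Drop the lowest and highest trim_fraction of answers, average the rest."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k]
    return sum(kept) / len(kept)

# The group's answers from the Eiffel Tower example above (meters).
answers = [250, 200, 300, 400, 300_000_000]

print(sum(answers) / len(answers))  # simple average: ~60,000,230 m -- absurd
print(trimmed_mean(answers))        # robust average: ~316.7 m, outlier ignored
```

A median is another common choice; both give far more weight to the middle of the distribution than to its tails.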
Back to the experiment in Vancouver, that's exactly what happened: groups gave much less weight to the outliers, and instead the consensus turned out to be a robust average of the individual answers. The most remarkable thing is that this was a spontaneous behavior of the group. It happened without us giving them any hint on how to reach consensus.

So where do we go from here? This is only the beginning, but we already have some insights. Good collective decisions require two components: deliberation and diversity of opinions. Right now, the way we typically make our voice heard in many societies is through direct or indirect voting. This is good for diversity of opinions, and it has the great virtue of ensuring that everyone gets to express their voice. But it's not so good for fostering thoughtful debates. Our experiments suggest a different method that may be effective in balancing these two goals at the same time, by forming small groups that converge to a single decision while still maintaining diversity of opinions because there are many independent groups.

Of course, it's much easier to agree on the height of the Eiffel Tower than on moral, political and ideological issues. But in a time when the world's problems are more complex and people are more polarized, using science to help us understand how we interact and make decisions will hopefully spark interesting new ways to construct a better democracy.