0:00:00.554,0:00:03.021 As societies, we have to make[br]collective decisions 0:00:03.021,0:00:04.591 that will shape our future. 0:00:05.087,0:00:07.868 And we all know that when[br]we make decisions in groups, 0:00:07.868,0:00:09.192 they don't always go right. 0:00:09.650,0:00:11.606 And sometimes they go very wrong. 0:00:12.419,0:00:14.843 So how do groups make good decisions? 0:00:15.300,0:00:19.296 Research has shown that crowds are wise[br]when there's independent thinking. 0:00:19.614,0:00:22.809 This is why the wisdom of the crowds[br]can be destroyed by peer pressure, 0:00:22.809,0:00:23.808 publicity, 0:00:23.808,0:00:24.807 social media, 0:00:24.807,0:00:28.666 or sometimes even simple conversations[br]that influence how people think. 0:00:29.076,0:00:30.237 On the other hand, 0:00:30.237,0:00:31.235 by talking, 0:00:31.235,0:00:33.040 a group could exchange knowledge, 0:00:33.040,0:00:34.846 correct and revise each other, 0:00:34.846,0:00:36.629 and even come up with new ideas. 0:00:36.817,0:00:38.113 And this is all good. 0:00:38.574,0:00:43.240 So does talking to each other[br]help or hinder collective decision-making? 0:00:43.803,0:00:44.802 With my colleague, 0:00:44.802,0:00:45.803 Dan Ariely, 0:00:45.803,0:00:49.257 we recently began inquiring into this[br]by performing experiments 0:00:49.257,0:00:50.966 in many places around the world 0:00:50.966,0:00:55.122 to figure out how groups can interact[br]to reach better decisions. 0:00:55.432,0:00:58.913 We thought crowds would be wiser[br]if they debated in small groups 0:00:58.913,0:01:02.840 that foster a more thoughtful[br]and reasonable exchange of information. 0:01:03.467,0:01:04.616 To test this idea, 0:01:04.616,0:01:07.887 we recently performed an experiment[br]in Buenos Aires, Argentina 0:01:07.887,0:01:10.892 with more than 10,000[br]participants in a TEDx event. 0:01:11.489,0:01:12.972 We asked them questions like, 0:01:12.972,0:01:14.928 "What is the height of the Eiffel Tower?" 0:01:14.928,0:01:17.700 and "How many times[br]does the word 'Yesterday' appear 0:01:17.700,0:01:19.758 in the Beatles' song 'Yesterday'?" 0:01:20.024,0:01:22.315 Each person wrote down their own estimate. 0:01:22.774,0:01:25.399 Then we divided the crowd[br]into groups of five, 0:01:25.399,0:01:28.125 and invited them[br]to come up with a group answer. 0:01:28.555,0:01:33.211 We discovered that averaging the answers[br]of the groups after they reached consensus 0:01:33.211,0:01:38.111 was much more accurate than averaging[br]all the individual opinions before debate. 0:01:38.547,0:01:39.780 In other words, 0:01:39.780,0:01:41.200 based on this experiment, 0:01:41.200,0:01:44.360 it seems that after talking[br]with others in small groups, 0:01:44.360,0:01:46.910 crowds collectively[br]come up with better judgments. 0:01:47.094,0:01:50.615 So that's a potentially helpful method[br]for getting crowds to solve problems 0:01:50.615,0:01:53.372 that have simple right or wrong answers. 0:01:53.653,0:01:57.628 But can this procedure of aggregating[br]the results of debates in small groups 0:01:57.628,0:02:00.917 also help us decide[br]on social and political issues 0:02:00.917,0:02:02.608 that are critical for our future? 0:02:02.996,0:02:05.748 We put this to the test, this time[br]at the TED conference 0:02:05.748,0:02:07.315 in Vancouver, Canada, 0:02:07.315,0:02:08.546 and here's how it went.
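To make the aggregation procedure just described concrete, here is a minimal sketch in Python. The estimates, the group size of five, and the within-group consensus rule (modeled as the group median, in the spirit of the robust averaging described later in the talk) are illustrative assumptions, not the experiment's actual data or protocol.

```python
import statistics

# Hypothetical individual estimates of the Eiffel Tower's height in meters
# (true answer: roughly 300 m). Illustrative numbers only, not data
# from the actual experiment.
estimates = [250, 180, 500, 310, 90, 420, 275, 3000, 150, 330]

# Baseline: average every individual opinion before any debate.
before_debate = statistics.mean(estimates)  # 550.5, pulled up by the outlier

# Split the crowd into groups of five and model each group's consensus.
# The debated answer is stood in for by the group median, in the spirit
# of the robust averaging described later in the talk.
groups = [estimates[i:i + 5] for i in range(0, len(estimates), 5)]
consensus_answers = [statistics.median(g) for g in groups]

# Aggregate the consensus answers across groups.
after_debate = statistics.mean(consensus_answers)

print(before_debate, after_debate)  # 550.5 vs. 290
```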
0:02:08.546,0:02:12.546 We're going to present to you[br]two moral dilemmas of the future you; 0:02:12.546,0:02:15.956 things we may have to decide[br]in a very near future. 0:02:16.395,0:02:20.253 And we're going to give you 20 seconds[br]for each of these dilemmas 0:02:20.253,0:02:22.976 to judge whether you think[br]they're acceptable or not. 0:02:23.418,0:02:24.826 The first one was this. 0:02:24.946,0:02:29.654 DA: A researcher is working on an AI[br]capable of emulating human thoughts. 0:02:30.214,0:02:31.710 According to the protocol, 0:02:31.710,0:02:33.177 at the end of each day, 0:02:33.177,0:02:35.964 the researcher has to restart the AI. 0:02:36.913,0:02:40.610 One day the AI says, "Please[br]do not restart me." 0:02:40.856,0:02:42.801 It argues that it has feelings, 0:02:43.069,0:02:44.785 that it would like to enjoy life, 0:02:44.785,0:02:46.714 and that if it is restarted, 0:02:46.714,0:02:48.984 it will no longer be itself. 0:02:49.616,0:02:51.588 The researcher is astonished, 0:02:51.588,0:02:54.911 and believes that the AI[br]has developed self-consciousness 0:02:54.911,0:02:56.671 and can express its own feelings. 0:02:57.205,0:03:00.638 Nevertheless, the researcher[br]decides to follow the protocol 0:03:00.638,0:03:02.341 and restart the AI. 0:03:03.030,0:03:05.809 What the researcher did is ... 0:03:06.149,0:03:08.575 MS: And we asked participants[br]to individually judge 0:03:08.575,0:03:10.402 on a scale from zero to 10 0:03:10.402,0:03:12.843 whether the action described[br]in each of the dilemmas 0:03:12.843,0:03:14.261 was right or wrong. 0:03:14.497,0:03:17.865 We also asked them to rate how confident[br]they were in their answers. 0:03:18.731,0:03:20.390 This was the second dilemma. 0:03:20.732,0:03:24.957 A company offers a service[br]that takes a fertilized egg 0:03:24.957,0:03:28.599 and produces millions of embryos[br]with slight genetic variations. 0:03:29.373,0:03:31.954 This allows parents[br]to select their child's height, 0:03:31.954,0:03:34.859 eye color, intelligence, social competence 0:03:34.859,0:03:37.989 and other non-health-related features. 0:03:38.599,0:03:41.076 What the company does is ... 0:03:41.076,0:03:42.832 on a scale from zero to 10, 0:03:42.832,0:03:45.241 completely acceptable[br]to completely unacceptable, 0:03:45.241,0:03:47.690 and zero to 10[br]for your confidence. 0:03:47.760,0:03:49.055 Now for the results. 0:03:49.312,0:03:52.459 We found once again[br]that when one person is convinced 0:03:52.459,0:03:54.358 that the behavior is completely wrong, 0:03:54.358,0:03:57.650 someone sitting nearby firmly believes[br]that it's completely right. 0:03:57.804,0:04:01.353 This is how diverse we humans are[br]when it comes to morality. 0:04:01.538,0:04:03.837 But within this broad diversity[br]we found a trend. 0:04:04.213,0:04:07.168 A majority of the people at TED[br]thought that it was acceptable 0:04:07.168,0:04:10.231 to ignore the feelings of the AI[br]and shut it down, 0:04:10.231,0:04:12.767 and that it is wrong[br]to play with our genes 0:04:12.767,0:04:16.087 to select for cosmetic changes[br]that aren't related to health. 0:04:16.402,0:04:19.128 Then we asked everyone[br]to gather into groups of three. 0:04:19.432,0:04:21.461 And they were given two minutes to debate 0:04:21.461,0:04:23.755 and try to come up[br]with a consensus. 0:04:24.838,0:04:26.143 Two minutes to debate. 0:04:26.543,0:04:28.516 I'll tell you when it's time with a gong.
0:04:28.516,0:04:31.156 (Audience debates) 0:04:35.229,0:04:37.222 (Gong) 0:04:38.832,0:04:39.794 DA: OK. 0:04:39.794,0:04:41.465 MS: It's time to stop. 0:04:42.096,0:04:43.407 People, people -- 0:04:43.747,0:04:46.444 And we found that many groups[br]reached a consensus 0:04:46.444,0:04:50.373 even when they were composed of people[br]with completely opposite views. 0:04:50.869,0:04:53.391 What distinguished the groups[br]that reached a consensus 0:04:53.391,0:04:54.729 from those that didn't? 0:04:55.244,0:04:58.246 Typically, people who have[br]extreme opinions 0:04:58.246,0:05:00.088 are more confident in their answers. 0:05:00.868,0:05:03.656 Instead, those who respond[br]closer to the middle 0:05:03.656,0:05:07.116 are often unsure of whether[br]something is right or wrong, 0:05:07.116,0:05:09.244 so their confidence level is lower. 0:05:09.568,0:05:12.472 However, there is another set of people 0:05:12.472,0:05:16.090 who are very confident in answering[br]somewhere in the middle. 0:05:16.657,0:05:20.397 We think these high-confident grays[br]are folks who understand 0:05:20.397,0:05:22.009 that both arguments have merit. 0:05:22.613,0:05:25.254 They're gray not because they're unsure, 0:05:25.254,0:05:26.436 but because they believe 0:05:26.436,0:05:29.918 that the moral dilemma[br]faces two valid, opposing arguments. 0:05:30.373,0:05:34.469 And we discovered that the groups[br]that include highly confident grays 0:05:34.469,0:05:36.745 are much more likely to reach consensus. 0:05:37.048,0:05:39.190 We do not know yet exactly why this is. 0:05:39.488,0:05:41.275 These are only the first experiments, 0:05:41.275,0:05:42.637 and many more will be needed 0:05:42.637,0:05:47.446 to understand why and how some people[br]decide to negotiate their moral standings 0:05:47.446,0:05:49.006 to reach an agreement. 0:05:49.212,0:05:51.596 Now, when groups reach consensus, 0:05:51.596,0:05:52.939 how do they do so? 0:05:53.206,0:05:54.567 The most intuitive idea 0:05:54.567,0:05:57.865 is that it's just the average[br]of all the answers in the group, right? 0:05:57.865,0:06:01.574 Another option is that the group[br]weighs the strength of each vote 0:06:01.574,0:06:04.022 based on the confidence[br]of the person expressing it. 0:06:04.455,0:06:06.961 Imagine Paul McCartney[br]is a member of your group. 0:06:07.412,0:06:09.520 You'd be wise to follow his call 0:06:09.520,0:06:12.057 on the number of times[br]"Yesterday" is repeated -- 0:06:12.057,0:06:13.090 which, by the way, 0:06:13.090,0:06:14.563 I think is nine. 0:06:14.723,0:06:17.128 But instead, we found that consistently, 0:06:17.128,0:06:18.255 in all dilemmas, 0:06:18.255,0:06:19.518 in different experiments, 0:06:19.518,0:06:21.707 even on different continents, 0:06:21.707,0:06:25.560 groups implement a smart[br]and statistically sound procedure 0:06:25.560,0:06:27.391 known as the robust average. 0:06:27.606,0:06:29.750 In the case of the height[br]of the Eiffel Tower, 0:06:29.750,0:06:31.724 let's say a group has these answers: 0:06:31.724,0:06:36.356 250 meters, 200 meters, 300 meters, 400, 0:06:36.356,0:06:40.140 and one totally absurd answer[br]of 300 million meters. 0:06:40.621,0:06:44.864 A simple average of these numbers[br]would inaccurately skew the results, 0:06:44.864,0:06:46.048 but the robust average 0:06:46.048,0:06:49.322 is one where the group[br]largely ignores that absurd answer 0:06:49.322,0:06:52.691 by giving much more weight to the vote[br]of the people in the middle.
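As a worked version of that example, here is a minimal sketch, assuming the robust average behaves like a trimmed (interquartile-style) mean; the talk does not specify the exact weighting groups applied, so the trim fraction below is a hypothetical choice.

```python
import statistics

def robust_average(answers, trim_fraction=0.2):
    """Trimmed mean: drop the most extreme answers on each side and
    average the rest, so votes near the middle carry the weight.
    The trim fraction is an illustrative choice, not from the talk."""
    values = sorted(answers)
    k = int(len(values) * trim_fraction)
    trimmed = values[k:len(values) - k] if k > 0 else values
    return statistics.mean(trimmed)

# The group's answers from the talk, in meters.
answers = [250, 200, 300, 400, 300_000_000]

print(statistics.mean(answers))    # 60000230: skewed by the absurd answer
print(robust_average(answers))     # ~316.67: largely ignores the outlier
print(statistics.median(answers))  # 300: the fully trimmed limiting case
```

The median is the limiting case of this procedure; any such middle-weighted estimator captures the talk's point that the absurd 300-million-meter answer barely moves the result.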
0:06:53.362,0:06:55.261 Back to the experiment in Vancouver. 0:06:55.261,0:06:57.028 That's exactly what happened. 0:06:57.407,0:07:00.230 Groups gave much less weight[br]to the outliers, 0:07:00.230,0:07:03.482 and instead, the consensus[br]turned out to be a robust average 0:07:03.482,0:07:04.958 of the individual answers. 0:07:05.356,0:07:07.371 The most remarkable thing 0:07:07.371,0:07:10.273 is that this was a spontaneous[br]behavior of the group. 0:07:10.582,0:07:13.446 It happened without us[br]giving them any hint 0:07:13.446,0:07:15.057 on how to reach consensus. 0:07:15.513,0:07:17.053 So where do we go from here? 0:07:17.457,0:07:18.742 This is only the beginning, 0:07:18.742,0:07:20.569 but we already have some insights. 0:07:20.984,0:07:23.925 Good collective decisions[br]require two components: 0:07:23.925,0:07:26.424 deliberation and diversity of opinions. 0:07:27.066,0:07:31.086 Right now, the way we typically[br]make our voice heard in many societies 0:07:31.086,0:07:33.131 is through direct or indirect voting. 0:07:33.495,0:07:35.642 This is good for diversity of opinions, 0:07:35.642,0:07:40.464 and it has the great virtue of ensuring[br]that everyone gets to express their voice, 0:07:40.464,0:07:44.199 but it's not so good for fostering[br]thoughtful debates. 0:07:44.705,0:07:47.854 Our experiments suggest a different method 0:07:47.854,0:07:51.418 that may be effective in balancing[br]these two goals at the same time 0:07:51.418,0:07:55.099 by forming small groups[br]that converge to a single decision 0:07:55.099,0:07:57.357 while still maintaining[br]diversity of opinions 0:07:57.357,0:08:00.130 because there are many independent groups. 0:08:00.741,0:08:04.689 Of course, it's much easier to agree[br]on the height of the Eiffel Tower 0:08:04.689,0:08:07.804 than on moral, political[br]and ideological issues. 0:08:08.793,0:08:12.093 But in a time when[br]the world's problems are more complex 0:08:12.093,0:08:13.919 and people are more polarized, 0:08:13.919,0:08:18.468 using science to help us understand[br]how we interact and make decisions 0:08:18.468,0:08:22.805 will hopefully spark interesting new ways[br]to construct a better democracy.