No matter who you are or where you live, I'm guessing that you have at least one relative who likes to forward those emails. You know the ones I'm talking about -- the ones with dubious claims or conspiracy videos. And you've probably already muted them on Facebook for sharing social posts like this one. It's an image of a banana with a strange red cross running through the center. And the text around it is warning people not to eat fruits that look like this, suggesting they've been injected with blood contaminated with HIV. And the social share message above it simply says, "Please forward to save lives."

Now, fact-checkers have been debunking this one for years, but it's one of those rumors that just won't die. A zombie rumor. And, of course, it's entirely false.

It might be tempting to laugh at an example like this, to say, "Well, who would believe this, anyway?" But the reason it's a zombie rumor is because it taps into people's deepest fears about their own safety and that of the people they love. And if you spend as much time as I have looking at misinformation, you know that this is just one example of many that taps into people's deepest fears and vulnerabilities.

Every day, across the world, we see scores of new memes on Instagram encouraging parents not to vaccinate their children. We see new videos on YouTube explaining that climate change is a hoax. And across all platforms, we see endless posts designed to demonize others on the basis of their race, religion or sexuality.

Welcome to one of the central challenges of our time. How can we maintain an internet with freedom of expression at the core, while also ensuring that the content that's being disseminated doesn't cause irreparable harms to our democracies, our communities and to our physical and mental well-being?

Because we live in the information age, yet the central currency upon which we all depend -- information -- is no longer deemed entirely trustworthy and, at times, can appear downright dangerous.
This is thanks in part to the runaway growth of social sharing platforms that allow us to scroll through, where lies and facts sit side by side, but with none of the traditional signals of trustworthiness.

And goodness -- our language around this is horribly muddled. People are still obsessed with the phrase "fake news," despite the fact that it's extraordinarily unhelpful and used to describe a number of things that are actually very different: lies, rumors, hoaxes, conspiracies, propaganda. And I really wish we could stop using a phrase that's been co-opted by politicians right around the world, from the left and the right, used as a weapon to attack a free and independent press.

(Applause)

Because we need our professional news media now more than ever. And besides, most of this content doesn't even masquerade as news. It's memes, videos, social posts. And most of it is not fake; it's misleading. We tend to fixate on what's true or false. But the biggest concern is actually the weaponization of context. Because the most effective disinformation has always been that which has a kernel of truth to it.

Let's take this example from London, from March 2017, a tweet that circulated widely in the aftermath of a terrorist incident on Westminster Bridge. This is a genuine image, not fake. The woman who appears in the photograph was interviewed afterwards, and she explained that she was utterly traumatized. She was on the phone to a loved one, and she wasn't looking at the victim out of respect. But it was still circulated widely with this Islamophobic framing, with multiple hashtags, including #BanIslam.

Now, if you worked at Twitter, what would you do? Would you take that down, or would you leave it up? My gut reaction, my emotional reaction, is to take this down. I hate the framing of this image. But freedom of expression is a human right, and if we start taking down speech that makes us feel uncomfortable, we're in trouble. And this might look like a clear-cut case, but, actually, most speech isn't. These lines are incredibly difficult to draw.
What's a well-meaning decision by one person is outright censorship to the next. What we now know is that this account, Texas Lone Star, was part of a wider Russian disinformation campaign, one that has since been taken down. Would that change your view? It would mine, because now it's a case of a coordinated campaign to sow discord. And for those of you who'd like to think that artificial intelligence will solve all of our problems, I think we can agree that we're a long way away from AI that's able to make sense of posts like this.

So I'd like to explain three interlocking issues that make this so complex, and then think about some ways we can consider these challenges.

First, we just don't have a rational relationship to information; we have an emotional one. It's just not true that more facts will make everything OK, because the algorithms that determine what content we see, well, they're designed to reward our emotional responses. And when we're fearful, oversimplified narratives, conspiratorial explanations and language that demonizes others are far more effective. And besides, many of these companies, their business model is attached to attention, which means these algorithms will always be skewed towards emotion.

Second, most of the speech I'm talking about here is legal. It would be a different matter if I were talking about child sexual abuse imagery or content that incites violence. It can be perfectly legal to post an outright lie. People keep talking about taking down "problematic" or "harmful" content, but with no clear definition of what they mean by that, including Mark Zuckerberg, who recently called for global regulation to moderate speech. And my concern is that we're seeing governments right around the world rolling out hasty policy decisions that might actually trigger much more serious consequences when it comes to our speech.

And even if we could decide which speech to leave up or take down, we've never had so much speech.
Every second, millions of pieces of content are uploaded by people right around the world in different languages, drawing on thousands of different cultural contexts. We've simply never had effective mechanisms to moderate speech at this scale, whether powered by humans or by technology.

And third, these companies -- Google, Twitter, Facebook, WhatsApp -- they're part of a wider information ecosystem. We like to lay all the blame at their feet, but the truth is, the mass media and elected officials can also play an equal role in amplifying rumors and conspiracies when they want to. As can we, when we mindlessly forward divisive or misleading content without trying. We're adding to the pollution.

I know we're all looking for an easy fix. But there just isn't one. Any solution will have to be rolled out at a massive scale, internet scale, and yes, the platforms, they're used to operating at that level. But can and should we allow them to fix these problems? They're certainly trying. But most of us would agree that, actually, we don't want global corporations to be the guardians of truth and fairness online. And I also think the platforms would agree with that.

And at the moment, they're marking their own homework. They like to tell us that the interventions they're rolling out are working, but because they write their own transparency reports, there's no way for us to independently verify what's actually happening.

(Applause)

And let's also be clear that most of the changes we see only happen after journalists undertake an investigation and find evidence of bias or content that breaks their community guidelines. So yes, these companies have to play a really important role in this process, but they can't control it.

So what about governments? Many people believe that global regulation is our last hope in terms of cleaning up our information ecosystem. But what I see are lawmakers who are struggling to keep up to date with the rapid changes in technology.
And worse, they're working in the dark, because they don't have access to data to understand what's happening on these platforms. And anyway, which governments would we trust to do this? We need a global response, not a national one.

So the missing link is us. It's those people who use these technologies every day. Can we design a new infrastructure to support quality information? Well, I believe we can, and I've got a few ideas about what we might be able to actually do.

So firstly, if we're serious about bringing the public into this, can we take some inspiration from Wikipedia? They've shown us what's possible. Yes, it's not perfect, but they've demonstrated that with the right structures, with a global outlook and lots and lots of transparency, you can build something that will earn the trust of most people. Because we have to find a way to tap into the collective wisdom and experience of all users. This is particularly the case for women, people of color and underrepresented groups. Because guess what? They are experts when it comes to hate and disinformation, because they have been the targets of these campaigns for so long. And over the years, they've been raising flags, and they haven't been listened to. This has got to change.

So could we build a Wikipedia for trust? Could we find a way that users can actually provide insights? They could offer insights around difficult content-moderation decisions. They could provide feedback when platforms decide they want to roll out new changes.

Second, people's experiences with information are personalized. My Facebook news feed is very different to yours. Your YouTube recommendations are very different to mine. That makes it impossible for us to actually examine what information people are seeing. So could we imagine developing some kind of centralized open repository for anonymized data, with privacy and ethical concerns built in? Because imagine what we would learn if we built out a global network of concerned citizens who wanted to donate their social data to science.
Because we actually know very little about the long-term consequences of hate and disinformation on people's attitudes and behaviors. And what we do know, most of that has been carried out in the US, despite the fact that this is a global problem. We need to work on that, too.

And third, can we find a way to connect the dots? No one sector, let alone nonprofit, start-up or government, is going to solve this. But there are very smart people right around the world working on these challenges, from newsrooms, civil society, academia, activist groups. And you can see some of them here. Some are building out indicators of content credibility. Others are fact-checking, so that false claims, videos and images can be down-ranked by the platforms. A nonprofit I helped to found, First Draft, is working with normally competitive newsrooms around the world to help them build out investigative, collaborative programs. And Danny Hillis, a software architect, is designing a new system called The Underlay, which will be a record of all public statements of fact connected to their sources, so that people and algorithms can better judge what is credible.

And educators around the world are testing different techniques for finding ways to make people critical of the content they consume.

All of these efforts are wonderful, but they're working in silos, and many of them are woefully underfunded. There are also hundreds of very smart people working inside these companies, but again, these efforts can feel disjointed, because they're actually developing different solutions to the same problems. How can we find a way to bring people together in one physical location for days or weeks at a time, so they can actually tackle these problems together but from their different perspectives?

So can we do this? Can we build out a coordinated, ambitious response, one that matches the scale and the complexity of the problem? I really think we can. Together, let's rebuild our information commons.

Thank you.

(Applause)