No matter who you are or where you live, I'm guessing that you have at least one relative who likes to forward those emails. You know the ones I'm talking about -- the ones with dubious claims or conspiracy videos. And you've probably already muted them on Facebook for sharing social posts like this one.

It's an image of a banana with a strange red cross running through the center. And the text around it is warning people not to eat fruits that look like this, suggesting they've been injected with blood contaminated with HIV. And the social share message above it simply says, "Please forward to save lives."

Now, fact-checkers have been debunking this one for years, but it's one of those rumors that just won't die. A zombie rumor. And, of course, it's entirely false.

It might be tempting to laugh at an example like this, to say, "Well, who would believe this, anyway?" But the reason it's a zombie rumor is because it taps into people's deepest fears about their own safety and that of the people they love. And if you spend as much time as I have looking at misinformation, you know that this is just one example of many that taps into people's deepest fears and vulnerabilities.

Every day, across the world, we see scores of new memes on Instagram encouraging parents not to vaccinate their children. We see new videos on YouTube explaining that climate change is a hoax. And across all platforms, we see endless posts designed to demonize others on the basis of their race, religion or sexuality.

Welcome to one of the central challenges of our time. How can we maintain an internet with freedom of expression at the core, while also ensuring that the content that's being disseminated doesn't cause irreparable harms to our democracies, our communities and to our physical and mental well-being? Because we live in the information age, yet the central currency upon which we all depend -- information -- is no longer deemed entirely trustworthy and, at times, can appear downright dangerous.
This is thanks in part to the runaway growth of social sharing platforms that allow us to scroll through content where lies and facts sit side by side, but with none of the traditional signals of trustworthiness.

And goodness -- our language around this is horribly muddled. People are still obsessed with the phrase "fake news," despite the fact that it's extraordinarily unhelpful and used to describe a number of things that are actually very different: lies, rumors, hoaxes, conspiracies, propaganda. And I really wish we could stop using a phrase that's been co-opted by politicians right around the world, from the left and the right, used as a weapon to attack a free and independent press.

(Applause)

Because we need our professional news media now more than ever. And besides, most of this content doesn't even masquerade as news. It's memes, videos, social posts. And most of it is not fake; it's misleading. We tend to fixate on what's true or false. But the biggest concern is actually the weaponization of context. Because the most effective disinformation has always been that which has a kernel of truth to it.

Let's take this example from London, from March 2017, a tweet that circulated widely in the aftermath of a terrorist incident on Westminster Bridge. This is a genuine image, not fake. The woman who appears in the photograph was interviewed afterwards, and she explained that she was utterly traumatized. She was on the phone to a loved one, and she wasn't looking at the victim out of respect. But it was still circulated widely with this Islamophobic framing, with multiple hashtags, including: #BanIslam. Now, if you worked at Twitter, what would you do? Would you take that down, or would you leave it up?

My gut reaction, my emotional reaction, is to take this down. I hate the framing of this image. But freedom of expression is a human right, and if we start taking down speech that makes us feel uncomfortable, we're in trouble.

And this might look like a clear-cut case, but, actually, most speech isn't. These lines are incredibly difficult to draw. What looks like a well-meaning decision to one person is outright censorship to the next.
What we now know is that this account, Texas Lone Star, was part of a wider Russian disinformation campaign, one that has since been taken down. Would that change your view? It would mine, because now it's a case of a coordinated campaign to sow discord. And for those of you who'd like to think that artificial intelligence will solve all of our problems, I think we can agree that we're a long way away from AI that's able to make sense of posts like this.

So I'd like to explain three interlocking issues that make this so complex and then think about some ways we can consider these challenges.

First, we just don't have a rational relationship to information; we have an emotional one. It's just not true that more facts will make everything OK, because the algorithms that determine what content we see, well, they're designed to reward our emotional responses. And when we're fearful, oversimplified narratives, conspiratorial explanations and language that demonizes others are far more effective. And besides, many of these companies, their business model is attached to attention, which means these algorithms will always be skewed towards emotion.

Second, most of the speech I'm talking about here is legal. It would be a different matter if I was talking about child sexual abuse imagery or content that incites violence. It can be perfectly legal to post an outright lie. But people keep talking about taking down "problematic" or "harmful" content with no clear definition of what they mean by that, including Mark Zuckerberg, who recently called for global regulation to moderate speech. And my concern is that we're seeing governments right around the world rolling out hasty policy decisions that might actually trigger much more serious consequences when it comes to our speech. And even if we could decide which speech to leave up or take down, we've never had so much speech. Every second, millions of pieces of content are uploaded by people right around the world in different languages, drawing on thousands of different cultural contexts.
We've simply never had effective mechanisms to moderate speech at this scale, whether powered by humans or by technology.

And third, these companies -- Google, Twitter, Facebook, WhatsApp -- they're part of a wider information ecosystem. We like to lay all the blame at their feet, but the truth is, the mass media and elected officials can also play an equal role in amplifying rumors and conspiracies when they want to. As can we, when we mindlessly forward divisive or misleading content without checking. We're adding to the pollution.

I know we're all looking for an easy fix. But there just isn't one. Any solution will have to be rolled out at a massive scale, internet scale, and yes, the platforms, they're used to operating at that level. But can and should we allow them to fix these problems? They're certainly trying. But most of us would agree that, actually, we don't want global corporations to be the guardians of truth and fairness online. And I also think the platforms would agree with that. And at the moment, they're marking their own homework. They like to tell us that the interventions they're rolling out are working, but because they write their own transparency reports, there's no way for us to independently verify what's actually happening.

(Applause)

And let's also be clear that most of the changes we see only happen after journalists undertake an investigation and find evidence of bias or content that breaks their community guidelines. So yes, these companies have to play a really important role in this process, but they can't control it.

So what about governments? Many people believe that global regulation is our last hope in terms of cleaning up our information ecosystem. But what I see are lawmakers who are struggling to keep up to date with the rapid changes in technology. And worse, they're working in the dark, because they don't have access to data to understand what's happening on these platforms. And anyway, which governments would we trust to do this? We need a global response, not a national one.

So the missing link is us. It's those people who use these technologies every day.
Can we design a new infrastructure to support quality information? Well, I believe we can, and I've got a few ideas about what we might be able to actually do.

So firstly, if we're serious about bringing the public into this, can we take some inspiration from Wikipedia? They've shown us what's possible. Yes, it's not perfect, but they've demonstrated that with the right structures, with a global outlook and lots and lots of transparency, you can build something that will earn the trust of most people. Because we have to find a way to tap into the collective wisdom and experience of all users. This is particularly the case for women, people of color and underrepresented groups. Because guess what? They are experts when it comes to hate and disinformation, because they have been the targets of these campaigns for so long. And over the years, they've been raising flags, and they haven't been listened to. This has got to change. So could we build a Wikipedia for trust? Could we find a way that users can actually provide insights? They could offer insights around difficult content-moderation decisions. They could provide feedback when platforms decide they want to roll out new changes.

Second, people's experiences with information are personalized. My Facebook news feed is very different to yours. Your YouTube recommendations are very different to mine. That makes it impossible for us to actually examine what information people are seeing. So could we imagine developing some kind of centralized open repository for anonymized data, with privacy and ethical concerns built in? Because imagine what we would learn if we built out a global network of concerned citizens who wanted to donate their social data to science. Because we actually know very little about the long-term consequences of hate and disinformation on people's attitudes and behaviors. And what we do know, most of that research has been carried out in the US, despite the fact that this is a global problem. We need to work on that, too.

And third, can we find a way to connect the dots?
No one sector, let alone nonprofit, start-up or government, is going to solve this. But there are very smart people right around the world working on these challenges, from newsrooms, civil society, academia, activist groups. And you can see some of them here. Some are building out indicators of content credibility. Others are fact-checking, so that false claims, videos and images can be down-ranked by the platforms.

A nonprofit I helped to found, First Draft, is working with normally competitive newsrooms around the world to help them build out investigative, collaborative programs. And Danny Hillis, a software architect, is designing a new system called The Underlay, which will be a record of all public statements of fact connected to their sources, so that people and algorithms can better judge what is credible. And educators around the world are testing different techniques for finding ways to make people critical of the content they consume. All of these efforts are wonderful, but they're working in silos, and many of them are woefully underfunded.

There are also hundreds of very smart people working inside these companies, but again, these efforts can feel disjointed, because they're actually developing different solutions to the same problems.

How can we find a way to bring people together in one physical location for days or weeks at a time, so they can actually tackle these problems together but from their different perspectives? So can we do this? Can we build out a coordinated, ambitious response, one that matches the scale and the complexity of the problem? I really think we can. Together, let's rebuild our information commons.

Thank you.

(Applause)