How you can help transform the internet into a place of trust

  • 0:02 - 0:04
    No matter who you are or where you live,
  • 0:04 - 0:06
    I'm guessing that you have
    at least one relative
  • 0:06 - 0:09
    who likes to forward those emails.
  • 0:09 - 0:11
    You know the ones I'm talking about --
  • 0:11 - 0:14
    the ones with dubious claims
    or conspiracy videos.
  • 0:14 - 0:17
    And you've probably
    already muted them on Facebook
  • 0:17 - 0:19
    for sharing social posts like this one.
  • 0:19 - 0:21
    It's an image of a banana
  • 0:21 - 0:23
    with a strange red cross
    running through the center.
  • 0:23 - 0:26
    And the text around it is warning people
  • 0:26 - 0:28
    not to eat fruits that look like this,
  • 0:28 - 0:30
    suggesting they've been
    injected with blood
  • 0:30 - 0:32
    contaminated with the HIV virus.
  • 0:32 - 0:35
    And the social share message
    above it simply says,
  • 0:35 - 0:37
    "Please forward to save lives."
  • 0:38 - 0:41
    Now, fact-checkers have been debunking
    this one for years,
  • 0:41 - 0:44
    but it's one of those rumors
    that just won't die.
  • 0:44 - 0:45
    A zombie rumor.
  • 0:46 - 0:48
    And, of course, it's entirely false.
  • 0:48 - 0:51
    It might be tempting to laugh
    at an example like this, to say,
  • 0:51 - 0:53
    "Well, who would believe this, anyway?"
  • 0:53 - 0:55
    But the reason it's a zombie rumor
  • 0:55 - 0:59
    is because it taps into people's
    deepest fears about their own safety
  • 0:59 - 1:01
    and that of the people they love.
  • 1:02 - 1:05
    And if you spend as much time
    as I have looking at misinformation,
  • 1:05 - 1:08
    you know that this is just
    one example of many
  • 1:08 - 1:11
    that taps into people's deepest
    fears and vulnerabilities.
  • 1:11 - 1:16
    Every day, across the world,
    we see scores of new memes on Instagram
  • 1:16 - 1:19
    encouraging parents
    not to vaccinate their children.
  • 1:19 - 1:23
    We see new videos on YouTube
    explaining that climate change is a hoax.
  • 1:23 - 1:28
    And across all platforms, we see
    endless posts designed to demonize others
  • 1:28 - 1:31
    on the basis of their race,
    religion or sexuality.
  • 1:32 - 1:35
    Welcome to one of the central
    challenges of our time.
  • 1:36 - 1:40
    How can we maintain an internet
    with freedom of expression at the core,
  • 1:40 - 1:43
    while also ensuring that the content
    that's being disseminated
  • 1:43 - 1:47
    doesn't cause irreparable harms
    to our democracies, our communities
  • 1:47 - 1:49
    and to our physical and mental well-being?
  • 1:50 - 1:52
    Because we live in the information age,
  • 1:52 - 1:56
    yet the central currency
    upon which we all depend -- information --
  • 1:56 - 1:58
    is no longer deemed entirely trustworthy
  • 1:58 - 2:00
    and, at times, can appear
    downright dangerous.
  • 2:01 - 2:05
    This is thanks in part to the runaway
    growth of social sharing platforms
  • 2:05 - 2:06
    that allow us to scroll through,
  • 2:06 - 2:09
    where lies and facts sit side by side,
  • 2:09 - 2:12
    but with none of the traditional
    signals of trustworthiness.
  • 2:12 - 2:16
    And goodness -- our language around this
    is horribly muddled.
  • 2:16 - 2:19
    People are still obsessed
    with the phrase "fake news,"
  • 2:19 - 2:22
    despite the fact that
    it's extraordinarily unhelpful
  • 2:22 - 2:25
    and used to describe a number of things
    that are actually very different:
  • 2:25 - 2:28
    lies, rumors, hoaxes,
    conspiracies, propaganda.
  • 2:29 - 2:32
    And I really wish
    we could stop using a phrase
  • 2:32 - 2:35
    that's been co-opted by politicians
    right around the world,
  • 2:35 - 2:36
    from the left and the right,
  • 2:36 - 2:39
    used as a weapon to attack
    a free and independent press.
  • 2:40 - 2:45
    (Applause)
  • 2:45 - 2:48
    Because we need our professional
    news media now more than ever.
  • 2:49 - 2:52
    And besides, most of this content
    doesn't even masquerade as news.
  • 2:52 - 2:55
    It's memes, videos, social posts.
  • 2:55 - 2:58
    And most of it is not fake;
    it's misleading.
  • 2:58 - 3:01
    We tend to fixate on what's true or false.
  • 3:01 - 3:05
    But the biggest concern is actually
    the weaponization of context.
  • 3:07 - 3:09
    Because the most effective disinformation
  • 3:09 - 3:12
    has always been that
    which has a kernel of truth to it.
  • 3:12 - 3:15
    Let's take this example
    from London, from March 2017,
  • 3:15 - 3:17
    a tweet that circulated widely
  • 3:17 - 3:21
    in the aftermath of a terrorist incident
    on Westminster Bridge.
  • 3:21 - 3:23
    This is a genuine image, not fake.
  • 3:23 - 3:26
    The woman who appears in the photograph
    was interviewed afterwards,
  • 3:26 - 3:29
    and she explained that
    she was utterly traumatized.
  • 3:29 - 3:30
    She was on the phone to a loved one,
  • 3:30 - 3:33
    and she wasn't looking
    at the victim out of respect.
  • 3:33 - 3:37
    But it still was circulated widely
    with this Islamophobic framing,
  • 3:37 - 3:40
    with multiple hashtags,
    including: #BanIslam.
  • 3:40 - 3:43
    Now, if you worked at Twitter,
    what would you do?
  • 3:43 - 3:45
    Would you take that down,
    or would you leave it up?
  • 3:47 - 3:50
    My gut reaction, my emotional reaction,
    is to take this down.
  • 3:50 - 3:52
    I hate the framing of this image.
  • 3:53 - 3:55
    But freedom of expression
    is a human right,
  • 3:55 - 3:58
    and if we start taking down speech
    that makes us feel uncomfortable,
  • 3:58 - 3:59
    we're in trouble.
  • 4:00 - 4:02
    And this might look like a clear-cut case,
  • 4:02 - 4:04
    but, actually, most speech isn't.
  • 4:04 - 4:06
    These lines are incredibly
    difficult to draw.
  • 4:06 - 4:08
    What's a well-meaning
    decision by one person
  • 4:08 - 4:10
    is outright censorship to the next.
  • 4:11 - 4:14
    What we now know is that
    this account, Texas Lone Star,
  • 4:14 - 4:17
    was part of a wider Russian
    disinformation campaign,
  • 4:17 - 4:19
    one that has since been taken down.
  • 4:19 - 4:21
    Would that change your view?
  • 4:21 - 4:22
    It would mine,
  • 4:23 - 4:25
    because now it's a case
    of a coordinated campaign
  • 4:25 - 4:26
    to sow discord.
  • 4:26 - 4:28
    And for those of you who'd like to think
  • 4:28 - 4:31
    that artificial intelligence
    will solve all of our problems,
  • 4:31 - 4:33
    I think we can agree
    that we're a long way away
  • 4:33 - 4:36
    from AI that's able to make sense
    of posts like this.
  • 4:37 - 4:39
    So I'd like to explain
    three interlocking issues
  • 4:39 - 4:42
    that make this so complex
  • 4:42 - 4:45
    and then think about some ways
    we can consider these challenges.
  • 4:45 - 4:49
    First, we just don't have
    a rational relationship to information,
  • 4:49 - 4:51
    we have an emotional one.
  • 4:51 - 4:55
    It's just not true that more facts
    will make everything OK,
  • 4:55 - 4:58
    because the algorithms that determine
    what content we see,
  • 4:58 - 5:01
    well, they're designed to reward
    our emotional responses.
  • 5:01 - 5:02
    And when we're fearful,
  • 5:02 - 5:05
    oversimplified narratives,
    conspiratorial explanations
  • 5:05 - 5:09
    and language that demonizes others
    are far more effective.
  • 5:10 - 5:11
    And besides, many of these companies,
  • 5:11 - 5:14
    their business model
    is attached to attention,
  • 5:14 - 5:18
    which means these algorithms
    will always be skewed towards emotion.
  • 5:18 - 5:23
    Second, most of the speech
    I'm talking about here is legal.
  • 5:23 - 5:25
    It would be a different matter
  • 5:25 - 5:27
    if I was talking about
    child sexual abuse imagery
  • 5:27 - 5:29
    or content that incites violence.
  • 5:29 - 5:32
    It can be perfectly legal
    to post an outright lie.
  • 5:33 - 5:37
    But people keep talking about taking down
    "problematic" or "harmful" content,
  • 5:37 - 5:40
    without any clear definition
    of what they mean by that,
  • 5:40 - 5:41
    including Mark Zuckerberg,
  • 5:41 - 5:45
    who recently called for global
    regulation to moderate speech.
  • 5:45 - 5:47
    And my concern is that
    we're seeing governments
  • 5:47 - 5:48
    right around the world
  • 5:48 - 5:51
    rolling out hasty policy decisions
  • 5:51 - 5:54
    that might actually trigger
    much more serious consequences
  • 5:54 - 5:56
    when it comes to our speech.
  • 5:56 - 6:00
    And even if we could decide
    which speech to leave up or take down,
  • 6:00 - 6:02
    we've never had so much speech.
  • 6:02 - 6:04
    Every second, millions
    of pieces of content
  • 6:04 - 6:06
    are uploaded by people
    right around the world
  • 6:06 - 6:07
    in different languages,
  • 6:07 - 6:10
    drawing on thousands
    of different cultural contexts.
  • 6:10 - 6:13
    We've simply never had
    effective mechanisms
  • 6:13 - 6:14
    to moderate speech at this scale,
  • 6:15 - 6:17
    whether powered by humans
    or by technology.
  • 6:18 - 6:22
    And third, these companies --
    Google, Twitter, Facebook, WhatsApp --
  • 6:22 - 6:25
    they're part of a wider
    information ecosystem.
  • 6:25 - 6:28
    We like to lay all the blame
    at their feet, but the truth is,
  • 6:28 - 6:32
    the mass media and elected officials
    can also play an equal role
  • 6:32 - 6:35
    in amplifying rumors and conspiracies
    when they want to.
  • 6:36 - 6:41
    As can we, when we mindlessly forward
    divisive or misleading content
  • 6:41 - 6:42
    without checking.
  • 6:42 - 6:44
    We're adding to the pollution.
  • 6:45 - 6:48
    I know we're all looking for an easy fix.
  • 6:48 - 6:50
    But there just isn't one.
  • 6:50 - 6:54
    Any solution will have to be rolled out
    at a massive scale, internet scale,
  • 6:54 - 6:58
    and yes, the platforms,
    they're used to operating at that level.
  • 6:58 - 7:01
    But can and should we allow them
    to fix these problems?
  • 7:02 - 7:03
    They're certainly trying.
  • 7:03 - 7:07
    But most of us would agree that, actually,
    we don't want global corporations
  • 7:07 - 7:09
    to be the guardians of truth
    and fairness online.
  • 7:09 - 7:12
    And I also think the platforms
    would agree with that.
  • 7:12 - 7:15
    And at the moment,
    they're marking their own homework.
  • 7:15 - 7:16
    They like to tell us
  • 7:16 - 7:19
    that the interventions
    they're rolling out are working,
  • 7:19 - 7:22
    but because they write
    their own transparency reports,
  • 7:22 - 7:25
    there's no way for us to independently
    verify what's actually happening.
  • 7:26 - 7:30
    (Applause)
  • 7:30 - 7:33
    And let's also be clear
    that most of the changes we see
  • 7:33 - 7:36
    only happen after journalists
    undertake an investigation
  • 7:36 - 7:37
    and find evidence of bias
  • 7:37 - 7:40
    or content that breaks
    their community guidelines.
  • 7:41 - 7:45
    So yes, these companies have to play
    a really important role in this process,
  • 7:45 - 7:47
    but they can't control it.
  • 7:48 - 7:49
    So what about governments?
  • 7:50 - 7:53
    Many people believe
    that global regulation is our last hope
  • 7:53 - 7:56
    in terms of cleaning up
    our information ecosystem.
  • 7:56 - 7:59
    But what I see are lawmakers
    who are struggling to keep up to date
  • 7:59 - 8:01
    with the rapid changes in technology.
  • 8:01 - 8:03
    And worse, they're working in the dark,
  • 8:03 - 8:05
    because they don't have access to data
  • 8:05 - 8:08
    to understand what's happening
    on these platforms.
  • 8:08 - 8:11
    And anyway, which governments
    would we trust to do this?
  • 8:11 - 8:14
    We need a global response,
    not a national one.
  • 8:15 - 8:18
    So the missing link is us.
  • 8:18 - 8:21
    It's those people who use
    these technologies every day.
  • 8:21 - 8:26
    Can we design a new infrastructure
    to support quality information?
  • 8:26 - 8:28
    Well, I believe we can,
  • 8:28 - 8:31
    and I've got a few ideas about
    what we might be able to actually do.
  • 8:31 - 8:34
    So firstly, if we're serious
    about bringing the public into this,
  • 8:34 - 8:37
    can we take some inspiration
    from Wikipedia?
  • 8:37 - 8:38
    They've shown us what's possible.
  • 8:38 - 8:40
    Yes, it's not perfect,
  • 8:40 - 8:42
    but they've demonstrated
    that with the right structures,
  • 8:42 - 8:45
    with a global outlook
    and lots and lots of transparency,
  • 8:45 - 8:48
    you can build something
    that will earn the trust of most people.
  • 8:48 - 8:51
    Because we have to find a way
    to tap into the collective wisdom
  • 8:51 - 8:53
    and experience of all users.
  • 8:54 - 8:56
    This is particularly the case
    for women, people of color
  • 8:56 - 8:58
    and underrepresented groups.
  • 8:58 - 8:59
    Because guess what?
  • 8:59 - 9:01
    They are experts when it comes
    to hate and disinformation,
  • 9:02 - 9:05
    because they have been the targets
    of these campaigns for so long.
  • 9:05 - 9:07
    And over the years,
    they've been raising flags,
  • 9:07 - 9:09
    and they haven't been listened to.
  • 9:09 - 9:10
    This has got to change.
  • 9:11 - 9:15
    So could we build a Wikipedia for trust?
  • 9:15 - 9:19
    Could we find a way that users
    can actually provide insights?
  • 9:19 - 9:23
    They could offer insights around
    difficult content-moderation decisions.
  • 9:23 - 9:25
    They could provide feedback
  • 9:25 - 9:28
    when platforms decide
    they want to roll out new changes.
  • 9:28 - 9:32
    Second, people's experience
    of information is personalized.
  • 9:32 - 9:35
    My Facebook news feed
    is very different to yours.
  • 9:35 - 9:38
    Your YouTube recommendations
    are very different to mine.
  • 9:38 - 9:40
    That makes it impossible for us
    to actually examine
  • 9:40 - 9:42
    what information people are seeing.
  • 9:43 - 9:44
    So could we imagine
  • 9:44 - 9:49
    developing some kind of centralized
    open repository for anonymized data,
  • 9:49 - 9:52
    with privacy and ethical
    concerns built in?
  • 9:52 - 9:54
    Because imagine what we would learn
  • 9:54 - 9:57
    if we built out a global network
    of concerned citizens
  • 9:57 - 10:01
    who wanted to donate
    their social data to science.
  • 10:01 - 10:03
    Because we actually know very little
  • 10:03 - 10:06
    about the long-term consequences
    of hate and disinformation
  • 10:06 - 10:08
    on people's attitudes and behaviors.
  • 10:08 - 10:09
    And what we do know,
  • 10:09 - 10:12
    most of that has been
    carried out in the US,
  • 10:12 - 10:14
    despite the fact that
    this is a global problem.
  • 10:14 - 10:16
    We need to work on that, too.
  • 10:16 - 10:17
    And third,
  • 10:17 - 10:20
    can we find a way to connect the dots?
  • 10:20 - 10:23
    No one sector, let alone nonprofit,
    start-up or government,
  • 10:23 - 10:25
    is going to solve this.
  • 10:25 - 10:27
    But there are very smart people
    right around the world
  • 10:27 - 10:29
    working on these challenges,
  • 10:29 - 10:32
    from newsrooms, civil society,
    academia, activist groups.
  • 10:32 - 10:34
    And you can see some of them here.
  • 10:34 - 10:37
    Some are building out indicators
    of content credibility.
  • 10:37 - 10:38
    Others are fact-checking,
  • 10:38 - 10:42
    so that false claims, videos and images
    can be down-ranked by the platforms.
  • 10:42 - 10:44
    A nonprofit I helped
    to found, First Draft,
  • 10:44 - 10:47
    is working with normally competitive
    newsrooms around the world
  • 10:47 - 10:51
    to help them build out investigative,
    collaborative programs.
  • 10:51 - 10:54
    And Danny Hillis, a software architect,
  • 10:54 - 10:56
    is designing a new system
    called The Underlay,
  • 10:56 - 10:59
    which will be a record
    of all public statements of fact
  • 10:59 - 11:00
    connected to their sources,
  • 11:00 - 11:04
    so that people and algorithms
    can better judge what is credible.
  • 11:05 - 11:08
    And educators around the world
    are testing different techniques
  • 11:08 - 11:12
    for finding ways to make people
    critical of the content they consume.
  • 11:13 - 11:16
    All of these efforts are wonderful,
    but they're working in silos,
  • 11:16 - 11:18
    and many of them are woefully underfunded.
  • 11:19 - 11:21
    There are also hundreds
    of very smart people
  • 11:21 - 11:22
    working inside these companies,
  • 11:22 - 11:25
    but again, these efforts
    can feel disjointed,
  • 11:25 - 11:29
    because they're actually developing
    different solutions to the same problems.
  • 11:29 - 11:31
    How can we find a way
    to bring people together
  • 11:31 - 11:35
    in one physical location
    for days or weeks at a time,
  • 11:35 - 11:37
    so they can actually tackle
    these problems together
  • 11:37 - 11:39
    but from their different perspectives?
  • 11:39 - 11:40
    So can we do this?
  • 11:40 - 11:44
    Can we build out a coordinated,
    ambitious response,
  • 11:44 - 11:47
    one that matches the scale
    and the complexity of the problem?
  • 11:48 - 11:49
    I really think we can.
  • 11:49 - 11:52
    Together, let's rebuild
    our information commons.
  • 11:53 - 11:54
    Thank you.
  • 11:54 - 11:58
    (Applause)
Speaker: Claire Wardle
Duration: 12:33