
How we can protect truth in the age of misinformation

  • 0:01 - 0:07
    So, on April 23 of 2013,
  • 0:07 - 0:12
    the Associated Press
    put out the following tweet on Twitter.
  • 0:12 - 0:15
    It said, "Breaking news:
  • 0:15 - 0:17
    Two explosions at the White House
  • 0:17 - 0:20
    and Barack Obama has been injured."
  • 0:20 - 0:26
    This tweet was retweeted 4,000 times
    in less than five minutes,
  • 0:26 - 0:28
    and it went viral thereafter.
  • 0:29 - 0:33
    Now, this tweet wasn't real news
    put out by the Associated Press.
  • 0:33 - 0:36
    In fact, it was false news, or fake news,
  • 0:36 - 0:39
    that was propagated by Syrian hackers
  • 0:39 - 0:44
    who had infiltrated
    the Associated Press Twitter handle.
  • 0:44 - 0:48
    Their purpose was to disrupt society,
    but they disrupted much more.
  • 0:48 - 0:51
    Because automated trading algorithms
  • 0:51 - 0:54
    immediately seized
    on the sentiment of this tweet,
  • 0:54 - 0:57
    and began trading based on the potential
  • 0:57 - 1:01
    that the president of the United States
    had been injured or killed
  • 1:01 - 1:02
    in this explosion.
  • 1:02 - 1:04
    And as they started trading,
  • 1:04 - 1:08
    they immediately sent
    the stock market crashing,
  • 1:08 - 1:13
    wiping out 140 billion dollars
    in equity value in a single day.
  • 1:13 - 1:18
    Robert Mueller, special counsel
    prosecutor in the United States,
  • 1:18 - 1:21
    issued indictments
    against three Russian companies
  • 1:21 - 1:24
    and 13 Russian individuals
  • 1:24 - 1:27
    on a conspiracy to defraud
    the United States
  • 1:27 - 1:31
    by meddling in the 2016
    presidential election.
  • 1:32 - 1:35
    And the story this indictment tells
  • 1:35 - 1:39
    is the story of the Internet
    Research Agency,
  • 1:39 - 1:42
    the shadowy arm of the Kremlin
    on social media.
  • 1:43 - 1:46
    During the presidential election alone,
  • 1:46 - 1:48
    the Internet Research Agency's efforts
  • 1:48 - 1:53
    reached 126 million people
    on Facebook in the United States,
  • 1:53 - 1:56
    issued three million individual tweets
  • 1:56 - 2:00
    and 43 hours' worth of YouTube content.
  • 2:00 - 2:02
    All of which was fake --
  • 2:02 - 2:08
    misinformation designed to sow discord
    in the US presidential election.
  • 2:09 - 2:12
    A recent study by Oxford University
  • 2:12 - 2:15
    showed that in the recent
    Swedish elections,
  • 2:15 - 2:19
    one third of all of the information
    spreading on social media
  • 2:19 - 2:21
    about the election
  • 2:21 - 2:23
    was fake or misinformation.
  • 2:23 - 2:28
    In addition, these types
    of social-media misinformation campaigns
  • 2:28 - 2:32
    can spread what has been called
    "genocidal propaganda,"
  • 2:32 - 2:35
    for instance against
    the Rohingya in Burma,
  • 2:35 - 2:38
    or rumors triggering mob killings in India.
  • 2:38 - 2:39
    We studied fake news
  • 2:39 - 2:43
    and began studying it
    before it was a popular term.
  • 2:43 - 2:48
    And we recently published
    the largest-ever longitudinal study
  • 2:48 - 2:50
    of the spread of fake news online
  • 2:50 - 2:54
    on the cover of "Science"
    in March of this year.
  • 2:55 - 2:59
    We studied all of the verified
    true and false news stories
  • 2:59 - 3:00
    that ever spread on Twitter,
  • 3:00 - 3:04
    from its inception in 2006 to 2017.
  • 3:05 - 3:07
    And when we studied this information,
  • 3:07 - 3:10
    we studied verified news stories
  • 3:10 - 3:14
    that were verified by six
    independent fact-checking organizations.
  • 3:14 - 3:17
    So we knew which stories were true
  • 3:17 - 3:19
    and which stories were false.
  • 3:19 - 3:21
    We could measure their diffusion,
  • 3:21 - 3:22
    the speed of their diffusion,
  • 3:22 - 3:24
    the depth and breadth of their diffusion,
  • 3:24 - 3:29
    how many people became entangled
    in this information cascade and so on.
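
To make the cascade measures above concrete, here is a minimal sketch, in Python, of how size, depth, and breadth might be computed for a single retweet cascade. It illustrates the metrics being described, not the study's actual code; the record format and names are assumptions.

```python
# Illustrative sketch: size, depth, and maximum breadth of one retweet
# cascade, given records of the form (tweet_id, parent_id, timestamp).
# The data layout and names are hypothetical, not from the actual study.
from collections import defaultdict

def cascade_metrics(retweets, root_id):
    children = defaultdict(list)
    for tweet_id, parent_id, _ts in retweets:
        children[parent_id].append(tweet_id)

    size, max_depth = 0, 0
    breadth_at_depth = defaultdict(int)
    stack = [(root_id, 0)]                # (node, depth from the root tweet)
    while stack:
        node, depth = stack.pop()
        size += 1
        max_depth = max(max_depth, depth)
        breadth_at_depth[depth] += 1      # how many users at this depth
        stack.extend((child, depth + 1) for child in children[node])

    return {"size": size,                 # people entangled in the cascade
            "depth": max_depth,           # longest unbroken retweet chain
            "breadth": max(breadth_at_depth.values())}

print(cascade_metrics([("b", "a", 1), ("c", "a", 2), ("d", "b", 3)], "a"))
# {'size': 4, 'depth': 2, 'breadth': 2}
```
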
  • 3:29 - 3:30
    And what we did in this paper
  • 3:30 - 3:34
    was we compared the spread of true news
    to the spread of false news.
  • 3:34 - 3:36
    And here's what we found.
  • 3:36 - 3:40
    We found that false news
    diffused further, faster, deeper
  • 3:40 - 3:42
    and more broadly than the truth
  • 3:42 - 3:45
    in every category of information
    that we studied,
  • 3:45 - 3:47
    sometimes by an order of magnitude.
  • 3:48 - 3:51
    And in fact, false political news
    was the most viral.
  • 3:51 - 3:55
    It diffused further, faster,
    deeper and more broadly
  • 3:55 - 3:57
    than any other type of false news.
  • 3:57 - 3:59
    When we saw this,
  • 3:59 - 4:02
    we were at once worried but also curious.
  • 4:02 - 4:03
    Why?
  • 4:03 - 4:06
    Why does false news travel
    so much further, faster, deeper
  • 4:06 - 4:08
    and more broadly than the truth?
  • 4:08 - 4:11
    The first hypothesis
    that we came up with was,
  • 4:11 - 4:16
    "Well, maybe people who spread false news
    have more followers or follow more people,
  • 4:16 - 4:18
    or tweet more often,
  • 4:18 - 4:22
    or maybe they're more often 'verified'
    users of Twitter, with more credibility,
  • 4:22 - 4:24
    or maybe they've been on Twitter longer."
  • 4:24 - 4:26
    So we checked each one of these in turn.
  • 4:27 - 4:30
    And what we found
    was exactly the opposite.
  • 4:30 - 4:32
    False-news spreaders had fewer followers,
  • 4:32 - 4:34
    followed fewer people, were less active,
  • 4:34 - 4:36
    less often "verified"
  • 4:36 - 4:39
    and had been on Twitter
    for a shorter period of time.
  • 4:39 - 4:40
    And yet,
  • 4:40 - 4:45
    false news was 70 percent more likely
    to be retweeted than the truth,
  • 4:45 - 4:48
    controlling for all of these
    and many other factors.
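
"Controlling for all of these and many other factors" describes a regression-style adjustment. Here is a hedged sketch of that idea using a logistic regression; the column names and input file are hypothetical, and this is not the paper's actual model or estimate.

```python
# Illustrative only: regress a virality indicator on veracity while
# controlling for account-level covariates. All names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

stories = pd.read_csv("stories.csv")     # hypothetical: one row per story

model = smf.logit(
    "went_viral ~ is_false + followers + followees"
    " + tweets_per_day + is_verified + account_age_days",
    data=stories,
).fit()

print(model.summary())
# The exponentiated coefficient on is_false reads as an odds ratio for
# false versus true stories, net of the listed account features.
```
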
  • 4:48 - 4:51
    So we had to come up
    with other explanations.
  • 4:51 - 4:55
    And we devised what we called
    a "novelty hypothesis."
  • 4:55 - 4:57
    So if you read the literature,
  • 4:57 - 5:01
    it is well known that human attention
    is drawn to novelty,
  • 5:01 - 5:03
    things that are new in the environment.
  • 5:03 - 5:05
    And if you read the sociology literature,
  • 5:05 - 5:10
    you know that we like to share
    novel information.
  • 5:10 - 5:14
    It makes us seem like we have access
    to inside information,
  • 5:14 - 5:17
    and we gain in status
    by spreading this kind of information.
  • 5:18 - 5:24
    So what we did was we measured the novelty
    of an incoming true or false tweet,
  • 5:24 - 5:28
    compared to the corpus
    of what that individual had seen
  • 5:28 - 5:31
    in the 60 days prior on Twitter.
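
One simple way to operationalize that kind of novelty score is to compare the incoming tweet against everything the user saw in the prior 60 days and take a distance. The sketch below substitutes TF-IDF vectors and cosine similarity for the information-theoretic measures mentioned next; treat the method and all names as illustrative assumptions.

```python
# Minimal sketch: novelty of an incoming tweet relative to the corpus a
# user was exposed to in the prior 60 days, via TF-IDF and cosine
# similarity. A simplified stand-in, not the study's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def novelty_score(incoming_tweet, prior_tweets):
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(prior_tweets + [incoming_tweet])
    prior_vecs, incoming_vec = matrix[:-1], matrix[-1]
    # The closer the tweet is to anything already seen, the less novel it is.
    return 1.0 - float(cosine_similarity(incoming_vec, prior_vecs).max())

seen = ["markets rally after fed decision", "senate passes budget bill"]
print(novelty_score("explosions reported at the white house", seen))
```
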
  • 5:31 - 5:34
    But that wasn't enough,
    because we thought to ourselves,
  • 5:34 - 5:38
    "Well, maybe false news is more novel
    in an information-theoretic sense,
  • 5:38 - 5:41
    but maybe people
    don't perceive it as more novel."
  • 5:42 - 5:46
    So to understand people's
    perceptions of false news,
  • 5:46 - 5:49
    we looked at the information
    and the sentiment
  • 5:50 - 5:54
    contained in the replies
    to true and false tweets.
  • 5:54 - 5:55
    And what we found
  • 5:55 - 5:59
    was that across a bunch
    of different measures of sentiment --
  • 5:59 - 6:03
    surprise, disgust, fear, sadness,
  • 6:03 - 6:05
    anticipation, joy and trust --
  • 6:05 - 6:11
    false news exhibited significantly more
    surprise and disgust
  • 6:11 - 6:14
    in the replies to false tweets.
  • 6:14 - 6:18
    And true news exhibited
    significantly more anticipation,
  • 6:18 - 6:20
    joy and trust
  • 6:20 - 6:22
    in reply to true tweets.
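
The emotion categories just listed (surprise, disgust, fear, sadness, anticipation, joy, trust) are the kind scored by a word-emotion lexicon. A rough sketch of lexicon-based reply scoring follows; the toy lexicon and replies are placeholders, not real data or the study's actual tooling.

```python
# Rough sketch: score reply text against a word-emotion lexicon with
# categories such as surprise, disgust, joy, and trust. The lexicon
# dict and example replies are placeholders.
from collections import Counter
import re

# In practice this would be loaded from a full lexicon file.
EMOTION_LEXICON = {
    "explosion": {"surprise", "fear"},
    "injured": {"fear", "sadness"},
    "hoax": {"disgust"},
    "hope": {"anticipation", "joy", "trust"},
}

def emotion_profile(replies):
    counts = Counter()
    for reply in replies:
        for word in re.findall(r"[a-z']+", reply.lower()):
            for emotion in EMOTION_LEXICON.get(word, ()):
                counts[emotion] += 1
    return counts

print(emotion_profile(["What an explosion, is the president injured?",
                       "This is a hoax."]))
```
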
  • 6:22 - 6:26
    The surprise corroborates
    our novelty hypothesis.
  • 6:26 - 6:31
    This is new and surprising,
    and so we're more likely to share it.
  • 6:31 - 6:34
    At the same time,
    there was congressional testimony
  • 6:34 - 6:37
    in front of both houses of Congress
    in the United States,
  • 6:37 - 6:41
    looking at the role of bots
    in the spread of misinformation.
  • 6:41 - 6:42
    So we looked at this too --
  • 6:42 - 6:46
    we used multiple sophisticated
    bot-detection algorithms
  • 6:46 - 6:49
    to find the bots in our data
    and to pull them out.
  • 6:49 - 6:52
    So we pulled them out,
    we put them back in
  • 6:52 - 6:55
    and we compared what happened
    to our measurements.
  • 6:55 - 6:57
    And what we found was that, yes indeed,
  • 6:57 - 7:01
    bots were accelerating
    the spread of false news online,
  • 7:01 - 7:04
    but they were accelerating
    the spread of true news
  • 7:04 - 7:06
    at approximately the same rate.
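
The "pull them out, put them back in" step is, in effect, a robustness check: recompute the true-versus-false comparison with and without accounts flagged as bots and see whether the gap changes. A schematic sketch, with hypothetical column names and bot-score threshold:

```python
# Schematic sketch of a with/without-bots robustness check. DataFrame
# columns, the input file, and the 0.5 threshold are all hypothetical.
import pandas as pd

def false_vs_true_ratio(cascades):
    means = cascades.groupby("is_false")["cascade_size"].mean()
    return means[True] / means[False]    # >1 means false news spreads more

cascades = pd.read_csv("cascades_with_bot_scores.csv")
humans_only = cascades[cascades["bot_score"] < 0.5]

print("all accounts:", false_vs_true_ratio(cascades))
print("bots removed:", false_vs_true_ratio(humans_only))
# Similar ratios with and without bots would suggest bots accelerate true
# and false news alike, rather than driving the difference.
```
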
  • 7:06 - 7:09
    Which means bots are not responsible
  • 7:09 - 7:14
    for the differential diffusion
    of truth and falsity online.
  • 7:14 - 7:17
    We can't abdicate that responsibility,
  • 7:17 - 7:21
    because we, humans,
    are responsible for that spread.
  • 7:22 - 7:26
    Now, everything
    that I have told you so far,
  • 7:26 - 7:28
    unfortunately for all of us,
  • 7:28 - 7:29
    is the good news.
  • 7:31 - 7:35
    The reason is that
    it's about to get a whole lot worse.
  • 7:36 - 7:40
    And two specific technologies
    are going to make it worse.
  • 7:40 - 7:45
    We are going to see the rise
    of a tremendous wave of synthetic media.
  • 7:45 - 7:51
    Fake video and fake audio
    that are very convincing to the human eye and ear.
  • 7:51 - 7:54
    And this will be powered by two technologies.
  • 7:54 - 7:58
    The first of these is known
    as "generative adversarial networks."
  • 7:58 - 8:01
    This is a machine-learning model
    with two networks:
  • 8:01 - 8:02
    a discriminator,
  • 8:02 - 8:06
    whose job it is to determine
    whether something is true or false,
  • 8:06 - 8:08
    and a generator,
  • 8:08 - 8:11
    whose job it is to generate
    synthetic media.
  • 8:11 - 8:16
    So the synthetic generator
    generates synthetic video or audio,
  • 8:16 - 8:21
    and the discriminator tries to tell,
    "Is this real or is this fake?"
  • 8:21 - 8:24
    And in fact, it is the job
    of the generator
  • 8:24 - 8:28
    to maximize the likelihood
    that it will fool the discriminator
  • 8:28 - 8:32
    into thinking the synthetic
    video and audio that it is creating
  • 8:32 - 8:33
    is actually true.
  • 8:33 - 8:36
    Imagine a machine in a hyperloop,
  • 8:36 - 8:39
    trying to get better
    and better at fooling us.
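
To make the generator/discriminator loop concrete, here is a toy GAN training step in PyTorch. It is a generic sketch on random vectors, nothing close to a real synthetic-media model, and every name in it is illustrative.

```python
# Toy GAN sketch: a generator tries to fool a discriminator, and the
# discriminator tries to tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    batch = real_batch.size(0)
    # 1) Discriminator: label real samples 1 and generated samples 0.
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(D(real_batch), torch.ones(batch, 1)) + \
             loss_fn(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: maximize the chance the discriminator calls fakes real.
    g_loss = loss_fn(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.randn(32, data_dim)))   # stand-in "real" data
```

The adversarial pressure is the point: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones.
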
  • 8:39 - 8:42
    This, combined with the second technology,
  • 8:42 - 8:47
    which is essentially the democratization
    of artificial intelligence to the people,
  • 8:47 - 8:50
    the ability for anyone,
  • 8:50 - 8:52
    without any background
    in artificial intelligence
  • 8:52 - 8:54
    or machine learning,
  • 8:54 - 8:58
    to deploy these kinds of algorithms
    to generate synthetic media
  • 8:58 - 9:02
    makes it ultimately so much easier
    to create synthetic videos.
  • 9:02 - 9:07
    The White House issued
    a false, doctored video
  • 9:07 - 9:11
    of a journalist interacting with an intern
    who was trying to take his microphone.
  • 9:11 - 9:13
    They removed frames from this video
  • 9:13 - 9:17
    in order to make his actions
    seem more punchy.
  • 9:17 - 9:21
    And when videographers
    and stuntmen and women
  • 9:21 - 9:23
    were interviewed
    about this type of technique,
  • 9:23 - 9:27
    they said, "Yes, we use this
    in the movies all the time
  • 9:27 - 9:32
    to make our punches and kicks
    look more choppy and more aggressive."
  • 9:32 - 9:34
    They then put out this video
  • 9:34 - 9:37
    and partly used it as justification
  • 9:37 - 9:41
    to revoke the press pass
    of the reporter, Jim Acosta,
  • 9:41 - 9:42
    from the White House.
  • 9:42 - 9:47
    And CNN had to sue
    to have that press pass reinstated.
  • 9:49 - 9:54
    There are about five different paths
    that I can think of that we can follow
  • 9:54 - 9:58
    to try and address some
    of these very difficult problems today.
  • 9:58 - 10:00
    Each one of them has promise,
  • 10:00 - 10:03
    but each one of them
    has its own challenges.
  • 10:03 - 10:05
    The first one is labeling.
  • 10:05 - 10:07
    Think about it this way:
  • 10:07 - 10:10
    when you go to the grocery store
    to buy food to consume,
  • 10:10 - 10:12
    it's extensively labeled.
  • 10:12 - 10:14
    You know how many calories it has,
  • 10:14 - 10:16
    how much fat it contains --
  • 10:16 - 10:20
    and yet when we consume information,
    we have no labels whatsoever.
  • 10:20 - 10:22
    What is contained in this information?
  • 10:22 - 10:24
    Is the source credible?
  • 10:24 - 10:26
    Where is this information gathered from?
  • 10:26 - 10:28
    We have none of that information
  • 10:28 - 10:30
    when we are consuming information.
  • 10:30 - 10:33
    That is a potential avenue,
    but it comes with its challenges.
  • 10:33 - 10:40
    For instance, who gets to decide,
    in society, what's true and what's false?
  • 10:40 - 10:42
    Is it the governments?
  • 10:42 - 10:43
    Is it Facebook?
  • 10:44 - 10:47
    Is it an independent
    consortium of fact-checkers?
  • 10:47 - 10:50
    And who's checking the fact-checkers?
  • 10:50 - 10:54
    Another potential avenue is incentives.
  • 10:54 - 10:56
    We know that during
    the US presidential election
  • 10:56 - 11:00
    there was a wave of misinformation
    that came from Macedonia
  • 11:00 - 11:02
    that didn't have any political motive
  • 11:02 - 11:05
    but instead had an economic motive.
  • 11:05 - 11:07
    And this economic motive existed,
  • 11:07 - 11:10
    because false news travels
    so much farther, faster
  • 11:10 - 11:12
    and more deeply than the truth,
  • 11:13 - 11:17
    and you can earn advertising dollars
    as you garner eyeballs and attention
  • 11:17 - 11:19
    with this type of information.
  • 11:19 - 11:23
    But if we can depress the spread
    of this information,
  • 11:23 - 11:26
    perhaps it would reduce
    the economic incentive
  • 11:26 - 11:29
    to produce it at all in the first place.
  • 11:29 - 11:31
    Third, we can think about regulation,
  • 11:31 - 11:34
    and certainly, we should think
    about this option.
  • 11:34 - 11:35
    In the United States, currently,
  • 11:35 - 11:40
    we are exploring what might happen
    if Facebook and others are regulated.
  • 11:40 - 11:44
    While we should consider things
    like regulating political speech,
  • 11:44 - 11:47
    labeling the fact
    that it's political speech,
  • 11:47 - 11:51
    making sure foreign actors
    can't fund political speech,
  • 11:51 - 11:53
    it also has its own dangers.
  • 11:54 - 11:58
    For instance, Malaysia just instituted
    a six-year prison sentence
  • 11:58 - 12:01
    for anyone found spreading misinformation.
  • 12:02 - 12:04
    And in authoritarian regimes,
  • 12:04 - 12:08
    these kinds of policies can be used
    to suppress minority opinions
  • 12:08 - 12:12
    and to continue to extend repression.
  • 12:13 - 12:16
    The fourth possible option
    is transparency.
  • 12:17 - 12:21
    We want to know
    how Facebook's algorithms work.
  • 12:21 - 12:23
    How does the data
    combine with the algorithms
  • 12:23 - 12:26
    to produce the outcomes that we see?
  • 12:26 - 12:29
    We want them to open the kimono
  • 12:29 - 12:33
    and show us exactly the inner workings
    of how Facebook is working.
  • 12:33 - 12:36
    And if we want to know
    social media's effect on society,
  • 12:36 - 12:38
    we need scientists, researchers
  • 12:38 - 12:41
    and others to have access
    to this kind of information.
  • 12:41 - 12:43
    But at the same time,
  • 12:43 - 12:46
    we are asking Facebook
    to lock everything down,
  • 12:46 - 12:49
    to keep all of the data secure.
  • 12:49 - 12:52
    So, Facebook and the other
    social media platforms
  • 12:52 - 12:55
    are facing what I call
    a transparency paradox.
  • 12:55 - 12:58
    We are asking them, at the same time,
  • 12:58 - 13:03
    to be open and transparent
    and, simultaneously, secure.
  • 13:03 - 13:05
    This is a very difficult needle to thread,
  • 13:06 - 13:07
    but they will need to thread this needle
  • 13:07 - 13:11
    if we are to achieve the promise
    of social technologies
  • 13:11 - 13:13
    while avoiding their peril.
  • 13:13 - 13:18
    The final thing that we could think about
    is algorithms and machine learning.
  • 13:18 - 13:23
    Technology devised to root out
    and understand fake news, how it spreads,
  • 13:23 - 13:25
    and to try and dampen its flow.
  • 13:26 - 13:29
    Humans have to be in the loop
    of this technology,
  • 13:29 - 13:31
    because we can never escape
  • 13:31 - 13:35
    that underlying any technological
    solution or approach
  • 13:35 - 13:39
    is a fundamental ethical
    and philosophical question
  • 13:39 - 13:42
    about how we define truth and falsity,
  • 13:42 - 13:46
    to whom we give the power
    to define truth and falsity
  • 13:46 - 13:48
    and which opinions are legitimate,
  • 13:48 - 13:52
    which type of speech
    should be allowed and so on.
  • 13:52 - 13:54
    Technology is not a solution for that.
  • 13:54 - 13:58
    Ethics and philosophy
    is a solution for that.
  • 13:59 - 14:02
    Nearly every theory
    of human decision making,
  • 14:02 - 14:05
    human cooperation and human coordination
  • 14:05 - 14:09
    has some sense of the truth at its core.
  • 14:09 - 14:11
    But with the rise of fake news,
  • 14:11 - 14:13
    the rise of fake video,
  • 14:13 - 14:15
    the rise of fake audio,
  • 14:15 - 14:19
    we are teetering on the brink
    of the end of reality,
  • 14:19 - 14:23
    where we cannot tell
    what is real from what is fake.
  • 14:23 - 14:26
    And that's potentially
    incredibly dangerous.
  • 14:27 - 14:31
    We have to be vigilant
    in defending the truth
  • 14:31 - 14:32
    against misinformation.
  • 14:33 - 14:36
    With our technologies, with our policies
  • 14:36 - 14:38
    and, perhaps most importantly,
  • 14:38 - 14:42
    with our own individual responsibilities,
  • 14:42 - 14:45
    decisions, behaviors and actions.
  • 14:46 - 14:47
    Thank you very much.
  • 14:47 - 14:51
    (Applause)
Title:
How we can protect truth in the age of misinformation
Speaker:
Sinan Aral
Description:

Fake news can sway elections, tank economies and sow discord in everyday life. Data scientist Sinan Aral demystifies how and why it spreads so quickly -- citing one of the largest studies on misinformation -- and identifies five strategies to help us unweave the tangled web between true and false.

Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
15:03
