
The Turing test: Can a computer pass for a human? - Alex Gendler

  • 0:07 - 0:09
    What is consciousness?
  • 0:09 - 0:12
    Can an artificial machine really think?
  • 0:12 - 0:15
    Does the mind just consist of neurons
    in the brain,
  • 0:15 - 0:19
    or is there some intangible spark
    at its core?
  • 0:19 - 0:21
    For many, these have been
    vital considerations
  • 0:21 - 0:24
    for the future of artificial intelligence.
  • 0:24 - 0:30
    But British computer scientist Alan Turing
    decided to disregard all these questions
  • 0:30 - 0:32
    in favor of a much simpler one:
  • 0:32 - 0:35
    can a computer talk like a human?
  • 0:35 - 0:39
    This question led to an idea for measuring
    artificial intelligence
  • 0:39 - 0:43
    that would famously come to be known
    as the Turing Test.
  • 0:43 - 0:47
    In a 1950 paper, "Computing Machinery
    and Intelligence,"
  • 0:47 - 0:50
    Turing proposed the following game.
  • 0:50 - 0:54
    A human judge has a text conversation
    with unseen players
  • 0:54 - 0:56
    and evaluates their responses.
  • 0:56 - 1:00
    To pass the test, a computer must
    be able to replace one of the players
  • 1:00 - 1:04
    without substantially
    changing the results.
  • 1:04 - 1:07
    In other words, a computer would be
    considered intelligent
  • 1:07 - 1:13
    if its conversation couldn't be easily
    distinguished from a human's.
  • 1:13 - 1:15
    Turing predicted that by the year 2000,
  • 1:15 - 1:21
    machines with 100 megabytes of memory
    would be able to easily pass his test.
  • 1:21 - 1:23
    But he may have jumped the gun.
  • 1:23 - 1:26
    Even though today's computers
    have far more memory than that,
  • 1:26 - 1:28
    few have succeeded,
  • 1:28 - 1:29
    and those that have done well
  • 1:29 - 1:33
    focused more on finding clever ways
    to fool judges
  • 1:33 - 1:36
    than using overwhelming computing power.
  • 1:36 - 1:39
    Though it was never subjected
    to a real test,
  • 1:39 - 1:44
    the first program with
    some claim to success was called Eliza.
  • 1:44 - 1:46
    With only a fairly short
    and simple script,
  • 1:46 - 1:50
    it managed to mislead many people
    by mimicking a psychologist,
  • 1:50 - 1:52
    encouraging them to talk more
  • 1:52 - 1:56
    and reflecting their own questions
    back at them.
  • 1:56 - 1:59
    Another early script, Parry,
    took the opposite approach
  • 1:59 - 2:02
    by imitating a paranoid schizophrenic
  • 2:02 - 2:08
    who kept steering the conversation
    back to his own preprogrammed obsessions.
  • 2:08 - 2:13
    Their success in fooling people
    highlighted one weakness of the test.
  • 2:13 - 2:17
    Humans regularly attribute intelligence
    to a whole range of things
  • 2:17 - 2:21
    that are not actually intelligent.
  • 2:21 - 2:24
    Nonetheless, annual competitions
    like the Loebner Prize,
  • 2:24 - 2:26
    have made the test more formal
  • 2:26 - 2:28
    with judges knowing ahead of time
  • 2:28 - 2:32
    that some of their conversation partners
    are machines.
  • 2:32 - 2:34
    But while the quality has improved,
  • 2:34 - 2:39
    many chatbot programmers have used
    similar strategies to Eliza and Parry.
  • 2:39 - 2:41
    1997's winner, Catherine,
  • 2:41 - 2:45
    could carry on amazingly focused
    and intelligent conversation,
  • 2:45 - 2:49
    but mostly if the judge wanted
    to talk about Bill Clinton.
  • 2:49 - 2:52
    And the more recent winner,
    Eugene Goostman,
  • 2:52 - 2:56
    was given the persona of a
    13-year-old Ukrainian boy,
  • 2:56 - 3:00
    so judges interpreted its non sequiturs
    and awkward grammar
  • 3:00 - 3:03
    as language and culture barriers.
  • 3:03 - 3:07
    Meanwhile, other programs like Cleverbot
    have taken a different approach
  • 3:07 - 3:12
    by statistically analyzing huge databases
    of real conversations
  • 3:12 - 3:14
    to determine the best responses.
  • 3:14 - 3:18
    Some also store memories
    of previous conversations
  • 3:18 - 3:21
    in order to improve over time.
  • 3:21 - 3:25
    But while Cleverbot's individual responses
    can sound incredibly human,
  • 3:25 - 3:27
    its lack of a consistent personality
  • 3:27 - 3:30
    and inability to deal
    with brand new topics
  • 3:30 - 3:33
    are a dead giveaway.
  • 3:33 - 3:36
    Who in Turing's day could have predicted
    that today's computers
  • 3:36 - 3:38
    would be able to pilot spacecraft,
  • 3:38 - 3:41
    perform delicate surgeries,
  • 3:41 - 3:43
    and solve massive equations,
  • 3:43 - 3:46
    but still struggle with
    the most basic small talk?
  • 3:46 - 3:50
    Human language turns out to be
    an amazingly complex phenomenon
  • 3:50 - 3:54
    that can't be captured by even
    the largest dictionary.
  • 3:54 - 3:58
    Chatbots can be baffled by simple pauses,
    like "umm..."
  • 3:58 - 4:00
    or questions with no correct answer.
  • 4:00 - 4:02
    And a simple conversational sentence,
  • 4:02 - 4:06
    like, "I took the juice out of the fridge
    and gave it to him,
  • 4:06 - 4:07
    but forgot to check the date,"
  • 4:07 - 4:13
    requires a wealth of underlying knowledge
    and intuition to parse.
  • 4:13 - 4:16
    It turns out that simulating
    a human conversation
  • 4:16 - 4:19
    takes more than just increasing
    memory and processing power,
  • 4:19 - 4:22
    and as we get closer to Turing's goal,
  • 4:22 - 4:26
    we may have to deal with all those big
    questions about consciousness after all.
Video Language:
English
Team:
closed TED
Project:
TED-Ed
Duration:
04:43
