
Steven Pinker: Linguistics as a Window to Understanding the Brain

  • 0:12 - 0:17
    My name is Steve Pinker, and I’m Professor
    of Psychology at Harvard University.  And
  • 0:17 - 0:23
    today I’m going to speak to you about language.
    I’m actually not a linguist, but a
  • 0:23 - 0:28
    cognitive scientist.  I’m not so much interested
    in language as an object in its own right,
  • 0:28 - 0:32
    but as a window to the human mind.
    Language is one of the fundamental topics
  • 0:32 - 0:38
    in the human sciences.  It’s the trait
    that most conspicuously distinguishes humans
  • 0:38 - 0:45
    from other species, it’s essential to human
    cooperation; we accomplish amazing things
  • 0:45 - 0:52
    by sharing our knowledge or coordinating our
    actions by means of words.  It poses profound
  • 0:52 - 0:59
    scientific mysteries such as, how did language
    evolve in this particular species?  How does
  • 0:59 - 1:06
    the brain compute language? But also, language
    has many practical applications not surprisingly
  • 1:06 - 1:11
    given how central it is to human life.
    Language comes so naturally to us that we’re
  • 1:11 - 1:16
    apt to forget what a strange and miraculous
    gift it is.  But think about what you’re
  • 1:16 - 1:21
    doing for the next hour.   You’re going
    to be listening patiently as a guy makes noise
  • 1:21 - 1:26
    as he exhales.  Now, why would you do something
    like that?  It’s not that I can claim that
  • 1:26 - 1:32
    the sounds I’m going to make are particularly
    mellifluous, but rather I’ve coded information
  • 1:32 - 1:38
    into the exact sequences of hisses and hums
    and squeaks and pops that I’ll be making.
  • 1:38 - 1:45
    You have the ability to recover the information
    from that stream of noises allowing us to
  • 1:45 - 1:50
    share ideas.
    Now, the ideas we are going to share are about
  • 1:50 - 1:55
    this talent, language, but with a slightly
    different sequence of hisses and squeaks,
  • 1:55 - 2:01
    I could cause you to be thinking thoughts
    about a vast array of topics, anything from
  • 2:01 - 2:07
    the latest developments in your favorite reality
    show to theories of the origin of the universe.
  • 2:07 - 2:14
    This is what I think of as the miracle of
    language, its vast expressive power, and it’s
  • 2:14 - 2:19
    a phenomenon that still fills me with wonder,
    even after having studied language for 35
  • 2:19 - 2:32
    years.  And it is the prime phenomenon that
    the science of language aims to explain.  
  • 2:32 - 2:35
    Not surprisingly, language is central to human
    life.  The Biblical story of the Tower of
  • 2:35 - 2:40
    Babel reminds us that humans accomplish great
    things because they can exchange information
  • 2:40 - 2:46
    about their knowledge and intentions via the
    medium of language.  Language, moreover,
  • 2:46 - 2:53
    is not a peculiarity of one culture, but it
    has been found in every society ever studied
  • 2:53 - 2:59
    by anthropologists.
    There are some 6,000 languages spoken on Earth,
  • 2:59 - 3:05
    all of them complex, and no one has ever discovered
    a human society that lacks complex language.
  • 3:05 - 3:11
    For this and other reasons, Charles Darwin
    wrote, “Man has an instinctive tendency
  • 3:11 - 3:17
    to speak as we see in the babble of our young
    children while no child has an instinctive
  • 3:17 - 3:21
    tendency to bake, brew or write.”
  • 3:21 - 3:25
    Language is an intricate talent and it’s
    not surprising that the science of language
  • 3:25 - 3:30
    should be a complex discipline.
    It includes the study of how language itself
  • 3:30 - 3:38
    works including:  grammar, the assembly of
    words, phrases and sentences; phonology, the
  • 3:38 - 3:45
    study of sound; semantics, the study of meaning;
    and pragmatics, the study of the use of language
  • 3:45 - 3:49
    in conversation.
    Scientists interested in language also
  • 3:49 - 3:55
    study how it is processed in real time, a
    field called psycholinguistics; how it is
  • 3:55 - 4:00
    acquired by children, the study of language
    acquisition.  And how it is computed in the
  • 4:00 - 4:02
    brain, the discipline called neurolinguistics.
  • 4:02 - 4:12
    Now, before we begin, it’s important
    not to confuse language with three other things
  • 4:12 - 4:18
    that are closely related to language.  One
    of them is written language.  Unlike spoken
  • 4:18 - 4:24
    language, which is found in all human cultures
    throughout history, writing was invented a
  • 4:24 - 4:30
    very small number of times in human history,
    about 5,000 years ago.  
  • 4:30 - 4:35
    And alphabetic writing where each mark on
    the page stands for a vowel or a consonant,
  • 4:35 - 4:40
    appears to have been invented only once in
    all of human history by the Canaanites about
  • 4:40 - 4:48
    3,700 years ago.  And as Darwin pointed out,
    children have no instinctive tendency to write,
  • 4:48 - 4:52
    but have to learn it through instruction
    and schooling.
  • 4:52 - 4:58
    A second thing not to confuse language with
    is proper grammar.  Linguists distinguish
  • 4:58 - 5:05
    between descriptive grammar - the rules that
    characterize how people do speak - and prescriptive
  • 5:05 - 5:11
    grammar - rules that characterize how people
    ought to speak if they are writing careful
  • 5:11 - 5:16
    written prose.  
    A dirty secret from linguistics is that not
  • 5:16 - 5:22
    only are these not the same kinds of rules,
    but many of the prescriptive rules of language
  • 5:22 - 5:28
    make no sense whatsoever.  Take one of the
    most famous of these rules, the rule not to
  • 5:28 - 5:32
    split infinitives.  
    According to this rule, Captain Kirk made
  • 5:32 - 5:37
    a grievous grammatical error when he said
    that the mission of the Enterprise was “to
  • 5:37 - 5:42
    boldly go where no man has gone before.”
    He should have said, according to these
  • 5:42 - 5:49
    editors, “to go boldly where no man has
    gone before,” which immediately clashes
  • 5:49 - 5:55
    with the rhythm and structure of ordinary
    English.  In fact, this prescriptive rule
  • 5:55 - 6:01
    was based on a clumsy analogy with Latin where
    you can’t split an infinitive because it’s
  • 6:01 - 6:06
    a single word, as in “facere,” to do.  Julius
    Caesar couldn’t have split an infinitive
  • 6:06 - 6:13
    if he wanted to.  That rule was translated
    literally over into English where it really
  • 6:13 - 6:17
    should not apply.  
    Another famous prescriptive rule is that,
  • 6:17 - 6:23
    one should never use a so-called double negative.
    Mick Jagger should not have sung, “I can’t
  • 6:23 - 6:28
    get no satisfaction,” he really should have
    sung, “I can’t get any satisfaction.”
  • 6:28 - 6:35
    Now, this is often promoted as a rule of
    logical speaking, but “can’t” and “any”
  • 6:35 - 6:40
    is just as much of a double negative as “can’t”
    and “no.”  The only reason that “can’t
  • 6:40 - 6:45
    get any satisfaction” is deemed correct
    and “can’t get no satisfaction” is deemed
  • 6:45 - 6:50
    ungrammatical is that the dialect of English
    spoken in the south of England in the 17th
  • 6:50 - 6:54
    century used “can’t” “any” rather
    than “can’t” “no.”  
  • 6:54 - 6:57
    If the capital of England had been in the
    north of the country instead of the south
  • 6:57 - 7:01
    of the country, then “can’t get no,”
    would have been correct and “can’t get
  • 7:01 - 7:03
    any,” would have been deemed incorrect.
  • 7:03 - 7:08
    There’s nothing special about a language
    that happens to be chosen as the standard
  • 7:08 - 7:15
    for a given country.  In fact, if you compare
    the rules of languages and so-called dialects,
  • 7:15 - 7:21
    each one is complex in different ways.  Take
    for example, African-American vernacular English,
  • 7:21 - 7:28
    also called Black English or Ebonics.  There
    is a construction in African-American English where
  • 7:28 - 7:33
    you can say, “He be workin,” which is
    not an error or bastardization or a corruption
  • 7:33 - 7:39
    of Standard English, but in fact conveys a
    subtle distinction, one that’s different
  • 7:39 - 7:46
    than simply, “He workin.”  “He be workin,”
    means that he is employed; he has a job, “He
  • 7:46 - 7:51
    workin,” means that he happens to be working
    at the moment that you and I are speaking.
  • 7:51 - 7:53

    Now, this is a tense difference that can be
  • 7:53 - 7:59
    made in African-American English that is not
    made in Standard English, one of many examples
  • 7:59 - 8:06
    in which the dialects have their own set of
    rules that is just as sophisticated and complex
  • 8:06 - 8:11
    as the one in the standard language.  
    Now, a third thing, not to confuse language
  • 8:11 - 8:17
    with is thought.  Many people report that
    they think in language, but psychologists
  • 8:17 - 8:23
    have shown that there are many kinds of thought
    that don’t actually take place in the form
  • 8:23 - 8:24
    of sentences.  
  • 8:24 - 8:26
    (1.) Babies (and other mammals) communicate
    without speech
  • 8:26 - 8:32
    For example, we know from ingenious experiments
    that non-linguistic creatures, such as babies
  • 8:32 - 8:39
    before they’ve learned to speak, or other
    kinds of animals, have sophisticated kinds
  • 8:39 - 8:45
    of cognition, they register cause and effect
    and objects and the intentions of other people,
  • 8:45 - 8:48
    all without the benefit of speech.  
    (2.) Types of thinking go on without language--visual
  • 8:48 - 8:50
    thinking
    We also know that even in creatures that do
  • 8:50 - 8:57
    have language, namely adults, a lot of thinking
    goes on in forms other than language, for
  • 8:57 - 9:03
    example, visual imagery.  If you look at
    the top two three-dimensional figures in this
  • 9:03 - 9:09
    display, and I would ask you, do they have
    the same shape or a different shape?  People
  • 9:09 - 9:15
    don’t solve that problem by describing those
    strings of cubes in words, but rather by taking
  • 9:15 - 9:22
    an image of one and mentally rotating it into
    the orientation of the other, a form of non-linguistic
  • 9:22 - 9:22
    thinking.  
    (3.) We use tacit knowledge to understand
  • 9:22 - 9:25
    language and remember the gist
    For that matter, even when you understand
  • 9:25 - 9:31
    language, what you come away with is not in
    itself the actual language that you hear.
  • 9:31 - 9:38
    Another important finding in cognitive psychology
    is that long-term memory for verbal material
  • 9:38 - 9:45
    records the gist or the meaning or the content
    of the words rather than the exact form of
  • 9:45 - 9:48
    the words.  
    For example, I like to think that you retain
  • 9:48 - 9:54
    some memory of what I have been saying for
    the last 10 minutes.  But I suspect that
  • 9:54 - 10:00
    if I were to ask you to reproduce any sentence
    that I have uttered, you would be incapable
  • 10:00 - 10:07
    of doing so.  What sticks in memory is far
    more abstract than the actual sentences, something
  • 10:07 - 10:12
    that we can call meaning or content or semantics.
  • 10:12 - 10:18
    In fact, even when it comes to understanding
    a sentence, the actual words are the tip of
  • 10:18 - 10:25
    a vast iceberg of a very rapid, unconscious,
    non-linguistic processing that’s necessary
  • 10:25 - 10:30
    even to make sense of the language itself.
    And I’ll illustrate this with a classic
  • 10:30 - 10:37
    bit of poetry, the lines from the shampoo
    bottle.  “Wet hair, lather, rinse, repeat.”
  • 10:37 - 10:40

    Now, in understanding that very simple snatch
  • 10:40 - 10:45
    of language, you have to know, for example,
    that when you repeat, you don’t wet your
  • 10:45 - 10:49
    hair a second time because it’s already wet,
    and when you get to the end of it and you
  • 10:49 - 10:54
    see “repeat,” you don’t keep repeating
    over and over in an infinite loop; “repeat” here
  • 10:54 - 11:00
    means, “repeat just once.”  Now this
    tacit knowledge of what the writers of
  • 11:00 - 11:06
    language had in mind is necessary to understand
    language, but it, itself, is not language.
  • 11:06 - 11:08

    (4.) If language is thinking, then where did
  • 11:08 - 11:11
    it come from?
    Finally, if language were really thought,
  • 11:11 - 11:15
    it would raise the question of where language
    would come from if we were incapable of thinking
  • 11:15 - 11:21
    without language.  After all, the English
    language was not designed by some committee
  • 11:21 - 11:27
    of Martians who came down to Earth and gave
    it to us.  Rather, language is a grassroots
  • 11:27 - 11:33
    phenomenon.  It’s the original wiki, which
    aggregates the contributions of hundreds of
  • 11:33 - 11:40
    thousands of people who invent jargon and
    slang and new constructions, some of which
  • 11:40 - 11:46
    get absorbed into the language as people
    seek out new ways of expressing their thoughts,
  • 11:46 - 11:51
    and that’s how we get a language in the
    first place.  
  • 11:51 - 11:59
    Now, this is not to deny that language can affect
    thought and linguistics has long been interested
  • 11:59 - 12:06
    in what has sometimes been called the linguistic
    relativity hypothesis or the Sapir-Whorf hypothesis,
  • 12:06 - 12:10
    named after the two
    linguists who first formulated it, namely the idea
  • 12:10 - 12:15
    that language can affect thought.  There’s
    a lot of controversy over the status of the
  • 12:15 - 12:21
    linguistic relativity hypothesis, but no one
    believes that language is the same thing as
  • 12:21 - 12:27
    thought and that all of our mental life consists
    of reciting sentences.  
  • 12:27 - 12:33
    Now that we have set aside what language is
    not, let’s turn to what language is, beginning
  • 12:33 - 12:39
    with the question of how language works.
    In a nutshell, you can divide language into
  • 12:39 - 12:46
    three topics.  
    There are the words that are the basic components
  • 12:46 - 12:50
    of sentences that are stored in a part of
    long-term memory that we can call the mental
  • 12:50 - 12:57
    lexicon or the mental dictionary.  There
    are rules, the recipes or algorithms that
  • 12:57 - 13:05
    we use to assemble bits of language into more
    complex stretches of language including syntax,
  • 13:05 - 13:11
    the rules that allow us to assemble words
    into phrases and sentences; Morphology, the
  • 13:11 - 13:17
    rules that allow us to assemble bits of words,
    like prefixes and suffixes into complex words;
  • 13:17 - 13:24
    Phonology, the rules that allow us to combine
    vowels and consonants into the smallest words.
  • 13:24 - 13:31
    And then all of this knowledge of language
    has to connect to the world through interfaces
  • 13:31 - 13:36
    that allow us to understand language coming
    from others and to produce language that others
  • 13:36 - 13:39
    can understand: the language interfaces.
  • 13:39 - 13:45
    Let’s start with words.
    The basic principle of a word was identified
  • 13:45 - 13:52
    by the Swiss linguist, Ferdinand de Saussure,
    more than 100 years ago when he called attention
  • 13:52 - 13:58
    to the arbitrariness of the sign.  Take for
    example the word, “duck.”  The word,
  • 13:58 - 14:03
    “duck” doesn’t look like a duck or walk
    like a duck or quack like a duck, but I can
  • 14:03 - 14:07
    use it to get you to think the thought of
    a duck because all of us at some point in
  • 14:07 - 14:15
    our lives have memorized that brute force
    association between that sound and that meaning,
  • 14:15 - 14:20
    which means that it has to be stored in memory
    in some format.  In a very simplified form,
  • 14:20 - 14:24
    an entry in the mental lexicon might look
    something like this.  There is a symbol for
  • 14:24 - 14:32
    the word itself, there is some kind of specification
    of its sound and there’s some kind of specification
  • 14:32 - 14:37
    of its meaning.  
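    A minimal sketch of such an entry in Python follows; the notation and field
    names are invented for illustration, not a claim about how the brain actually
    stores it.

```python
# A toy mental-lexicon entry: an arbitrary, memorized pairing of a word
# symbol with a specification of its sound and of its meaning.
lexicon = {
    "duck": {
        "sound": "/dʌk/",                        # nothing duck-like about it
        "meaning": "a kind of water bird that quacks",
    },
}
print(lexicon["duck"]["sound"], "->", lexicon["duck"]["meaning"])
```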
    Now, one of the remarkable facts about the
  • 14:37 - 14:44
    mental lexicon is how capacious it is.  Using
    dictionary sampling techniques where you say,
  • 14:44 - 14:49
    take the top left-hand word on every 20th
    page of the dictionary, give it to people
  • 14:49 - 14:55
    in a multiple choice test, correct for guessing,
    and multiply by the size of the dictionary,
  • 14:55 - 15:00
    you can estimate that a typical high school
    graduate has a vocabulary of around 60,000
  • 15:00 - 15:07
    words, which works out to a rate of learning
    of about one new word every two hours starting
  • 15:07 - 15:13
    from the age of one.  When you think that
    every one of these words is as arbitrary as a
  • 15:13 - 15:19
    telephone number or a date in history, you’re
    reminded of the remarkable capacity of
  • 15:19 - 15:24
    human long-term memory to store the meanings
    and sounds of words.  
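    A back-of-the-envelope check of that estimate follows; the age span and the
    count of waking hours are illustrative assumptions, not Pinker’s exact figures.

```python
vocabulary = 60_000              # words known by a typical high-school graduate
years = 17                       # learning from roughly age 1 to age 18
waking_hours = years * 365 * 16  # assume about 16 waking hours per day

print(round(waking_hours / vocabulary, 1), "waking hours per new word")
# -> roughly 1.7, on the order of one new word every couple of hours
```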
  • 15:24 - 15:32
    But of course, we don’t just blurt out individual
    words, we combine them into phrases and sentences.
  • 15:32 - 15:39
    And that brings up the second major component
    of language; namely, grammar.  
  • 15:39 - 15:46
    Now the modern study of grammar is inseparable
    from the contributions of one linguist, the
  • 15:46 - 15:52
    famous scholar, Noam Chomsky, who set the
    agenda for the field of linguistics for the
  • 15:52 - 15:57
    last 60 years.
    To begin with, Chomsky noted that the main
  • 15:57 - 16:03
    puzzle that we have to explain in understanding
    language is creativity or as linguists often
  • 16:03 - 16:09
    call it productivity, the ability to produce
    and understand new sentences.  
  • 16:09 - 16:16
    Except for a small number of clichéd formulas,
    just about any sentence that you produce or
  • 16:16 - 16:23
    understand is a brand new combination produced
    for the first time perhaps in your life, perhaps
  • 16:23 - 16:29
    even in the history of the species.  We have
    to explain how people are capable of doing
  • 16:29 - 16:35
    it.  It shows that when we know a language,
    we haven’t just memorized a very long list
  • 16:35 - 16:43
    of sentences, but rather have internalized
    a grammar or algorithm or recipe for combining
  • 16:43 - 16:49
    elements into brand new assemblies.  For
    that reason, Chomsky has insisted that linguistics
  • 16:49 - 16:56
    is really properly a branch of psychology
    and is a window into the human mind.
  • 16:56 - 17:02
    A second insight is that languages have a
    syntax which can’t be identified with their
  • 17:02 - 17:07
    meaning.  Now, the only quotation that I
    know of by a linguist that has actually made
  • 17:07 - 17:13
    it into Bartlett’s Familiar Quotations,
    is the following sentence from Chomsky, from
  • 17:13 - 17:20
    1956, “Colorless green ideas sleep furiously.”
    Well, what’s the point of that sentence?
  • 17:20 - 17:25
    The point is that it is very close to meaningless.
    On the other hand, any English speaker can
  • 17:25 - 17:31
    instantly recognize that it conforms to the
    patterns of English syntax.  Compare, for
  • 17:31 - 17:38
    example, “furiously sleep ideas dream colorless,”
    which is also meaningless, but we perceive
  • 17:38 - 17:44
    as a word salad.  
    A third insight is that syntax doesn’t consist
  • 17:44 - 17:51
    of a string of word by word associations as
    in stimulus response theories in psychology
  • 17:51 - 17:56
    where producing a word is a response which
    you then hear and it becomes a stimulus to
  • 17:56 - 18:02
    producing the next word, and so on.  Again,
    the sentence, “colorless green ideas sleep
  • 18:02 - 18:08
    furiously,” can help make this point.  Because
    if you look at the word by word transition
  • 18:08 - 18:14
    probabilities in that sentence, for example,
    colorless and then green; how often have you
  • 18:14 - 18:21
    heard colorless and green in succession?  Probably
    zero times.  Green and ideas, those two words
  • 18:21 - 18:28
    never occur together, ideas and sleep, sleep
    and furiously.  Every one of the transition
  • 18:28 - 18:34
    probabilities is very close to zero, nonetheless,
    the sentence as a whole can be perceived as
  • 18:34 - 18:39
    a well-formed English sentence.  
    Language in general has long distance dependencies.
  • 18:39 - 18:45
    The word in one position in a sentence can
    dictate the choice of the word several positions
  • 18:45 - 18:51
    downstream.  For example, if you begin a
    sentence with “either,” somewhere down
  • 18:51 - 18:56
    the line, there has to be an “or.”  If
    you have an “if,” generally, you expect
  • 18:56 - 19:00
    somewhere down the line there to be a “then.”
    There’s a story about a child who says
  • 19:00 - 19:05
    to his father, “Daddy, why did you bring
    that book that I don’t want to be read to
  • 19:05 - 19:12
    out of, up for?”  Where you have a set
    of nested or embedded long distance dependencies.
  • 19:12 - 19:16

    Indeed, one of the applications of linguistics
  • 19:16 - 19:25
    to the study of good prose style is that sentences
    can be rendered difficult to understand if
  • 19:25 - 19:30
    they have too many long distance dependencies
    because that could put a strain on the short-term
  • 19:30 - 19:36
    memory of the reader or listener while trying
    to understand them.  
  • 19:36 - 19:41
    Rather than a set of word by word associations,
    sentences are assembled in a hierarchical
  • 19:41 - 19:46
    structure that looks like an upside down tree.
    Let me give you an example of how that works
  • 19:46 - 19:51
    in the case of English.  One of the basic
    rules of English is that a sentence consists
  • 19:51 - 19:57
    of a noun phrase, the subject, followed by
    a verb phrase, the predicate.
  • 19:57 - 20:04
    A second rule in turn expands the verb phrase.
    A verb phrase consists of a verb followed
  • 20:04 - 20:10
    by a noun phrase, the object, followed by
    a sentence, the complement as, “I told him
  • 20:10 - 20:12
    that it was sunny outside.”  
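    Those two rules can be sketched as a tiny random sentence generator; the
    grammar and vocabulary below are simplified inventions, but the recursion - a
    sentence inside a verb phrase inside a sentence - is the point.

```python
import random

# Toy phrase-structure rules:  S -> NP VP,  VP -> V NP  or  V NP "that" S
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["I"], ["him"], ["the weather"]],
    "VP": [["V", "NP"], ["V", "NP", "that", "S"]],
    "V":  [["told"], ["saw"]],
}

def expand(symbol):
    """Recursively rewrite a symbol until only words remain."""
    if symbol not in GRAMMAR:
        return [symbol]                       # a terminal word
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(expand(part))
    return words

print(" ".join(expand("S")))    # e.g. "I told him that the weather saw him"
```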
  • 20:12 - 20:26
    Now, why do linguists insist that language
    must be composed out of phrase structure
  • 20:26 - 20:27
    rules?  
    (1.) Rules allow for open-ended creativity
  • 20:27 - 20:32
    Well for one thing, that helps explain
    the main phenomenon that we want to explain,
  • 20:32 - 20:35
    namely the open-ended creativity of language.
  • 20:35 - 20:36
    (2.) Rules allow for expression of unfamiliar
    meaning
  • 20:36 - 20:42
    It allows us to express unfamiliar meanings.
    There’s a cliché in journalism for example,
  • 20:42 - 20:47
    that when a dog bites a man, that isn’t
    news, but when a man bites a dog, that is
  • 20:47 - 20:55
    news.  The beauty of grammar is that it allows
    us to convey news by assembling familiar
  • 20:55 - 21:02
    words into brand new combinations.  Also, because
    of the way phrase structure rules work, they
  • 21:02 - 21:05
    produce a vast number of possible combinations.
  • 21:05 - 21:06
    (3.) Rules allow for production of vast numbers
    of combinations
  • 21:06 - 21:10
    Moreover, the number of different thoughts
    that we can express through the combinatorial
  • 21:10 - 21:15
    power of grammar is not just humongous, but
    in a technical sense, it’s infinite.  Now
  • 21:15 - 21:20
    of course, no one lives an infinite number
    of years, and therefore can show off their
  • 21:20 - 21:25
    ability to understand an infinite number of
    sentences, but you can make the point in the
  • 21:25 - 21:31
    same way that a mathematician can say that
    someone who understands the rules of arithmetic
  • 21:31 - 21:35
    knows that there are an infinite number of
    numbers, namely if anyone ever claimed to
  • 21:35 - 21:40
    have found the largest one, you can always
    come up with one that’s even bigger by adding
  • 21:40 - 21:45
    a one to it.  And you can do the same thing
    with language.  
  • 21:45 - 21:50
    Let me illustrate it in the following way.
    As a matter of fact, there has been a claim
  • 21:50 - 21:53
    that there is a world’s longest sentence.
  • 21:53 - 21:57
    Who would make such a claim?  Well, who else?
    The Guinness Book of World Records.  You
  • 21:57 - 22:03
    can look it up.  There is an entry for the
    World’s Longest Sentence.  It is 1,300
  • 22:03 - 22:08
    words long.  And it comes from a novel by
    William Faulkner.  Now I won’t read all
  • 22:08 - 22:12
    1,300 words, but I’ll just tell you how
    it begins.  
  • 22:12 - 22:17
    “They both bore it as though in deliberate
    flagellant exaltation…” and it runs on
  • 22:17 - 22:20
    from there.
    But I’m here to tell you that in fact, this
  • 22:20 - 22:25
    is not the world’s longest sentence.  And
    I’ve been tempted to obtain immortality
  • 22:25 - 22:30
    in Guinness by submitting the following record
    breaker.  "Faulkner wrote, they both bore
  • 22:30 - 22:36
    it as though in deliberate flagellant exaltation.”
    But sadly, this would not be immortality
  • 22:36 - 22:43
    after all but only the proverbial 15 minutes
    of fame because based on what you now know,
  • 22:43 - 22:48
    you could submit a record breaker for the
    record breaker namely, "Guinness noted that
  • 22:48 - 22:54
    Faulkner wrote" or "Pinker mentioned that
    Guinness noted that Faulkner wrote", or "who
  • 22:54 - 23:09
    cares that Pinker mentioned that Guinness
    noted that Faulkner wrote…"  
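    The same record-breaking recipe takes only a few lines of code; this sketch
    simply wraps a sentence in ever-longer reporting frames, which is why no
    sentence can be the longest.

```python
# Each pass embeds the previous sentence inside a longer one.
sentence = "Faulkner wrote: they both bore it ..."
for frame in ["Guinness noted that", "Pinker mentioned that", "Who cares that"]:
    sentence = frame + " " + sentence
    print(len(sentence.split()), "words:", sentence)
```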
  • 23:09 - 23:14
    Take for example, the following wonderfully
    ambiguous sentence that appeared in TV Guide.
  • 23:14 - 23:19
    “On tonight’s program, Conan will discuss
    sex with Dr. Ruth.”  
  • 23:19 - 23:24
    Now this has a perfectly innocent meaning
    in which the verb, “discuss” involves
  • 23:24 - 23:30
    two things, namely the topic of discussion,
    “sex” and the person with who it’s being
  • 23:30 - 23:36
    discussed, in this case, with Dr. Ruth.  But
    it has a somewhat naughtier meaning if you
  • 23:36 - 23:40
    rearrange the words into phrases according
    to a different structure in which case “sex
  • 23:40 - 23:47
    with Dr. Ruth” is the topic of conversation,
    and that’s what’s being discussed.  
  • 23:47 - 23:51
    Now, phrase structure not only can account
    for our ability to produce so many sentences,
  • 23:51 - 23:57
    but it’s also necessary for us to understand
    what they mean.  The geometry of branches
  • 23:57 - 24:03
    in a phrase structure is essential to figuring
    out who did what to whom.
  • 24:03 - 24:08
    Another important contribution of Chomsky
    to the science of language is the focus on
  • 24:08 - 24:18
    language acquisition by children. Now, children
    can’t just memorize sentences because knowledge
  • 24:18 - 24:23
    of language isn’t just one long list of
    memorized sentences, but somehow they must
  • 24:23 - 24:31
    distill out or abstract out the rules that
    go into assembling sentences based on what
  • 24:31 - 24:36
    they hear coming out of their parents’ mouths
    when they are little.  And the talent of
  • 24:36 - 24:43
    using rules to produce combinations is in
    evidence from the moment that kids begin to
  • 24:43 - 24:48
    speak.  
    Children create sentences unheard from adults
  • 24:48 - 24:53
    At the two-word stage, which you typically
    see in children who are 18 months or a bit
  • 24:53 - 24:59
    older, kids are producing the smallest sentences
    that deserve to be counted as sentences, namely
  • 24:59 - 25:03
    two words long.  But already it’s clear
    that they are putting them together using
  • 25:03 - 25:10
    rules in their own mind.  To take an example,
    a child might say, “more outside,” meaning,
  • 25:10 - 25:15
    take me outside or let me stay outside.
    Now, adults don’t say, “more outside.”
  • 25:15 - 25:21
    So it’s not a phrase that the child simply
    memorized by rote, but it shows that already
  • 25:21 - 25:26
    children are using these rules to put together
    new combinations.  
  • 25:26 - 25:33
    Another example, a child having jam washed
    from his fingers said to his mother 'all gone
  • 25:33 - 25:40
    sticky'. Again, not a phrase that the child
    could ever have copied from a parent, but
  • 25:40 - 25:43
    one that shows the child producing new combinations.
  • 25:43 - 25:51
    Past tense rule
    An easy way of showing that children assimilate
  • 25:51 - 25:57
    rules of grammar unconsciously from the moment
    they begin to speak, is the use of the past
  • 25:57 - 26:01
    tense rule.
    For example, children go through a long stage
  • 26:01 - 26:06
    in which they make errors like, “We holded
    the baby rabbits” or “He teared the paper
  • 26:06 - 26:12
    and then he sticked it.”  Cases in which
    they overgeneralize the regular rule of forming
  • 26:12 - 26:17
    the past tense - add ‘ed’ - to irregular
    verbs like “hold,” “stick” or “tear.”
  • 26:17 - 26:21
    And it’s easy to
    get children to flaunt this ability to apply
  • 26:21 - 26:28
    rules productively in a laboratory demonstration
    called the Wug Test.  You bring a kid into
  • 26:28 - 26:34
    a lab.  You show them a picture of a little
    bird and you say, “This is a wug.”  And
  • 26:34 - 26:37
    you show them another picture and you say,
    “Well, now there are two of them.  There
  • 26:37 - 26:42
    are two…” and children will fill in the gap
    by saying “wugs.”  Again, a form they
  • 26:42 - 26:48
    could not have memorized because it’s invented
    for the experiment, but it shows that they
  • 26:48 - 26:53
    have productive mastery of the regular plural
    rule in English.  
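    A rough sketch of the productive rule the Wug Test probes follows. Real
    phonology keys on the stem’s final sound rather than its spelling, so this
    spelling-based version is only an approximation.

```python
def regular_plural(noun):
    """Apply the regular English plural, even to a brand-new stem."""
    if noun.endswith(("s", "x", "z", "ch", "sh")):
        return noun + "es"       # buses, wishes: a vowel is inserted
    return noun + "s"            # wugs, ducks, ideas

print(regular_plural("wug"))     # -> "wugs", a form no child has memorized
```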
  • 26:53 - 26:58
    And famously, Chomsky claimed that children
    solved the problem of language acquisition
  • 26:58 - 27:05
    by having the general design of language already
    wired into them in the form of a universal
  • 27:05 - 27:10
    grammar:
    a spec sheet for what the rules of any language
  • 27:10 - 27:15
    have to look like.  
  • 27:15 - 27:19
    What is the evidence that children are born
    with a universal grammar?  Well, surprisingly,
  • 27:19 - 27:25
    Chomsky didn’t propose this by actually
    studying kids in the lab or kids in the home,
  • 27:25 - 27:30
    but through a more abstract argument called,
    “The poverty of the input.”  Namely,
  • 27:30 - 27:37
    if you look at what goes into the ears of
    a child and look at the talent they end up
  • 27:37 - 27:44
    with as adults, there is a big chasm between
    them that can only be filled in by assuming
  • 27:44 - 27:48
    that the child has a lot of knowledge of the
    way that language works already built in.
  • 27:48 - 27:52

    Here’s how the argument works.  One of
  • 27:52 - 27:56
    the things that children have to learn when
    they learn English is how to form a question.
  • 27:56 - 28:03
    Now, children will get evidence from parents’
    speech about how the question rule works, such
  • 28:03 - 28:09
    as sentences like, “The man is here,”
    and the corresponding question, “Is the
  • 28:09 - 28:14
    man here?” 
    Now, logically speaking, a child getting
  • 28:14 - 28:19
    that kind of input could posit two different
    kinds of rules. There’s a simple word
  • 28:19 - 28:25
    by word linear rule.  In this case, find
    the first “is” in the sentence and move
  • 28:25 - 28:30
    it to the front.  “The man is here,”
    “Is the man here?” Now there’s a more
  • 28:30 - 28:36
    complex rule that the child could posit called
    a structure dependent rule, one that looks
  • 28:36 - 28:41
    at the geometry of the phrase structure tree.
    In this case, the rule would be:  find
  • 28:41 - 28:47
    the first “is” after the subject noun
    phrase and move that to the front of the sentence.
  • 28:47 - 28:48
    A diagram of what that rule would look like
    is as follows:  you look for the “is”
  • 28:48 - 28:49
    that occurs after the subject noun phrase
    and that’s what gets moved to the front
  • 28:49 - 28:51
    of the sentence. 
    Now, what’s the difference between the simple
  • 28:51 - 28:56
    word-by-word rule and the more complex structure-dependent
    rule?  Well, you can see the difference
  • 28:56 - 29:02
    when it comes to forming the question from
    a slightly more complex sentence like, “The
  • 29:02 - 29:27
    man who is tall is
    in the room.”  The word-by-word rule would wrongly
    produce, “Is the man who tall is in the room?” while
    the structure-dependent rule produces the correct question,
    “Is the man who is tall in the room?”
    But how is the child supposed to learn that?
  • 29:27 - 29:33
    How did all of us end up with the correct
    structure-dependent version of the rule rather than
  • 29:33 - 29:37
    the far simpler word-by-word version of the
    rule?
  • 29:37 - 29:42
    “Well,” Chomsky argues, “if you were
    actually to look at the kind of language that
  • 29:42 - 29:47
    all of us hear, it’s actually quite rare
    to hear a sentence like, “Is the man who
  • 29:47 - 29:54
    is tall in the room?” - the kind of input that
    would logically inform you that the word-by-word
  • 29:54 - 30:00
    rule is wrong and the structure dependent
    rule is right.  Nonetheless, we all grow
  • 30:00 - 30:06
    up into adults who unconsciously use the structure
    dependent rule rather than the word-by-word
  • 30:06 - 30:13
    rule.  Moreover, children don’t make errors
    like, “is the man who tall is in the room,”
  • 30:13 - 30:19
    as soon as they begin to form complex questions,
    they use the structure dependent rule.  And
  • 30:19 - 30:26
    that,” Chomsky argues, “is evidence that
    structure dependent rules are part of the
  • 30:26 - 30:32
    definition of universal grammar that children
    are born with.”  
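    The two candidate rules can be contrasted in a short sketch. The parse is
    supplied by hand - subject_len marks where the subject noun phrase ends -
    precisely because the correct rule needs the phrase structure, not just the
    string of words.

```python
words = "the man who is tall is in the room".split()

def linear_rule(words):
    """Word-by-word rule: front the first 'is' in the string."""
    i = words.index("is")
    return ["is"] + words[:i] + words[i + 1:]

def structure_dependent_rule(words, subject_len):
    """Structure-dependent rule: front the first 'is' after the subject NP."""
    i = words.index("is", subject_len)
    return ["is"] + words[:i] + words[i + 1:]

print(" ".join(linear_rule(words)))
# -> "is the man who tall is in the room"   (the error children never make)
print(" ".join(structure_dependent_rule(words, subject_len=5)))
# -> "is the man who is tall in the room"   (the question we all form)
```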
  • 30:32 - 30:41
    Now, though Chomsky has been fantastically
    influential in the science of language, that
  • 30:41 - 30:45
    does not mean that all language scientists
    agree with him.  And there have been a number
  • 30:45 - 30:51
    of critiques of Chomsky over the years.  For
    one thing, the critics point out, Chomsky
  • 30:51 - 30:58
    hasn’t really shown principles of universal
    grammar that are specific to language itself
  • 30:58 - 31:06
    as opposed to general ways in which the human
    mind works across multiple domains, language
  • 31:06 - 31:12
    and vision and control of motion and memory
    and so on.  We don’t really know that universal
  • 31:12 - 31:15
    grammar is specific to language, according
    to this critique.
  • 31:15 - 31:20
    Secondly, Chomsky and the linguists working
    with him have not examined all 6,000 of the
  • 31:20 - 31:28
    world’s languages and shown that the principles
    of universal grammar apply to all 6,000.  They’ve
  • 31:28 - 31:34
    posited it based on a small number of languages
    and the logic of the poverty of the input,
  • 31:34 - 31:39
    but haven’t actually come through with the
    data that would be necessary to prove that
  • 31:39 - 31:44
    universal grammar is really universal.  
    Finally, the critics argue, Chomsky has not
  • 31:44 - 31:53
    shown that more general purpose learning models,
    such as neural network models, are incapable
  • 31:53 - 31:57
    of learning language together with all the
    other things that children learn, and therefore
  • 31:57 - 32:03
    has not proven that there has to be specific
    knowledge of how grammar works in order for the
  • 32:03 - 32:05
    child to learn grammar.   
  • 32:05 - 32:14
    Another component of language governs the
    sound pattern of language, the ways that the
  • 32:14 - 32:21
    vowels and consonants can be assembled into
    the minimal units that go into words.  Phonology,
  • 32:21 - 32:28
    as this branch of linguistics is called, consists
    of formation rules that capture what is a
  • 32:28 - 32:34
    possible word in a language according to the
    way that it sounds.   To give you an example,
  • 32:34 - 32:40
    the sequence, bluk, is not an English word,
    but you get a sense that it could be an English
  • 32:40 - 32:43
    word - that someone could coin a new term of English
  • 32:43 - 32:50
    that we pronounce “bluk.”  But when you
    hear the sound **, you instantly know that
  • 32:50 - 32:55
    not only isn’t it an English word, but it
    really couldn’t be an English word.  **, by
  • 32:55 - 33:02
    the way, comes from Yiddish and it means kind
    of to sigh or to moan.  Oi.  That’s to
  • 33:02 - 33:06
    **.  
    The reason that we recognize that it’s not
  • 33:06 - 33:12
    English is that it has sounds like ** and
    sequences like **, which aren’t part of
  • 33:12 - 33:18
    the formation rules of English phonology.
    But together with the rules that define
  • 33:18 - 33:23
    the basic words of a language, there are also
    phonological rules that make adjustments to
  • 33:23 - 33:30
    the sounds, depending on the other words
    the word appears with.  Very few of us realize,
  • 33:30 - 33:35
    for example, in English, that the past tense
    suffix “ed” is actually pronounced
  • 33:35 - 33:42
    in three different ways.  When we say, “He
    walked,” we pronounce the “ed” like
  • 33:42 - 33:48
    a “t,” walked.  When we say “jogged,”
    we pronounce it as a “d,” jogged.  And
  • 33:48 - 33:55
    when we say “patted,” we stick in
    a vowel, pat-ted, showing that the same suffix,
  • 33:55 - 34:01
    “ed” can be readjusted in its pronunciation
    according to the rules of English phonology.
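    That three-way adjustment can be sketched as a small rule keyed on the stem’s
    final sound. Real phonology operates on sounds rather than letters, so the
    final sound is passed in by hand here; the sound classes follow the ordinary
    voiced/voiceless split.

```python
def past_tense_suffix(final_sound):
    """Choose the pronunciation of '-ed' from the verb stem's final sound."""
    if final_sound in {"t", "d"}:
        return "id"   # patted, padded: a vowel is inserted
    if final_sound in {"p", "k", "f", "s", "sh", "ch", "th"}:
        return "t"    # walked, passed: voiceless stem-final sound
    return "d"        # jogged, played: voiced stem-final sound

for verb, final in [("walk", "k"), ("jog", "g"), ("pat", "t")]:
    print(verb + "ed ->", past_tense_suffix(final))
```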
  • 34:01 - 34:05

    Now, when someone acquires English as a foreign
  • 34:05 - 34:11
    language or acquires a foreign language in
    general, they carry over the rules of phonology
  • 34:11 - 34:15
    of their first language and apply them to their
    second language.  We have a word for it;
  • 34:15 - 34:21
    we call it an “accent.”  When a language
    user deliberately manipulates the rules of
  • 34:21 - 34:26
    phonology, that is, when they don’t just
    speak in order to convey content but pay
  • 34:26 - 34:38
    attention to which phonological structures
    are being used; we call it poetry and rhetoric.
  • 34:38 - 34:38
  • 34:38 - 34:43
    So far, I’ve been talking about knowledge
    of language, the rules that go into defining
  • 34:43 - 34:49
    what are possible sequences of language.  But
    those sequences have to get into the brain
  • 34:49 - 34:53
    during speech comprehension and they have
    to get out during speech production.  And
  • 34:53 - 34:56
    that takes us to the topic of language interfaces.
  • 34:56 - 35:02
    And let’s start with production.  
  • 35:02 - 35:10
    This diagram here is literally a human cadaver
    that has been sawn in half.  An anatomist
  • 35:10 - 35:17
    took a saw and [sound] allowing us to see
    in cross section the human vocal tract.  And
  • 35:17 - 35:23
    that can illustrate how we get our knowledge
    of language out into the world as a sequence
  • 35:23 - 35:27
    of sounds.  
    Now, each of us has at the top of our windpipe
  • 35:27 - 35:34
    or trachea, a complex structure called the
    larynx or voice box; it’s behind your Adam’s
  • 35:34 - 35:42
    Apple.  And the air coming out of your lungs
    has to go past two cartilaginous flaps
  • 35:42 - 35:50
    that vibrate and produce a rich, buzzy sound
    source, full of harmonics.  Before that vibrating
  • 35:50 - 35:56
    sound gets out to the world, it has to pass
    through a gauntlet or chambers of the vocal
  • 35:56 - 36:04
    tract.  The throat behind the tongue, the
    cavity above the tongue, the cavity formed
  • 36:04 - 36:10
    by the lips, and when you block off airflow
    through the mouth, it can come out through
  • 36:10 - 36:14
    the nose.  
    Now, each one of those cavities has a shape
  • 36:14 - 36:19
    that, thanks to the laws of physics, will
    amplify some of the harmonics in that buzzy
  • 36:19 - 36:26
    sound source and suppress others.  We can
    change the shape of those cavities when we
  • 36:26 - 36:31
    move our tongue around.  When we move our
    tongue forward and backward, for example,
  • 36:31 - 36:38
    as in “eh,” “aa,” “eh,” “aa,”
    we change the shape of the cavity behind the
  • 36:38 - 36:44
    tongue, change the frequencies that are amplified
    or suppressed and the listener hears them
  • 36:44 - 36:49
    as two different vowels.  
    Likewise, when we raise or lower the tongue,
  • 36:49 - 36:55
    we change the shape of the resonant cavity
    above the tongue as in say, “eh,” “ah,”
  • 36:55 - 37:02
    “eh,” “ah.”  Once again, the change
    in the mixture of harmonics is perceived as
  • 37:02 - 37:09
    a change in the nature of the vowel.  
    When we stop the flow of air and then release
  • 37:09 - 37:16
    it, as in “t,” “ca,” “ba,” then
    we hear a consonant rather than a vowel; or
  • 37:16 - 37:23
    we can merely restrict the flow of air, as in
    “f,” “ss,” producing a chaotic, noisy
  • 37:23 - 37:31
    sound.  Each one of those sounds that gets
    sculpted by different articulators is perceived
  • 37:31 - 37:35
    by the brain as a qualitatively different
    vowel or consonant.  
  • 37:35 - 37:42
    Now, an interesting peculiarity of the human
    vocal tract is that it obviously co-opts structures
  • 37:42 - 37:48
    that evolved for different purposes for breathing
    and for swallowing and so on.  And it’s
  • 37:48 - 37:53
    an interesting fact first
    noted by Darwin that the larynx over the course
  • 37:53 - 37:59
    of evolution has descended in the throat so
    that every particle of food going from the
  • 37:59 - 38:05
    mouth through the esophagus to the stomach
    has to pass over the opening into the larynx
  • 38:05 - 38:11
    with some probability of being inhaled leading
    to the danger of death by choking.  And in
  • 38:11 - 38:17
    fact, until the invention of the Heimlich
    Maneuver, several thousand people every year
  • 38:17 - 38:24
    died of choking because of this maladaptive design
    of the human vocal tract.
  • 38:24 - 38:29
    Why did we evolve a mouth and throat that
    leaves us vulnerable to choking?  Well, a
  • 38:29 - 38:34
    plausible hypothesis is that it’s a compromise
    that was made in the course of evolution to
  • 38:34 - 38:41
    allow us to speak.  By giving range to a
    variety of possibilities for altering the
  • 38:41 - 38:48
    resonant cavities, for moving the tongue back
    and forth and up and down, we expanded the
  • 38:48 - 38:54
    range of speech sounds we could make, improved
    the efficiency of language, but suffered the
  • 38:54 - 39:00
    compromise of an increased risk of choking
    showing that language presumably had some
  • 39:00 - 39:06
    survival advantage that compensated for the
    disadvantage in choking.  
  • 39:06 - 39:11
    What about the flow of information in the
    other direction, that is from the world into
  • 39:11 - 39:17
    the brain, the process of speech comprehension?
  • 39:17 - 39:24
    Speech comprehension turns out to be an extraordinarily
    complex computational process, which we're
  • 39:24 - 39:32
    reminded of every time we interact with a
    voicemail menu on a telephone or use
  • 39:32 - 39:39
    dictation software on our computers.  For example,
    one writer, using a state-of-the-art speech-to-text
  • 39:39 - 39:47
    system, dictated the following words into
    his computer.  He dictated “book tour,”
  • 39:47 - 39:52
    and it came out on the screen as “back to
    work.”  Another example, he said, “I
  • 39:52 - 39:58
    truly couldn’t see,” and it came out on
    the screen as, “a cruelly good MC.”  Even
  • 39:58 - 40:04
    more disconcertingly, he started a letter
    to his parents by saying, “Dear mom and
  • 40:04 - 40:08
    dad,” and what came out on the screen, “The
    man is dead.”   
  • 40:08 - 40:13
    Now, dictation systems have gotten better
    and better, but they still have a way to go
  • 40:13 - 40:17
    before they can duplicate a human stenographer.
  • 40:17 - 40:22
    What is it about the problem of speech understanding
    that makes it so easy for a human, but
  • 40:22 - 40:28
    so hard for a computer? Well, there are two
    main contributors.  One of them is the fact
  • 40:28 - 40:35
    that each phoneme, each vowel or consonant, actually
    comes out very differently, depending on what
  • 40:35 - 40:39
    comes before and what comes after, a phenomenon
    sometimes called co-articulation.  
  • 40:39 - 40:46
    Let me give you an example.  The place called
    Cape Cod has two “c” sounds.  
  • 40:46 - 40:52
    Each of them symbolized by the letter “C,”
    the hard “C.”  Nonetheless, when you
  • 40:52 - 40:56
    pay attention to the way you pronounce them,
    you notice that in fact, you pronounce them
  • 40:56 - 41:02
    in very different parts of the mouth.  Try
    it.  Cape Cod, Cape Cod… “c,” “c”.
  • 41:02 - 41:09
    In one case, the “c” is produced way
    back in the mouth; the other it’s produced
  • 41:09 - 41:14
    much farther forward.  We don’t notice
    that we pronounce “c” in two different
  • 41:14 - 41:20
    ways depending on whether it comes before an
    “a” or an “ah,” but that difference
  • 41:20 - 41:25
    forms a difference in the shape of the resonant
    cavity in our mouth which produces a very
  • 41:25 - 41:32
    different wave form.  And unless a computer
    is specifically programmed to take that variability
  • 41:32 - 41:38
    into account, it will perceive those two different
    “c’s” as the different sounds that, objectively
  • 41:38 - 41:45
    speaking, they really are:  “c-eh” “c-oa”.
    They really are different sounds, but our
  • 41:45 - 41:49
    brain lumps them together.  
    The other reason that speech recognition is
  • 41:49 - 41:55
    such a difficult problem is because of the
    absence of segmentation.  Now we have an
  • 41:55 - 42:02
    illusion when we listen to speech that consists
    of a sequence of sounds corresponding to words.
  • 42:02 - 42:07
    But if you actually were to look at the
    wave form of a sentence on an oscilloscope,
  • 42:07 - 42:11
    there would not be little silences between
    the words the way there are little bits of
  • 42:11 - 42:17
    white space in printed words on a page, but
    rather a continuous ribbon in which the end
  • 42:17 - 42:21
    of one word leads right to the beginning of
    the next.  
  • 42:21 - 42:23
    It’s something that we’re aware of when
  • 42:23 - 42:28
    we listen to speech in a foreign language
    when we have no idea where one word ends and
  • 42:28 - 42:33
    the other one begins.  In our own language,
    we detect the word boundaries simply because
  • 42:33 - 42:39
    in our mental lexicon, we have stretches of
    sound that correspond to one word that tell
  • 42:39 - 42:44
    us where it ends.  But you can’t get that
    information from the wave form itself.  
  • 42:44 - 42:49
    In fact, there’s a whole genre of wordplay
    that takes advantage of the fact that word
  • 42:49 - 42:56
    boundaries are not physically present in the
    speech wave.  Novelty songs like Mairzy doats
  • 42:56 - 43:01
    and dozy doats and liddle lamzy divey 
A
    kiddley divey too, wooden shoe? 

Now,
  • 43:01 - 43:07
    it turns out that this is actually a grammatical
    sequence in words in English… Mares eat
  • 43:07 - 43:16
    oats and does eat oats and little lambs eat
    ivy, a kid'll eat ivy too, wouldn’t you?
  • 43:16 - 43:24
    When it is spoken or sung normally, the boundaries
    between words are obliterated and so the same
  • 43:24 - 43:30
    sequence of sounds can be perceived either
    as nonsense or if you know what they’re
  • 43:30 - 43:35
    meant to convey, as sentences.  
    Another example familiar to most children,
  • 43:35 - 43:41
    Fuzzy Wuzzy was a bear, Fuzzy Wuzzy had
    no hair.  Fuzzy Wuzzy wasn’t very fuzzy,
  • 43:41 - 43:49
    was he?  And the famous doggerel, I scream,
    you scream, we all scream for ice cream.
  • 43:49 - 43:56
    We are generally unaware of how ambiguous
    language is.  In context, we effortlessly
  • 43:56 - 44:02
    and unconsciously derive the intended meaning
    of a sentence, but a poor computer not equipped
  • 44:02 - 44:09
    with all of our common sense and human abilities
    and just going by the words and the rules
  • 44:09 - 44:14
    is often flabbergasted by all the different
    possibilities.  Take a sentence as simple
  • 44:14 - 44:20
    as “Mary had a little lamb,” you might
    think that that’s a perfectly simple unambiguous
  • 44:20 - 44:25
    sentence.  But now imagine that it was continued
    with “with mint sauce.”  You realize
  • 44:25 - 44:30
    that “have” is actually a highly ambiguous
    word. As a result, computer translations
  • 44:30 - 44:35
    can often deliver comically incorrect results.
  • 44:35 - 44:40
    According to legend, one of the first computer
    systems that was designed to translate from
  • 44:40 - 44:45
    English to Russian and back again did the
    following: given the sentence, “The spirit
  • 44:45 - 44:51
    is willing, but the flesh is weak,” it translated
    it back as “The vodka is agreeable, but
  • 44:51 - 45:00
    the meat is rotten.”
    So why do people understand language so much
  • 45:00 - 45:05
    better than computers?  What is the knowledge
    that we have that has been so hard to program
  • 45:05 - 45:11
    into our machines?  Well, there’s a third
    interface between language and the rest of
  • 45:11 - 45:18
    the mind, and that is the subject matter of
    the branch of linguistics called Pragmatics,
  • 45:18 - 45:25
    namely, how people understand language in
    context using their knowledge of the world
  • 45:25 - 45:29
    and their expectation about how other speakers
    communicate.  
  • 45:29 - 45:34
    The most important principle of Pragmatics
    is called “the cooperative principle,”
  • 45:34 - 45:40
    namely; assume that your conversational partner
    is working with you to try to get a meaning
  • 45:40 - 45:46
    across truthfully and clearly.  And our knowledge
    of Pragmatics, like our knowledge of syntax
  • 45:46 - 45:54
    and phonology and so on, is deployed effortlessly,
    but involves many intricate computations.
  • 45:54 - 45:59
    For example, if I were to say, “If you
    could pass the guacamole, that would be awesome.”
  • 45:59 - 46:05
    You understand that as a polite request
    meaning, give me the guacamole.  You don’t
  • 46:05 - 46:13
    interpret it literally as a rumination about
    a hypothetical state of affairs, you just assume that
  • 46:13 - 46:18
    the person wanted something and was using
    that string of words to convey the request
  • 46:18 - 46:24
    politely.  
    Often comedies will use the absence of pragmatics
  • 46:24 - 46:30
    in robots as a source of humor.  As in the
    old “Get Smart” situation comedy, which
  • 46:30 - 46:36
    had a robot named Hymie, and a recurring
    joke in the series would be that Maxwell Smart
  • 46:36 - 46:42
    would say to Hymie, “Hymie, can you give
    me a hand?”  And then Hymie would go, {sound},
  • 46:42 - 46:48
    remove his hand and pass it over to Maxwell
    Smart not understanding that “give me a
  • 46:48 - 46:54
    hand,” in context means, help me rather
    than literally transfer the hand over to me.
  • 46:54 - 46:56

    Or take the following example of Pragmatics
  • 46:56 - 47:01
    in action.  Consider the following dialogue,
    Martha says, “I’m leaving you.”  John
  • 47:01 - 47:08
    says, “Who is he?”  Now, understanding
    language requires finding the antecedents
  • 47:08 - 47:15
    of pronouns, in this case who the “he” refers
    to, and any competent English speaker knows
  • 47:15 - 47:21
    exactly who the “he” is, presumably John’s
    romantic rival even though it was never stated
  • 47:21 - 47:28
    explicitly in any part of the dialogue.  This
    shows how we bring to bear on language understanding
  • 47:28 - 47:36
    a vast store of knowledge about human behavior,
    human interactions, human relationships.  And
  • 47:36 - 47:44
    we often have to use that background knowledge
    even to solve mechanical problems like, who
  • 47:44 - 47:50
    does a pronoun like “he” refer to.  It’s
    that knowledge that’s extraordinarily difficult,
  • 47:50 - 47:58
    to say the least, to program into a computer.
  • 47:58 - 48:03
    Language is a miracle of the natural world
    because it allows us to exchange an unlimited
  • 48:03 - 48:12
    number of ideas using a finite set of mental
    tools.  Those mental tools comprise a large
  • 48:12 - 48:20
    lexicon of memorized words and a powerful
    mental grammar that can combine them.  Language
  • 48:20 - 48:26
    thought of in this way should not be confused
    with writing, with the prescriptive rules
  • 48:26 - 48:32
    of proper grammar or style or with thought
    itself.  
  • 48:32 - 48:37
    Modern linguistics is guided by the questions,
    though not always the answers suggested by
  • 48:37 - 48:43
    the linguist known as Noam Chomsky, namely
    how is the unlimited creativity of language
  • 48:43 - 48:51
    possible?  What are the abstract mental structures
    that relate words to one another? How do children
  • 48:51 - 48:55
    acquire them?  
    What is universal across languages?  And
  • 48:55 - 49:04
    what does that say about the human mind?  
    The study of language has many practical applications
  • 49:04 - 49:10
    including computers that understand and speak,
    the diagnosis and treatment of language disorders,
  • 49:10 - 49:17
    the teaching of reading, writing, and foreign
    languages, the interpreting of the language
  • 49:17 - 49:23
    of law, politics and literature.
    But for someone like me, language is eternally
  • 49:23 - 49:29
    fascinating because it speaks to such fundamental
    questions of the human condition.  [Language]
  • 49:29 - 49:35
    is really at the center of a number of different
    concerns of thought, of social relationships,
  • 49:35 - 49:41
    of human biology, of human evolution, that
    all speak to what’s special about the human
  • 49:41 - 49:45
    species.
    Language is the most distinctively human talent.
  • 49:45 - 49:50
    Language is a window into human nature,
    and most significantly, the vast expressive
  • 49:50 -
    power of language is one of the wonders of
    the natural world.  Thank you.
Title:
Steven Pinker: Linguistics as a Window to Understanding the Brain
Description:

Steven Pinker - Psychologist, Cognitive Scientist, and Linguist at Harvard University

How did humans acquire language? In this lecture, best-selling author Steven Pinker introduces you to linguistics, the evolution of spoken language, and the debate over the existence of an innate universal grammar. He also explores why language is such a fundamental part of social relationships, human biology, and human evolution. Finally, Pinker touches on the wide variety of applications for linguistics, from improving how we teach reading and writing to how we interpret law, politics, and literature.

The Floating University
Originally released September, 2011.

Additional Lectures:
Michio Kaku: The Universe in a Nutshell http://www.youtube.com/watch?v=0NbBjNiw4tk

Joel Cohen: Joel Cohen: An Introduction to Demography (Malthus Miffed: Are People the Problem?) http://www.youtube.com/watch?v=2vr44C_G0-o
