
Machine intelligence makes human morals more important

So I started my first job as a computer programmer in my very first year of college, basically as a teenager. Soon after I started working, writing software in a company, a manager who worked at the company came down to where I was, and he whispered to me, "Can he tell if I'm lying?" There was nobody else in the room. "Can who tell if you're lying, and why are we whispering?" The manager pointed at the computer in the room. "Can he tell if I'm lying?" Well, that manager was having an affair with the receptionist, and I was still a teenager, so I whisper-shouted back to him, "Yes, the computer can tell if you're lying."

(Laughter)

Well, I laughed, but actually the laugh's on me. Nowadays, there are computational systems that can suss out emotional states, and even lying, from processing human faces. Advertisers and even governments are very interested.

I had become a computer programmer because I was one of those kids crazed about math and science, but somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics. So I picked computers.

Well, ha ha ha, all the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day. They're developing cars that could decide who to run over. They're even building machines, weapons, that might kill human beings in war. It's ethics all the way down.

Machine intelligence is here. We're now using computation to make all sorts of decisions, but also new kinds of decisions. We're asking questions of computation that have no single right answers, questions that are subjective and open-ended and value-laden. We're asking questions like, "Who should the company hire?" "Which update from which friend should you be shown?" "Which convict is more likely to re-offend?" "Which news item or movie should be recommended to people?"

Look, yes, we've been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.

To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam, and they can translate between languages. They can detect tumors in medical imaging. They can beat humans at chess and Go. Much of this progress comes from a method called "machine learning."

Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data. And also, crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for."

Now, the upside is, this method is really powerful. The head of Google's AI systems called it "the unreasonable effectiveness of data." The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control.

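(A rough sketch of that difference, in Python with scikit-learn; the spam example, features, and toy data here are invented for illustration and are not from the talk:)

    # Traditional programming: detailed, exact, painstaking instructions.
    def is_spam_by_rule(message):
        return "free money" in message.lower()  # one fixed rule, one fixed answer

    # Machine learning: feed the system examples and let it churn through them.
    from sklearn.linear_model import LogisticRegression

    messages = ["free money now", "lunch at noon?", "win free money", "meeting moved"]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

    def features(msg):  # a crude, hand-invented encoding for this sketch
        return [msg.count("free"), msg.count("money"), len(msg.split())]

    model = LogisticRegression().fit([features(m) for m in messages], labels)

    # No single-answer logic: the output is a probability, not a verdict.
    print(model.predict_proba([features("claim your free money")])[0])
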
So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking.

So, consider a hiring algorithm: a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good.

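(In outline, such a system might look like the following sketch; the feature names and numbers are hypothetical, and a real system would use far richer data:)

    # Hypothetical sketch of the hiring setup described above: train on data
    # from previous employees, labeled by who became a high performer.
    from sklearn.ensemble import RandomForestClassifier

    # Invented features per past employee, e.g. [GPA, internships, referral].
    past_employees = [[3.8, 2, 1], [3.1, 0, 0], [3.6, 1, 1], [2.9, 3, 0]]
    high_performer = [1, 0, 1, 0]  # labels assigned from performance reviews

    model = RandomForestClassifier(random_state=0).fit(past_employees, high_performer)

    # "Find and hire people like the existing high performers":
    applicant = [3.5, 1, 1]
    print(model.predict_proba([applicant])[0][1])  # estimated "high performer" score
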
I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers. And look, human hiring is biased. I know.

In one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!" I'd be puzzled by the weird timing. It's 4 p.m.? Lunch? I was broke, so: free lunch. I always went.

I later realized what was happening. My immediate manager had not confessed to her higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job; I just looked wrong and was the wrong age and gender. So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why:

Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember, for things you haven't even disclosed. This is inference.

I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results were impressive. Her system can predict the likelihood of depression months before the onset of any symptoms. Months before. No symptoms; there's prediction. She hopes it will be used for early intervention. Great.

But now put this in the context of hiring.

So at this human resources managers' conference, I approached a high-level manager in a very large company, and I said to her, "Look, what if, unbeknownst to you, your system is weeding out people with a high future likelihood of depression? They're not depressed now, just maybe in the future, more likely. What if it's weeding out women more likely to be pregnant in the next year or two but aren't pregnant now? What if it's hiring aggressive people because that's your workplace culture?"

You can't tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled "higher risk of depression," "higher risk of pregnancy," "aggressive guy scale." Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box. It has predictive power, but you don't understand it.

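(To make "black box" concrete: even if you open a trained model up, what you find are unlabeled numbers, not auditable criteria. A self-contained sketch with invented data; none of this comes from any real hiring system:)

    # Sketch: a trained model is a pile of unlabeled numbers, not named criteria.
    from sklearn.ensemble import GradientBoostingClassifier

    X = [[12, 0.3, 5], [40, 0.9, 2], [25, 0.1, 7], [33, 0.7, 1]]  # opaque behavioral features
    y = [1, 0, 1, 0]  # past hiring outcomes

    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # There is no variable named "higher risk of depression" to inspect.
    # What the system "selects on" is smeared across many small trees:
    print(model.feature_importances_)  # three anonymous numbers
    print(model.estimators_.shape)     # (100, 1): a hundred trees of thresholds
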
    "What safeguards," I asked, "do you have
    to make sure that your black box
  • Not Synced
    isn't doing something shady?"
  • Not Synced
    So she looked at me
  • Not Synced
    as if I had just stepped
    on 10 puppy tails.
  • Not Synced
    She stared at me and she said,
  • Not Synced
    "I don't want to hear
    another word about this."
  • Not Synced
    And she turned around and walked away.
  • Not Synced
    Mind you, she wasn't rude.
  • Not Synced
    It was clearly what I don't know
    isn't my problem, go away, death stare.
  • Not Synced
    Look, such a system may even be less
    biased than human managers in some ways,
  • Not Synced
    and it could make monetary sense,
  • Not Synced
    but it could also lead to a steady but
    stealthy shutting out of the job market
  • Not Synced
    of people with higher risk of depression.
  • Not Synced
    Is this the kind of society
    we want to build
  • Not Synced
    without even knowing we've done this
  • Not Synced
    because we turned decision-making
    to machines we don't totally understand?
  • Not Synced
Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we're telling ourselves, "We're just doing objective, neutral computation."

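(The mechanism is easy to demonstrate: if the historical data encodes a bias, a model fitted to it reproduces the bias even though the code itself looks neutral. A toy sketch with fabricated numbers, only to illustrate the mechanism:)

    # Toy sketch of bias reflection: the data, not the code, carries the bias.
    from sklearn.linear_model import LogisticRegression

    # Each row: [qualification_score, group]; past human decisions favored
    # group 0 over group 1 even at identical qualifications.
    X = [[0.9, 0], [0.9, 1], [0.5, 0], [0.5, 1], [0.7, 0], [0.7, 1]]
    y = [1, 0, 1, 0, 1, 0]  # biased historical outcomes

    model = LogisticRegression().fit(X, y)

    # "Objective, neutral computation" now replays the human bias:
    print(model.predict([[0.8, 0], [0.8, 1]]))  # same score, different group
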
Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs, and searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms, which researchers sometimes uncover but sometimes we don't know about, can have life-altering consequences.

Speaker: Zeynep Tufekci
Video Language: English
Project: TEDTalks
Duration: 17:42
