
How to get empowered, not overpowered, by AI

After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling. We've discovered that our universe is vastly grander than our ancestors imagined, and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe.

But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for centuries but for billions of years, and not just on Earth but throughout much of this amazing cosmos.
I think of the earliest life as Life 1.0 because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as Life 2.0 because we can learn, which we in nerdy geek speak might think of as installing new software into our brains, like languages and job skills. Life 3.0, which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us Life 2.1, with our artificial knees, pacemakers and cochlear implants.

So let's take a closer look at our relationship with technology, OK?
As an example, the Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey, propelled by something more powerful than rocket engines, with passengers who aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence.

My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination.
Let's start with the power. I define intelligence very inclusively -- simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence, and I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat.

It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips. Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now, AI can generate fake faces and simulate your face saying stuff that you never said. Not long ago, AI couldn't beat us at the game of Go. Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades hand-crafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.
So all this amazing recent progress in AI really begs the question: how far will it go?

I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as the AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront --

(Laughter)

which will soon be automated and disrupted. But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks? This is the definition of artificial general intelligence -- AGI, which has been the holy grail of AI research since its inception. By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI. Sure, we might still choose to have some human jobs, or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it, with humans no longer being the most intelligent.
Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development time scale of years, raising the controversial possibility of an intelligence explosion, where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.

All right, reality check: are we going to get AGI any time soon? Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question -- and then what? What do we want the role of humans to be if machines can do everything better and cheaper than us?
The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?"

(Laughter)

But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it. This brings us to the second part of our rocket metaphor: the steering.
We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder? To help with this, I co-founded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible. You know, I love technology. Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if -- if we win the wisdom race: the race between the growing power of our technology and the growing wisdom with which we manage it. But this is going to require a change of strategy, because our old strategy has been learning from mistakes.
We invented fire, screwed up a bunch of times -- invented the fire extinguisher.

(Laughter)

We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag, but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think?

(Laughter)

It's much better to be proactive rather than reactive; plan ahead and get things right the first time, because that might be the only time we'll get.
But it is funny, because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one can help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI: think through what can go wrong to make sure it goes right.
So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California, last year and produced this list of 23 principles, which have since been signed by over 1,000 AI researchers and key industry leaders. And I want to tell you about three of these principles.

One is that we should avoid an arms race and lethal autonomous weapons. The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines and new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.

Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us.

(Applause)
All right, now raise your hand if your computer has ever crashed.

(Laughter)

Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us.

And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals. And whose goals should these be, anyway? Which goals should they be?
This brings us to the third part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it? This is the elephant in the room that almost nobody talks about -- not even here at TED -- because we're so fixated on short-term AI challenges.

Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed? We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways. What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos, but there was much less agreement about who or what should be in charge. And I was actually quite amused to see that there are some people who want it to be just the machines.

(Laughter)

And there was total disagreement about what the role of humans should be, even at the most basic level, so let's take a closer look at possible futures that we might choose to steer towards.
So don't get me wrong here; I'm not talking about space travel, merely about humanity's metaphorical journey into the future. So one option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it. But Lord Acton warned us that power corrupts, and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power. Also, aside from any moral qualms you might have about enslaving superior minds, you might worry that maybe the superintelligence could outsmart us, break out and take over.

But I also have colleagues who are fine with AI taking over and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children. But how would we know that the AIs have adopted our best values and aren't just unconscious zombies tricking us into anthropomorphizing them? Also, shouldn't those people who don't want human extinction have a say in the matter, too?
Now, if you didn't like either of those two high-tech options, it's important to remember that low-tech is suicide from a cosmic perspective, because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct, merely whether we're going to be taken out by the next killer asteroid, supervolcano or some other problem that better technology could have solved. So, how about having our cake and eating it ... with AGI that's not enslaved but treats us well because its values are aligned with ours? This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences -- basically making us the masters of our own destiny.
So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history -- let's face it. It could enable brutal global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.
Now, hang on. Do you folks want the future that's politically right or left? Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with computers, enabling virtual experiences? With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in, because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical -- literally.
So here's our choice. We can either be complacent about our future, taking as an article of blind faith that any new technology is guaranteed to be beneficial, and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence. Or we can be ambitious -- thinking hard about how to steer our technology and where we want to go with it to create the age of amazement. We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology.

Thank you.

(Applause)
Title:
How to get empowered, not overpowered, by AI
Speaker:
Max Tegmark
Description:

Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
17:15
