How to get empowered, not overpowered, by AI

  • 0:01 - 0:05
    After 13.8 billion years
    of cosmic history,
  • 0:05 - 0:07
    our universe has woken up
  • 0:07 - 0:09
    and become aware of itself.
  • 0:09 - 0:11
    From a small blue planet,
  • 0:11 - 0:16
    tiny, conscious parts of our universe
    have begun gazing out into the cosmos
  • 0:16 - 0:17
    with telescopes,
  • 0:17 - 0:18
    discovering something humbling.
  • 0:19 - 0:22
    We've discovered that our universe
    is vastly grander
  • 0:22 - 0:24
    than our ancestors imagined
  • 0:24 - 0:28
    and that life seems to be an almost
    imperceptibly small perturbation
  • 0:28 - 0:30
    on an otherwise dead universe.
  • 0:30 - 0:33
    But we've also discovered
    something inspiring,
  • 0:33 - 0:36
    which is that the technology
    we're developing has the potential
  • 0:36 - 0:39
    to help life flourish like never before,
  • 0:39 - 0:42
    not just for centuries
    but for billions of years,
  • 0:42 - 0:46
    and not just on earth but throughout
    much of this amazing cosmos.
  • 0:48 - 0:51
    I think of the earliest life as "Life 1.0"
  • 0:51 - 0:52
    because it was really dumb,
  • 0:52 - 0:57
    like bacteria, unable to learn
    anything during its lifetime.
  • 0:57 - 1:00
    I think of us humans as "Life 2.0"
    because we can learn,
  • 1:00 - 1:02
    which we, in nerdy geek speak,
  • 1:02 - 1:05
    might think of as installing
    new software into our brains,
  • 1:05 - 1:07
    like languages and job skills.
  • 1:08 - 1:12
    "Life 3.0," which can design not only
    its software but also its hardware
  • 1:12 - 1:14
    of course doesn't exist yet.
  • 1:14 - 1:17
    But perhaps our technology
    has already made us "Life 2.1,"
  • 1:17 - 1:22
    with our artificial knees,
    pacemakers and cochlear implants.
  • 1:22 - 1:26
    So let's take a closer look
    at our relationship with technology, OK?
  • 1:27 - 1:28
    As an example,
  • 1:28 - 1:33
    the Apollo 11 moon mission
    was both successful and inspiring,
  • 1:33 - 1:36
    showing that when we humans
    use technology wisely,
  • 1:36 - 1:40
    we can accomplish things
    that our ancestors could only dream of.
  • 1:40 - 1:43
    But there's an even more inspiring journey
  • 1:43 - 1:46
    propelled by something
    more powerful than rocket engines,
  • 1:47 - 1:50
    where the passengers
    aren't just three astronauts
  • 1:50 - 1:51
    but all of humanity.
  • 1:51 - 1:54
    Let's talk about our collective
    journey into the future
  • 1:54 - 1:56
    with artificial intelligence.
  • 1:57 - 2:01
    My friend Jaan Tallinn likes to point out
    that just as with rocketry,
  • 2:02 - 2:05
    it's not enough to make
    our technology powerful.
  • 2:06 - 2:09
    We also have to figure out,
    if we're going to be really ambitious,
  • 2:09 - 2:10
    how to steer it
  • 2:10 - 2:12
    and where we want to go with it.
  • 2:13 - 2:16
    So let's talk about all three
    for artificial intelligence:
  • 2:16 - 2:19
    the power, the steering
    and the destination.
  • 2:20 - 2:21
    Let's start with the power.
  • 2:22 - 2:25
    I define intelligence very inclusively --
  • 2:25 - 2:29
    simply as our ability
    to accomplish complex goals,
  • 2:29 - 2:33
    because I want to include both
    biological and artificial intelligence.
  • 2:33 - 2:37
    And I want to avoid
    the silly carbon-chauvinism idea
  • 2:37 - 2:39
    that you can only be smart
    if you're made of meat.
  • 2:41 - 2:45
    It's really amazing how the power
    of AI has grown recently.
  • 2:45 - 2:46
    Just think about it.
  • 2:46 - 2:50
    Not long ago, robots couldn't walk.
  • 2:51 - 2:53
    Now, they can do backflips.
  • 2:54 - 2:56
    Not long ago,
  • 2:56 - 2:58
    we didn't have self-driving cars.
  • 2:59 - 3:01
    Now, we have self-flying rockets.
  • 3:04 - 3:05
    Not long ago,
  • 3:05 - 3:08
    AI couldn't do face recognition.
  • 3:08 - 3:11
    Now, AI can generate fake faces
  • 3:11 - 3:15
    and simulate your face
    saying stuff that you never said.
  • 3:16 - 3:18
    Not long ago,
  • 3:18 - 3:20
    AI couldn't beat us at the game of Go.
  • 3:20 - 3:25
    Then, Google DeepMind's AlphaZero AI
    took 3,000 years of human Go games
  • 3:26 - 3:27
    and Go wisdom,
  • 3:27 - 3:32
    ignored it all and became the world's best
    player by just playing against itself.
  • 3:32 - 3:35
    And the most impressive feat here
    wasn't that it crushed human gamers,
  • 3:36 - 3:38
    but that it crushed human AI researchers
  • 3:38 - 3:42
    who had spent decades
    handcrafting game-playing software.
  • 3:42 - 3:47
    And AlphaZero crushed human AI researchers
    not just in Go but even at chess,
  • 3:47 - 3:49
    which we have been working on since 1950.
  • 3:50 - 3:54
    So all this amazing recent progress in AI
    really begs the question:
  • 3:55 - 3:57
    How far will it go?
  • 3:58 - 3:59
    I like to think about this question
  • 4:00 - 4:02
    in terms of this abstract
    landscape of tasks,
  • 4:03 - 4:06
    where the elevation represents
    how hard it is for AI to do each task
  • 4:06 - 4:07
    at human level,
  • 4:07 - 4:10
    and the sea level represents
    what AI can do today.
  • 4:11 - 4:13
    The sea level is rising
    as AI improves,
  • 4:13 - 4:17
    so there's a kind of global warming
    going on here in the task landscape.
  • 4:18 - 4:21
    And the obvious takeaway
    is to avoid careers at the waterfront --
  • 4:21 - 4:23
    (Laughter)
  • 4:23 - 4:26
    which will soon be
    automated and disrupted.
  • 4:26 - 4:29
    But there's a much
    bigger question as well.
  • 4:29 - 4:30
    How high will the water end up rising?
  • 4:31 - 4:35
    Will it eventually rise
    to flood everything,
  • 4:36 - 4:38
    matching human intelligence at all tasks?
  • 4:38 - 4:42
    This is the definition
    of artificial general intelligence --
  • 4:42 - 4:43
    AGI,
  • 4:43 - 4:47
    which has been the holy grail
    of AI research since its inception.
  • 4:47 - 4:49
    By this definition, people who say,
  • 4:49 - 4:52
    "Ah, there will always be jobs
    that humans can do better than machines,"
  • 4:52 - 4:55
    are simply saying
    that we'll never get AGI.
  • 4:56 - 4:59
    Sure, we might still choose
    to have some human jobs
  • 4:59 - 5:02
    or to give humans income
    and purpose with our jobs,
  • 5:02 - 5:06
    but AGI will in any case
    transform life as we know it
  • 5:06 - 5:09
    with humans no longer being
    the most intelligent.
  • 5:09 - 5:13
    Now, if the water level does reach AGI,
  • 5:13 - 5:18
    then further AI progress will be driven
    mainly not by humans but by AI,
  • 5:18 - 5:20
    which means that there's a possibility
  • 5:20 - 5:22
    that further AI progress
    could be way faster
  • 5:22 - 5:26
    than the typical human research
    and development timescale of years,
  • 5:26 - 5:30
    raising the controversial possibility
    of an intelligence explosion
  • 5:30 - 5:32
    where recursively self-improving AI
  • 5:32 - 5:35
    rapidly leaves human
    intelligence far behind,
  • 5:35 - 5:38
    creating what's known
    as superintelligence.
  • 5:40 - 5:42
    Alright, reality check:
  • 5:43 - 5:46
    Are we going to get AGI any time soon?
  • 5:46 - 5:49
    Some famous AI researchers,
    like Rodney Brooks,
  • 5:49 - 5:52
    think it won't happen
    for hundreds of years.
  • 5:52 - 5:55
    But others, like Google DeepMind
    founder Demis Hassabis,
  • 5:56 - 5:57
    are more optimistic
  • 5:57 - 5:59
    and are working to try to make
    it happen much sooner.
  • 5:59 - 6:03
    And recent surveys have shown
    that most AI researchers
  • 6:03 - 6:06
    actually share Demis's optimism,
  • 6:06 - 6:09
    expecting that we will
    get AGI within decades,
  • 6:10 - 6:12
    so within the lifetime of many of us,
  • 6:12 - 6:14
    which begs the question -- and then what?
  • 6:15 - 6:17
    What do we want the role of humans to be
  • 6:17 - 6:20
    if machines can do everything better
    and cheaper than us?
  • 6:23 - 6:25
    The way I see it, we face a choice.
  • 6:26 - 6:28
    One option is to be complacent.
  • 6:28 - 6:31
    We can say, "Oh, let's just build machines
    that can do everything we can do
  • 6:31 - 6:33
    and not worry about the consequences.
  • 6:33 - 6:36
    Come on, if we build technology
    that makes all humans obsolete,
  • 6:37 - 6:39
    what could possibly go wrong?"
  • 6:39 - 6:40
    (Laughter)
  • 6:40 - 6:43
    But I think that would be
    embarrassingly lame.
  • 6:44 - 6:48
    I think we should be more ambitious --
    in the spirit of TED.
  • 6:48 - 6:51
    Let's envision a truly inspiring
    high-tech future
  • 6:51 - 6:53
    and try to steer towards it.
  • 6:54 - 6:57
    This brings us to the second part
    of our rocket metaphor: the steering.
  • 6:57 - 6:59
    We're making AI more powerful,
  • 6:59 - 7:03
    but how can we steer towards a future
  • 7:03 - 7:06
    where AI helps humanity flourish
    rather than flounder?
  • 7:07 - 7:08
    To help with this,
  • 7:08 - 7:10
    I cofounded the Future of Life Institute.
  • 7:10 - 7:13
    It's a small nonprofit promoting
    beneficial technology use,
  • 7:13 - 7:16
    and our goal is simply
    for the future of life to exist
  • 7:16 - 7:18
    and to be as inspiring as possible.
  • 7:18 - 7:21
    You know, I love technology.
  • 7:21 - 7:24
    Technology is why today
    is better than the Stone Age.
  • 7:25 - 7:29
    And I'm optimistic that we can create
    a really inspiring high-tech future ...
  • 7:30 - 7:31
    if -- and this is a big if --
  • 7:31 - 7:34
    if we win the wisdom race --
  • 7:34 - 7:36
    the race between the growing
    power of our technology
  • 7:37 - 7:39
    and the growing wisdom
    with which we manage it.
  • 7:39 - 7:42
    But this is going to require
    a change of strategy
  • 7:42 - 7:45
    because our old strategy
    has been learning from mistakes.
  • 7:45 - 7:47
    We invented fire,
  • 7:47 - 7:48
    screwed up a bunch of times --
  • 7:48 - 7:50
    invented the fire extinguisher.
  • 7:50 - 7:52
    (Laughter)
  • 7:52 - 7:54
    We invented the car,
    screwed up a bunch of times --
  • 7:54 - 7:57
    invented the traffic light,
    the seat belt and the airbag,
  • 7:57 - 8:01
    but with more powerful technology
    like nuclear weapons and AGI,
  • 8:01 - 8:04
    learning from mistakes
    is a lousy strategy,
  • 8:04 - 8:05
    don't you think?
  • 8:05 - 8:06
    (Laughter)
  • 8:06 - 8:09
    It's much better to be proactive
    rather than reactive;
  • 8:09 - 8:11
    plan ahead and get things
    right the first time
  • 8:11 - 8:14
    because that might be
    the only time we'll get.
  • 8:14 - 8:16
    But it is funny because
    sometimes people tell me,
  • 8:16 - 8:19
    "Max, shhh, don't talk like that.
  • 8:19 - 8:21
    That's Luddite scaremongering."
  • 8:22 - 8:24
    But it's not scaremongering.
  • 8:24 - 8:26
    It's what we at MIT
    call safety engineering.
  • 8:27 - 8:28
    Think about it:
  • 8:28 - 8:31
    before NASA launched
    the Apollo 11 mission,
  • 8:31 - 8:34
    they systematically thought through
    everything that could go wrong
  • 8:34 - 8:36
    when you put people
    on top of explosive fuel tanks
  • 8:36 - 8:39
    and launch them somewhere
    where no one could help them.
  • 8:39 - 8:41
    And there was a lot that could go wrong.
  • 8:41 - 8:42
    Was that scaremongering?
  • 8:43 - 8:44
    No.
  • 8:44 - 8:46
    That was precisely
    the safety engineering
  • 8:46 - 8:48
    that ensured the success of the mission,
  • 8:48 - 8:53
    and that is precisely the strategy
    I think we should take with AGI.
  • 8:53 - 8:57
    Think through what can go wrong
    to make sure it goes right.
  • 8:57 - 8:59
    So in this spirit,
    we've organized conferences,
  • 8:59 - 9:02
    bringing together leading
    AI researchers and other thinkers
  • 9:02 - 9:06
    to discuss how to grow this wisdom
    we need to keep AI beneficial.
  • 9:06 - 9:09
    Our last conference
    was in Asilomar, California last year
  • 9:09 - 9:12
    and produced this list of 23 principles
  • 9:12 - 9:15
    which have since been signed
    by over 1,000 AI researchers
  • 9:15 - 9:16
    and key industry leaders,
  • 9:16 - 9:20
    and I want to tell you
    about three of these principles.
  • 9:20 - 9:25
    One is that we should avoid an arms race
    in lethal autonomous weapons.
  • 9:25 - 9:29
    The idea here is that any science
    can be used for new ways of helping people
  • 9:29 - 9:31
    or new ways of harming people.
  • 9:31 - 9:35
    For example, biology and chemistry
    are much more likely to be used
  • 9:35 - 9:39
    for new medicines or new cures
    than for new ways of killing people,
  • 9:40 - 9:42
    because biologists
    and chemists pushed hard --
  • 9:42 - 9:43
    and successfully --
  • 9:43 - 9:45
    for bans on biological
    and chemical weapons.
  • 9:45 - 9:46
    And in the same spirit,
  • 9:46 - 9:51
    most AI researchers want to stigmatize
    and ban lethal autonomous weapons.
  • 9:52 - 9:53
    Another Asilomar AI principle
  • 9:53 - 9:57
    is that we should mitigate
    AI-fueled income inequality.
  • 9:57 - 10:02
    I think that if we can grow
    the economic pie dramatically with AI
  • 10:02 - 10:04
    and we still can't figure out
    how to divide this pie
  • 10:04 - 10:06
    so that everyone is better off,
  • 10:06 - 10:07
    then shame on us.
  • 10:07 - 10:11
    (Applause)
  • 10:11 - 10:15
    Alright, now raise your hand
    if your computer has ever crashed.
  • 10:15 - 10:17
    (Laughter)
  • 10:17 - 10:18
    Wow, that's a lot of hands.
  • 10:18 - 10:21
    Well, then you'll appreciate
    this principle
  • 10:21 - 10:24
    that we should invest much more
    in AI safety research,
  • 10:24 - 10:27
    because as we put AI in charge
    of even more decisions and infrastructure,
  • 10:27 - 10:31
    we need to figure out how to transform
    today's buggy and hackable computers
  • 10:31 - 10:34
    into robust AI systems
    that we can really trust,
  • 10:34 - 10:35
    because otherwise,
  • 10:35 - 10:38
    all this awesome new technology
    can malfunction and harm us,
  • 10:38 - 10:40
    or get hacked and be turned against us.
  • 10:40 - 10:45
    And this AI safety work
    has to include work on AI value alignment,
  • 10:45 - 10:48
    because the real threat
    from AGI isn't malice,
  • 10:48 - 10:50
    like in silly Hollywood movies,
  • 10:50 - 10:52
    but competence --
  • 10:52 - 10:55
    AGI accomplishing goals
    that just aren't aligned with ours.
  • 10:55 - 11:00
    For example, when we humans drove
    the West African black rhino extinct,
  • 11:00 - 11:04
    we didn't do it because we were a bunch
    of evil rhinoceros haters, did we?
  • 11:04 - 11:06
    We did it because
    we were smarter than them
  • 11:06 - 11:08
    and our goals weren't aligned with theirs.
  • 11:08 - 11:11
    But AGI is by definition smarter than us,
  • 11:11 - 11:15
    so to make sure that we don't put
    ourselves in the position of those rhinos
  • 11:15 - 11:17
    if we create AGI,
  • 11:17 - 11:21
    we need to figure out how
    to make machines understand our goals,
  • 11:21 - 11:24
    adopt our goals and retain our goals.
  • 11:25 - 11:28
    And whose goals should these be, anyway?
  • 11:28 - 11:30
    Which goals should they be?
  • 11:30 - 11:34
    This brings us to the third part
    of our rocket metaphor: the destination.
  • 11:35 - 11:37
    We're making AI more powerful,
  • 11:37 - 11:39
    trying to figure out how to steer it,
  • 11:39 - 11:41
    but where do we want to go with it?
  • 11:42 - 11:45
    This is the elephant in the room
    that almost nobody talks about --
  • 11:45 - 11:47
    not even here at TED --
  • 11:47 - 11:51
    because we're so fixated
    on short-term AI challenges.
  • 11:52 - 11:57
    Look, our species is trying to build AGI,
  • 11:57 - 12:00
    motivated by curiosity and economics,
  • 12:00 - 12:04
    but what sort of future society
    are we hoping for if we succeed?
  • 12:05 - 12:07
    We did an opinion poll on this recently,
  • 12:07 - 12:08
    and I was struck to see
  • 12:08 - 12:11
    that most people actually
    want us to build superintelligence:
  • 12:11 - 12:14
    AI that's vastly smarter
    than us in all ways.
  • 12:15 - 12:19
    What there was the greatest agreement on
    was that we should be ambitious
  • 12:19 - 12:21
    and help life spread into the cosmos,
  • 12:21 - 12:25
    but there was much less agreement
    about who or what should be in charge.
  • 12:25 - 12:27
    And I was actually quite amused
  • 12:27 - 12:30
    to see that there are some people
    who want it to be just machines.
  • 12:30 - 12:32
    (Laughter)
  • 12:32 - 12:36
    And there was total disagreement
    about what the role of humans should be,
  • 12:36 - 12:38
    even at the most basic level,
  • 12:38 - 12:41
    so let's take a closer look
    at possible futures
  • 12:41 - 12:44
    that we might choose
    to steer toward, alright?
  • 12:44 - 12:45
    So don't get me wrong here.
  • 12:45 - 12:47
    I'm not talking about space travel,
  • 12:47 - 12:50
    merely about humanity's
    metaphorical journey into the future.
  • 12:51 - 12:54
    So one option that some
    of my AI colleagues like
  • 12:54 - 12:58
    is to build superintelligence
    and keep it under human control,
  • 12:58 - 13:00
    like an enslaved god,
  • 13:00 - 13:01
    disconnected from the internet
  • 13:01 - 13:05
    and used to create unimaginable
    technology and wealth
  • 13:05 - 13:06
    for whoever controls it.
  • 13:07 - 13:08
    But Lord Acton warned us
  • 13:08 - 13:12
    that power corrupts,
    and absolute power corrupts absolutely,
  • 13:12 - 13:16
    so you might worry that maybe
    we humans just aren't smart enough,
  • 13:16 - 13:18
    or wise enough rather,
  • 13:18 - 13:19
    to handle this much power.
  • 13:20 - 13:22
    Also, aside from any
    moral qualms you might have
  • 13:22 - 13:24
    about enslaving superior minds,
  • 13:25 - 13:28
    you might worry that maybe
    the superintelligence could outsmart us,
  • 13:29 - 13:31
    break out and take over.
  • 13:32 - 13:35
    But I also have colleagues
    who are fine with AI taking over
  • 13:35 - 13:37
    and even causing human extinction,
  • 13:37 - 13:41
    as long as we feel the AIs
    are our worthy descendants,
  • 13:41 - 13:43
    like our children.
  • 13:43 - 13:48
    But how would we know that the AIs
    have adopted our best values
  • 13:48 - 13:53
    and aren't just unconscious zombies
    tricking us into anthropomorphizing them?
  • 13:53 - 13:56
    Also, shouldn't those people
    who don't want human extinction
  • 13:56 - 13:57
    have a say in the matter, too?
  • 13:58 - 14:02
    Now, if you didn't like either
    of those two high-tech options,
  • 14:02 - 14:05
    it's important to remember
    that low-tech is suicide
  • 14:05 - 14:06
    from a cosmic perspective,
  • 14:06 - 14:09
    because if we don't go far
    beyond today's technology,
  • 14:09 - 14:11
    the question isn't whether humanity
    is going to go extinct,
  • 14:11 - 14:13
    merely whether
    we're going to get taken out
  • 14:13 - 14:16
    by the next killer asteroid, supervolcano
  • 14:16 - 14:19
    or some other problem
    that better technology could have solved.
  • 14:19 - 14:22
    So, how about having
    our cake and eating it ...
  • 14:22 - 14:24
    with AGI that's not enslaved
  • 14:25 - 14:28
    but treats us well because its values
    are aligned with ours?
  • 14:28 - 14:32
    This is the gist of what Eliezer Yudkowsky
    has called "friendly AI,"
  • 14:33 - 14:35
    and if we can do this,
    it could be awesome.
  • 14:36 - 14:41
    It could not only eliminate negative
    experiences like disease, poverty,
  • 14:41 - 14:42
    crime and other suffering,
  • 14:42 - 14:45
    but it could also give us
    the freedom to choose
  • 14:45 - 14:49
    from a fantastic new diversity
    of positive experiences --
  • 14:49 - 14:52
    basically making us
    the masters of our own destiny.
  • 14:54 - 14:56
    So in summary,
  • 14:56 - 14:59
    our situation with technology
    is complicated,
  • 14:59 - 15:01
    but the big picture is rather simple.
  • 15:01 - 15:05
    Most AI researchers
    expect AGI within decades,
  • 15:05 - 15:08
    and if we just bumble
    into this unprepared,
  • 15:08 - 15:11
    it will probably be
    the biggest mistake in human history --
  • 15:11 - 15:13
    let's face it.
  • 15:13 - 15:15
    It could enable brutal,
    global dictatorship
  • 15:15 - 15:19
    with unprecedented inequality,
    surveillance and suffering,
  • 15:19 - 15:21
    and maybe even human extinction.
  • 15:21 - 15:23
    But if we steer carefully,
  • 15:24 - 15:28
    we could end up in a fantastic future
    where everybody's better off:
  • 15:28 - 15:30
    the poor are richer, the rich are richer,
  • 15:30 - 15:34
    everybody is healthy
    and free to live out their dreams.
  • 15:35 - 15:37
    Now, hang on.
  • 15:37 - 15:41
    Do you folks want the future
    that's politically right or left?
  • 15:41 - 15:44
    Do you want the pious society
    with strict moral rules,
  • 15:44 - 15:46
    or do you want a hedonistic free-for-all,
  • 15:46 - 15:48
    more like Burning Man 24/7?
  • 15:48 - 15:51
    Do you want beautiful beaches,
    forests and lakes,
  • 15:51 - 15:54
    or would you prefer to rearrange
    some of those atoms with the computers,
  • 15:54 - 15:56
    enabling virtual experiences?
  • 15:56 - 15:59
    With friendly AI, we could simply
    build all of these societies
  • 15:59 - 16:02
    and give people the freedom
    to choose which one they want to live in
  • 16:02 - 16:05
    because we would no longer
    be limited by our intelligence,
  • 16:05 - 16:07
    merely by the laws of physics.
  • 16:07 - 16:11
    So the resources and space
    for this would be astronomical --
  • 16:11 - 16:13
    literally.
  • 16:13 - 16:15
    So here's our choice.
  • 16:16 - 16:18
    We can either be complacent
    about our future,
  • 16:19 - 16:22
    taking as an article of blind faith
  • 16:22 - 16:26
    that any new technology
    is guaranteed to be beneficial,
  • 16:26 - 16:30
    and just repeat that to ourselves
    as a mantra over and over and over again
  • 16:30 - 16:34
    as we drift like a rudderless ship
    towards our own obsolescence.
  • 16:35 - 16:37
    Or we can be ambitious --
  • 16:38 - 16:40
    thinking hard about how
    to steer our technology
  • 16:40 - 16:42
    and where we want to go with it
  • 16:42 - 16:44
    to create the age of amazement.
  • 16:45 - 16:48
    We're all here to celebrate
    the age of amazement,
  • 16:48 - 16:52
    and I feel that its essence should lie
    in becoming not overpowered
  • 16:53 - 16:56
    but empowered by our technology.
  • 16:56 - 16:57
    Thank you.
  • 16:57 - 17:00
    (Applause)
Title: How to get empowered, not overpowered, by AI
Speaker: Max Tegmark
Video Language: English
Team: closed TED
Project: TEDTalks
Duration: 17:15
