How to get empowered, not overpowered, by AI
-
0:01 - 0:05After 13.8 billion years
of cosmic history, -
0:05 - 0:07our universe has woken up
-
0:07 - 0:09and become aware of itself.
-
0:09 - 0:11From a small blue planet,
-
0:11 - 0:16tiny, conscious parts of our universe
have begun gazing out into the cosmos -
0:16 - 0:17with telescopes,
-
0:17 - 0:18discovering something humbling.
-
0:19 - 0:22We've discovered that our universe
is vastly grander -
0:22 - 0:24than our ancestors imagined
-
0:24 - 0:28and that life seems to be an almost
imperceptibly small perturbation -
0:28 - 0:30on an otherwise dead universe.
-
0:30 - 0:33But we've also discovered
something inspiring, -
0:33 - 0:36which is that the technology
we're developing has the potential -
0:36 - 0:39to help life flourish like never before,
-
0:39 - 0:42not just for centuries
but for billions of years, -
0:42 - 0:46and not just on earth but throughout
much of this amazing cosmos. -
0:48 - 0:51I think of the earliest life as "Life 1.0"
-
0:51 - 0:52because it was really dumb,
-
0:52 - 0:57like bacteria, unable to learn
anything during its lifetime. -
0:57 - 1:00I think of us humans as "Life 2.0"
because we can learn, -
1:00 - 1:02which we in nerdy, geek speak,
-
1:02 - 1:05might think of as installing
new software into our brains, -
1:05 - 1:07like languages and job skills.
-
1:08 - 1:12"Life 3.0," which can design not only
its software but also its hardware -
1:12 - 1:14of course doesn't exist yet.
-
1:14 - 1:17But perhaps our technology
has already made us "Life 2.1," -
1:17 - 1:22with our artificial knees,
pacemakers and cochlear implants. -
1:22 - 1:26So let's take a closer look
at our relationship with technology, OK? -
1:27 - 1:28As an example,
-
1:28 - 1:33the Apollo 11 moon mission
was both successful and inspiring, -
1:33 - 1:36showing that when we humans
use technology wisely, -
1:36 - 1:40we can accomplish things
that our ancestors could only dream of. -
1:40 - 1:43But there's an even more inspiring journey
-
1:43 - 1:46propelled by something
more powerful than rocket engines, -
1:47 - 1:50where the passengers
aren't just three astronauts -
1:50 - 1:51but all of humanity.
-
1:51 - 1:54Let's talk about our collective
journey into the future -
1:54 - 1:56with artificial intelligence.
-
1:57 - 2:01My friend Jaan Tallinn likes to point out
that just as with rocketry, -
2:02 - 2:05it's not enough to make
our technology powerful. -
2:06 - 2:09We also have to figure out,
if we're going to be really ambitious, -
2:09 - 2:10how to steer it
-
2:10 - 2:12and where we want to go with it.
-
2:13 - 2:16So let's talk about all three
for artificial intelligence: -
2:16 - 2:19the power, the steering
and the destination. -
2:20 - 2:21Let's start with the power.
-
2:22 - 2:25I define intelligence very inclusively --
-
2:25 - 2:29simply as our ability
to accomplish complex goals, -
2:29 - 2:33because I want to include both
biological and artificial intelligence. -
2:33 - 2:37And I want to avoid
the silly carbon-chauvinism idea -
2:37 - 2:39that you can only be smart
if you're made of meat. -
2:41 - 2:45It's really amazing how the power
of AI has grown recently. -
2:45 - 2:46Just think about it.
-
2:46 - 2:50Not long ago, robots couldn't walk.
-
2:51 - 2:53Now, they can do backflips.
-
2:54 - 2:56Not long ago,
-
2:56 - 2:58we didn't have self-driving cars.
-
2:59 - 3:01Now, we have self-flying rockets.
-
3:04 - 3:05Not long ago,
-
3:05 - 3:08AI couldn't do face recognition.
-
3:08 - 3:11Now, AI can generate fake faces
-
3:11 - 3:15and simulate your face
saying stuff that you never said. -
3:16 - 3:18Not long ago,
-
3:18 - 3:20AI couldn't beat us at the game of Go.
-
3:20 - 3:25Then, Google DeepMind's AlphaZero AI
took 3,000 years of human Go games -
3:26 - 3:27and Go wisdom,
-
3:27 - 3:32ignored it all and became the world's best
player by just playing against itself. -
3:32 - 3:35And the most impressive feat here
wasn't that it crushed human gamers, -
3:36 - 3:38but that it crushed human AI researchers
-
3:38 - 3:42who had spent decades
handcrafting game-playing software. -
3:42 - 3:47And AlphaZero crushed human AI researchers
not just in Go but even at chess, -
3:47 - 3:49which we have been working on since 1950.
-
3:50 - 3:54So all this amazing recent progress in AI
really begs the question: -
3:55 - 3:57How far will it go?
-
3:58 - 3:59I like to think about this question
-
4:00 - 4:02in terms of this abstract
landscape of tasks, -
4:03 - 4:06where the elevation represents
how hard it is for AI to do each task -
4:06 - 4:07at human level,
-
4:07 - 4:10and the sea level represents
what AI can do today. -
4:11 - 4:13The sea level is rising
as AI improves, -
4:13 - 4:17so there's a kind of global warming
going on here in the task landscape. -
4:18 - 4:21And the obvious takeaway
is to avoid careers at the waterfront -- -
4:21 - 4:23(Laughter)
-
4:23 - 4:26which will soon be
automated and disrupted. -
4:26 - 4:29But there's a much
bigger question as well. -
4:29 - 4:30How high will the water end up rising?
-
4:31 - 4:35Will it eventually rise
to flood everything, -
4:36 - 4:38matching human intelligence at all tasks?
-
4:38 - 4:42This is the definition
of artificial general intelligence -- -
4:42 - 4:43AGI,
-
4:43 - 4:47which has been the holy grail
of AI research since its inception. -
4:47 - 4:49By this definition, people who say,
-
4:49 - 4:52"Ah, there will always be jobs
that humans can do better than machines," -
4:52 - 4:55are simply saying
that we'll never get AGI. -
4:56 - 4:59Sure, we might still choose
to have some human jobs -
4:59 - 5:02or to give humans income
and purpose with our jobs, -
5:02 - 5:06but AGI will in any case
transform life as we know it -
5:06 - 5:09with humans no longer being
the most intelligent. -
5:09 - 5:13Now, if the water level does reach AGI,
-
5:13 - 5:18then further AI progress will be driven
mainly not by humans but by AI, -
5:18 - 5:20which means that there's a possibility
-
5:20 - 5:22that further AI progress
could be way faster -
5:22 - 5:26than the typical human research
and development timescale of years, -
5:26 - 5:30raising the controversial possibility
of an intelligence explosion -
5:30 - 5:32where recursively self-improving AI
-
5:32 - 5:35rapidly leaves human
intelligence far behind, -
5:35 - 5:38creating what's known
as superintelligence. -
5:40 - 5:42Alright, reality check:
-
5:43 - 5:46Are we going to get AGI any time soon?
-
5:46 - 5:49Some famous AI researchers,
like Rodney Brooks, -
5:49 - 5:52think it won't happen
for hundreds of years. -
5:52 - 5:55But others, like Google DeepMind
founder Demis Hassabis, -
5:56 - 5:57are more optimistic
-
5:57 - 5:59and are working to try to make
it happen much sooner. -
5:59 - 6:03And recent surveys have shown
that most AI researchers -
6:03 - 6:06actually share Demis's optimism,
-
6:06 - 6:09expecting that we will
get AGI within decades, -
6:10 - 6:12so within the lifetime of many of us,
-
6:12 - 6:14which begs the question -- and then what?
-
6:15 - 6:17What do we want the role of humans to be
-
6:17 - 6:20if machines can do everything better
and cheaper than us? -
6:23 - 6:25The way I see it, we face a choice.
-
6:26 - 6:28One option is to be complacent.
-
6:28 - 6:31We can say, "Oh, let's just build machines
that can do everything we can do -
6:31 - 6:33and not worry about the consequences.
-
6:33 - 6:36Come on, if we build technology
that makes all humans obsolete, -
6:37 - 6:39what could possibly go wrong?"
-
6:39 - 6:40(Laughter)
-
6:40 - 6:43But I think that would be
embarrassingly lame. -
6:44 - 6:48I think we should be more ambitious --
in the spirit of TED. -
6:48 - 6:51Let's envision a truly inspiring
high-tech future -
6:51 - 6:53and try to steer towards it.
-
6:54 - 6:57This brings us to the second part
of our rocket metaphor: the steering. -
6:57 - 6:59We're making AI more powerful,
-
6:59 - 7:03but how can we steer towards a future
-
7:03 - 7:06where AI helps humanity flourish
rather than flounder? -
7:07 - 7:08To help with this,
-
7:08 - 7:10I cofounded the Future of Life Institute.
-
7:10 - 7:13It's a small nonprofit promoting
beneficial technology use, -
7:13 - 7:16and our goal is simply
for the future of life to exist -
7:16 - 7:18and to be as inspiring as possible.
-
7:18 - 7:21You know, I love technology.
-
7:21 - 7:24Technology is why today
is better than the Stone Age. -
7:25 - 7:29And I'm optimistic that we can create
a really inspiring high-tech future ... -
7:30 - 7:31if -- and this is a big if --
-
7:31 - 7:34if we win the wisdom race --
-
7:34 - 7:36the race between the growing
power of our technology -
7:37 - 7:39and the growing wisdom
with which we manage it. -
7:39 - 7:42But this is going to require
a change of strategy -
7:42 - 7:45because our old strategy
has been learning from mistakes. -
7:45 - 7:47We invented fire,
-
7:47 - 7:48screwed up a bunch of times --
-
7:48 - 7:50invented the fire extinguisher.
-
7:50 - 7:52(Laughter)
-
7:52 - 7:54We invented the car,
screwed up a bunch of times -- -
7:54 - 7:57invented the traffic light,
the seat belt and the airbag, -
7:57 - 8:01but with more powerful technology
like nuclear weapons and AGI, -
8:01 - 8:04learning from mistakes
is a lousy strategy, -
8:04 - 8:05don't you think?
-
8:05 - 8:06(Laughter)
-
8:06 - 8:09It's much better to be proactive
rather than reactive; -
8:09 - 8:11plan ahead and get things
right the first time -
8:11 - 8:14because that might be
the only time we'll get. -
8:14 - 8:16But it is funny because
sometimes people tell me, -
8:16 - 8:19"Max, shhh, don't talk like that.
-
8:19 - 8:21That's Luddite scaremongering."
-
8:22 - 8:24But it's not scaremongering.
-
8:24 - 8:26It's what we at MIT
call safety engineering. -
8:27 - 8:28Think about it:
-
8:28 - 8:31before NASA launched
the Apollo 11 mission, -
8:31 - 8:34they systematically thought through
everything that could go wrong -
8:34 - 8:36when you put people
on top of explosive fuel tanks -
8:36 - 8:39and launch them somewhere
where no one could help them. -
8:39 - 8:41And there was a lot that could go wrong.
-
8:41 - 8:42Was that scaremongering?
-
8:43 - 8:44No.
-
8:44 - 8:46That was precisely
the safety engineering -
8:46 - 8:48that ensured the success of the mission,
-
8:48 - 8:53and that is precisely the strategy
I think we should take with AGI. -
8:53 - 8:57Think through what can go wrong
to make sure it goes right. -
8:57 - 8:59So in this spirit,
we've organized conferences, -
8:59 - 9:02bringing together leading
AI researchers and other thinkers -
9:02 - 9:06to discuss how to grow this wisdom
we need to keep AI beneficial. -
9:06 - 9:09Our last conference
was in Asilomar, California last year -
9:09 - 9:12and produced this list of 23 principles
-
9:12 - 9:15which have since been signed
by over 1,000 AI researchers -
9:15 - 9:16and key industry leaders,
-
9:16 - 9:20and I want to tell you
about three of these principles. -
9:20 - 9:25One is that we should avoid an arms race
and lethal autonomous weapons. -
9:25 - 9:29The idea here is that any science
can be used for new ways of helping people -
9:29 - 9:31or new ways of harming people.
-
9:31 - 9:35For example, biology and chemistry
are much more likely to be used -
9:35 - 9:39for new medicines or new cures
than for new ways of killing people, -
9:40 - 9:42because biologists
and chemists pushed hard -- -
9:42 - 9:43and successfully --
-
9:43 - 9:45for bans on biological
and chemical weapons. -
9:45 - 9:46And in the same spirit,
-
9:46 - 9:51most AI researchers want to stigmatize
and ban lethal autonomous weapons. -
9:52 - 9:53Another Asilomar AI principle
-
9:53 - 9:57is that we should mitigate
AI-fueled income inequality. -
9:57 - 10:02I think that if we can grow
the economic pie dramatically with AI -
10:02 - 10:04and we still can't figure out
how to divide this pie -
10:04 - 10:06so that everyone is better off,
-
10:06 - 10:07then shame on us.
-
10:07 - 10:11(Applause)
-
10:11 - 10:15Alright, now raise your hand
if your computer has ever crashed. -
10:15 - 10:17(Laughter)
-
10:17 - 10:18Wow, that's a lot of hands.
-
10:18 - 10:21Well, then you'll appreciate
this principle -
10:21 - 10:24that we should invest much more
in AI safety research, -
10:24 - 10:27because as we put AI in charge
of even more decisions and infrastructure, -
10:27 - 10:31we need to figure out how to transform
today's buggy and hackable computers -
10:31 - 10:34into robust AI systems
that we can really trust, -
10:34 - 10:35because otherwise,
-
10:35 - 10:38all this awesome new technology
can malfunction and harm us, -
10:38 - 10:40or get hacked and be turned against us.
-
10:40 - 10:45And this AI safety work
has to include work on AI value alignment, -
10:45 - 10:48because the real threat
from AGI isn't malice, -
10:48 - 10:50like in silly Hollywood movies,
-
10:50 - 10:52but competence --
-
10:52 - 10:55AGI accomplishing goals
that just aren't aligned with ours. -
10:55 - 11:00For example, when we humans drove
the West African black rhino extinct, -
11:00 - 11:04we didn't do it because we were a bunch
of evil rhinoceros haters, did we? -
11:04 - 11:06We did it because
we were smarter than them -
11:06 - 11:08and our goals weren't aligned with theirs.
-
11:08 - 11:11But AGI is by definition smarter than us,
-
11:11 - 11:15so to make sure that we don't put
ourselves in the position of those rhinos -
11:15 - 11:17if we create AGI,
-
11:17 - 11:21we need to figure out how
to make machines understand our goals, -
11:21 - 11:24adopt our goals and retain our goals.
-
11:25 - 11:28And whose goals should these be, anyway?
-
11:28 - 11:30Which goals should they be?
-
11:30 - 11:34This brings us to the third part
of our rocket metaphor: the destination. -
11:35 - 11:37We're making AI more powerful,
-
11:37 - 11:39trying to figure out how to steer it,
-
11:39 - 11:41but where do we want to go with it?
-
11:42 - 11:45This is the elephant in the room
that almost nobody talks about -- -
11:45 - 11:47not even here at TED --
-
11:47 - 11:51because we're so fixated
on short-term AI challenges. -
11:52 - 11:57Look, our species is trying to build AGI,
-
11:57 - 12:00motivated by curiosity and economics,
-
12:00 - 12:04but what sort of future society
are we hoping for if we succeed? -
12:05 - 12:07We did an opinion poll on this recently,
-
12:07 - 12:08and I was struck to see
-
12:08 - 12:11that most people actually
want us to build superintelligence: -
12:11 - 12:14AI that's vastly smarter
than us in all ways. -
12:15 - 12:19What there was the greatest agreement on
was that we should be ambitious -
12:19 - 12:21and help life spread into the cosmos,
-
12:21 - 12:25but there was much less agreement
about who or what should be in charge. -
12:25 - 12:27And I was actually quite amused
-
12:27 - 12:30to see that there's some people
who want it to be just machines. -
12:30 - 12:32(Laughter)
-
12:32 - 12:36And there was total disagreement
about what the role of humans should be, -
12:36 - 12:38even at the most basic level,
-
12:38 - 12:41so let's take a closer look
at possible futures -
12:41 - 12:44that we might choose
to steer toward, alright? -
12:44 - 12:45So don't get me wrong here.
-
12:45 - 12:47I'm not talking about space travel,
-
12:47 - 12:50merely about humanity's
metaphorical journey into the future. -
12:51 - 12:54So one option that some
of my AI colleagues like -
12:54 - 12:58is to build superintelligence
and keep it under human control, -
12:58 - 13:00like an enslaved god,
-
13:00 - 13:01disconnected from the internet
-
13:01 - 13:05and used to create unimaginable
technology and wealth -
13:05 - 13:06for whoever controls it.
-
13:07 - 13:08But Lord Acton warned us
-
13:08 - 13:12that power corrupts,
and absolute power corrupts absolutely, -
13:12 - 13:16so you might worry that maybe
we humans just aren't smart enough, -
13:16 - 13:18or wise enough rather,
-
13:18 - 13:19to handle this much power.
-
13:20 - 13:22Also, aside from any
moral qualms you might have -
13:22 - 13:24about enslaving superior minds,
-
13:25 - 13:28you might worry that maybe
the superintelligence could outsmart us, -
13:29 - 13:31break out and take over.
-
13:32 - 13:35But I also have colleagues
who are fine with AI taking over -
13:35 - 13:37and even causing human extinction,
-
13:37 - 13:41as long as we feel the AIs
are our worthy descendants, -
13:41 - 13:43like our children.
-
13:43 - 13:48But how would we know that the AIs
have adopted our best values -
13:48 - 13:53and aren't just unconscious zombies
tricking us into anthropomorphizing them? -
13:53 - 13:56Also, shouldn't those people
who don't want human extinction -
13:56 - 13:57have a say in the matter, too?
-
13:58 - 14:02Now, if you didn't like either
of those two high-tech options, -
14:02 - 14:05it's important to remember
that low-tech is suicide -
14:05 - 14:06from a cosmic perspective,
-
14:06 - 14:09because if we don't go far
beyond today's technology, -
14:09 - 14:11the question isn't whether humanity
is going to go extinct, -
14:11 - 14:13merely whether
we're going to get taken out -
14:13 - 14:16by the next killer asteroid, supervolcano
-
14:16 - 14:19or some other problem
that better technology could have solved. -
14:19 - 14:22So, how about having
our cake and eating it ... -
14:22 - 14:24with AGI that's not enslaved
-
14:25 - 14:28but treats us well because its values
are aligned with ours? -
14:28 - 14:32This is the gist of what Eliezer Yudkowsky
has called "friendly AI," -
14:33 - 14:35and if we can do this,
it could be awesome. -
14:36 - 14:41It could not only eliminate negative
experiences like disease, poverty, -
14:41 - 14:42crime and other suffering,
-
14:42 - 14:45but it could also give us
the freedom to choose -
14:45 - 14:49from a fantastic new diversity
of positive experiences -- -
14:49 - 14:52basically making us
the masters of our own destiny. -
14:54 - 14:56So in summary,
-
14:56 - 14:59our situation with technology
is complicated, -
14:59 - 15:01but the big picture is rather simple.
-
15:01 - 15:05Most AI researchers
expect AGI within decades, -
15:05 - 15:08and if we just bumble
into this unprepared, -
15:08 - 15:11it will probably be
the biggest mistake in human history -- -
15:11 - 15:13let's face it.
-
15:13 - 15:15It could enable brutal,
global dictatorship -
15:15 - 15:19with unprecedented inequality,
surveillance and suffering, -
15:19 - 15:21and maybe even human extinction.
-
15:21 - 15:23But if we steer carefully,
-
15:24 - 15:28we could end up in a fantastic future
where everybody's better off: -
15:28 - 15:30the poor are richer, the rich are richer,
-
15:30 - 15:34everybody is healthy
and free to live out their dreams. -
15:35 - 15:37Now, hang on.
-
15:37 - 15:41Do you folks want the future
that's politically right or left? -
15:41 - 15:44Do you want the pious society
with strict moral rules, -
15:44 - 15:46or do you want a hedonistic free-for-all,
-
15:46 - 15:48more like Burning Man 24/7?
-
15:48 - 15:51Do you want beautiful beaches,
forests and lakes, -
15:51 - 15:54or would you prefer to rearrange
some of those atoms with the computers, -
15:54 - 15:56enabling virtual experiences?
-
15:56 - 15:59With friendly AI, we could simply
build all of these societies -
15:59 - 16:02and give people the freedom
to choose which one they want to live in -
16:02 - 16:05because we would no longer
be limited by our intelligence, -
16:05 - 16:07merely by the laws of physics.
-
16:07 - 16:11So the resources and space
for this would be astronomical -- -
16:11 - 16:13literally.
-
16:13 - 16:15So here's our choice.
-
16:16 - 16:18We can either be complacent
about our future, -
16:19 - 16:22taking as an article of blind faith
-
16:22 - 16:26that any new technology
is guaranteed to be beneficial, -
16:26 - 16:30and just repeat that to ourselves
as a mantra over and over and over again -
16:30 - 16:34as we drift like a rudderless ship
towards our own obsolescence. -
16:35 - 16:37Or we can be ambitious --
-
16:38 - 16:40thinking hard about how
to steer our technology -
16:40 - 16:42and where we want to go with it
-
16:42 - 16:44to create the age of amazement.
-
16:45 - 16:48We're all here to celebrate
the age of amazement, -
16:48 - 16:52and I feel that its essence should lie
in becoming not overpowered -
16:53 - 16:56but empowered by our technology.
-
16:56 - 16:57Thank you.
-
16:57 - 17:00(Applause)
Title: How to get empowered, not overpowered, by AI
Speaker: Max Tegmark
Description: Many artificial intelligence researchers expect AI to outsmart humans at all tasks and jobs within decades, enabling a future where we're restricted only by the laws of physics, not the limits of our intelligence. MIT physicist and AI researcher Max Tegmark separates the real opportunities and threats from the myths, describing the concrete steps we should take today to ensure that AI ends up being the best -- rather than worst -- thing to ever happen to humanity.
Video Language: English
Team: closed TED
Project: TEDTalks
Duration: 17:15