After 13.8 billion years
of cosmic history,
our universe has woken up
and become aware of itself.
From a small blue planet,
tiny, conscious parts of our universe
have begun gazing out into the cosmos
with telescopes,
discovering something humbling.
We've discovered that our universe
is vastly grander
than our ancestors imagined,
and that life seems to be an almost
imperceptibly small perturbation
on an otherwise dead universe.
But we've also discovered
something inspiring,
which is that the technology
we're developing has the potential
to help life flourish like never before,
not just for centuries
but for billions of years,
and not just on Earth but throughout
much of this amazing cosmos.
I think of the earliest life as Life 1.0
because it was really dumb,
like bacteria unable to learn
anything during its lifetime.
I think of us humans as Life 2.0
because we can learn,
which in nerdy geek speak
we might think of as installing
new software into our brains,
like languages and job skills.
Life 3.0, which can design not only
its software but also its hardware
of course doesn't exist yet.
But perhaps our technology
has already made us Life 2.1,
with our artificial knees, pacemakers
and cochlear implants.
So let's take a closer look
at our relationship with technology, OK?
As an example,
the Apollo 11 Moon Mission was both
successful and inspiring,
showing that when we humans
use technology wisely,
we can accomplish things
that our ancestors could only dream of.
But there's an even more inspiring journey
propelled by something
more powerful than rocket engines ...
with passengers who
aren't just three astronauts
but all of humanity.
Let's talk about our collective
journey into the future
with artificial intelligence.
My friend Jaan Tallinn likes to point out
that just as with rocketry,
it's not enough to make
our technology powerful.
We also have to figure out,
if we're going to be really ambitious,
how to steer it and where
we want to go with it.
So let's talk about all three
for artificial intelligence:
the power, the steering
and the destination.
Let's start with the power.
I define intelligence very inclusively --
simply as our ability
to accomplish complex goals
because I want to include both
biological and artificial intelligence
and I want to avoid the silly
carbon-chauvinism idea
that you can only be smart
if you're made of meat.
It's really amazing how the power
of AI has grown recently.
Just think about it.
Not long ago,
robots couldn't walk.
Now, they can do backflips.
Not long ago,
we didn't have self-driving cars.
Now, we have self-flying rockets.
Not long ago,
AI couldn't do face recognition.
Now, AI can generate fake faces
and simulate your face saying stuff
that you never said.
Not long ago,
AI couldn't beat us at the game of Go.
Then, Google DeepMind's AlphaZero AI
took 3,000 years of human Go games
and Go wisdom,
ignored it all and became the world's best
player by just playing against itself.
And the most impressive feat here
wasn't that it crushed human gamers,
but that it crushed human AI researchers
who had spent decades hand-crafting
gameplaying software.
And AlphaZero crushed human AI researchers
not just in Go but even at chess,
which we had been working on since 1950.
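(To make the self-play idea concrete, here is a minimal sketch in Python. This is not DeepMind's AlphaZero: the game, the tabular value estimates and the update rule are stand-ins of my own choosing. It only illustrates how a program can get better at a game purely by playing against itself, with no human examples.)

```python
# Toy illustration of self-play learning (hypothetical sketch, not AlphaZero).
# Two copies of the same value table play the game of Nim against each other,
# and the win/loss outcomes are used to improve that table -- no human games needed.
import random
from collections import defaultdict

PILE = 10                      # Nim: remove 1-3 stones per turn; taking the last stone wins
values = defaultdict(float)    # estimated value of each (stones_left, move) pair

def choose(stones, explore=0.1):
    """Pick a move greedily from the value table, with some random exploration."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: values[(stones, m)])

for game in range(20000):            # self-play: the same agent plays both sides
    stones, history = PILE, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # the player who made the last move won; propagate +1 / -1 back through the game
    reward = 1.0
    for state in reversed(history):
        values[state] += 0.05 * (reward - values[state])
        reward = -reward             # alternate sign for the opposing player's moves

print("Preferred opening move from 10 stones:",
      max((1, 2, 3), key=lambda m: values[(PILE, m)]))
# (For this Nim variant, known optimal play is to take 2, leaving a multiple of 4.)
```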
So all this amazing recent progress in AI
really begs the question:
how far will it go?
I like to think about this question
in terms of this abstract
landscape of tasks,
where the elevation represents
how hard it is for AI to do each task
at human level,
and the sea level represents
what AI can do today.
The sea level is rising
as the AI improves,
so there's a kind of global warming
going on here in the task landscape.
And the obvious takeaway is to avoid
careers at the waterfront --
(Laughter)
which will soon be
automated and disrupted.
But there's a much
bigger question as well.
How high will the water end up rising?
Will it eventually rise
to flood everything,
matching human intelligence at all tasks?
This is the definition
of artificial general intelligence --
AGI,
which has been the holy grail
of AI research since its inception,
but by this definition,
people who say,
"Ah, there will always be jobs
that humans can do better than machines,"
are simply saying
that we'll never get AGI.
Sure, we might still choose to have
some human jobs
or to give humans income
and purpose with our jobs,
but AGI will in any case transform
life as we know it
with humans no longer being
the most intelligent.
Now if the water level does reach AGI,
then further AI progress will be driven
mainly not by humans but by AI,
which means that there's a possibility
that further AI progress
could be way faster than the typical
human research and development
time scale of years,
raising the controversial possibility
of an intelligence explosion
where recursively self-improving AI
rapidly leaves human
intelligence far behind,
creating what's known
as superintelligence.
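(As a toy illustration of why recursive self-improvement could outpace human research, here is a small Python sketch with made-up numbers: one process improves by a fixed amount per year, like human-paced R&D, while the other improves in proportion to its current capability.)

```python
# Toy model of recursive self-improvement (illustrative numbers, not a prediction).
# Human-paced R&D adds a fixed amount of capability per year;
# a self-improving AI adds capability in proportion to what it already has.

def years_to_reach(target, rate_per_year, capability=1.0, recursive=False):
    """Count the years until capability reaches the target level."""
    years = 0
    while capability < target:
        gain = rate_per_year * (capability if recursive else 1.0)
        capability += gain
        years += 1
    return years

TARGET = 1000.0  # an arbitrary stand-in for "far beyond human level"
print("Human-paced R&D:", years_to_reach(TARGET, rate_per_year=0.5), "years")
print("Self-improving :", years_to_reach(TARGET, rate_per_year=0.5, recursive=True), "years")
```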
All right, reality check:
are we going to get AGI any time soon?
Some famous AI researchers
like Rodney Brooks think
it won't happen for hundreds of years.
But others, like Google DeepMind
founder Demis Hassabis,
are more optimistic
and are working to try to make
it happen much sooner.
And recent surveys have shown
that most AI researchers
actually share Demis's optimism,
expecting that we will get AGI
within decades,
so within the lifetime of many of us,
which begs the question --
and then what?
What do we want the role of humans to be
if machines can do everything better
and cheaper than us?
The way I see it, we face a choice.
One option is to be complacent.
We can say, "Oh, let's just build machines
that can do everything we can do
and not worry about the consequences.
Come on, if we build technology
that makes all humans obsolete,
what could possibly go wrong?"
(Laughter)
But I think that would be
embarrassingly lame.
I think we should be more ambitious --
in the spirit of TED.
Let's envision the truly inspiring
high-tech future
and try to steer towards it.
This brings us to the second part
of our rocket metaphor:
the steering.
We're making AI more powerful,
but how can we steer towards the future
where AI helps humanity flourish
rather than flounder?
To help with this,
I co-founded the Future of Life Institute.
It's a small non-profit promoting
beneficial technology use
and our goal is simply
for the future of life to exist
and be as inspiring as possible.
You know, I love technology.
Technology is why today is better
than the Stone Age.
And I'm optimistic that we can create
a really inspiring high-tech future,
if --
and this is a big if --
if we win the wisdom race --
the race between the growing
power of our technology
and the growing wisdom
with which we manage it.
But this is going to require
a change of strategy
because our old strategy has been
learning from mistakes.
We invented fire,
screwed up a bunch of times,
invented the fire extinguisher.
(Laughter)
We invented the car,
screwed up a bunch of times,
invented the traffic light,
the seatbelt and the airbag,
but with more powerful technology
like nuclear weapons and AGI,
learning from mistakes is a lousy strategy,
don't you think?
(Laughter)
It's much better to be proactive
rather than be reactive;
plan ahead and get things
right the first time
because that might be
the only time we'll get.
But it is funny because
sometimes people tell me,
"Max,
ssshhhh,
don't talk like that.
That's Luddite scare-mongering."
But it's not scare-mongering.
It's what we at MIT
call safety engineering.
Think about it:
before NASA launched
the Apollo 11 Mission,
they systematically thought through
everything that could go wrong
when you put people on top of
explosive fuel tanks
and launch them somewhere
where no one could help them.
And there was a lot that could go wrong.
Was that scare-mongering?
No.
That was precisely
the safety engineering
that ensured the success of the mission,
and that is precisely the strategy
I think we should take with AGI.
Think through what can go wrong
to make sure it goes right.
So in this spirit,
we've organized conferences,
bringing together leading AI researchers
and other thinkers
who discuss how to grow this wisdom
we need to keep AI beneficial.
Our last conference
was in Asilomar, California last year
and produced this list of 23 principles
which have since been signed
by over 1,000 AI researchers
and key industry leaders.
And I want to tell you
about three of these principles.
One is that we should avoid an arms race
in lethal autonomous weapons.
The idea here is that any science
can be used for new ways of helping people
or new ways of harming people.
For example, biology and chemistry
are much more likely to be used
for new medicines or new cures
than for new ways of killing people,
because biologists and chemists
pushed hard --
and successfully --
for bans on biological
and chemical weapons.
And in the same spirit,
most AI researchers want to stigmatize
and ban lethal autonomous weapons.
Another Asilomar AI principle
is that we should mitigate
AI-fueled income inequality.
I think that if we can grow
the economic pie dramatically with AI,
and we still can't figure out how
to divide this pie
so that everyone gets better off,
then shame on us.
(Applause)
All right, now raise your hand
if your computer has ever crashed.
(Laughter)
Wow, that's a lot of hands.
Well, then you'll appreciate
this principle
that we should invest much more
in AI safety research,
because as we put AI in charge
of more decisions and infrastructure,
we need to figure out how to transform
today's buggy and hackable computers
into robust AI systems
that we can really trust,
because otherwise,
all this awesome new technology
can malfunction and harm us
or get hacked and be turned against us.
And this AI safety work has to include
work on AI value alignment,
because the real threat
from AGI isn't malice,
like in silly Hollywood movies,
but competence:
AGI accomplishing goals that just
aren't aligned with ours.
For example,
when we humans drove
the West African black rhino extinct,
we didn't do it because we're a bunch
of evil rhinoceros haters,
did we?
We did it because we were
smarter than them
and our goals weren't aligned with theirs.
But AGI is by definition smarter than us,
so to make sure that we don't put
ourselves in the position of those rhinos
if we create AGI,
we need to figure out how to make machines
understand our goals,
adopt our goals
and retain our goals.
And whose goals should these be, anyway?
Which goals should they be?
This brings us to the third part
of our rocket metaphor:
the destination.
We're making AI more powerful,
trying to figure out how to steer it,
but where do we want to go with it?
This is the elephant in the room
that almost nobody talks about --
not even here at TED --
because we're so fixated
on short-term AI challenges.
Look, our species is trying
to build AGI,
motivated by curiosity and economics,
but what sort of future society
are we hoping for if we succeed?
We did an opinion poll on this recently,
and I was struck to see
that most people actually want us
to build superintelligence:
AI that's vastly smarter
than us in all ways.
What there was the greatest agreement on
was that we should be ambitious
and help life spread into the cosmos,
but there was much less agreement
about who or what should be in charge.
And I was actually quite amused
to see that there are some people
who want it to be just the machines.
(Laughter)
And there was total disagreement
about what the role of humans should be,
even at the most basic level,
so let's take a closer look
at possible futures
that we might choose to steer towards.
So don't get me wrong here;
I'm not talking about space travel,
merely about humanity's
metaphorical journey into the future.
So one option that some
of my AI colleagues like
is to build superintelligence
and keep it under human control,
like an enslaved god,
disconnected from the internet
and used to create unimaginable
technology and wealth
for whoever controls it.
But Lord Acton warned us
that power corrupts,
and absolute power corrupts absolutely,
so you might worry that maybe
we humans just aren't smart enough,
or wise enough rather,
to handle this much power.
Also, aside from any moral
qualms you might have
about enslaving superior minds,
you might worry that maybe
the superintelligence could outsmart us,
break out
and take over.
But I also have colleagues who are fine
with AI taking over
and even causing human extinction,
as long as we feel the AIs
are our worthy descendants,
like our children.
But how would we know that the AIs
have adopted our best values,
and aren't just unconscious zombies
tricking us into anthropomorphizing them?
Also, shouldn't those people who don't
want human extinction
have a say in the matter, too?
Now, if you didn't like either
of those two high-tech options,
it's important to remember
that low-tech is suicide
from a cosmic perspective,
because if we don't go far beyond
today's technology,
the question isn't whether humanity
is going to go extinct,
merely whether we're going to be
taken out by the next killer asteroid,
super volcano
or some other problem that better
technology could have solved.
So, how about having
our cake and eating it ...
with AGI that's not enslaved
but treats us well because its values
are aligned with ours?
This is the gist of what Eliezer Yudkowsky
has called "Friendly AI,"
and if we can do this,
it could be awesome.
It could not only eliminate negative
experiences like disease, poverty,
crime and other suffering,
but it could also give us
the freedom to choose
from a fantastic new diversity
of positive experiences --
basically making us the masters
of our own destiny.
So in summary,
our situation with technology
is complicated,
but the big picture is rather simple.
Most AI researchers expect AGI
within decades,
and if we just bumble
into this unprepared,
it will probably be the biggest
mistake in human history --
let's face it.
It could enable brutal,
global dictatorship
with unprecedented inequality,
surveillance and suffering,
and maybe even human extinction.
But if we steer carefully,
we could end up in a fantastic future,
where everybody's better off:
the poor are richer,
the rich are richer,
everybody is healthy and free
to live out their dreams.
Now, hang on.
Do you folks want the future
that's politically right or left?
Do you want the pious society
with strict moral rules,
or do you want a hedonistic free-for-all,
more like Burning Man 24/7?
Do you want beautiful beaches,
forests and lakes
or would you prefer to rearrange
some of those atoms
with your computers,
enabling virtual experiences?
With friendly AI,
we could simply build
all of these societies
and give people the freedom to choose
which one they want to live in,
because we would no longer
be limited by our intelligence,
merely by the laws of physics.
So the resources and space for this
would be astronomical --
literally.
So here's our choice.
We can either be complacent
about our future,
taking as an article of blind faith
that any new technology
is guaranteed to be beneficial,
and just repeat that to ourselves
as a mantra over and over and over again
as we drift like a rudderless ship
towards our own obsolescence.
Or we can be ambitious --
thinking hard about how
to steer our technology
and where we want to go with it
to create the age of amazement.
We're all here to celebrate
the age of amazement,
and I feel that its essence should lie
in becoming not overpowered
but empowered by our technology.
Thank you.
(Applause)