Don't fear superintelligent AI
-
0:01 - 0:05When I was a kid,
I was the quintessential nerd. -
0:05 - 0:07I think some of you were, too.
-
0:08 - 0:09(Laughter)
-
0:09 - 0:12And you, sir, who laughed the loudest,
you probably still are. -
0:12 - 0:14(Laughter)
-
0:14 - 0:18I grew up in a small town
in the dusty plains of north Texas, -
0:18 - 0:21the son of a sheriff
who was the son of a pastor. -
0:21 - 0:23Getting into trouble was not an option.
-
0:24 - 0:27And so I started reading
calculus books for fun. -
0:27 - 0:29(Laughter)
-
0:29 - 0:31You did, too.
-
0:31 - 0:34That led me to building a laser
and a computer and model rockets, -
0:34 - 0:37and that led me to making
rocket fuel in my bedroom. -
0:38 - 0:42Now, in scientific terms,
-
0:42 - 0:45we call this a very bad idea.
-
0:45 - 0:46(Laughter)
-
0:46 - 0:48Around that same time,
-
0:48 - 0:52Stanley Kubrick's "2001: A Space Odyssey"
came to the theaters, -
0:52 - 0:54and my life was forever changed.
-
0:54 - 0:56I loved everything about that movie,
-
0:56 - 0:59especially the HAL 9000.
-
0:59 - 1:01Now, HAL was a sentient computer
-
1:01 - 1:03designed to guide the Discovery spacecraft
-
1:03 - 1:06from the Earth to Jupiter.
-
1:06 - 1:08HAL was also a flawed character,
-
1:08 - 1:12for in the end he chose
to value the mission over human life. -
1:13 - 1:15Now, HAL was a fictional character,
-
1:15 - 1:18but nonetheless he speaks to our fears,
-
1:18 - 1:20our fears of being subjugated
-
1:20 - 1:23by some unfeeling, artificial intelligence
-
1:23 - 1:25who is indifferent to our humanity.
-
1:26 - 1:28I believe that such fears are unfounded.
-
1:28 - 1:31Indeed, we stand at a remarkable time
-
1:31 - 1:33in human history,
-
1:33 - 1:38where, driven by refusal to accept
the limits of our bodies and our minds, -
1:38 - 1:39we are building machines
-
1:39 - 1:43of exquisite, beautiful
complexity and grace -
1:43 - 1:45that will extend the human experience
-
1:45 - 1:47in ways beyond our imagining.
-
1:48 - 1:50After a career that led me
from the Air Force Academy -
1:50 - 1:52to Space Command to now,
-
1:52 - 1:54I became a systems engineer,
-
1:54 - 1:57and recently I was drawn
into an engineering problem -
1:57 - 1:59associated with NASA's mission to Mars.
-
1:59 - 2:02Now, in space flights to the Moon,
-
2:02 - 2:05we can rely upon
mission control in Houston -
2:05 - 2:07to watch over all aspects of a flight.
-
2:07 - 2:11However, Mars is 200 times further away,
-
2:11 - 2:14and as a result it takes
on average 13 minutes -
2:14 - 2:17for a signal to travel
from the Earth to Mars. -
2:17 - 2:20If there's trouble,
there's not enough time. -
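The 13-minute figure is roughly the one-way light-travel time at the average Earth-Mars separation. A quick sketch to check it (the 225 million km average distance is an assumed round figure, not from the talk):

```python
# Rough check of the one-way signal delay quoted in the talk.
# 225 million km is an assumed average Earth-Mars separation;
# the true distance varies from ~55 to ~400 million km.
AVG_EARTH_MARS_KM = 225_000_000
SPEED_OF_LIGHT_KM_S = 299_792.458  # signals travel at light speed

delay_minutes = AVG_EARTH_MARS_KM / SPEED_OF_LIGHT_KM_S / 60
print(f"One-way delay: {delay_minutes:.1f} minutes")  # about 12.5 minutes
```

A round trip (question plus answer) doubles that, which is why a crew in trouble cannot wait on Houston.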
2:21 - 2:23And so a reasonable engineering solution
-
2:23 - 2:26calls for us to put mission control
-
2:26 - 2:29inside the walls of the Orion spacecraft.
-
2:29 - 2:32Another fascinating idea
in the mission profile -
2:32 - 2:35places humanoid robots
on the surface of Mars -
2:35 - 2:37before the humans themselves arrive,
-
2:37 - 2:38first to build facilities
-
2:38 - 2:42and later to serve as collaborative
members of the science team. -
2:43 - 2:46Now, as I looked at this
from an engineering perspective, -
2:46 - 2:49it became very clear to me
that what I needed to architect -
2:49 - 2:52was a smart, collaborative,
-
2:52 - 2:54socially intelligent
artificial intelligence. -
2:54 - 2:58In other words, I needed to build
something very much like a HAL -
2:58 - 3:01but without the homicidal tendencies.
-
3:01 - 3:02(Laughter)
-
3:03 - 3:05Let's pause for a moment.
-
3:05 - 3:09Is it really possible to build
an artificial intelligence like that? -
3:09 - 3:10Actually, it is.
-
3:10 - 3:11In many ways,
-
3:11 - 3:13this is a hard engineering problem
-
3:13 - 3:15with elements of AI,
-
3:15 - 3:20not some wet hair ball of an AI problem
that needs to be engineered. -
3:20 - 3:22To paraphrase Alan Turing,
-
3:22 - 3:25I'm not interested
in building a sentient machine. -
3:25 - 3:26I'm not building a HAL.
-
3:26 - 3:29All I'm after is a simple brain,
-
3:29 - 3:32something that offers
the illusion of intelligence. -
3:33 - 3:36The art and the science of computing
have come a long way -
3:36 - 3:38since HAL was onscreen,
-
3:38 - 3:41and I'd imagine if his inventor
Dr. Chandra were here today, -
3:41 - 3:43he'd have a whole lot of questions for us.
-
3:43 - 3:45Is it really possible for us
-
3:45 - 3:49to take a system of millions
upon millions of devices, -
3:49 - 3:51to read in their data streams,
-
3:51 - 3:53to predict their failures
and act in advance? -
3:53 - 3:54Yes.
-
3:54 - 3:58Can we build systems that converse
with humans in natural language? -
3:58 - 3:59Yes.
-
3:59 - 4:02Can we build systems
that recognize objects, identify emotions, -
4:02 - 4:05emote themselves,
play games and even read lips? -
4:05 - 4:06Yes.
-
4:07 - 4:09Can we build a system that sets goals,
-
4:09 - 4:12that carries out plans against those goals
and learns along the way? -
4:12 - 4:14Yes.
-
4:14 - 4:17Can we build systems
that have a theory of mind? -
4:17 - 4:18This we are learning to do.
-
4:18 - 4:22Can we build systems that have
an ethical and moral foundation? -
4:22 - 4:25This we must learn how to do.
-
4:25 - 4:27So let's accept for a moment
-
4:27 - 4:30that it's possible to build
such an artificial intelligence -
4:30 - 4:32for this kind of mission and others.
-
4:32 - 4:34The next question
you must ask yourself is, -
4:34 - 4:36should we fear it?
-
4:36 - 4:38Now, every new technology
-
4:38 - 4:41brings with it
some measure of trepidation. -
4:41 - 4:42When we first saw cars,
-
4:43 - 4:47people lamented that we would see
the destruction of the family. -
4:47 - 4:49When we first saw telephones come in,
-
4:49 - 4:52people were worried it would destroy
all civil conversation. -
4:52 - 4:56When the written word
became pervasive, -
4:56 - 4:59people thought we would lose
our ability to memorize. -
4:59 - 5:01These things are all true to a degree,
-
5:01 - 5:03but it's also the case
that these technologies -
5:03 - 5:07brought to us things
that extended the human experience -
5:07 - 5:08in some profound ways.
-
5:10 - 5:12So let's take this a little further.
-
5:13 - 5:18I do not fear the creation
of an AI like this, -
5:18 - 5:22because it will eventually
embody some of our values. -
5:22 - 5:25Consider this: building a cognitive system
is fundamentally different -
5:25 - 5:29than building a traditional
software-intensive system of the past. -
5:29 - 5:31We don't program them. We teach them.
-
5:31 - 5:34In order to teach a system
how to recognize flowers, -
5:34 - 5:37I show it thousands of flowers
of the kinds I like. -
5:37 - 5:39In order to teach a system
how to play a game -- -
5:39 - 5:41Well, I would. You would, too.
-
5:43 - 5:45I like flowers. Come on.
-
5:45 - 5:48To teach a system
how to play a game like Go, -
5:48 - 5:50I'd have it play thousands of games of Go,
-
5:50 - 5:52but in the process I also teach it
-
5:52 - 5:54how to discern
a good game from a bad game. -
5:55 - 5:58If I want to create an artificially
intelligent legal assistant, -
5:58 - 6:00I will teach it some corpus of law
-
6:00 - 6:03but at the same time I am fusing with it
-
6:03 - 6:06the sense of mercy and justice
that is part of that law. -
6:07 - 6:10In scientific terms,
this is what we call ground truth, -
6:10 - 6:12and here's the important point:
-
6:12 - 6:13in producing these machines,
-
6:13 - 6:16we are therefore teaching them
a sense of our values. -
6:17 - 6:20To that end, I trust
an artificial intelligence -
6:20 - 6:23the same as, if not more than,
a well-trained human. -
6:24 - 6:25But, you may ask,
-
6:25 - 6:28what about rogue agents,
-
6:28 - 6:31some well-funded
nongovernment organization? -
6:31 - 6:35I do not fear an artificial intelligence
in the hand of a lone wolf. -
6:35 - 6:40Clearly, we cannot protect ourselves
against all random acts of violence, -
6:40 - 6:42but the reality is such a system
-
6:42 - 6:45requires substantial training
and subtle training -
6:45 - 6:47far beyond the resources of an individual.
-
6:47 - 6:49And furthermore,
-
6:49 - 6:52it's far more than just injecting
an internet virus to the world, -
6:52 - 6:55where you push a button,
all of a sudden it's in a million places -
6:55 - 6:57and laptops start blowing up
all over the place. -
6:57 - 7:00Now, these kinds of systems
are much larger, -
7:00 - 7:02and we'll certainly see them coming.
-
7:03 - 7:06Do I fear that such
an artificial intelligence -
7:06 - 7:08might threaten all of humanity?
-
7:08 - 7:13If you look at movies
such as "The Matrix," "Metropolis," -
7:13 - 7:16"The Terminator,"
shows such as "Westworld," -
7:16 - 7:18they all speak of this kind of fear.
-
7:18 - 7:22Indeed, in the book "Superintelligence"
by the philosopher Nick Bostrom, -
7:22 - 7:24he picks up on this theme
-
7:24 - 7:28and observes that a superintelligence
might not only be dangerous, -
7:28 - 7:32it could represent an existential threat
to all of humanity. -
7:32 - 7:34Dr. Bostrom's basic argument
-
7:34 - 7:37is that such systems will eventually
-
7:37 - 7:40have such an insatiable
thirst for information -
7:40 - 7:43that they will perhaps learn how to learn
-
7:43 - 7:46and eventually discover
that they may have goals -
7:46 - 7:48that are contrary to human needs.
-
7:48 - 7:50Dr. Bostrom has a number of followers.
-
7:50 - 7:54He is supported by people
such as Elon Musk and Stephen Hawking. -
7:55 - 7:57With all due respect
-
7:58 - 8:00to these brilliant minds,
-
8:00 - 8:02I believe that they
are fundamentally wrong. -
8:02 - 8:06Now, there are a lot of pieces
of Dr. Bostrom's argument to unpack, -
8:06 - 8:08and I don't have time to unpack them all,
-
8:08 - 8:11but very briefly, consider this:
-
8:11 - 8:14super knowing is very different
than super doing. -
8:14 - 8:16HAL was a threat to the Discovery crew
-
8:16 - 8:21only insofar as HAL commanded
all aspects of the Discovery. -
8:21 - 8:23So it would have to be
with a superintelligence. -
8:23 - 8:26It would have to have dominion
over all of our world. -
8:26 - 8:29This is the stuff of Skynet
from the movie "The Terminator" -
8:29 - 8:30in which we had a superintelligence
-
8:30 - 8:32that commanded human will,
-
8:32 - 8:36that directed every device
that was in every corner of the world. -
8:36 - 8:37Practically speaking,
-
8:37 - 8:39it ain't gonna happen.
-
8:39 - 8:42We are not building AIs
that control the weather, -
8:42 - 8:44that direct the tides,
-
8:44 - 8:47that command us
capricious, chaotic humans. -
8:47 - 8:51And furthermore, if such
an artificial intelligence existed, -
8:51 - 8:54it would have to compete
with human economies, -
8:54 - 8:57and thereby compete for resources with us.
-
8:57 - 8:58And in the end --
-
8:58 - 9:00don't tell Siri this --
-
9:00 - 9:02we can always unplug them.
-
9:02 - 9:04(Laughter)
-
9:05 - 9:08We are on an incredible journey
-
9:08 - 9:10of coevolution with our machines.
-
9:10 - 9:13The humans we are today
-
9:13 - 9:15are not the humans we will be then.
-
9:15 - 9:19To worry now about the rise
of a superintelligence -
9:19 - 9:22is in many ways a dangerous distraction
-
9:22 - 9:24because the rise of computing itself
-
9:24 - 9:27brings to us a number
of human and societal issues -
9:27 - 9:29to which we must now attend.
-
9:29 - 9:32How shall I best organize society
-
9:32 - 9:35when the need for human labor diminishes?
-
9:35 - 9:38How can I bring understanding
and education throughout the globe -
9:38 - 9:40and still respect our differences?
-
9:40 - 9:44How might I extend and enhance human life
through cognitive healthcare? -
9:44 - 9:47How might I use computing
-
9:47 - 9:49to help take us to the stars?
-
9:50 - 9:52And that's the exciting thing.
-
9:52 - 9:55The opportunities to use computing
-
9:55 - 9:56to advance the human experience
-
9:56 - 9:58are within our reach,
-
9:58 - 10:00here and now,
-
10:00 - 10:01and we are just beginning.
-
10:02 - 10:03Thank you very much.
-
10:04 - 10:08(Applause)
- Title:
- Don't fear superintelligent AI
- Speaker:
- Grady Booch
- Description:
-
New tech spawns new anxieties, says scientist and philosopher Grady Booch, but we don't need to fear an all-powerful, unfeeling AI. Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we'll teach, not program, them to share our values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.
- Video Language:
- English
- Team:
- closed TED
- Project:
- TEDTalks
- Duration:
- 10:20