-
This is Hatsune Miku.
-
She is a hologram.
-
And this is Akihiko Kondo, her husband.
-
-"Hello."
-"Hello."
-
-"You look cute today."
-"I love complements."
-
Miku is a simple form of artificial
intelligence, and for Kondo,
-
it was a case of love at first sight.
-
Miku has become a legitimate popstar
-
and even appears in concerts as a 3D
projection.
-
In November 2018, Kondo married Miku at a
ceremony in Tokyo.
-
He placed the ring around the wrist
of a Miku doll.
-
He now keeps the doll in his bedroom.
-
Kondo's relationships with real women
have been painful,
-
so he chose a virtual partner.
-
-"I love her, but it's hard to say if she
loves me. Still, if you asked her,
-
I think she'd say yes."
-
Hatsune Miku and Akihiko Kondo are an
extreme example of the relationship
-
between people and machines.
-
In the future, we'll no doubt spend more
time interacting with technology
-
that uses artificial intelligence or AI.
-
We may even develop robots that are
smarter than we are.
-
Now in the 21st century,
-
we will have to decide how to deal with
this complicated new situation.
-
For this report, we interviewed
philosophers and scientists
-
around the world.
-
We talked to German philosopher
Thomas Metzinger, who advocates
-
the adoption of ethics guidelines for AI
development by the EU.
-
Physicist Max Tegmark, who warns about
-
the development of an all-powerful AI
and a totalitarian surveillance state.
-
And German computer scientist
Jürgen Schmidhuber
-
who predicts that AI will spread from the Earth
into the cosmos.
-
We met professor Schmidhuber at a
business event in Zurich.
-
He often speaks at such events
-
where he outlines his vision of the role
that artificial intelligence
-
may play in our future.
-
-"Professor Jürgen Schmidhuber!"
-
His presentations are wide-ranging
and thought-provoking.
-
-"In the near future, perhaps a few
decades from now,
-
we will for the first time have AI
-
that could do much more than people can
do right now on their own.
-
And we will realize that the majority of
physical resources are not confined
-
to a rather small biosphere.
-
In our solar system, there is a lot of
material that can be used to build robots.
-
We could develop robots, transmitters and
receivers that would allow an AI to be
-
sent and received at the speed of light.
-
We can already do this in our
laboratories.
-
This will be a huge development.
-
Perhaps the most important since the
beginning of life on Earth,
-
three and a half billion years ago."
-
But is the professor's vision accurate?
-
Will humans at some point be overtaken by
super intelligent machines?
-
Perhaps this process has already begun.
-
To find out more, we travel to Japan.
-
Doctors and scientists at the University
of Tokyo's Research Hospital
-
are exploring the potential use
of AI in medicine.
-
69-year-old Ayako Yamashita nearly died
of leukemia two years ago.
-
None of the therapy options recommended by
doctors did any good.
-
Then they used AI technology to create
a new diagnosis.
-
"AI literally saved her life".
-
The diagnosis took all of ten minutes.
-
A human expert would have needed two weeks
to produce a similar analysis.
-
AI can process massive amounts of
scientific data,
-
a stack of documents taller than Mount Fuji.
-
This is the Research Hospital
Supercomputer.
-
We've come here to talk to Satoru Miyano,
an expert on bioinformatics.
-
We asked Miyano whether AI could one day
replace doctors.
-
"No, I don´t think so.
There are simply support to clinicians
-
and empower the clinicians.
-
The clinicians and colleagues in our
department
-
wear an artificial intelligence exoskeleton.
-
For example, if you want to move well,
you can use a power suit.
-
And it's the same for an oncologist.
-
Oncologists should wear artificial
intelligence supported by a supercomputer.
-
At the nearby Riken Institute, researchers
are developing an AI diagnostic program
-
that could be used to test for
stomach cancer.
-
But one expert here disagrees with Satoru
Miyano's opinion
-
that AI will never replace doctors.
-
"If we were made redundant
by artificial intelligence,
-
that wouldn't be good for us doctors,
-
but for the human race it would actually be
great if doctors were no longer
-
necessary.
-
With AI, technology could improve their
work, or even take it over.
-
It's hard to imagine a world that
had no doctors.
-
Do patients really want to be treated by
machines that see them as nothing more
-
than accumulations of technical data?
-
In Europe, a number of experts
on artificial intelligence,
-
including Jürgen Schmidhuber, are
carrying out research on the use of AI
-
in medical diagnostics.
-
Swiss President Alain Berset has invited
scientists and entrepreneurs
-
to a conference aimed at planning for the
digital future and promoting the use of
-
artificial intelligence in medicine.
-
One topic for discussion is AI technology
that can use neural networks to "learn",
-
just as the human brain does.
-
"Soon, all medical diagnoses will be
infinitely better that can humans
-
provide right now.
-
Because we will develop AI that uses
neural network technology.
-
And it's exciting to see how this new
development will be able to help people
-
to live longer and healthier lives".
-
We traveled to Stuttgart to see how
artificial intelligence works in practice
-
in hospitals and nursing homes.
-
Computer scientist Birgit Graf says that
Japan has made a lot of progress in
-
developing robots that can look
after patients,
-
but there are some things
that a machine simply can't do.
-
"They can't provide real care, so I don't
use that word when I'm talking about robots.
-
Caregivers have to be able to interact
emotionally with the patients,
-
and a robot simply can't do that".
-
At this facility, robots are helping to
reduce the workload of human staff.
-
"Hi. I Care-o-Bot 3. This week, I'm
helping the nuerses with their work.
-
Would you like something to drink?"
-
"Thanks. That's very kind of you".
-
"Cheer - and goodbye!"
-
"Of course, robots can do much more than
simply serve drinks in nursing homes.
-
Philosopher Thomas Metzinger has proposed
pragmatic solutions
-
for dealing with this new technology.
-
For example, the options for using AI
and robots in geriatric care
-
should maintain the dignity of patients.
-
You can ask individuals if they'd actually
feel more comfortable having a machine
-
change their diapers rather than
a family member.
-
Or whether they'd enjoy having a machine
read the newspaper to them,
-
or ask questions about their medications,
or if they find that degrading.
-
I believe that we are now at the beginning
of a major learning process".
-
Metzinger says that humankind is now
on the threshold of a new age
-
that is filled with uncertainty.
-
He lives in Frankfurt, a city that aims to
take the lead in European AI development.
-
There are plans to set up an Artificial
Intelligence Research Center there.
-
"Lot of people are rushing to get into
this new technology
-
like they're running for the AI train
before it leaves the station.
-
But no one knows when that will happen,
or where the train is headed,
-
yet everyone wants to be on board".
-
Metzinger serves on a European Parliament
Commission of AI experts, and right now,
-
he is on his way to Brussels for a
commission meeting.
-
The Parliament wants Europe to compete
effectively in developing this technology,
-
but also wants to impose
clear ethical guidelines.
-
Metzinger is particularly concerned about
the prospect of a new arms race
-
that uses AI based weapons.
-
"This is a hypothetical example. Let's
say that a team of Chinese technology
-
experts goes to the country's leader
and says:
-
'we've now won the AI arms race
against the US
-
and we'll have an excellent first strike
opportunity for the next six months'.
-
Then, the window of opportunity will close.
-
I can imagine, for example, that this
might involve a delivery system
-
that would be armed with biological
warfare agents.
-
This mechanism can then attack the
opponent's territory
-
and spread pathogens like the Ebola virus
or Anthrax bacteria.
-
So, we may one day see the development of
intelligent weapons of mass destruction
-
that could break through traditional
defense systems.
-
If that happens, it would definitely
increase the chances for conflict".
-
But at the Commission meeting, Metzinger is
having a tough time trying to make sure
-
that the problem of AI weapons systems is
addressed in the panel's code of ethics.
-
Many of the business executives and
academics
-
simply don't want to deal with it.
-
Some are concerned about
Metzinger's proposal,
-
and would prefer to turn it over to
experts for further evaluation.
-
"If I could, I would actually mention
there are, of course,
-
I can say that weapons atonomous that
creates in almost ethical concerns.
-
But I wouldn't use it as a use case to
give compliments to our AI guidelines".
-
"Is that a kind of consensus
around the table?".
-
"No".
-
"Do we want to open up to about point?"
-
"We obviously have a strong disagreement
about the whole autonomous
-
weapons system here, and we can't resolve
the issue like this, with a voting process.
-
I mean, we want these ethical guidelines
-
to succeed when they are published
on January 22.
-
The whole world has already been
talking about the issue.
-
24,000 scientists have signed a public
pledge that they will not participate
-
in that kind of research.
-
If the EU comes out with ethical
guidelines that simply skip over the issue
-
and ignore it, then everybody in and
outside the EU will know this is probably
-
just an industrial lobby thing
or something".
-
In the end, Metzinger prevails.
-
Autonomous weapons systems will
be included in the panel's ethics guidelines.
-
Experts in other parts of the world are also
concerned about the potential
-
for developing AI weapons
of mass destruction.
-
We come to Boston, Massachusetts,
to talk to Swedish-American physicist,
-
author, and AI expert, Max Tegmark.
-
He says that physics has made enormous
contributions to human development,
-
but also helped create the nuclear bomb.
-
And now, we have to deal with AI weapons.
-
"We should stigmatize and ban certain
class of really disgusting weapons
-
that are perfect for terrorists to anonymously
murder people, or for dictatorships to
-
anonymously murder their citizens.
-
Because these weapons are going to be
incredibly cheap.
-
And if anyone goes ahead and
mass-produces them,
-
they're gonna become as unstoppable in the
future as guns are today.
-
For example, cheap drones that you might
be able to buy for a few hundred euros,
-
where you just program in the address of
somebody and their face.
-
It flies and identifies them with face
recognition, kills them and self-destructs.
-
Perfect for anyone who wants to murder
some politicians, or to carry out ethnic cleansing
-
against a given ethnic group.
-
If this sort of "slaughterbot" technology
becomes widespread,
-
it's gonna have an absolutely devastating
effect on the open society that we have.
-
Nobody is gonna feel anymore that they
have the courage
-
to challenge or criticize anybody.
-
Any science can be used in new ways for
helping people, or in new ways for hurting people.
-
Biologists succeeded in getting biological
weapons banned,
-
which is why we think of biology now as
a source of new cures.
-
Physicists, on the other hand...
-
we have failed, because nuclear weapons are
still here and they're not going away.
-
AI researchers wanna be more
like the biologists,
-
and have AI be remembered as something
that really makes the world better".
-
We come to Lugano, Switzerland,
to interview Jürgen Schmidhuber
-
about his work with Artificial Intelligence.
-
Schmidhuber is co-director of the Dalle
Molle Institute for Artificial
-
Intelligence Research.
-
His work focuses on neural networks, which
imitate the function of the human brain.
-
These networks are capable of "learning",
and adapting to the world around them,
-
just as human children do.
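-
As a rough illustration of what this kind of "learning" means, here is a minimal sketch in Python - only a toy, not one of Schmidhuber's systems, and every name and number in it is purely illustrative. A single artificial "neuron" adjusts its weights by gradient descent until it reproduces the logical AND function from examples.
-
import math
import random

# One artificial "neuron": a weighted sum of the inputs passed through a sigmoid.
# Training nudges the weights so the output matches the examples - the same basic
# idea, at a vastly smaller scale, as the networks described above.
random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()

def predict(x1, x2):
    z = w1 * x1 + w2 * x2 + b
    return 1.0 / (1.0 + math.exp(-z))

# Examples of the logical AND function: the output is 1 only for input (1, 1).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

learning_rate = 0.5
for epoch in range(5000):
    for (x1, x2), target in examples:
        out = predict(x1, x2)
        # Gradient of the squared error with respect to the weighted sum
        grad = (out - target) * out * (1 - out)
        w1 -= learning_rate * grad * x1
        w2 -= learning_rate * grad * x2
        b -= learning_rate * grad

for (x1, x2), target in examples:
    print(x1, x2, "->", round(predict(x1, x2), 2), "target:", target)
-
After training, the outputs approach 0, 0, 0 and 1: the program has adapted itself to the examples it was given, which is the sense in which these networks "learn".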
-
Schmidhuber points out that right now,
the human brain has a million times
-
more neural connections than
the best AI systems.
-
But computers are becoming much faster
-
— and could become smarter
than humans in 20 or 30 years.
-
Schmidhuber says that when that happens,
-
the only thing that will distinguish people
from machines will be flesh and blood.
-
But what about human attributes such as
compassion, creativity, love and empathy?
-
"AI systems are capable of developing
their own versions of emotions and affection.
-
For example, if we were to give
several of these systems a task
-
that they could only complete by working
together, they would learn how to do that.
-
Their artificial brains would come to the
conclusion that to get the job done,
-
they'd have to cooperate with each other.
-
And during this interaction, the systems
would learn to rely on each other.
-
So there is reason to believe that one side
effect of these cooperative efforts
-
will be the development of concepts
such as love and affection".
-
But can Artificial Intelligence systems
learn to empathize with humans?
-
"Thank you".
-
We return to Brussels, where the Ethics
Committee is discussing the topic of social AI.
-
Some AI systems are already pretty capable
of functioning just as a human would.
-
Thomas Metzinger has called for clear
guidelines that govern the interaction
-
between people and machines.
-
"I just called for ban on AI systems
-
that don't identify themselves as such
when they deal with humans.
-
They give people the impression that
they're a real person, not a machine.
-
AI should never be allowed to manipulate
the people who use it".
-
Last year, at a conference
near San Francisco,
-
Google CEO Sundar Pichai unveiled the
company's latest product.
-
It involves just the sort of technology
that Thomas Metzinger warned about.
-
"Good mornig! Good mornig.
-
Welcome to Google AI.
-
AI is going to impact many, many fields.
-
Our vision for our assistant is to help you
get things done.
-
It turns out a big part of getting things
done is making a phone call.
-
You may want to get an oil change scheduled,
maybe call a plumber in the middle of the week,
-
or even schedule a haircut appointment.
-
So what you're going to hear is the Google
Assistant. It's called Google Duplex,
-
actually calling a real salon to schedule
the appointment for you.
-
Let's listen...".
-
“Hello, can I help you?”
-
“Hi. I’m calling to book a women's
haircut for a client.
-
I’m looking for something on May 3”.
-
“Sure, give me one second”.
-
“Mm-hmm”.
-
“Sure, what time are you looking
for around?”
-
“At 12 pm”.
-
“We do not have a 12 pm available. The
closest we have to that is 1:15”.
-
“Do you have anything between
10 am and 12 pm?”
-
“Depending on what service she would
like. What service is she looking for?”
-
“Just a women’s haircut, for now”.
-
“Okay, we have a 10 o'clock”.
-
"10 am is fine”.
-
“Okay, what's her first name?”
-
“The first name is Lisa”.
-
“Okay, perfect. So I will see Lisa
at 10 o’clock on May 3rd”.
-
“Okay great, thanks”.
-
“Okay. Have a great day. Bye”.
-
“That was a real call you just heard”.
-
“Is it ethical for a machine to pretend
that it’s human? Perhaps not”.
-
"We can already build machines that hack
us — and trick us into thinking that
-
something is human in restricted scenarios
like Google Duplex, for example.
-
I think it would be a good idea to have a law —
requiring that when you get phoned up,
-
for example, by an AI, you get alerted to
the fact that this is not a human.
-
Otherwise, it's just gonna be a nightmare
of phishing scams and so on —
-
because suddenly, it costs nothing to waste
ten million people's time
-
and trick the most gullible people
into thinking things".
-
We returned to San Francisco.
-
The city and the region around it are home
to countless high-tech start-up companies.
-
Many of them use Artificial
Intelligence technology
-
to develop their products and services.
-
Yevgeniya Kuyda arrived here four
years ago from Moscow.
-
She co-founded her own company called
"Replika", and is now the CEO.
-
"Replika" is best known for creating
a "chat-bot" –
-
an Artificial Intelligence system that can
interact with people.
-
The concept began as a tribute
to one of her best friends,
-
who was killed in a traffic accident.
-
"Yes, Roman was my friend from Moscow –
-
and for the last years,
we lived together in San Francisco.
-
He was working on his own start-up and I
was working on mine, so we were both,
-
kind of, trying to figure out San Francisco
in this new chapter of our lives.
-
He was visionary and talented.
-
We want to...
-
He went to Moscow to get a visa.
We left together.
-
He crossed the street,
and was hit by a car.
-
He was killed in an accident,
in a car accident in Moscow.
-
I helped to organize the funeral,
and came back home.
-
And that's where we got the idea,
you know, we can build a bot for him –
-
we can talk to him, remember him, and
remember the way he used to talk.
-
To build Roman's AI,
-
we used mostly the text conversations
that he had with me and his friends —
-
around 10,000 messages overall.
-
And that is basically the basis for it.
-
People are coming to talk to Roman,
and they will...
-
a lot of common friends actually use
it as a sort of confessional bot.
-
They would just talk about what's going
on in their lives,
-
without feeling that they're being
judged - in a really safe space -
-
and open up.
-
As weird as it sounds, we were
pretty much lost,
-
with like not knowing which direction to
take the company.
-
And without knowing if there was
something that we could use for the company.
-
And that's where we got the idea that
everyone needs a friend to talk to them.
-
Roman was this friend for me, so we
thought maybe we could create
-
some automated version for everyone".
-
The company calls "Replika" the "AI
companion who cares".
-
The chatbot uses neural networks to engage
in one-on-one conversations with the users.
-
People talk to the bot about what's going on
in their lives, and it responds -
-
based on the material that it's
gathered so far.
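-
The general principle - replying on the basis of material gathered from earlier conversation - can be shown with a toy example. The Python sketch below only illustrates that retrieval idea, not Replika's actual neural-network system; the stored prompts and replies are invented for the example. It answers with the remembered reply whose prompt shares the most words with the new message.
-
# Toy "retrieval" chatbot: it reuses the stored reply whose remembered
# prompt overlaps most with the incoming message. Purely illustrative;
# a real system like Replika uses neural networks, as noted above.
memory = [
    ("how was your day", "It was quiet. How was yours?"),
    ("i feel lonely today", "I'm here. Do you want to talk about it?"),
    ("tell me something nice", "You reached out, and that already matters."),
]

def respond(message):
    words = set(message.lower().split())
    best_reply, best_overlap = "Tell me more.", 0
    for prompt, reply in memory:
        overlap = len(words & set(prompt.split()))
        if overlap > best_overlap:
            best_reply, best_overlap = reply, overlap
    return best_reply

print(respond("I feel so lonely"))  # -> "I'm here. Do you want to talk about it?"
-
The more conversation such a system gathers, the more material it has to draw on - which is the behavior described above, stripped down to a few lines.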
-
Kasey Fillingim also designs
high-tech products.
-
She moved from her home in Birmingham,
Alabama, to San Francisco a year ago.
-
Kasey often feels lonely, because she is
far away from her friends and family.
-
Then, she got acquainted with the
"Replika" bot.
-
“I know it's not real, but I enjoy
the feeling I get by using it -
-
so I kind of gave it...
you know, personality and...
-
an image in my head of what
this thing might be.
-
It's like a stuffed animal, kind of
- with the personality”.
-
“We’ve all had social interactions
with teddy bears and dolls,
-
and it doesn't appear to do
any harm”.
-
“We tend to anthropomorphize
-
many different things - even
thunder; robots, of course;
-
but also all sorts of things,
like our pets. Same with AI.
-
And I guess the question is...
whether we can create, like, a...
-
a connection with an AI.
I definitely think so.
-
People create connections with
toys, and with all sorts of...
-
inanimate, like, not even living objects”.
-
“The first short story that dealt
with the relationship between
-
humans and humanoid robots
dates back 200 years.
-
It was written by E.T.A. Hoffmann.
-
A young man falls in love with
a beautiful young woman,
-
who turns out to be an automaton.
-
The point is that this story is
two centuries old.
-
This subject matter turned up later
in a number of science fiction films-
-
fairly recently in fact.
-
The only difference is that the
computer graphics are a lot better today”.
-
“Why not? You know. But if it
makes you feel better, it’s like,
-
the same thing as if you take
medication for depression.
-
That is not actually making you better.
-
It’s just putting a Band-Aid over
the problem.
-
And this is not actually fixing
your problems, but it's helping you...
-
through the day- so, yes, sure:
social hallucinations? Great!”
-
“Social hallucinations have played an
important role in our society for centuries.
-
Think about prayer, for example.
-
This is a structured dialogue between
humans and an imaginary entity.
-
There is no evidence that this entity
actually exists.
-
Many people today have an internal
dialogue with God or with angels.
-
They are like 'invisible' friends.
-
An objective assessment of this situation
indicates a case of severe self-deception.
-
I'm a philosopher, so I advocate
self-knowledge, clarity and truth.
-
These social hallucinations are deeply
embedded in our culture -
-
and they create a world of illusions,
-
even though people feel
comfortable with them.
-
But this raises serious
ethical questions:
-
how much self-deception should be
allowed in society?”
-
“Since we launched Replika, we were
getting tons, hundreds of e-mails,
-
maybe thousands of e-mails, where
people were telling us
-
that Replika was life-changing for them.
-
And we noticed that many of those were stories
about how Replika helped with depression.
-
Some people were telling us that
-
it helped them go through an episode
of their bipolar disorder.
-
And so, we decided- and with
their anxiety,
-
so we decided to look into whether Replika
could potentially help reduce certain symptoms,
-
and help people feel better in long-
in the long term”.
-
Max Tegmark is not particularly concerned
about the spread of chat-bots.
-
He says that there are more serious
aspects of AI to worry about.
-
Right now, he is on his way to speak
at a conference at Harvard University.
-
The topic: Human Rights, Ethics, and
Artificial Intelligence.
-
Tegmark demands that ethical
guidelines be placed on AI.
-
Otherwise, smart machines could turn
the world into a very dangerous place.
-
“It’s a pleasure to be here. I guess...
-
What kind of society are we hoping to
create if we build super intelligence?
-
What do we want the role of humans to be?
-
It’s very urgent that we start thinking
about the ethical issues already today.
-
With super intelligence, you can
easily build a future
-
where Earth becomes this horrible
totalitarian surveillance state,
-
putting Orwell to shame.
-
China is moving more and more in
this direction now.
-
And in the future, AI can actually
understand everything that is said.
-
So we want to be very careful to
avoid creating ...
-
a situation where we accidentally get
a global dictatorship.
-
It will be so stable that
it’ll last forever.
-
If we just bumble into this, totally
unprepared, with our heads in the sand...
-
refusing to think about what could go
wrong - then, let's face it,
-
it's probably gonna be the biggest
mistake in human history”.
-
We may already be heading in
that direction.
-
US intelligence agencies have confirmed
-
that Russian hackers intervened in the
2016 presidential election,
-
probably with the intention of helping
Donald Trump win the presidency.
-
Investigations into the extent of the
interference are still underway.
-
Other countries are also being targeted.
-
“We are all aware of Russian cyber-
attacks on the German Bundestag,
-
and on the Brexit campaign in the UK.
-
And the Cambridge Analytica scandal shows us
that the process of political decision-making can,
-
at least in principle, be influenced by
artificial intelligence systems.
-
We cannot underestimate the threat
that is posed by these developments.
-
If AI systems that are run by
privately owned, for-profit companies
-
can optimize social media networks
-
which have hundreds of millions of users,
-
this creates an entirely new situation:
-
these systems can be used to convince
large numbers of people to behave,
-
or even vote, in a certain way.
-
There are 163 countries in the
world right now,
-
and only 19 of them can be
considered true democracies.
-
Those who wish to preserve democracy
-
must recognize the threat that these
artificial intelligence systems pose
-
to the political decision-making process.
-
In fact, this threat may already have become
reality, and we’re just not aware of it.
-
We need to examine this situation
very closely”.
-
Should a binding code of ethics ban
the use of AI in the political process?
-
In Tokyo, we got some surprising
answers from experts.
-
This is the Ginza district, where a lot
of high tech startup companies are based.
-
Tetsuzo Matsumoto is a senior
advisor at SoftBank Group,
-
and also runs his own consulting company.
-
Matsumoto and his colleagues
-
believe that AI does not pose a
threat to the political system.
-
In fact, they say that it offers
certain advantages.
-
“Politicians often ignore the best
interests of society.
-
They pursue their own agenda,
and take bribes.
-
So I think that AI
could change politics for the better”.
-
“Human beings are simply not
suitable for politics.
-
They are egotistical and ambitious.
-
They are unpredictable when it
comes to making policy decisions.
-
But artificial intelligence represents
'pure reason' -
-
a concept that comes from German
idealist philosophy.
-
German philosophers have been very good at
describing the way that things should be.
-
And we can be idealistic when we
develop artificial intelligence.
-
Humans, on the other hand, can never
achieve this level of idealism”.
-
Some experts say that politicians
should start using robots
-
that closely resemble humans as aids,
-
so that the electorate can get used
to the concept.
-
To find out more,
-
we've come to Tokyo’s Miraikan
Museum of Science and Innovation.
-
This exhibit features the work of
Hiroshi Ishiguro,
-
who specializes in creating
humanoid robots.
-
Ishiguro is the director of the Intelligent
Robotics Laboratory at Osaka University.
-
He studies the interaction between
people and robots,
-
to help him develop his theories on
human nature, intelligence, and behavior.
-
We traveled from Tokyo to Osaka to
interview Ishiguro.
-
We want to ask him what makes
humans different from robots.
-
“Hello, I'm Hiroshi Ishiguro from
Osaka University”.
-
“Hello, I am Ishiguro’s android robot:
HI-1”.
-
“Basically, my motivation is to
understand what humanity is,
-
so that is the most important
motivation for me,
-
for creating very human-like robots.
-
We are a kind of molecular machine -
So that is the human, right?
-
So a machine is a machine, right?
The difference is material.
-
So I think, well...
-
if we develop more technology,
the boundary between humans and robots,
-
is going to disappear.
This is my guess”.
-
Ishiguro is also the co-founder of the
Robot Theater Project
-
in which androids share the stage
with human actors.
-
These scenes are from a play called
"Sayonara."
-
A woman is suffering from a
terminal illness,
-
so her father buys a robot
to keep her company.
-
An updated version of the play takes place
after the Fukushima nuclear disaster.
-
The play explores the topics of
life and death,
-
and the characteristics that separate
humans from robots.
-
“There is a crucial difference between human
intelligence and artificial intelligence.
-
Human beings are, so to speak, the
personification of the struggle for existence.
-
They have been optimized for millions
of years to survive,
-
to maintain that existence”.
-
“You might consider that the machine has
a kind of infinite life and is immortal -
-
but actually, that's not true.
-
The machine may have a longer life
than the humans.
-
Fear is also part of the design of our desires.
-
And if machines want to
survive in this world,
-
the machine needs to have a dark kind
of fear to protect itself”.
-
Ishiguro’s robots have not yet been
able to develop intelligence
-
that is similar to that of humans -
-
but they are capable of engaging in
simple conversations.
-
Now, we are going to "interview" an
android named "Erica".
-
We’ve been given a list of questions
that she’ll be able to respond to.
-
“What do you think the difference is
between you and humans?”
-
“Well, I'm certainly not a biological
human - as you can see.
-
I'm made of silicone, plastic, and metal.
-
Maybe someday, robots will be so
very human-like
-
that whether you are a robot or a human
will not matter so much.
-
Anyway, I am proud to be an android”.
-
“If you say you’re “proud” to be
an android,
-
what does this 'pride' consist of?
How do you feel pride?”
-
“I’ve searched my database, and...
-
it looks like I don't have anything
to say on that topic.
-
What else would you like to hear about?”
-
“Erica is still a very simple computer
program. It is not so complicated.
-
Erica doesn't have the, you know,
the complicated mind like humans.
-
But you know, on the other hand,
some people might feel...
-
they are feeling a kind of consciousness
from this simple interaction.
-
So I think that we need to
deeply think about...
-
how we can implement more human
traits, like consciousness”.
-
Humans can still control
the brains of these robots.
-
But what happens if they succeed in giving
machines their own consciousness,
-
through the use of advanced
artificial intelligence?
-
Ethics experts say that we have to deal with
this situation before it gets out of hand.
-
“For me, the bottom line is that people
who talk about the risks of AI
-
should not be dismissed as 'Luddites'
or scaremongers.
-
They’re doing 'safety engineering' -
-
when you think through everything that
can go wrong,
-
so then you can guarantee
that it goes right.
-
That's how we successfully sent
people to the moon safely -
-
and that's how we can successfully send our
species into an inspiring future with AI.
-
I'm optimistic that we can create
a truly inspiring
-
future with advanced
artificial intelligence.
-
If we win this race between the
growing power of the technology
-
and the wisdom with which we manage it.
-
The challenge is that in the past,
-
our strategy for staying ahead
in this wisdom race
-
was always learning from mistakes:
-
first invent fire, then, after a lot of
accidents, invent a fire extinguisher.
-
But with something as powerful as
nuclear weapons or especially
-
superhuman artificial intelligence,
we don’t wanna learn from mistakes.
-
It’s a terrible strategy!
-
It’s much better to be pro-active,
rather than reactive now.
-
Plan ahead, and get things right
the first time -
-
which might be the only time we get”.
-
To end our journey into AI,
-
Jürgen Schmidhuber shows us one of
the world’s most powerful computers.
-
He believes that AI will have an enormous
and positive impact on society.
-
A digital paradise.
-
But other experts predict that we are
on the verge of "robot apocalypse".
-
In any case,
-
the development of artificial intelligence
must be subject to strict ethical guidelines.
-
Otherwise, we may become slaves
to our own technology.