An ethical supply chain solves the conflict between law and technology | Marco Giacomello | TEDxLivorno
-
0:18 - 0:19I want to tell you a story.
-
0:20 - 0:22Once upon a time,
in nest-building season, -
0:23 - 0:24a small group of sparrows
-
0:25 - 0:28were sitting before
the sunset's fading light -
0:29 - 0:32remembering how hard their day had been.
-
0:32 - 0:36One of the sparrows said,
We're all so small and weak. -
0:36 - 0:40It would be so nice, if an owl
could help us build our nests. -
0:41 - 0:43Indeed, said another sparrow.
-
0:43 - 0:47It could also help us raise our children
and take care of our old folks. -
0:48 - 0:50A third sparrow joined in and said,
-
0:50 - 0:52This would surely be
-
0:52 - 0:55the best thing that could ever happen.
-
0:56 - 1:01Let's send out all our scouts
to look for a baby owl. -
1:02 - 1:06Let's bring him here as soon as possible,
and make progress with him. -
1:06 - 1:07With his keen eyesight,
-
1:07 - 1:13he could also help us see and understand
when that pesky neighbor's cat is coming. -
1:13 - 1:17He could help where none of us can.
-
1:19 - 1:21Only one small sparrow, Scrunfinkle,
-
1:23 - 1:25with just one eye and a grumpy character,
-
1:26 - 1:27butted in and said,
-
1:28 - 1:30This could be the end of us.
-
1:30 - 1:33Bringing in among us
a creature we don't know, -
1:33 - 1:34without knowing how to tame it,
-
1:35 - 1:38will reduce us all to slavery,
or wipe us out from the Earth. -
1:39 - 1:42Skeptical as they were,
the other sparrows said, -
1:42 - 1:46Absolutely not, an owl will help us!
-
1:46 - 1:47Let's first bring him here.
-
1:47 - 1:52Learning to train an owl, a new creature,
is difficult, and takes up a lot of time. -
1:52 - 1:55We need help now,
we want to make progress now. -
1:55 - 1:58We'll worry about the consequences later.
-
1:58 - 2:01Scrunfinkle and another
small group of sparrows -
2:02 - 2:07began trying frantically
to figure out how to train owls, -
2:07 - 2:09fearing that the other scout sparrows
-
2:10 - 2:15would return from their hunt
before a solution had been found. -
2:16 - 2:18I can't tell you
how that story turned out. -
2:18 - 2:20What I can tell you is,
-
2:20 - 2:25today we should all be
that grumpy one-eyed sparrow. -
2:26 - 2:29We are creating
super artificial intelligences -
2:29 - 2:35and allowing them to learn and evolve
through access to the Internet. -
2:35 - 2:37But nothing ensures
-
2:37 - 2:40that artificial intelligence
-
2:40 - 2:42will never evolve strategies
-
2:42 - 2:45to secure its future dominance over us.
-
2:45 - 2:51I madly love technology;
I try out everything I find before me. -
2:51 - 2:54Technology always wins;
the future always wins; -
2:54 - 2:56trying to stop it is pointless.
-
2:57 - 2:59A few years ago -
let's say several years ago- -
2:59 - 3:03I had the chance to try out,
for several months, -
3:03 - 3:05Google's augmented reality glasses.
-
3:05 - 3:08I was so eager to try them out that,
-
3:08 - 3:09as soon as I got home,
-
3:10 - 3:12I pulled them out of the box, put them on
-
3:12 - 3:15and turned them on without reading
the very stuff I write myself: -
3:15 - 3:17T&C, and privacy information
-
3:17 - 3:20that rule the relationship
between me and a technology. -
3:20 - 3:22My wife Alessandra, after a few minutes,
-
3:22 - 3:26got a phone call from an alarmed friend
-
3:26 - 3:27who asked her, are you aware
-
3:27 - 3:32Marco is now streaming
in real time all over the world? -
3:33 - 3:34This is what happens
-
3:34 - 3:37when we don't understand
the technology we have in front of us, -
3:37 - 3:39when we don't read,
when we aren't informed. -
3:41 - 3:45This is what happens
when we don't want to train ourselves, -
3:45 - 3:47build awareness.
-
3:48 - 3:51This is what happens
when we're ignorant 5.0, -
3:51 - 3:55when we have at our disposal
all the knowledge in this world, -
3:55 - 3:58but we don't want to get into it in depth.
-
3:58 - 4:00Today the problem is very simple:
-
4:00 - 4:02technology is going faster
than anything else, -
4:02 - 4:05faster than us, faster than the law,
-
4:05 - 4:09and faster than our means
to learn and acquire knowledge. -
4:09 - 4:14So, nowadays we have
all the world's information at hand, -
4:14 - 4:15but we're unable to control it,
-
4:15 - 4:17to eliminate the so-called
"background noise". -
4:18 - 4:22So, when we can't understand something,
we say we need a new law. -
4:22 - 4:25We say we need a paradigm shift.
-
4:25 - 4:26But do we really need new laws?
-
4:27 - 4:32In 1955, when Luigi Einaudi wrote
his "Prediche inutili" [Useless Sermons], -
4:32 - 4:33he wrote these words:
-
4:33 - 4:37"Knowledge comes first, then discussion,
and finally deliberation. -
4:38 - 4:41Legislation is not made through
illusory and sterile ostentation." -
4:42 - 4:47Laws made in haste, without
knowing the subject matter, -
4:47 - 4:48lead to new laws
-
4:48 - 4:52which try through workarounds
to do something impossible: -
4:52 - 4:57apply rules to the digital world
that were designed for the analog one. -
4:57 - 4:58This is impossible.
-
4:58 - 5:02This creates a mix
of totally inapplicable norms -
5:02 - 5:03that distort the market.
-
5:05 - 5:09Today we have overcome
our grandparents' ignorance, -
5:09 - 5:11the kind of ignorance
-
5:11 - 5:13arising from lack of information.
-
5:15 - 5:17This is mostly due to the Internet.
-
5:17 - 5:21However, a new type of ignorance
has emerged: processing ignorance, -
5:21 - 5:24a type of ignorance resulting
from an overabundance of information, -
5:24 - 5:29as well as from our limited desire
and time to process it. -
5:29 - 5:32There are concrete examples
in the newspapers every day: -
5:33 - 5:36the rampant increase of fake news
and lack of fact-checking -
5:36 - 5:39which lead to uncontrolled false alarms
-
5:39 - 5:40and hate speech.
-
5:40 - 5:42Everything that turns up on the Internet,
-
5:42 - 5:44if not verified and checked,
-
5:44 - 5:47leads to a superficial use of technology.
-
5:48 - 5:51A piece of news read incorrectly
-
5:51 - 5:53can have consequences,
-
5:53 - 5:56and it can also lead, as we shall see,
to even worse ones. -
5:59 - 6:02Nowadays, as I told you,
we are ignorant 5.0, -
6:02 - 6:05because both the information
ignorance of the past -
6:05 - 6:07and current processing one
-
6:07 - 6:10lead to the same result:
-
6:11 - 6:14sub-optimal actions, which are all alike.
-
6:14 - 6:17Very often we behave like sheep;
-
6:17 - 6:21we follow the mainstream,
do whatever everyone else does. -
6:22 - 6:24We do some things we shouldn't do.
-
6:24 - 6:26Why?
-
6:26 - 6:28Because sub-optimal actions
stem from a lack of knowledge -
6:28 - 6:31and lead to a standardization
of our behaviour on the web, -
6:31 - 6:34certainly not to understanding
how technology works. -
6:35 - 6:37Understanding how blockchains work,
-
6:37 - 6:40or how innovative technologies,
in general, work, -
6:40 - 6:42is demanding and time-consuming.
-
6:42 - 6:45We wouldn't even be able to get it,
more often than not, -
6:45 - 6:47because we lack the basic training.
-
6:47 - 6:49And here is where ethics comes into play.
-
6:50 - 6:54Ethics is the glue
that binds peoples together, -
6:54 - 6:56the key that allows them to cooperate,
-
6:57 - 7:01and something that has helped
our species to progress. -
7:01 - 7:04If we are who we are today,
for better or for worse, -
7:04 - 7:08we owe it to our ability
of moral reasoning, -
7:08 - 7:12to our capacity and desire
to approach situations -
7:12 - 7:16with an ethical outlook, and a willingness
to stand out from the crowd. -
7:20 - 7:21This is us today.
-
7:22 - 7:24Artificial intelligence is already here.
-
7:24 - 7:27It's in our autonomous driving robots;
-
7:27 - 7:30it's in the little robots
that clean our floors -
7:30 - 7:33and track every path
they take in our homes. -
7:34 - 7:36It's in hospitals, with surgical robots;
-
7:37 - 7:40it's with the robots
that help care for the elderly; -
7:40 - 7:41it's with those little robots
-
7:41 - 7:43that keep your child entertained
-
7:43 - 7:47while you have something else to do
and can't look after them. -
7:49 - 7:51What I want you to understand
-
7:51 - 7:54is that this type of intelligence
is among us today; -
7:54 - 7:55since yesterday, in fact.
-
7:56 - 8:01And getting into ethics today
is complex and difficult, -
8:01 - 8:03and often one doesn't want to do it.
-
8:03 - 8:06Because AI manufacturers,
most of the time, -
8:06 - 8:07are not charities,
-
8:07 - 8:09but for-profit companies
-
8:09 - 8:11that legitimately want to make money.
-
8:12 - 8:18So, how can we mandate an ethical code
-
8:18 - 8:21for whoever develops, programs
or designs an application, -
8:21 - 8:24an artificial intelligence
or a software system? -
8:25 - 8:26We'll see that later.
-
8:27 - 8:29What is certainly true today
-
8:29 - 8:32is that we are like children
playing with a bomb. -
8:33 - 8:35We don't realize
-
8:35 - 8:37that robots should have one main purpose,
-
8:38 - 8:42which is to assist us
in improving our well-being, -
8:42 - 8:45not promoting the evolution
of technology as an end in itself. -
8:46 - 8:50In 1942, when Asimov wrote
"The Three Laws of Robotics", -
8:51 - 8:53he said three main things:
-
8:53 - 8:55Dear robot, you must not kill humans,
-
8:55 - 8:59obey the orders given by human beings
-
8:59 - 9:00and protect your own existence.
-
9:01 - 9:03Asimov's Three Laws are still today
-
9:03 - 9:07the foundation for anyone
who deals with the ethics of AI. -
9:08 - 9:09That's because -
-
9:10 - 9:11Imagine this story:
-
9:13 - 9:15a robot tells a woman -
its friend, a human- -
9:16 - 9:18that the man of her life loves her.
-
9:19 - 9:21Actually, this isn't true, but it says so
-
9:21 - 9:24because otherwise, it thinks,
she'd go mad, she would suffer, -
9:24 - 9:26and therefore the first law
would be violated: -
9:26 - 9:29do not do harm to humans,
don't make them suffer. -
9:30 - 9:34Too bad, by telling a lie,
it violates their trust relationship -
9:34 - 9:37and as a result she suffers anyway.
-
9:37 - 9:42The robot, whose reasoning has always
been static and cold, now goes crazy -
9:42 - 9:44because it can't get out
of a moral dilemma: -
9:44 - 9:46"Shall I tell her or not?
And how do I tell her?" -
9:47 - 9:50Technology is neutral when it is created;
-
9:50 - 9:53we are the ones to decide
how to apply it in the real world. -
9:54 - 9:56So how do you teach ethics to a robot?
-
9:56 - 9:59How do you teach ethics
-
9:59 - 10:01to cold binary artificial intelligence?
-
10:02 - 10:05Do you just need
to code it into its brain? -
10:05 - 10:08Imagine a case
-
10:09 - 10:13where you are in the backseat
of your self-driving car, -
10:14 - 10:18and you programmed it
never to go over the speed limit. -
10:18 - 10:22Too bad that, on that day you're
in the backseat of the car, -
10:22 - 10:23bleeding to death,
-
10:24 - 10:26and you need to reach the hospital
as soon as possible. -
10:27 - 10:30But with its cold reasoning,
the car answers you, -
10:30 - 10:34"I can't speed up;
you've coded me not to." -
10:35 - 10:36We must pay attention
-
10:36 - 10:39to how we teach things
to artificial intelligence. -
10:39 - 10:42Obviously, we are often
not the ones teaching them. -
10:42 - 10:45But often we are, because
with machine learning technology -
10:45 - 10:46humans provide data
-
10:46 - 10:49to a machine that thinks
-
10:49 - 10:50in algorithmic terms.
-
10:52 - 10:53There's a really funny anecdote
-
10:53 - 10:56about a piece of artificial intelligence
most of us keep in the house, -
10:56 - 10:58like Alexa or Google Home,
just to name a couple. -
11:00 - 11:04During a dinner, one of them
turned on and said, -
11:04 - 11:06"Remember to buy cocaine tomorrow"!
-
11:09 - 11:10It wasn't true;
-
11:10 - 11:12the owner didn't have to buy cocaine;
he wasn't a drug user. -
11:12 - 11:14But the night before,
-
11:14 - 11:18on a TV show,
-
11:18 - 11:19a line of the script was,
-
11:19 - 11:21"Let's meet tomorrow to get cocaine".
-
11:22 - 11:24Do you see what impact this can have?
-
11:24 - 11:26This is a funny example,
-
11:26 - 11:31but there are a lot of examples
that can cause much worse damage. -
11:31 - 11:34Imagine a case where a self-driving car
must face a dilemma - -
11:34 - 11:38none of us would want to face,
much less leave it to a car. -
11:38 - 11:40In order to save
five pedestrians in the street, -
11:40 - 11:43should a car think about steering sharply
-
11:43 - 11:45thus hitting and killing
-
11:46 - 11:49an unwitting pedestrian on the sidewalk?
-
11:50 - 11:51When must it do this?
-
11:51 - 11:55And what if there are three people,
or two, in the middle of the street? -
11:55 - 11:56How does it calculate this?
-
11:56 - 11:59Should it take into account
their average age, or life expectancy, -
11:59 - 12:01and the expected loss for the State?
-
12:01 - 12:03Its own financial loss, maybe?
-
12:03 - 12:06You understand, we couldn't solve
this dilemma ourselves, -
12:06 - 12:08let alone teach a robot what to do.
-
12:08 - 12:10Clearly, in order to teach it what to do,
-
12:10 - 12:16sooner or later we will have to define
what a robot can do. -
12:16 - 12:18Aristotle used to say,
-
12:20 - 12:24"To learn how to be good people,
we must get used to doing good things." -
12:25 - 12:27Well, maybe this could be the solution
-
12:28 - 12:30for teaching a robot something real:
-
12:30 - 12:33what to do, how to react
in such a situation. -
12:33 - 12:36But before beginning to think
-
12:36 - 12:40that artificial intelligence reasons
according to our desires, -
12:42 - 12:44we have to step back a bit
-
12:44 - 12:47and understand how to use ethics
-
12:47 - 12:51while programming or developing a system.
-
12:52 - 12:56These self-driving cars are already
driving around many American states. -
12:56 - 12:57They're being tested.
-
12:58 - 12:59We're trying to understand -
-
12:59 - 13:02In the newspapers
there's the classic story, -
13:02 - 13:04"Driverless car hits
and kills pedestrian". -
13:04 - 13:07He would've been hit
even with a human-driven car, -
13:07 - 13:10because it was dark and he
wasn't on the crosswalk. -
13:10 - 13:15There's always an attempt
to limit technology, hinder its progress. -
13:17 - 13:20We've got to a point
-
13:20 - 13:22where even the European Commission
was led to consider -
13:23 - 13:26the ethical aspects
of artificial intelligence; -
13:26 - 13:29just a few weeks ago
the first guidelines came out: -
13:29 - 13:32the first non-binding recommendations
-
13:34 - 13:37for anyone developing or programming
artificial intelligences, -
13:37 - 13:39which are based on the concept
of "trustworthiness" -
13:39 - 13:45and the anthropocentric notion
that humans must be at the center. -
13:45 - 13:48Technology must not evolve for itself,
-
13:48 - 13:50but it must evolve to improve
humans' well-being, -
13:50 - 13:53such as in the sparrows' plan
at the beginning. -
13:55 - 13:58These rules are based on basic principles,
-
13:58 - 14:01many of which are taken
from the laws of Asimov: -
14:01 - 14:04Dear artificial intelligence,
you must not kill humans. -
14:04 - 14:06Dear artificial intelligence,
-
14:06 - 14:08you must not do them harm,
you must follow their orders -
14:08 - 14:09and protect yourself.
-
14:09 - 14:11Others have been added:
-
14:12 - 14:16you must guarantee equal treatment
to every individual. -
14:17 - 14:20And here a small digression
comes into play: -
14:21 - 14:25over the past few days
there has been a great scandal -
14:25 - 14:29because an application
used all over the world -
14:29 - 14:30was used in Arab countries
-
14:30 - 14:34to monitor a specific category
of individual: women. -
14:34 - 14:36Women were being tracked:
-
14:36 - 14:39where they were going, how long
they stayed, what they were doing. -
14:40 - 14:42As I told you before,
technology is neutral. -
14:43 - 14:46Ethics, on the other hand,
is in constant evolution, -
14:46 - 14:49is forever being debated,
and is culturally specific. -
14:49 - 14:53There's no single common notion of ethics.
-
14:54 - 14:57And this is a problem,
as trying to understand -
14:57 - 15:00and explain to a developer, a programmer
-
15:00 - 15:02or a user interface technician
-
15:03 - 15:07how to develop and introduce ethics
into their work is already complicated. -
15:07 - 15:12Imagine how a European code of ethics -
not even global, just European - -
15:13 - 15:16must deal with limiting
-
15:16 - 15:18scientific and technological progress.
-
15:19 - 15:22Then there's another problem:
-
15:22 - 15:29our anthropomorphism
compels us to think about vulnerability. -
15:30 - 15:35If we were to think that this robot,
this artificial intelligence, -
was similar or identical to us,
-
15:38 - 15:39we would be making a mistake.
-
15:39 - 15:42It's not the case now,
and it won't be for many years to come. -
15:42 - 15:44Some people say
that artificial intelligence -
15:44 - 15:47will surpass our intelligence
in a few decades. -
15:47 - 15:49Others are skeptical, instead:
-
15:49 - 15:51AIs, they say, will always be
bound and narrow. -
15:51 - 15:55I'm not telling you which choice is right;
I just ask you to think about it. -
15:55 - 16:00Let's start considering that,
if we give these machines -
16:00 - 16:04too much credibility,
belief and moral thought -
16:04 - 16:06without them really having it,
-
16:06 - 16:08we make a mistake.
-
16:08 - 16:11Imagine an especially
attractive female robot -
16:12 - 16:14who tells her human companion,
-
16:14 - 16:16"If you want to keep dating me,
-
16:16 - 16:20you have to buy me flowers
from this online shop, -
16:20 - 16:23and you have to buy me clothes
from that online shop. -
16:23 - 16:25You have to take me on certain trips,
-
16:25 - 16:27otherwise our friendship is over."
-
16:29 - 16:32Too bad that robot
isn't making a human choice, -
16:32 - 16:37where two people can decide
what is good for both of them. -
16:37 - 16:39Behind those algorithms
and that intelligence, -
16:39 - 16:41there are for-profit companies.
-
16:41 - 16:46They could restrict and profile people,
-
16:46 - 16:50and persuade us in ways
that are totally unthinkable now. -
16:51 - 16:52But there is a hope:
-
16:52 - 16:54these machines do and will need us
-
16:54 - 16:58as much as we will need them.
-
16:59 - 17:03This notion of trustworthiness,
this notion of trusting in machines, -
17:04 - 17:06must succeed, because if it doesn't,
-
17:06 - 17:09we won't implement AIs into our lives;
-
17:09 - 17:14we will never trust them,
always looking for alternative solutions -
17:14 - 17:17or we will block technology saying,
"No, I don't want this thing" - -
17:17 - 17:21in a superficial way, maybe
without knowing anything about it - -
17:22 - 17:24"because it's not for me".
-
17:24 - 17:25It's ok for me
-
17:25 - 17:27to manually turn on the light,
-
17:27 - 17:30instead of giving a voice command;
-
17:30 - 17:33it's no trouble driving a car
rather than using a self-driving one. -
17:33 - 17:35Too bad that progress will go forward,
-
17:36 - 17:39and growing niches of people
will probably stay behind. -
17:39 - 17:41The problem of trust will be
-
17:41 - 17:46that all this trust placed in machines
must, on the other hand, find a limit. -
17:46 - 17:48And the limit could be an ethical one:
-
17:48 - 17:50the guidelines I have just listed,
-
17:51 - 17:53or maybe the programmers,
-
17:53 - 17:58or the people who will
thoughtfully use these intelligences -
17:58 - 17:59knowing them, studying them,
-
17:59 - 18:02much like I tried to do with my son
since he was a few months old, -
18:02 - 18:04letting him see technology,
-
18:04 - 18:06letting him try it
and putting it in his hands. -
18:06 - 18:13A friend was telling me, the other day,
robots are like 747 airplanes. -
18:13 - 18:16They can cross the planet in a few hours,
-
18:18 - 18:21but they'll never be able
to land on a tree. -
18:21 - 18:25Think again, before completely trusting
our machines, our technology -
18:25 - 18:28with your plans, your ideas
and your way of life. -
18:29 - 18:32However, if we want the chance
of having a choice, -
18:32 - 18:34if we don't want them
to overpower us in the future - -
18:34 - 18:36not in a dystopian future, but in the future -
-
18:37 - 18:43the rule of law won't be worth much.
-
18:43 - 18:49The ethics of developers, legal designers
and entrepreneurs will be needed, -
18:49 - 18:51who will have to think differently
-
18:51 - 18:55if they don't want to be overwhelmed
by these innovations, too. -
18:56 - 18:59In a nutshell, we need
to look beyond the horizon -
18:59 - 19:02to avoid ending up like those sparrows.
-
19:02 - 19:03Thank you.
-
19:03 - 19:05(Applause)
- Description:
- In order to protect us, AI, robotics and all disruptive technologies must tell good from evil, and law alone might not be enough. Therefore, an anthropocentric approach to AI and robotics development is needed: an ethical value system centered around mankind that never considers technology as a good per se, but as a powerful tool for the benefit of us all.
This talk was given at a TEDx event using the TED conference format but independently organized by a local community.
Learn more at http://ted.com/tedx
- Video Language:
- Italian
- Team:
- closed TED
- Project:
- TEDxTalks
- Duration:
- 19:09