I want to tell you a story. Once upon a time, in nest-building season, a small group of sparrows sat in the fading light of sunset, remembering how hard their day had been. One of the sparrows said, "We're all so small and weak. It would be so nice if an owl could help us build our nests." "Indeed," said another sparrow. "It could also help us raise our children and take care of our old folks." A third sparrow joined in and said, "This would surely be the best thing that could ever happen. Let's send out all our scouts to look for a baby owl. Let's bring him here as soon as possible and make progress with him. With his keen eyesight, he could also help us see and understand when the neighbor's pesky cat is coming. He could help where none of us can." Only one small sparrow, Scrunfinkle, with just one eye and a grumpy character, butted in and said, "This could be the end of us. Bringing among us a creature we don't know, without knowing how to tame it, could reduce us all to slavery and wipe us from the Earth." Skeptical of his warning, the other sparrows said, "Absolutely not, an owl will help us! Let's bring him here first. Learning to train an owl, a new creature, is difficult and takes up a lot of time. We need help now; we want to make progress now. We'll worry about the consequences later." Scrunfinkle and another small group of sparrows began trying frantically to figure out how to train owls, fearing that the scout sparrows would return from their hunt before a solution had been found.

I can't tell you how that story turned out. What I can tell you is that, today, we should all be that grumpy one-eyed sparrow. We are creating super artificial intelligences and allowing them to learn and evolve through access to the Internet. But nothing ensures that artificial intelligence will never evolve strategies to secure its future dominance over us.

I madly love technology; I try out everything I find before me. Technology always wins; the future always wins; trying to stop it is pointless. A few years ago - let's say several years ago - I had the chance to try out, for several months, Google's augmented reality glasses. I was so eager to try them out that, as soon as I got home, I pulled them out of the box, put them on and turned them on without reading the very things I write myself: the terms and conditions and the privacy notices that govern the relationship between me and a technology. After a few minutes, my wife Alessandra got a phone call from an alarmed friend who asked her, "Are you aware that Marco is streaming live to the whole world right now?"

This is what happens when we don't understand the technology in front of us, when we don't read, when we aren't informed. This is what happens when we don't want to train ourselves and build awareness. This is what happens when we are ignorant 5.0: when we have all the knowledge in this world at our disposal, but we don't want to go into it in depth.

Today the problem is very simple: technology is moving faster than anything else - faster than us, faster than the law, and faster than our ability to learn and acquire knowledge. So nowadays we have all the world's information at hand, but we're unable to control it, to filter out the so-called "background noise". And so, when we can't understand something, we say we need a new law. We say we need a paradigm shift. But do we really need new laws?
In 1955, when Luigi Einaudi wrote his "Prediche inutili" [Useless Sermons], he wrote these words: "Knowledge comes first, then discussion, and finally deliberation. Legislation is not made through illusory and sterile ostentation." Laws made in haste, without knowledge of the subject matter, lead to new laws that try, through workarounds, to do something impossible: apply to the digital world rules that were designed for the analog one. This is impossible, and it creates a tangle of totally inapplicable norms that distort the market.

Today we have overcome our grandparents' ignorance, the kind that arose from a lack of information, and this is mostly thanks to the Internet. However, a new type of ignorance has emerged: processing ignorance, the kind that results from an overabundance of information and from our limited desire and time to process it. There are concrete examples in the newspapers every day: the rampant spread of fake news and the lack of fact-checking, which lead to uncontrolled false alarms and hate speech. Anything that turns up on the Internet, if not verified and checked, leads to a superficial use of technology. A piece of news read carelessly has consequences, and it can lead, as we shall see, to even worse ones.

Nowadays, as I told you, we are ignorant 5.0, because both the information ignorance of the past and the processing ignorance of today lead to the same result: sub-optimal actions, which are all alike. Very often we behave like sheep; we follow the mainstream and do whatever everyone else does. We do things we shouldn't do. Why? Because a lack of knowledge leads to sub-optimal actions and to a standardization of our behaviour on the web, certainly not to an understanding of how technology works. Understanding how blockchains work, or how innovative technologies in general work, is demanding and time-consuming. More often than not, we wouldn't even be able to grasp it, because we lack basic training.

And here is where ethics comes into play. Ethics is the glue that binds peoples together, the key that allows them to cooperate, and something that has helped our species progress. If we are who we are today, for better or for worse, we owe it to our capacity for moral reasoning, to our ability and desire to approach situations ethically, and to a willingness to stand out from the crowd. This is us today.

Artificial intelligence is already here. It's in our self-driving vehicles; it's in the little robots that clean our floors and track every path they take through our homes. It's in hospitals, with surgical robots; it's with the robots that help care for the elderly; it's with those little robots that keep your child entertained while you have something else to do and can't look after them. What I want you to understand is that this type of intelligence is among us today; it has been since yesterday, in fact.

And getting into ethics today is complex and difficult, and often no one wants to do it, because AI manufacturers, most of the time, are not charities but for-profit companies that legitimately want to make money. So how can we mandate an ethical code for whoever develops, programs or designs an application, an artificial intelligence or a software system? We'll see that later. What is certainly true today is that we are like children playing with a bomb. We don't realize that robots should have one main purpose: to assist us in improving our well-being, not to promote the evolution of technology as an end in itself. In 1942, when Asimov formulated his Three Laws of Robotics, he said three main things: dear robot, you must not kill humans, you must obey the orders given by human beings, and you must protect your own existence.
Asimov's Three Laws are still, today, the foundation for anyone who deals with the ethics of AI. That's because - imagine this story: a robot tells a woman, its human friend, that the man of her life loves her. It isn't actually true, but the robot says so because otherwise, it thinks, she'd go mad, she would suffer, and so the first law would be violated: do no harm to humans, don't make them suffer. Too bad that, by telling a lie, it violates their relationship of trust, and as a result she suffers anyway. The robot, whose reasoning has always been static and cold, now goes crazy because it can't get out of a moral dilemma: "Shall I tell her or not? And how do I tell her?"

Technology is neutral when it is created; we are the ones who decide how to apply it in the real world. So how do you teach ethics to a robot? How do you teach ethics to a cold, binary artificial intelligence? Is it enough to code it into its brain? Imagine you are in the back seat of your self-driving car, which you have programmed never to go over the speed limit. Too bad that, on that particular day, you're in the back seat bleeding to death, and you need to reach the hospital as soon as possible. But with its cold reasoning, the car answers you, "I can't speed up; you've coded me not to."

We must pay attention to how we teach things to artificial intelligence. Obviously, we are often not the ones teaching them. But often we are, because with machine learning humans provide data to a machine that thinks in algorithmic terms. There's a really funny anecdote about a piece of artificial intelligence most of us keep in the house, like Alexa or Google Home, just to name a couple. During a dinner, one of them turned on and said, "Remember to buy cocaine tomorrow"! It wasn't true; its owner didn't have to buy cocaine; he wasn't a drug user. But the night before, on a TV show, a line of the script had been, "Let's meet tomorrow to get cocaine". Do you understand what impact that can have? This is a funny example, but there are plenty of examples that can cause much worse damage.

Imagine a self-driving car that must face a dilemma none of us would want to face, much less leave to a car. In order to save five pedestrians in the street, should the car consider steering sharply, thus hitting and killing an unwitting pedestrian on the sidewalk? When must it do this? What if there are three people, or two, in the middle of the street? How does it calculate this? Should it take into account their average age, or life expectancy, and the expected loss for the State? Its own financial loss, maybe? You understand: we couldn't solve this dilemma ourselves, let alone teach a robot what to do. Clearly, in order to teach it what to do, sooner or later we will have to define what a robot may do. Aristotle used to say, "To learn how to be good people, we must get used to doing good things." Well, maybe this could be the way to teach a robot what to do, how to react in such a situation. But before we start believing that artificial intelligence reasons according to our desires, we have to step back a bit and understand how to bring ethics into the programming and development of a system.

These self-driving cars are already driving around many states in America. They're being tested; we're trying to understand them. In the newspapers there's the classic story, "Driverless car hits and kills pedestrian". That pedestrian would have been hit even by a human-driven car, because it was dark and he wasn't on the crosswalk.
There's always an attempt to limit technology, to hinder its progress. We've reached a point where even the European Commission was led to consider the ethical aspects of artificial intelligence; just a few weeks ago the first guidelines came out: the first non-binding recommendations for anyone developing or programming artificial intelligences, based on the concept of "trustworthiness" and the anthropocentric notion that humans must be at the center. Technology must not evolve for its own sake; it must evolve to improve human well-being, as in the sparrows' plan at the beginning. These rules rest on basic principles, many of which are taken from Asimov's laws: dear artificial intelligence, you must not kill humans; you must not do them harm; you must follow their orders and protect yourself. Others have been added: you must guarantee equal treatment to every individual.

And here a small digression comes into play: over the past few days there has been a great scandal because an application used all over the world was being used in Arab countries to monitor a specific type of individual: women. Women were being tracked: where they were going, how long they stayed, what they were doing. As I told you before, technology is neutral. Ethics, on the other hand, is in constant evolution, is forever being debated, and is culturally specific. There is no single common notion of ethics. And this is a problem, because explaining to a developer, a programmer or a user interface technician how to bring ethics into their work is already complicated. Imagine, then, a European code of ethics - not even global, just European - that must deal with limiting scientific and technological progress.

Then there's another problem: our anthropomorphism, which makes us vulnerable. If we were to think that this robot, this artificial intelligence, was similar or identical to us, we would be making a mistake. It isn't the case now, and it won't be for many years to come. Some people say that artificial intelligence will surpass our intelligence within a few decades. Others are skeptical: AIs, they say, will always remain limited and narrow. I'm not telling you which view is right; I just ask you to think about it. Let's start considering that if we grant these machines too much credibility, belief and moral standing without them really having it, we make a mistake. Imagine an especially attractive female robot who tells her human companion, "If you want to keep dating me, you have to buy me flowers from this online shop, and you have to buy me clothes from that online shop. You have to take me on certain trips, otherwise our friendship is over." Too bad that robot isn't making a human choice, in which two people can decide what is good for both of them. Behind those algorithms and that intelligence there are for-profit companies. They could restrict and profile people, and persuade us in ways that are totally unthinkable now.

But there is hope: these machines do and will need us as much as we will need them. This notion of trustworthiness, of trusting in machines, must succeed, because if it doesn't, we won't integrate AI into our lives; we will never trust it, always looking for alternative solutions, or we will block technology, saying, "No, I don't want this thing" - superficially, maybe without knowing anything about it - "because it's not for me".
It's fine for me to turn on the light manually instead of giving a voice command; it's no trouble to drive a car rather than use a self-driving one. Too bad that progress will go forward, and growing niches of people will probably be left behind. The problem is that all this trust placed in machines must, on the other hand, find a limit. And that limit could be an ethical one: the guidelines I have just listed, or maybe the programmers, or the people who will use these intelligences thoughtfully, knowing them and studying them, much as I have tried to do with my son since he was a few months old, letting him see technology, letting him try it, putting it in his hands.

A friend was telling me the other day that robots are like 747s: they can cross the planet in a few hours, but they'll never be able to land on a tree. Think again before completely trusting our machines, our technology, with your plans, your ideas and your way of life. However, if we want the chance to have a choice, if we don't want them to overpower us in the future - a dystopian future, but a future nonetheless - the rule of law won't be worth much. We will need the ethics of developers, legal designers and entrepreneurs, who will have to think differently if they, too, don't want to be overwhelmed by these innovations. In a nutshell, we need to look beyond the horizon to avoid ending up like those sparrows. Thank you.

(Applause)