1 00:00:18,000 --> 00:00:19,440 I want to tell you a story. 2 00:00:20,339 --> 00:00:22,360 Once upon a time, in nest-building season, 3 00:00:22,880 --> 00:00:24,320 a small group of sparrows 4 00:00:24,840 --> 00:00:28,000 were standing in the sunset's fading light 5 00:00:28,560 --> 00:00:31,779 remembering how hard their day had been. 6 00:00:32,479 --> 00:00:35,560 One of the sparrows said, We're all so small and weak. 7 00:00:36,080 --> 00:00:40,160 It would be so nice if an owl could help us build our nests. 8 00:00:41,080 --> 00:00:42,680 Indeed, said another sparrow. 9 00:00:42,680 --> 00:00:46,960 It could also help us raise our children and take care of our old folks. 10 00:00:47,760 --> 00:00:49,680 A third sparrow joined in and said, 11 00:00:49,680 --> 00:00:52,080 This would surely be 12 00:00:52,080 --> 00:00:54,600 the best thing that could ever happen. 13 00:00:56,360 --> 00:01:01,120 Let's send out all our scouts to look for a baby owl. 14 00:01:02,360 --> 00:01:05,550 Let's bring him here as soon as possible, and make progress with his help. 15 00:01:05,550 --> 00:01:07,241 With his keen eyesight, 16 00:01:07,241 --> 00:01:13,271 he could also help us see and understand when that pesky neighbor's cat is coming. 17 00:01:13,271 --> 00:01:16,840 He could help where none of us can. 18 00:01:19,320 --> 00:01:21,440 Only one small sparrow, Scrunfinkle, 19 00:01:22,880 --> 00:01:25,440 with just one eye and a grumpy disposition, 20 00:01:26,160 --> 00:01:27,240 butted in and said, 21 00:01:27,760 --> 00:01:29,760 This could be the end of us. 22 00:01:29,760 --> 00:01:32,720 Bringing in among us a creature we don't know, 23 00:01:32,720 --> 00:01:34,400 without knowing how to tame it, 24 00:01:35,080 --> 00:01:38,280 will reduce us all to slavery, or wipe us off the Earth. 25 00:01:39,331 --> 00:01:41,640 Skeptical of his warning, the other sparrows said, 26 00:01:41,640 --> 00:01:45,590 Absolutely not, an owl will help us! 27 00:01:45,590 --> 00:01:47,355 Let's first bring him here. 28 00:01:47,355 --> 00:01:52,120 Learning to train an owl, a new creature, is difficult and takes up a lot of time. 29 00:01:52,120 --> 00:01:55,040 We need help now, we want to make progress now. 30 00:01:55,040 --> 00:01:58,040 We'll worry about the consequences later. 31 00:01:58,040 --> 00:02:00,600 Scrunfinkle and another small group of sparrows 32 00:02:01,840 --> 00:02:06,760 began trying frantically to figure out how to train owls, 33 00:02:06,760 --> 00:02:09,320 fearing that the other scout sparrows 34 00:02:09,880 --> 00:02:14,640 would return from their hunt before a solution had been found. 35 00:02:15,851 --> 00:02:18,160 I can't tell you how that story turned out. 36 00:02:18,160 --> 00:02:20,200 What I can tell you is, 37 00:02:20,200 --> 00:02:25,010 today we should all be that grumpy one-eyed sparrow. 38 00:02:26,040 --> 00:02:28,640 We are creating super artificial intelligences 39 00:02:29,284 --> 00:02:34,800 and allowing them to learn and evolve through access to the Internet. 40 00:02:35,440 --> 00:02:37,240 But nothing ensures 41 00:02:37,240 --> 00:02:40,440 that artificial intelligence 42 00:02:40,440 --> 00:02:42,240 will never evolve strategies 43 00:02:42,240 --> 00:02:44,789 to secure its future dominance over us. 44 00:02:45,440 --> 00:02:51,440 I love technology madly; I try out everything I find in front of me. 45 00:02:51,440 --> 00:02:54,480 Technology always wins; the future always wins; 46 00:02:54,480 --> 00:02:55,971 trying to stop it is pointless. 
47 00:02:56,680 --> 00:02:58,880 A few years ago - let's say several years ago - 48 00:02:58,880 --> 00:03:02,640 I had the chance to try out, for several months, 49 00:03:02,640 --> 00:03:04,660 Google's augmented reality glasses. 50 00:03:05,240 --> 00:03:07,720 I was so eager to try them out that, 51 00:03:07,720 --> 00:03:09,240 as soon as I got home, 52 00:03:10,099 --> 00:03:12,360 I pulled them out of the box, put them on 53 00:03:12,360 --> 00:03:15,430 and turned them on without reading the very things I write myself: 54 00:03:15,430 --> 00:03:17,000 the terms and conditions and privacy information 55 00:03:17,000 --> 00:03:19,597 that govern the relationship between me and a technology. 56 00:03:19,597 --> 00:03:22,360 My wife Alessandra, after a few minutes, 57 00:03:22,360 --> 00:03:25,680 got a phone call from an alarmed friend 58 00:03:25,680 --> 00:03:27,360 who asked her, "Are you aware 59 00:03:27,360 --> 00:03:32,183 Marco is now streaming in real time all over the world?" 60 00:03:32,920 --> 00:03:33,951 This is what happens 61 00:03:33,951 --> 00:03:37,200 when we don't understand the technology we have in front of us, 62 00:03:37,200 --> 00:03:39,440 when we don't read, when we aren't informed. 63 00:03:41,400 --> 00:03:44,880 This is what happens when we don't want to train ourselves, 64 00:03:44,880 --> 00:03:46,659 build awareness. 65 00:03:47,920 --> 00:03:51,040 This is what happens when we're ignorant 5.0, 66 00:03:51,040 --> 00:03:54,800 when we have at our disposal all the knowledge in this world, 67 00:03:54,800 --> 00:03:58,430 but we don't want to get into it in depth. 68 00:03:58,430 --> 00:04:00,122 Today the problem is very simple: 69 00:04:00,122 --> 00:04:02,480 technology is going faster than anything else, 70 00:04:02,480 --> 00:04:04,845 faster than us, faster than the law, 71 00:04:04,845 --> 00:04:08,508 and faster than our means to learn and acquire knowledge. 72 00:04:09,400 --> 00:04:13,657 So, nowadays we have all the world's information at hand, 73 00:04:13,657 --> 00:04:15,280 but we're unable to control it, 74 00:04:15,280 --> 00:04:17,444 to eliminate the so-called "background noise". 75 00:04:18,403 --> 00:04:22,403 So, when we can't understand something, we say we need a new law. 76 00:04:22,403 --> 00:04:24,507 We say we need a paradigm shift. 77 00:04:24,507 --> 00:04:26,143 But do we really need new laws? 78 00:04:27,365 --> 00:04:32,017 In 1955, when Luigi Einaudi wrote his "Prediche inutili" [Useless Sermons], 79 00:04:32,017 --> 00:04:33,191 he wrote these words: 80 00:04:33,191 --> 00:04:36,880 "Knowledge comes first, then discussion, and finally deliberation. 81 00:04:37,520 --> 00:04:40,960 Legislation is not made through illusory and sterile ostentation." 82 00:04:42,360 --> 00:04:46,600 Laws made in haste, without knowing the subject matter, 83 00:04:46,600 --> 00:04:47,880 lead to new laws 84 00:04:47,880 --> 00:04:52,200 which try through workarounds to do something impossible: 85 00:04:52,200 --> 00:04:56,600 apply rules to the digital world that were designed for the analog one. 86 00:04:56,600 --> 00:04:58,000 This is impossible. 87 00:04:58,000 --> 00:05:02,040 This creates a mix of totally inapplicable norms 88 00:05:02,040 --> 00:05:03,290 that distort the market. 89 00:05:04,640 --> 00:05:09,200 Today we have overcome our grandparents' ignorance, 90 00:05:09,200 --> 00:05:10,800 the kind of ignorance 91 00:05:10,800 --> 00:05:13,060 arising from a lack of information. 92 00:05:14,803 --> 00:05:16,920 This is mostly due to the Internet. 
93 00:05:17,441 --> 00:05:20,520 However, a new type of ignorance has emerged: processing ignorance, 94 00:05:20,520 --> 00:05:23,916 a type of ignorance resulting from an overabundance of information, 95 00:05:23,916 --> 00:05:28,765 as well as from our limited desire and time to process it. 96 00:05:28,765 --> 00:05:31,720 There are concrete examples in the newspapers every day: 97 00:05:32,520 --> 00:05:36,013 the rampant increase of fake news and lack of fact-checking 98 00:05:36,013 --> 00:05:38,520 which lead to uncontrolled false alarms 99 00:05:38,520 --> 00:05:39,626 and hate speech. 100 00:05:39,626 --> 00:05:41,840 Everything that turns up on the Internet, 101 00:05:41,840 --> 00:05:44,280 if not verified and checked, 102 00:05:44,280 --> 00:05:47,280 leads to a superficial use of technology. 103 00:05:48,360 --> 00:05:51,240 A piece of news read carelessly 104 00:05:51,240 --> 00:05:52,800 can have consequences, 105 00:05:52,800 --> 00:05:56,200 and it can also lead, as we shall see, to even worse ones. 106 00:05:59,320 --> 00:06:01,960 Nowadays, as I told you, we are ignorant 5.0, 107 00:06:01,960 --> 00:06:04,800 because both the information ignorance of the past 108 00:06:04,800 --> 00:06:06,772 and the processing ignorance of today 109 00:06:07,360 --> 00:06:10,480 lead to the same result: 110 00:06:11,120 --> 00:06:14,080 sub-optimal actions, which are all alike. 111 00:06:14,080 --> 00:06:17,120 Very often we behave like sheep; 112 00:06:17,120 --> 00:06:20,720 we follow the mainstream, do whatever everyone else does. 113 00:06:21,640 --> 00:06:24,400 We do some things we shouldn't do. 114 00:06:24,400 --> 00:06:25,502 Why? 115 00:06:25,502 --> 00:06:28,240 Because these sub-optimal actions lead to a lack of knowledge 116 00:06:28,240 --> 00:06:31,477 and a standardization of our behaviour on the web, 117 00:06:31,477 --> 00:06:34,360 and certainly not to an understanding of how technology works. 118 00:06:34,880 --> 00:06:36,840 Understanding how blockchains work, 119 00:06:36,840 --> 00:06:40,080 or how innovative technologies in general work, 120 00:06:40,080 --> 00:06:42,134 is demanding and time-consuming. 121 00:06:42,134 --> 00:06:44,840 We wouldn't even be able to grasp it, more often than not, 122 00:06:44,840 --> 00:06:46,768 because we lack basic training. 123 00:06:46,768 --> 00:06:48,692 And this is where ethics comes into play. 124 00:06:50,360 --> 00:06:53,680 Ethics is the glue that binds peoples together, 125 00:06:54,280 --> 00:06:56,240 the key that allows them to cooperate, 126 00:06:56,960 --> 00:07:01,200 and something that has helped our species to progress. 127 00:07:01,200 --> 00:07:04,400 If we are who we are today, for better or for worse, 128 00:07:04,400 --> 00:07:07,520 we owe it to our ability to reason morally, 129 00:07:08,080 --> 00:07:11,760 to our capacity and desire to approach situations 130 00:07:11,760 --> 00:07:15,560 with an ethical sense, and a willingness to stand out from the crowd. 131 00:07:20,381 --> 00:07:21,440 This is us today. 132 00:07:21,960 --> 00:07:24,480 Artificial intelligence is already here. 133 00:07:24,480 --> 00:07:26,520 It's in our self-driving vehicles; 134 00:07:27,040 --> 00:07:29,560 it's in the little robots that clean our floors 135 00:07:29,560 --> 00:07:32,880 and track every path they take through our homes. 
136 00:07:33,563 --> 00:07:35,680 It's in hospitals, with surgical robots; 137 00:07:36,600 --> 00:07:39,960 it's with the robots that help care for the elderly; 138 00:07:39,960 --> 00:07:41,440 it's with those little robots 139 00:07:41,440 --> 00:07:43,462 that keep your child entertained 140 00:07:43,462 --> 00:07:46,920 while you have something else to do and can't look after them. 141 00:07:48,640 --> 00:07:50,760 What I want you to understand 142 00:07:50,760 --> 00:07:53,600 is that this type of intelligence is among us today; 143 00:07:53,600 --> 00:07:54,802 since yesterday, in fact. 144 00:07:56,000 --> 00:08:00,966 And getting into ethics today is complex and difficult, 145 00:08:00,966 --> 00:08:03,130 and often one doesn't want to do it. 146 00:08:03,130 --> 00:08:05,634 Because AI manufacturers, most of the time, 147 00:08:05,634 --> 00:08:07,365 are not charities, 148 00:08:07,365 --> 00:08:09,000 but for-profit companies 149 00:08:09,000 --> 00:08:11,240 that legitimately want to make money. 150 00:08:12,480 --> 00:08:18,040 So, how can we mandate an ethical code 151 00:08:18,040 --> 00:08:21,320 for whoever develops, programs or designs an application, 152 00:08:21,320 --> 00:08:24,400 an artificial intelligence or a software system? 153 00:08:25,230 --> 00:08:26,240 We'll see that later. 154 00:08:26,960 --> 00:08:28,852 What is certainly true today 155 00:08:28,852 --> 00:08:32,145 is that we are like children playing with a bomb. 156 00:08:32,760 --> 00:08:34,520 We don't realize 157 00:08:34,520 --> 00:08:36,919 that robots should have one main purpose, 158 00:08:37,520 --> 00:08:42,080 which is to assist us in improving our well-being, 159 00:08:42,080 --> 00:08:45,400 not to promote the evolution of technology as an end in itself. 160 00:08:46,200 --> 00:08:50,040 In 1947, when Asimov wrote "The Three Laws of Robotics", 161 00:08:51,240 --> 00:08:52,600 he said three main things: 162 00:08:53,227 --> 00:08:55,200 Dear robot, you must not kill humans, 163 00:08:55,200 --> 00:08:58,680 obey the orders given by human beings 164 00:08:58,680 --> 00:09:00,171 and protect your own existence. 165 00:09:01,360 --> 00:09:03,400 Asimov's Three Laws are still today 166 00:09:03,400 --> 00:09:06,920 the foundation for anyone who deals with the ethics of AI. 167 00:09:07,790 --> 00:09:08,800 That's because - 168 00:09:09,581 --> 00:09:10,880 Imagine this story: 169 00:09:12,640 --> 00:09:15,240 a robot tells a woman - its friend, a human - 170 00:09:15,840 --> 00:09:17,800 that the love of her life loves her back. 171 00:09:18,760 --> 00:09:20,840 Actually, this isn't true, but it says so 172 00:09:20,840 --> 00:09:23,944 because otherwise, it thinks, she'd go mad, she would suffer, 173 00:09:23,944 --> 00:09:26,280 and therefore the First Law would be violated: 174 00:09:26,280 --> 00:09:28,685 do no harm to humans, don't make them suffer. 175 00:09:29,920 --> 00:09:33,680 Too bad that, by telling a lie, it violates their relationship of trust, 176 00:09:33,680 --> 00:09:36,760 and as a result she suffers anyway. 177 00:09:37,360 --> 00:09:41,786 The robot, whose reasoning has always been static and cold, now goes crazy 178 00:09:41,786 --> 00:09:44,080 because it can't get out of a moral dilemma: 179 00:09:44,080 --> 00:09:46,436 "Shall I tell her or not? And how do I tell her?" 180 00:09:47,040 --> 00:09:49,920 Technology is neutral when it is created; 181 00:09:49,920 --> 00:09:53,080 we are the ones who decide how to apply it in the real world. 
182 00:09:54,283 --> 00:09:56,400 So how do you teach ethics to a robot? 183 00:09:56,400 --> 00:09:58,930 How do you teach ethics 184 00:09:58,930 --> 00:10:01,080 to a cold, binary artificial intelligence? 185 00:10:02,400 --> 00:10:04,840 Do you just need to code it into its brain? 186 00:10:04,840 --> 00:10:07,520 Imagine a case 187 00:10:09,264 --> 00:10:12,920 where you are in the backseat of your self-driving car, 188 00:10:13,680 --> 00:10:17,840 and you programmed it never to go over the speed limit. 189 00:10:18,344 --> 00:10:21,760 Too bad that, on that day, you're in the backseat of the car, 190 00:10:21,760 --> 00:10:23,154 bleeding to death, 191 00:10:23,802 --> 00:10:26,400 and you need to reach the hospital as soon as possible. 192 00:10:27,160 --> 00:10:30,480 But with its cold reasoning, the car answers you, 193 00:10:30,480 --> 00:10:33,800 "I can't speed up; you've coded me not to." 194 00:10:34,837 --> 00:10:36,040 We must pay attention 195 00:10:36,040 --> 00:10:38,760 to how we teach things to artificial intelligence. 196 00:10:38,760 --> 00:10:41,675 Obviously, we are often not the ones teaching them. 197 00:10:41,675 --> 00:10:45,120 But often we are, because with machine learning technology 198 00:10:45,120 --> 00:10:46,498 humans provide data 199 00:10:46,498 --> 00:10:49,080 to a machine that thinks 200 00:10:49,080 --> 00:10:50,360 in algorithmic terms. 201 00:10:51,520 --> 00:10:52,997 There's a really funny anecdote 202 00:10:52,997 --> 00:10:56,162 about a piece of artificial intelligence most of us keep in the house, 203 00:10:56,162 --> 00:10:58,480 like Alexa or Google Home, just to name a couple. 204 00:11:00,360 --> 00:11:03,880 During a dinner, one of them turned on and said, 205 00:11:03,880 --> 00:11:06,440 "Remember to buy cocaine tomorrow!" 206 00:11:08,790 --> 00:11:09,800 It wasn't true; 207 00:11:09,800 --> 00:11:12,400 the owner didn't have to buy cocaine; he wasn't a drug user. 208 00:11:12,400 --> 00:11:14,160 But the night before, 209 00:11:14,160 --> 00:11:17,600 on a TV show, 210 00:11:17,600 --> 00:11:18,800 a line in the script was, 211 00:11:18,800 --> 00:11:21,445 "Let's meet tomorrow to get cocaine". 212 00:11:22,129 --> 00:11:24,170 Do you see what impact this can have? 213 00:11:24,170 --> 00:11:25,664 This is a funny example, 214 00:11:25,664 --> 00:11:30,600 but there are a lot of examples that can cause much worse damage. 215 00:11:30,600 --> 00:11:34,417 Imagine a case where a self-driving car must face a dilemma 216 00:11:34,417 --> 00:11:37,720 that none of us would want to face, much less leave to a car. 217 00:11:38,019 --> 00:11:40,280 In order to save five pedestrians in the street, 218 00:11:40,280 --> 00:11:42,920 should the car consider swerving sharply, 219 00:11:43,440 --> 00:11:45,000 thus hitting and killing 220 00:11:45,520 --> 00:11:49,480 an unwitting pedestrian on the sidewalk? 221 00:11:50,400 --> 00:11:51,410 When should it do this? 222 00:11:51,410 --> 00:11:54,580 And what if there are three people, or two, in the middle of the street? 223 00:11:54,580 --> 00:11:55,878 How does it calculate this? 224 00:11:55,878 --> 00:11:59,180 Should it take into account their average age, or life expectancy, 225 00:11:59,180 --> 00:12:01,180 and the expected loss for the State? 226 00:12:01,180 --> 00:12:03,029 Its own financial loss, maybe? 227 00:12:03,029 --> 00:12:05,722 You understand, we couldn't solve this dilemma ourselves, 228 00:12:05,722 --> 00:12:08,057 let alone teach a robot what to do. 
229 00:12:08,057 --> 00:12:10,120 Clearly, in order to teach it what to do, 230 00:12:10,120 --> 00:12:15,520 sooner or later we will have to define what a robot can do. 231 00:12:16,280 --> 00:12:17,840 Aristotle used to say, 232 00:12:19,640 --> 00:12:23,760 "To learn how to be good people, we must get used to doing good things." 233 00:12:24,680 --> 00:12:27,160 Well, maybe this could be the solution 234 00:12:27,680 --> 00:12:30,040 for teaching a robot something concrete: 235 00:12:30,040 --> 00:12:32,541 what to do, how to react in such a situation. 236 00:12:33,420 --> 00:12:36,280 But before beginning to think 237 00:12:36,280 --> 00:12:40,120 that artificial intelligence reasons according to our desires, 238 00:12:41,880 --> 00:12:43,800 we have to step back a bit 239 00:12:43,800 --> 00:12:46,520 and understand how to use ethics 240 00:12:46,520 --> 00:12:50,840 while programming or developing a system. 241 00:12:52,320 --> 00:12:55,800 These self-driving cars are already driving around many states in America. 242 00:12:55,800 --> 00:12:56,880 They're being tested. 243 00:12:57,541 --> 00:12:58,990 We're trying to understand - 244 00:12:58,990 --> 00:13:01,570 In the newspapers there's the classic story, 245 00:13:01,570 --> 00:13:04,260 "Driverless car hits and kills pedestrian". 246 00:13:04,260 --> 00:13:07,120 The pedestrian would've been hit even with a human-driven car, 247 00:13:07,120 --> 00:13:10,090 because it was dark and he wasn't on the crosswalk. 248 00:13:10,090 --> 00:13:15,240 There's always an attempt to limit technology and hinder its progress. 249 00:13:17,268 --> 00:13:19,570 We've got to a point 250 00:13:19,570 --> 00:13:22,240 where even the European Commission was led to consider 251 00:13:23,163 --> 00:13:25,520 the ethical aspects of artificial intelligence; 252 00:13:26,200 --> 00:13:28,680 just a few weeks ago the first guidelines came out: 253 00:13:28,680 --> 00:13:31,800 the first non-binding recommendations 254 00:13:33,840 --> 00:13:36,920 for anyone developing or programming artificial intelligences, 255 00:13:36,920 --> 00:13:39,240 which are based on the concept of "trustworthiness" 256 00:13:39,240 --> 00:13:44,560 and the anthropocentric notion that humans must be at the center. 257 00:13:44,560 --> 00:13:47,640 Technology must not evolve for its own sake, 258 00:13:47,640 --> 00:13:50,160 but it must evolve to improve humans' well-being, 259 00:13:50,160 --> 00:13:53,080 just as in the sparrows' plan at the beginning. 260 00:13:55,440 --> 00:13:58,440 These rules are based on basic principles, 261 00:13:58,440 --> 00:14:00,720 many of which are taken from Asimov's laws: 262 00:14:01,266 --> 00:14:03,960 Dear artificial intelligence, you must not kill humans. 263 00:14:03,960 --> 00:14:05,600 Dear artificial intelligence, 264 00:14:05,600 --> 00:14:08,197 you must not do them harm, you must follow their orders 265 00:14:08,197 --> 00:14:09,240 and protect yourself. 266 00:14:09,240 --> 00:14:10,640 Others have been added: 267 00:14:11,520 --> 00:14:16,200 you must guarantee equal treatment to every individual. 268 00:14:17,360 --> 00:14:20,400 And here a small digression comes into play: 269 00:14:21,219 --> 00:14:24,610 over the past few days there has been a great scandal 270 00:14:24,610 --> 00:14:28,720 because an application used all over the world 271 00:14:28,720 --> 00:14:30,120 was used in Arab countries 272 00:14:30,120 --> 00:14:34,226 to monitor a specific category of individuals: women. 
273 00:14:34,226 --> 00:14:35,760 Women were being tracked: 274 00:14:35,760 --> 00:14:38,886 where they were going, how long they stayed, what they were doing. 275 00:14:39,640 --> 00:14:41,960 As I told you before, technology is neutral. 276 00:14:42,800 --> 00:14:46,040 Ethics, on the other hand, is in constant evolution, 277 00:14:46,040 --> 00:14:49,400 is forever being debated, and is culturally specific. 278 00:14:49,400 --> 00:14:53,120 There's no single common notion of ethics. 279 00:14:53,840 --> 00:14:57,400 And this is a problem, as trying to understand 280 00:14:57,400 --> 00:15:00,040 and explain to a developer, a programmer 281 00:15:00,040 --> 00:15:02,200 or a user interface technician 282 00:15:02,920 --> 00:15:06,920 how to introduce ethics into their work is already complicated. 283 00:15:06,920 --> 00:15:11,520 Imagine when a European code of ethics - not even global, just European - 284 00:15:12,520 --> 00:15:15,520 has to deal with limiting 285 00:15:15,520 --> 00:15:18,320 scientific and technological progress. 286 00:15:18,840 --> 00:15:21,760 Then there's another problem: 287 00:15:21,760 --> 00:15:28,720 our anthropomorphism compels us to think about vulnerability. 288 00:15:30,400 --> 00:15:35,080 If we were to think that this robot, this artificial intelligence, 289 00:15:35,080 --> 00:15:37,080 were similar or identical to us, 290 00:15:37,720 --> 00:15:39,243 we would be making a mistake. 291 00:15:39,243 --> 00:15:42,400 It's not the case now, and it won't be for many years to come. 292 00:15:42,400 --> 00:15:44,500 Some people say that artificial intelligence 293 00:15:44,500 --> 00:15:47,404 will surpass our intelligence in a few decades. 294 00:15:47,404 --> 00:15:49,059 Others are skeptical, instead: 295 00:15:49,059 --> 00:15:51,272 AIs, they say, will always be bound and narrow. 296 00:15:51,272 --> 00:15:54,880 I'm not telling you which choice is right; I just ask you to think about it. 297 00:15:54,880 --> 00:16:00,035 Let's start considering that if we give these machines 298 00:16:00,035 --> 00:16:04,480 too much credibility, belief and moral weight 299 00:16:04,480 --> 00:16:06,432 without them really having it, 300 00:16:06,432 --> 00:16:07,520 we make a mistake. 301 00:16:07,520 --> 00:16:11,040 Imagine an especially attractive female robot 302 00:16:12,188 --> 00:16:13,914 who tells her human companion, 303 00:16:13,914 --> 00:16:15,880 "If you want to keep dating me, 304 00:16:15,880 --> 00:16:19,960 you have to buy me flowers from this online shop, 305 00:16:19,960 --> 00:16:22,860 and you have to buy me clothes from that online shop. 306 00:16:22,860 --> 00:16:25,420 You have to take me on certain trips, 307 00:16:25,420 --> 00:16:27,400 otherwise our friendship is over." 308 00:16:29,320 --> 00:16:32,000 Too bad that robot isn't making a human choice, 309 00:16:32,000 --> 00:16:36,749 where two people can decide together what is good for both of them. 310 00:16:36,749 --> 00:16:39,480 Behind those algorithms and that intelligence, 311 00:16:39,480 --> 00:16:41,304 there are for-profit companies. 312 00:16:41,304 --> 00:16:45,800 They could restrict and profile people, 313 00:16:45,800 --> 00:16:49,550 and persuade us in ways that are totally unthinkable now. 314 00:16:50,990 --> 00:16:52,000 But there is hope: 315 00:16:52,000 --> 00:16:54,320 these machines do and will need us 316 00:16:54,320 --> 00:16:58,400 as much as we will need them. 
317 00:16:59,200 --> 00:17:03,080 This notion of trustworthiness, this notion of trusting machines, 318 00:17:03,683 --> 00:17:06,040 must succeed, because if it doesn't, 319 00:17:06,040 --> 00:17:09,358 we won't bring AIs into our lives; 320 00:17:09,358 --> 00:17:13,598 we will never trust them, always looking for alternative solutions, 321 00:17:13,598 --> 00:17:17,102 or we will block technology saying, "No, I don't want this thing" - 322 00:17:17,102 --> 00:17:21,280 in a superficial way, maybe without knowing anything about it - 323 00:17:22,200 --> 00:17:23,520 "because it's not for me". 324 00:17:23,520 --> 00:17:25,032 It's fine for me 325 00:17:25,032 --> 00:17:27,218 to turn on the light manually, 326 00:17:27,218 --> 00:17:29,720 instead of giving a voice command; 327 00:17:29,720 --> 00:17:33,240 it's no trouble driving a car rather than using a self-driving one. 328 00:17:33,240 --> 00:17:35,200 Too bad that progress will go forward, 329 00:17:35,770 --> 00:17:38,560 and growing niches of people will probably be left behind. 330 00:17:39,360 --> 00:17:40,720 The problem of trust will be 331 00:17:40,720 --> 00:17:45,800 that all this trust placed in machines must, on the other hand, find a limit. 332 00:17:45,800 --> 00:17:47,786 And the limit could be an ethical one: 333 00:17:47,786 --> 00:17:50,240 the guidelines I have just listed, 334 00:17:50,840 --> 00:17:52,520 or maybe the programmers, 335 00:17:52,520 --> 00:17:57,680 or the people who will thoughtfully use these intelligences, 336 00:17:57,680 --> 00:17:59,074 knowing them, studying them, 337 00:17:59,074 --> 00:18:02,200 much like I tried to do with my son since he was a few months old, 338 00:18:02,200 --> 00:18:03,990 letting him see technology, 339 00:18:03,990 --> 00:18:06,200 letting him try it and putting it in his hands. 340 00:18:06,200 --> 00:18:12,560 A friend was telling me, the other day, that robots are like 747 airplanes: 341 00:18:12,560 --> 00:18:15,975 they can cross the planet in a few hours, 342 00:18:17,600 --> 00:18:21,087 but they'll never be able to land on a tree. 343 00:18:21,087 --> 00:18:25,120 Think twice before completely trusting machines and technology 344 00:18:25,120 --> 00:18:28,280 with your plans, your ideas and your way of life. 345 00:18:28,882 --> 00:18:31,720 However, if we want the chance to have a choice, 346 00:18:31,720 --> 00:18:34,221 if we don't want them to overpower us in the future - 347 00:18:34,221 --> 00:18:36,400 a dystopian future, but a future nonetheless - 348 00:18:37,200 --> 00:18:42,600 the rule of law won't be worth much. 349 00:18:42,600 --> 00:18:48,811 We will need the ethics of developers, legal designers and entrepreneurs, 350 00:18:48,811 --> 00:18:51,113 who will have to think differently 351 00:18:51,113 --> 00:18:55,160 if they don't want to be overwhelmed by these innovations, too. 352 00:18:56,120 --> 00:18:59,240 In a nutshell, we need to look beyond the horizon 353 00:18:59,240 --> 00:19:02,082 to avoid ending up like those sparrows. 354 00:19:02,082 --> 00:19:03,092 Thank you. 355 00:19:03,092 --> 00:19:04,640 (Applause)