How humans and AI can work together to create better businesses

  • 0:01 - 0:03
    Let me share a paradox.
  • 0:04 - 0:06
    For the last 10 years,
  • 0:06 - 0:10
    many companies have been trying
    to become less bureaucratic,
  • 0:10 - 0:13
    to have fewer central rules
    and procedures,
  • 0:13 - 0:16
    more autonomy for their local
    teams to be more agile.
  • 0:16 - 0:21
    And now they are pushing
    artificial intelligence, AI,
  • 0:21 - 0:23
    unaware that cool technology
  • 0:23 - 0:27
    might make them
    more bureaucratic than ever.
  • 0:27 - 0:29
    Why?
  • 0:29 - 0:32
    Because AI operates
    just like bureaucracies.
  • 0:32 - 0:35
    The essence of bureaucracy
  • 0:35 - 0:39
    is to favor rules and procedures
    over human judgment.
  • 0:40 - 0:44
    And AI decides solely based on rules.
  • 0:44 - 0:47
    Many rules inferred from past data
  • 0:47 - 0:49
    but only rules.
  • 0:49 - 0:53
    And if human judgment
    is not kept in the loop,
  • 0:53 - 0:58
    AI will bring a terrifying form
    of new bureaucracy --
  • 0:58 - 1:01
    I call it algocracy --
  • 1:01 - 1:05
    where AI will take more and more
    critical decisions by the rules
  • 1:05 - 1:07
    outside of any human control.
  • 1:08 - 1:10
    Is there a real risk?
  • 1:11 - 1:12
    Yes.
  • 1:12 - 1:15
    I'm leading a team of 800 AI specialists.
  • 1:15 - 1:19
    We have deployed
    over 100 customized AI solutions
  • 1:19 - 1:21
    for large companies around the world.
  • 1:21 - 1:27
    And I see too many corporate executives
    behaving like bureaucrats from the past.
  • 1:28 - 1:33
    They want to take costly,
    old-fashioned humans out of the loop
  • 1:33 - 1:37
    and rely only upon AI to take decisions.
  • 1:37 - 1:41
    I call this the human-zero mind-set.
  • 1:42 - 1:44
    And why is it so tempting?
  • 1:45 - 1:50
    Because the other route,
    "Human plus AI," is long,
  • 1:50 - 1:53
    costly and difficult.
  • 1:53 - 1:56
    Business teams, tech teams,
    data-science teams
  • 1:56 - 1:58
    have to iterate for months
  • 1:58 - 2:04
    to craft exactly how humans and AI
    can best work together.
  • 2:04 - 2:08
    Long, costly and difficult.
  • 2:08 - 2:10
    But the reward is huge.
  • 2:10 - 2:14
    A recent survey from BCG and MIT
  • 2:14 - 2:18
    shows that 18 percent
    of companies in the world
  • 2:18 - 2:20
    are pioneering AI,
  • 2:20 - 2:23
    making money with it.
  • 2:23 - 2:29
    Those companies focus 80 percent
    of their AI initiatives
  • 2:29 - 2:31
    on effectiveness and growth,
  • 2:31 - 2:33
    taking better decisions --
  • 2:33 - 2:36
    not replacing humans with AI
    to save costs.
  • 2:38 - 2:41
    Why is it important
    to keep humans in the loop?
  • 2:42 - 2:47
    Simply because, left alone,
    AI can do very dumb things.
  • 2:47 - 2:51
    Sometimes with no consequences,
    like in this tweet.
  • 2:51 - 2:53
    "Dear Amazon,
  • 2:53 - 2:54
    I bought a toilet seat.
  • 2:54 - 2:56
    Necessity, not desire.
  • 2:56 - 2:57
    I do not collect them,
  • 2:57 - 3:00
    I'm not a toilet-seat addict.
  • 3:00 - 3:02
    No matter how temptingly you email me,
  • 3:02 - 3:04
    I am not going to think, 'Oh, go on, then,
  • 3:04 - 3:06
    one more toilet seat,
    I'll treat myself.' "
  • 3:06 - 3:08
    (Laughter)
  • 3:08 - 3:12
    Sometimes with more consequences,
    like in this other tweet.
  • 3:13 - 3:15
    "Had the same situation
  • 3:15 - 3:17
    with my mother's burial urn."
  • 3:17 - 3:18
    (Laughter)
  • 3:18 - 3:20
    "For months after her death,
  • 3:20 - 3:23
    I got messages from Amazon,
    saying, 'If you liked that ...' "
  • 3:23 - 3:25
    (Laughter)
  • 3:25 - 3:28
    Sometimes with worse consequences.
  • 3:28 - 3:33
    Take an AI engine rejecting
    a student application for university.
  • 3:33 - 3:34
    Why?
  • 3:34 - 3:36
    Because it has "learned," on past data,
  • 3:36 - 3:40
    the characteristics of students
    who will pass and who will fail.
  • 3:40 - 3:42
    Some are obvious, like GPAs.
  • 3:42 - 3:47
    But if, in the past, all students
    from a given postal code have failed,
  • 3:47 - 3:51
    it is very likely
    that AI will make this a rule
  • 3:51 - 3:55
    and will reject every student
    with this postal code,
  • 3:55 - 3:59
    not giving anyone the opportunity
    to prove the rule wrong.
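
A minimal, invented sketch of the failure mode just described: train a classifier on biased historical admissions data and it silently turns the postal code into a hard rejection rule. The data, districts, and model choice here are illustrative assumptions, not the engine from any real admissions system.

```python
# Illustrative only: invented data showing how a model can learn
# "district 7 => reject" from a biased history.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
gpa = rng.uniform(2.0, 4.0, n)
postal_code = rng.integers(0, 10, n)        # 10 hypothetical districts
passed = (gpa > 3.0).astype(int)
passed[postal_code == 7] = 0                # in the past, district 7 always failed

model = DecisionTreeClassifier(max_depth=3)
model.fit(np.column_stack([gpa, postal_code]), passed)

# A bright applicant (GPA 3.9) from district 7 is rejected by the learned
# rule, with no opportunity to prove the rule wrong.
print(model.predict([[3.9, 7]]))            # -> [0]
```
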
  • 4:00 - 4:02
    And no one can check all the rules,
  • 4:02 - 4:06
    because advanced AI
    is constantly learning.
  • 4:06 - 4:09
    And if humans are kept out of the room,
  • 4:09 - 4:12
    there comes the algocratic nightmare.
  • 4:12 - 4:15
    Who is accountable
    for rejecting the student?
  • 4:15 - 4:17
    No one; the AI did.
  • 4:17 - 4:19
    Is it fair? Yes.
  • 4:19 - 4:22
    The same set of objective rules
    has been applied to everyone.
  • 4:22 - 4:26
    Could we reconsider for this bright kid
    with the wrong postal code?
  • 4:27 - 4:30
    No, algos don't change their mind.
  • 4:31 - 4:33
    We have a choice here.
  • 4:34 - 4:36
    Carry on with algocracy
  • 4:36 - 4:39
    or decide to go to "Human plus AI."
  • 4:39 - 4:41
    And to do this,
  • 4:41 - 4:44
    we need to stop thinking tech first,
  • 4:44 - 4:48
    and we need to start applying
    the secret formula.
  • 4:49 - 4:51
    To deploy Human plus AI,
  • 4:51 - 4:54
    10 percent of the effort is to code algos.
  • 4:54 - 4:57
    Twenty percent to build tech
    around the algos,
  • 4:57 - 5:01
    collecting data, building UI,
    integrating into legacy systems.
  • 5:01 - 5:04
    But 70 percent, the bulk of the effort,
  • 5:04 - 5:09
    is about weaving together AI
    with people and processes
  • 5:09 - 5:11
    to maximize real outcome.
  • 5:12 - 5:17
    AI fails when cutting short
    on the 70 percent.
  • 5:17 - 5:20
    The price tag for that can be small,
  • 5:20 - 5:24
    wasting many, many millions
    of dollars on useless technology.
  • 5:24 - 5:25
    Does anyone care?
  • 5:26 - 5:28
    Or real tragedies.
  • 5:29 - 5:32
    Three hundred and forty-six casualties
  • 5:32 - 5:37
    in the recent crashes
    of two B-737 aircraft
  • 5:37 - 5:40
    when pilots could not interact properly
  • 5:40 - 5:43
    with a computerized command system.
  • 5:44 - 5:46
    For a successful 70 percent,
  • 5:46 - 5:51
    the first step is to make sure
    that algos are coded by data scientists
  • 5:51 - 5:53
    and domain experts together.
  • 5:53 - 5:56
    Take health care for example.
  • 5:56 - 6:00
    One of our teams worked on a new drug
    with a slight problem.
  • 6:01 - 6:02
    When taking their first dose,
  • 6:02 - 6:06
    some patients, very few,
    have heart attacks.
  • 6:06 - 6:09
    So, all patients,
    when taking their first dose,
  • 6:09 - 6:12
    have to spend one day in hospital,
  • 6:12 - 6:14
    for monitoring, just in case.
  • 6:15 - 6:20
    Our objective was to identify patients
    who were at zero risk of heart attacks,
  • 6:20 - 6:23
    who could skip the day in hospital.
  • 6:23 - 6:27
    We used AI to analyze data
    from clinical trials,
  • 6:28 - 6:33
    to correlate ECG signal,
    blood composition, biomarkers,
  • 6:33 - 6:35
    with the risk of heart attack.
  • 6:35 - 6:37
    In one month,
  • 6:37 - 6:43
    our model could flag 62 percent
    of patients at zero risk.
  • 6:43 - 6:45
    They could skip the day in hospital.
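
A hedged sketch of that approach, using synthetic stand-in data (the real work used clinical-trial data: ECG signal, blood composition, biomarkers): train a risk model, then pick the most conservative cut, flagging "zero risk" only below the lowest score of any observed heart-attack case in held-out data.

```python
# Sketch under stated assumptions; features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))              # stand-in biomarker features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 2.2).astype(int)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk = clf.predict_proba(X_val)[:, 1]

# Most conservative cut: "zero risk" means scoring below every
# observed heart-attack case in the validation set.
threshold = risk[y_val == 1].min()
stay_home = risk < threshold
print(f"{stay_home.mean():.0%} of validation patients flagged below all observed events")
```
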
  • 6:46 - 6:49
    Would you be comfortable
    staying at home for your first dose
  • 6:49 - 6:51
    if the algo said so?
  • 6:51 - 6:52
    (Laughter)
  • 6:52 - 6:54
    Doctors were not.
  • 6:54 - 6:56
    What if we had false negatives,
  • 6:56 - 7:02
    meaning people who are told by AI
    they can stay at home, and die?
  • 7:02 - 7:03
    (Laughter)
  • 7:03 - 7:05
    There started our 70 percent.
  • 7:05 - 7:07
    We worked with a team of doctors
  • 7:07 - 7:11
    to check the medical logic
    of each variable in our model.
  • 7:12 - 7:16
    For instance, we were using
    the concentration of a liver enzyme
  • 7:16 - 7:17
    as a predictor,
  • 7:17 - 7:21
    for which the medical logic
    was not obvious.
  • 7:21 - 7:24
    The statistical signal was quite strong.
  • 7:24 - 7:27
    But what if it was a bias in our sample?
  • 7:27 - 7:30
    That predictor was taken out of the model.
  • 7:30 - 7:34
    We also took out predictors
    for which experts told us
  • 7:34 - 7:38
    they cannot be rigorously measured
    by doctors in real life.
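
A small sketch of that vetting step (predictor names invented): each candidate variable needs both a plausible medical rationale and a way to be measured rigorously in practice, or it is dropped before the model is retrained.

```python
# Invented predictor names; the gate mirrors the two removal criteria above.
candidate_predictors = {
    "qt_interval_ms":       {"medical_logic": True,  "measurable": True},
    "troponin_level":       {"medical_logic": True,  "measurable": True},
    "liver_enzyme_conc":    {"medical_logic": False, "measurable": True},   # strong signal, unclear logic
    "capillary_refill_sec": {"medical_logic": True,  "measurable": False},  # not rigorous in the field
}

vetted = [name for name, review in candidate_predictors.items()
          if review["medical_logic"] and review["measurable"]]
print(vetted)   # retrain the model on these vetted predictors only
```
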
  • 7:38 - 7:40
    After four months,
  • 7:40 - 7:43
    we had a model and a medical protocol.
  • 7:44 - 7:45
    They both got approved
  • 7:45 - 7:48
    by medical authorities
    in the US last spring,
  • 7:48 - 7:52
    resulting in far less stress
    for half of the patients
  • 7:52 - 7:54
    and better quality of life.
  • 7:54 - 7:59
    And an expected sales upside
    of over $100 million for that drug.
  • 8:00 - 8:04
    Seventy percent weaving AI
    with teams and processes
  • 8:04 - 8:07
    also means building powerful interfaces
  • 8:07 - 8:13
    for humans and AI to solve
    the most difficult problems together.
  • 8:13 - 8:18
    Once, we got challenged
    by a fashion retailer.
  • 8:19 - 8:22
    "We have the best buyers in the world.
  • 8:22 - 8:27
    Could you build an AI engine
    that would beat them at forecasting sales?
  • 8:27 - 8:31
    At telling how many high-end,
    light-green, men's XL shirts
  • 8:31 - 8:33
    we need to buy for next year?
  • 8:33 - 8:36
    At predicting what will sell or not
  • 8:36 - 8:38
    better than our designers?"
  • 8:38 - 8:42
    Our team trained a model in a few weeks,
    on past sales data,
  • 8:42 - 8:46
    and the competition was organized
    with human buyers.
  • 8:46 - 8:47
    Result?
  • 8:48 - 8:53
    AI wins, reducing forecasting
    errors by 25 percent.
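
One way such a competition can be scored, sketched with invented numbers: the same error metric applied to the buyers' forecasts and the model's forecasts against realized sales.

```python
# Invented data; mean absolute percentage error (MAPE) as the shared metric.
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast) / actual)

actual = np.array([120, 80, 45, 200, 60])   # realized unit sales
buyers = np.array([150, 60, 50, 260, 40])   # human forecasts
ai     = np.array([130, 75, 40, 220, 55])   # model forecasts

print(f"buyers' error: {mape(actual, buyers):.0%}  AI error: {mape(actual, ai):.0%}")
```
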
  • 8:54 - 8:59
    Human-zero champions could have tried
    to implement this initial model
  • 8:59 - 9:02
    and create a fight with all human buyers.
  • 9:02 - 9:03
    Have fun.
  • 9:03 - 9:08
    But we knew that human buyers
    had insights on fashion trends
  • 9:08 - 9:11
    that could not be found in past data.
  • 9:12 - 9:15
    There started our 70 percent.
  • 9:15 - 9:17
    We went for a second test,
  • 9:17 - 9:20
    where human buyers
    were reviewing quantities
  • 9:20 - 9:21
    suggested by AI
  • 9:21 - 9:24
    and could correct them if needed.
  • 9:24 - 9:25
    Result?
  • 9:26 - 9:28
    Humans using AI ...
  • 9:28 - 9:29
    lose.
  • 9:30 - 9:34
    Seventy-five percent
    of the corrections made by a human
  • 9:34 - 9:36
    reduced accuracy.
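
The scoring behind that number, sketched with invented figures chosen to mirror the 75 percent result: for every quantity a buyer overrode, check whether the correction moved the forecast closer to, or further from, realized sales.

```python
# Invented data; each override is judged against what actually sold.
import numpy as np

actual      = np.array([120, 80, 45, 200])
ai_forecast = np.array([130, 75, 40, 220])
corrected   = np.array([150, 60, 42, 260])   # after the buyers' overrides

touched = corrected != ai_forecast
hurt = (np.abs(actual - corrected) > np.abs(actual - ai_forecast))[touched]
print(f"{hurt.mean():.0%} of corrections reduced accuracy")   # -> 75%
```
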
  • 9:37 - 9:40
    Was it time to get rid of human buyers?
  • 9:40 - 9:41
    No.
  • 9:41 - 9:44
    It was time to recreate a model
  • 9:44 - 9:49
    where humans would not try
    to guess when AI is wrong,
  • 9:49 - 9:54
    but where AI would take real input
    from human buyers.
  • 9:55 - 9:57
    We fully rebuilt the model
  • 9:57 - 10:03
    and went away from our initial interface,
    which was, more or less,
  • 10:03 - 10:05
    "Hey, human! This is what I forecast,
  • 10:05 - 10:07
    correct whatever you want,"
  • 10:07 - 10:10
    and moved to a much richer one, more like,
  • 10:10 - 10:12
    "Hey, humans!
  • 10:12 - 10:14
    I don't know the trends for next year.
  • 10:14 - 10:17
    Could you share with me
    your top creative bets?"
  • 10:18 - 10:20
    "Hey, humans!
  • 10:20 - 10:22
    Could you help me quantify
    those few big items?
  • 10:22 - 10:26
    I cannot find any good comparables
    in the past for them."
  • 10:26 - 10:28
    Result?
  • 10:28 - 10:30
    Human plus AI wins,
  • 10:30 - 10:34
    reducing forecast errors by 50 percent.
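
A directional sketch of that design change (the toy data and the trend flag are invented): instead of buyers overwriting the model's outputs, their creative bets enter as inputs, so the model itself learns how much weight human foresight deserves.

```python
# Invented toy data: [last season's sales, buyer flagged item as a trend bet]
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[100, 0], [100, 1], [50, 0], [50, 1], [200, 0], [200, 1]])
y = np.array([  95,      160,      48,      85,      210,      300])

model = LinearRegression().fit(X, y)

# New item: sold 120 units last season, and buyers flag it as next year's trend.
print(model.predict([[120, 1]]))   # the human bet lifts the forecast
```
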
  • 10:36 - 10:39
    It took one year to finalize the tool.
  • 10:39 - 10:42
    Long, costly and difficult.
  • 10:43 - 10:45
    But profits and benefits
  • 10:45 - 10:51
    were in excess of $100 million in savings
    per year for that retailer.
  • 10:51 - 10:54
    Seventy percent on very sensitive topics
  • 10:54 - 10:58
    also means humans have to decide
    what is right or wrong
  • 10:58 - 11:02
    and define rules
    for what AI can do or not,
  • 11:02 - 11:06
    like setting caps on prices
    to prevent pricing engines
  • 11:06 - 11:10
    from charging outrageously high prices
    to uneducated customers
  • 11:10 - 11:12
    who would accept them.
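
A minimal sketch of such a guardrail (the cap value is an invented policy choice): whatever price the engine proposes, a human-defined ceiling is applied last, and it is never learned from data.

```python
def apply_price_cap(proposed_price: float, base_price: float,
                    max_markup: float = 1.5) -> float:
    """Clamp an engine's proposed price to a human-set ceiling.

    max_markup is a policy decision made by people, not a learned value:
    no customer is ever charged more than max_markup times the base price.
    """
    return min(proposed_price, base_price * max_markup)

print(apply_price_cap(proposed_price=499.0, base_price=100.0))   # -> 150.0
```
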
  • 11:13 - 11:15
    Only humans can define those boundaries --
  • 11:15 - 11:19
    there is no way AI
    can find them in past data.
  • 11:19 - 11:22
    Some situations are in the gray zone.
  • 11:22 - 11:25
    We worked with a health insurer.
  • 11:25 - 11:30
    They developed an AI engine
    to identify, among their clients,
  • 11:30 - 11:32
    people who are just about
    to go to hospital
  • 11:32 - 11:34
    to sell them premium services.
  • 11:35 - 11:36
    And the problem is,
  • 11:36 - 11:39
    some prospects were called
    by the commercial team
  • 11:39 - 11:42
    before they themselves knew
  • 11:42 - 11:45
    they would have to go
    to hospital very soon.
  • 11:46 - 11:48
    You are the CEO of this company.
  • 11:48 - 11:50
    Do you stop that program?
  • 11:51 - 11:52
    Not an easy question.
  • 11:53 - 11:56
    And to tackle this question,
    some companies are building teams,
  • 11:56 - 12:02
    defining ethical rules and standards
    to help business and tech teams set limits
  • 12:02 - 12:06
    between personalization and manipulation,
  • 12:06 - 12:09
    customization of offers
    and discrimination,
  • 12:09 - 12:11
    targeting and intrusion.
  • 12:13 - 12:16
    I am convinced that in every company,
  • 12:16 - 12:21
    applying AI where it really matters
    has massive payback.
  • 12:21 - 12:24
    Business leaders need to be bold
  • 12:24 - 12:26
    and select a few topics,
  • 12:26 - 12:31
    and for each of them, mobilize
    10, 20, 30 people from their best teams --
  • 12:31 - 12:34
    tech, AI, data science, ethics --
  • 12:34 - 12:38
    and go through the full
    10-, 20-, 70-percent cycle
  • 12:38 - 12:40
    of Human plus AI,
  • 12:40 - 12:44
    if they want to land AI effectively
    in their teams and processes.
  • 12:45 - 12:47
    There is no other way.
  • 12:47 - 12:52
    Citizens in developed economies
    already fear algocracy.
  • 12:52 - 12:56
    Seven thousand people were interviewed
    in a recent survey.
  • 12:56 - 13:00
    More than 75 percent
    expressed real concerns
  • 13:00 - 13:04
    on the impact of AI
    on the workforce, on privacy,
  • 13:04 - 13:07
    on the risk of a dehumanized society.
  • 13:07 - 13:13
    Pushing algocracy creates a real risk
    of severe backlash against AI
  • 13:13 - 13:17
    within companies or in society at large.
  • 13:17 - 13:20
    Human plus AI is our only option
  • 13:20 - 13:23
    to bring the benefits of AI
    to the real world.
  • 13:24 - 13:25
    And in the end,
  • 13:25 - 13:29
    the winning organizations
    will invest in human knowledge,
  • 13:29 - 13:32
    not just AI and data.
  • 13:33 - 13:36
    Recruiting, training,
    rewarding human experts.
  • 13:37 - 13:40
    Data is said to be the new oil,
  • 13:40 - 13:44
    but believe me, human knowledge
    will make the difference,
  • 13:44 - 13:48
    because it is the only derrick available
  • 13:48 - 13:51
    to pump the oil hidden in the data.
  • 13:53 - 13:54
    Thank you.
  • 13:54 - 13:58
    (Applause)
Title:
How humans and AI can work together to create better businesses
Speaker:
Sylvain Duranton
Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
14:10
