The wonderful and terrifying implications of computers that can learn | Jeremy Howard | TEDxBrussels

  • 0:10 - 0:13
    It used to be that if you wanted
    to get a computer to do something new,
  • 0:13 - 0:15
    you would have to program it.
  • 0:15 - 0:19
    Now, programming, for those of you here
    that haven't done it yourself,
  • 0:19 - 0:22
    requires laying out in excruciating detail
  • 0:22 - 0:25
    every single step that you want
    the computer to do
  • 0:25 - 0:27
    in order to achieve your goal.
  • 0:27 - 0:31
    Now, if you want to do something
    that you don't know how to do yourself,
  • 0:31 - 0:33
    then this is going to be
    a great challenge.
  • 0:33 - 0:37
    So this was the challenge faced
    by this man, Arthur Samuel.
  • 0:37 - 0:43
    In 1956, he wanted to get this computer
    to be able to beat him at checkers.
  • 0:43 - 0:45
    How can you write a program,
  • 0:45 - 0:49
    lay out in excruciating detail,
    how to be better than you at checkers?
  • 0:49 - 0:51
    So he came up with an idea:
  • 0:51 - 0:54
    he had the computer play
    against itself thousands of times
  • 0:54 - 0:57
    and learn how to play checkers.
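Samuel's trick — have the program play both sides and nudge its evaluation of each visited position toward the eventual outcome — can be sketched in a few lines. This is a toy illustration, not Samuel's program: it learns a miniature take-away game (Nim) instead of checkers, and uses a simple tabular value update in place of his polynomial evaluation function; every name in it is invented for the sketch.

```python
import random

def nim_moves(pile):
    # In this toy game a move takes 1, 2, or 3 stones; last stone wins.
    return [n for n in (1, 2, 3) if n <= pile]

def self_play_train(episodes=5000, pile_size=10, seed=0):
    """Samuel-style self-play: the program plays both sides and nudges the
    value of every visited position toward the eventual game outcome."""
    rng = random.Random(seed)
    value = {0: 0.0}  # stones left (for the side to move) -> estimated win chance
    for _ in range(episodes):
        pile, history, player = pile_size, [], 0
        while pile > 0:
            moves = nim_moves(pile)
            if rng.random() < 0.1:
                take = rng.choice(moves)  # explore occasionally
            else:
                # Otherwise leave the opponent the worst-looking position.
                take = min(moves, key=lambda n: value.get(pile - n, 0.5))
            history.append((player, pile))
            pile -= take
            player ^= 1
        winner = player ^ 1  # the player who took the last stone
        for mover, pos in history:
            target = 1.0 if mover == winner else 0.0
            v = value.get(pos, 0.5)
            value[pos] = v + 0.1 * (target - v)  # learn from the result
    return value

def best_move(value, pile):
    # Greedy play against the learned position values.
    return min(nim_moves(pile), key=lambda n: value.get(pile - n, 0.5))
```

After a few thousand self-play games the table reflects the game's real structure (piles that are multiples of four are losing for the side to move), even though no one told the program how to play — only who won.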
  • 0:57 - 1:00
    And indeed it worked,
    and in fact, by 1962,
  • 1:00 - 1:03
    this computer had beaten
    the Connecticut state champion.
  • 1:03 - 1:07
    So Arthur Samuel was
    the father of machine learning,
  • 1:07 - 1:08
    and I have a great debt to him,
  • 1:08 - 1:11
    because I am
    a machine learning practitioner.
  • 1:11 - 1:13
    I was the president of Kaggle,
  • 1:13 - 1:16
    a community of over 200,000
    machine learning practitioners.
  • 1:16 - 1:18
    Kaggle puts up competitions
  • 1:18 - 1:22
    to try and get them to solve
    previously unsolved problems,
  • 1:22 - 1:25
    and it's been successful
    hundreds of times.
  • 1:26 - 1:28
    So from this vantage point,
    I was able to find out
  • 1:28 - 1:32
    a lot about what machine learning
    can do in the past, can do today,
  • 1:32 - 1:34
    and what it could do in the future.
  • 1:34 - 1:37
    Perhaps the first big success
  • 1:37 - 1:39
    of machine learning commercially
    was Google.
  • 1:39 - 1:42
    Google showed that it is possible
    to find information
  • 1:42 - 1:44
    by using a computer algorithm,
  • 1:44 - 1:47
    and this algorithm is based
    on machine learning.
  • 1:47 - 1:51
    Since that time, there have been many
    commercial successes of machine learning.
  • 1:51 - 1:53
    Companies like Amazon and Netflix
  • 1:53 - 1:56
    use machine learning to suggest
    products that you might like to buy,
  • 1:56 - 1:58
    movies that you might like to watch.
  • 1:58 - 2:00
    Sometimes, it's almost creepy.
  • 2:00 - 2:02
    Companies like LinkedIn and Facebook
  • 2:02 - 2:04
    sometimes will tell you about
    who your friends might be
  • 2:04 - 2:06
    and you have no idea how it did it,
  • 2:06 - 2:09
    and this is because it's using
    the power of machine learning.
  • 2:09 - 2:13
    These are algorithms that have learned
    how to do this from data
  • 2:13 - 2:15
    rather than being programmed by hand.
  • 2:16 - 2:18
    This is also how IBM was successful
  • 2:18 - 2:21
    in getting Watson to beat
    two world champions at "Jeopardy,"
  • 2:21 - 2:24
    answering incredibly subtle
    and complex questions like this one.
  • 2:24 - 2:29
    [The ancient "Lion of Nimrud" went missing
    from this city's national museum in 2003]
  • 2:29 - 2:32
    This is also why we are now able
    to see the first self-driving cars.
  • 2:33 - 2:35
    If you want to be able to tell
    the difference between, say,
  • 2:35 - 2:38
    a tree and a pedestrian,
    well, that's pretty important.
  • 2:38 - 2:41
    We don't know how to write
    those programs by hand,
  • 2:41 - 2:44
    but with machine learning,
    this is now possible.
  • 2:44 - 2:46
    And in fact, this car has driven
    over a million miles
  • 2:46 - 2:49
    without any accidents on regular roads.
  • 2:49 - 2:52
    So we now know that computers can learn,
  • 2:52 - 2:55
    and computers can learn to do things
  • 2:55 - 2:57
    that we actually sometimes
    don't know how to do ourselves,
  • 2:57 - 3:00
    or maybe can do them better than us.
  • 3:00 - 3:04
    One of the most amazing examples
    I've seen of machine learning
  • 3:04 - 3:07
    happened on a project that I ran at Kaggle
  • 3:07 - 3:11
    where a team run by a guy
    called Geoffrey Hinton
  • 3:11 - 3:12
    from the University of Toronto
  • 3:12 - 3:15
    won a competition
    for automatic drug discovery.
  • 3:15 - 3:17
    Now, what was extraordinary here
    is not just that they beat
  • 3:17 - 3:22
    all of the algorithms developed by Merck
    or the international academic community,
  • 3:22 - 3:27
    but nobody on the team had any background
    in chemistry or biology or life sciences,
  • 3:27 - 3:29
    and they did it in two weeks.
  • 3:29 - 3:31
    How did they do this?
  • 3:31 - 3:34
    They used an extraordinary algorithm
    called deep learning.
  • 3:34 - 3:37
    So important was this that in fact
    the success was covered
  • 3:37 - 3:40
    in The New York Times
    in a front page article a few weeks later.
  • 3:40 - 3:43
    This is Geoffrey Hinton
    here on the left-hand side.
  • 3:43 - 3:47
    Deep learning is an algorithm
    inspired by how the human brain works,
  • 3:47 - 3:49
    and as a result it's an algorithm
  • 3:49 - 3:52
    which has no theoretical limitations
    on what it can do.
  • 3:52 - 3:56
    The more data you give it and the more
    computation time you give it,
  • 3:56 - 3:57
    the better it gets.
  • 3:57 - 3:59
    The New York Times
    also showed in this article
  • 3:59 - 4:02
    another extraordinary result
    of deep learning
  • 4:02 - 4:04
    which I'm going to show you now.
  • 4:04 - 4:08
    It shows that computers
    can listen and understand.
  • 4:09 - 4:11
    (Video) Richard Rashid: Now, the last step
  • 4:11 - 4:14
    that I want to be able
    to take in this process
  • 4:14 - 4:17
    is to actually speak to you in Chinese.
  • 4:19 - 4:22
    Now the key thing there is,
  • 4:22 - 4:27
    we've been able to take a large amount
    of information from many Chinese speakers
  • 4:27 - 4:30
    and produce a text-to-speech system
  • 4:30 - 4:34
    that takes Chinese text
    and converts it into Chinese language,
  • 4:35 - 4:39
    and then we've taken
    an hour or so of my own voice
  • 4:39 - 4:41
    and we've used that to modulate
  • 4:41 - 4:45
    the standard text-to-speech system
    so that it would sound like me.
  • 4:45 - 4:48
    Again, the result's not perfect.
  • 4:48 - 4:51
    There are in fact quite a few errors.
  • 4:51 - 4:53
    (In Chinese)
  • 4:53 - 4:57
    (Applause)
  • 4:58 - 5:01
    There's much work to be done in this area.
  • 5:01 - 5:05
    (In Chinese)
  • 5:05 - 5:08
    (Applause)
  • 5:10 - 5:14
    Jeremy Howard: Well, that was
    at a machine learning conference in China.
  • 5:14 - 5:17
    It's not often, actually,
    at academic conferences
  • 5:17 - 5:19
    that you do hear spontaneous applause,
  • 5:19 - 5:22
    although of course sometimes
    at TEDx conferences, feel free.
  • 5:22 - 5:25
    Everything you saw there
    was happening with deep learning.
  • 5:25 - 5:27
    (Applause) Thank you.
  • 5:27 - 5:29
    The transcription in English
    was deep learning.
  • 5:29 - 5:32
    The translation to Chinese and the text
    in the top right, deep learning,
  • 5:32 - 5:36
    and the construction of the voice
    was deep learning as well.
  • 5:36 - 5:39
    So deep learning
    is this extraordinary thing.
  • 5:39 - 5:42
    It's a single algorithm
    that can seem to do almost anything,
  • 5:42 - 5:45
    and I discovered that a year earlier,
    it had also learned to see.
  • 5:45 - 5:47
    In this obscure competition from Germany
  • 5:47 - 5:50
    called the German Traffic Sign
    Recognition Benchmark,
  • 5:50 - 5:53
    deep learning had learned
    to recognize traffic signs like this one.
  • 5:53 - 5:55
    Not only could it recognize
    the traffic signs
  • 5:55 - 5:57
    better than any other algorithm,
  • 5:57 - 6:00
    the leaderboard actually showed
    it was better than people,
  • 6:00 - 6:02
    about twice as good as people.
  • 6:02 - 6:04
    So by 2011, we had the first example
  • 6:04 - 6:07
    of computers that can see
    better than people.
  • 6:07 - 6:09
    Since that time, a lot has happened.
  • 6:09 - 6:13
    In 2012, Google announced that they had
    a deep learning algorithm
  • 6:13 - 6:14
    watch YouTube videos
  • 6:14 - 6:17
    and crunched the data
    on 16,000 computers for a month,
  • 6:17 - 6:22
    and the computer independently learned
    about concepts such as people and cats
  • 6:22 - 6:24
    just by watching the videos.
  • 6:24 - 6:26
    This is much like the way
    that humans learn.
  • 6:26 - 6:29
    Humans don't learn
    by being told what they see,
  • 6:29 - 6:32
    but by learning for themselves
    what these things are.
  • 6:32 - 6:35
    Also in 2012, Geoffrey Hinton,
    who we saw earlier,
  • 6:35 - 6:38
    won the very popular ImageNet competition,
  • 6:38 - 6:42
    looking to try to figure out
    from one and a half million images
  • 6:42 - 6:44
    what they're pictures of.
  • 6:44 - 6:47
    As of 2014, we're now
    down to a six percent error rate
  • 6:47 - 6:49
    in image recognition.
  • 6:49 - 6:51
    This is better than people, again.
  • 6:51 - 6:55
    So machines really are doing
    an extraordinarily good job of this,
  • 6:55 - 6:57
    and it is now being used in industry.
  • 6:57 - 7:00
    For example, Google announced last year
  • 7:00 - 7:04
    that they had mapped every single location
    in France in two hours,
  • 7:04 - 7:08
    and the way they did it
    was that they fed street view images
  • 7:08 - 7:12
    into a deep learning algorithm
    to recognize and read street numbers.
  • 7:12 - 7:14
    Imagine how long
    it would have taken before:
  • 7:14 - 7:18
    dozens of people, many years.
  • 7:18 - 7:20
    This is also happening in China.
  • 7:20 - 7:24
    Baidu is kind of
    the Chinese Google, I guess,
  • 7:24 - 7:26
    and what you see here in the top left
  • 7:26 - 7:30
    is an example of a picture that I uploaded
    to Baidu's deep learning system,
  • 7:30 - 7:34
    and underneath you can see that the system
    has understood what that picture is
  • 7:34 - 7:36
    and found similar images.
  • 7:36 - 7:39
    The similar images
    actually have similar backgrounds,
  • 7:39 - 7:42
    similar directions of the faces,
    even some with their tongue out.
  • 7:42 - 7:45
    This is clearly not looking
    at the text of a web page.
  • 7:45 - 7:47
    All I uploaded was an image.
  • 7:47 - 7:51
    So we now have computers
    which really understand what they see
  • 7:51 - 7:52
    and can therefore search databases
  • 7:52 - 7:56
    of hundreds of millions
    of images in real time.
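Image-similarity systems like the one described here typically work by mapping every picture to a feature vector (an embedding) with a deep network, then ranking the stored images by how close their vectors are to the query's. A minimal sketch of the ranking step, with hand-made three-dimensional vectors standing in for real network embeddings and all names invented:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar(query_vec, index, k=2):
    """Rank indexed images by embedding similarity to the query image."""
    ranked = sorted(index, key=lambda name: cosine(query_vec, index[name]),
                    reverse=True)
    return ranked[:k]

# Hypothetical embeddings, standing in for vectors a deep network would
# produce from raw pixels (similar-looking images land on nearby vectors).
index = {
    "dog_tongue_out_park":  [0.90, 0.80, 0.10],
    "dog_tongue_out_grass": [0.85, 0.75, 0.15],
    "skyline_at_night":     [0.05, 0.10, 0.95],
}
query = [0.88, 0.79, 0.12]  # embedding of the uploaded dog photo
```

Because the comparison happens in embedding space rather than on pixels or page text, the nearest neighbors share visual properties — backgrounds, face direction, even the tongue — which is exactly what the Baidu demo shows.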
  • 7:56 - 7:59
    So what does it mean
    now that computers can see?
  • 7:59 - 8:01
    Well, it's not just
    that computers can see.
  • 8:01 - 8:03
    In fact, deep learning
    has done more than that.
  • 8:03 - 8:06
    Complex, nuanced sentences like this one
  • 8:06 - 8:09
    are now understandable
    with deep learning algorithms.
  • 8:09 - 8:10
    As you can see here,
  • 8:10 - 8:13
    this Stanford-based system
    showing the red dot at the top
  • 8:13 - 8:17
    has figured out that this sentence
    is expressing negative sentiment.
  • 8:17 - 8:20
    Deep learning now in fact
    is near human performance
  • 8:20 - 8:25
    at understanding what sentences are about
    and what it is saying about those things.
  • 8:25 - 8:28
    Also, deep learning
    has been used to read Chinese,
  • 8:28 - 8:31
    again at about
    native Chinese speaker level.
  • 8:31 - 8:33
    This algorithm developed
    out of Switzerland
  • 8:33 - 8:37
    by people, none of whom speak
    or understand any Chinese.
  • 8:37 - 8:39
    As I say, using deep learning
  • 8:39 - 8:41
    is about the best system
    in the world for this,
  • 8:41 - 8:46
    even compared
    to native human understanding.
  • 8:46 - 8:49
    This is a system that we put together
    at my company
  • 8:49 - 8:51
    which shows putting
    all this stuff together.
  • 8:51 - 8:54
    These are pictures which have
    no text attached,
  • 8:54 - 8:56
    and as I'm typing in here sentences,
  • 8:56 - 8:59
    in real time it's understanding
    these pictures
  • 8:59 - 9:01
    and figuring out what they're about
  • 9:01 - 9:04
    and finding pictures that are similar
    to the text that I'm writing.
  • 9:04 - 9:07
    So you can see, it's actually
    understanding my sentences
  • 9:07 - 9:09
    and actually understanding these pictures.
  • 9:09 - 9:11
    I know that you've seen
    something like this on Google,
  • 9:11 - 9:14
    where you can type in things
    and it will show you pictures,
  • 9:14 - 9:18
    but actually what it's doing is it's
    searching the webpage for the text.
  • 9:18 - 9:21
    This is very different from
    actually understanding the images.
  • 9:21 - 9:23
    This is something that computers
    have only been able to do
  • 9:23 - 9:27
    for the first time in the last few months.
  • 9:27 - 9:31
    So we can see now that computers
    can not only see but also read,
  • 9:31 - 9:34
    and, of course, we've shown that they
    can understand what they hear.
  • 9:34 - 9:38
    Perhaps not surprising now
    that I'm going to tell you they can write.
  • 9:38 - 9:43
    Here is some text that I generated
    using a deep learning algorithm yesterday.
  • 9:43 - 9:47
    And here is some text that an algorithm
    out of Stanford generated.
  • 9:47 - 9:48
    Each of these sentences was generated
  • 9:48 - 9:53
    by a deep learning algorithm
    to describe each of those pictures.
  • 9:53 - 9:57
    This algorithm before has never seen
    a man in a black shirt playing a guitar.
  • 9:57 - 9:59
    It's seen a man before,
    it's seen black before,
  • 9:59 - 10:01
    it's seen a guitar before,
  • 10:01 - 10:05
    but it has independently generated
    this novel description of this picture.
  • 10:05 - 10:09
    We're still not quite at human performance
    here, but we're close.
  • 10:09 - 10:13
    In tests, humans prefer
    the computer-generated caption
  • 10:13 - 10:14
    one out of four times.
  • 10:14 - 10:16
    Now, this system is only two weeks old,
  • 10:16 - 10:18
    so probably within the next year,
  • 10:18 - 10:21
    the computer algorithm will be
    well past human performance
  • 10:21 - 10:23
    at the rate things are going.
  • 10:23 - 10:26
    So computers can also write.
  • 10:26 - 10:29
    So we put all this together and it leads
    to very exciting opportunities.
  • 10:29 - 10:31
    For example, in medicine,
  • 10:31 - 10:33
    a team in Boston announced
    that they had discovered
  • 10:33 - 10:36
    dozens of new clinically relevant features
  • 10:36 - 10:41
    of tumors which help doctors
    make a prognosis of a cancer.
  • 10:42 - 10:44
    Very similarly, in Stanford,
  • 10:44 - 10:48
    a group there announced that,
    looking at tissues under magnification,
  • 10:48 - 10:50
    they've developed
    a machine learning-based system
  • 10:50 - 10:53
    which in fact is better
    than human pathologists
  • 10:53 - 10:56
    at predicting survival rates
    for cancer sufferers.
  • 10:57 - 11:00
    In both of these cases, not only
    were the predictions more accurate,
  • 11:00 - 11:03
    but they generated new insightful science.
  • 11:03 - 11:04
    In the radiology case,
  • 11:04 - 11:07
    they were new clinical indicators
    that humans can understand.
  • 11:07 - 11:09
    In this pathology case,
  • 11:09 - 11:14
    the computer system actually discovered
    that the cells around the cancer
  • 11:14 - 11:17
    are as important
    as the cancer cells themselves
  • 11:17 - 11:19
    in making a diagnosis.
  • 11:19 - 11:24
    This is the opposite of what pathologists
    had been taught for decades.
  • 11:24 - 11:27
    In each of those two cases,
    they were systems developed
  • 11:27 - 11:31
    by a combination of medical experts
    and machine learning experts,
  • 11:31 - 11:34
    but as of last year,
    we're now beyond that too.
  • 11:34 - 11:37
    This is an example
    of identifying cancerous areas
  • 11:37 - 11:40
    of human tissue under a microscope.
  • 11:40 - 11:44
    The system being shown here
    can identify those areas more accurately,
  • 11:44 - 11:47
    or about as accurately,
    as human pathologists,
  • 11:47 - 11:51
    but was built entirely with deep learning
    using no medical expertise
  • 11:51 - 11:53
    by people who have
    no background in the field.
  • 11:54 - 11:57
    Similarly, here, this neuron segmentation.
  • 11:57 - 12:00
    We can now segment neurons
    about as accurately as humans can,
  • 12:00 - 12:03
    but this system was developed
    with deep learning
  • 12:03 - 12:06
    using people with no previous background
    in medicine.
  • 12:06 - 12:10
    So myself, as somebody with
    no previous background in medicine,
  • 12:10 - 12:13
    I seem to be entirely well qualified
    to start a new medical company,
  • 12:13 - 12:16
    which I did.
  • 12:16 - 12:17
    I was kind of terrified of doing it,
  • 12:17 - 12:20
    but the theory seemed to suggest
    that it ought to be possible
  • 12:20 - 12:26
    to do very useful medicine
    using just these data analytic techniques.
  • 12:26 - 12:28
    And thankfully, the feedback
    has been fantastic,
  • 12:28 - 12:31
    not just from the media
    but from the medical community,
  • 12:31 - 12:33
    who have been very supportive.
  • 12:33 - 12:37
    The theory is that we can take
    the middle part of the medical process
  • 12:37 - 12:40
    and turn that into data analysis
    as much as possible,
  • 12:40 - 12:43
    leaving doctors to do
    what they're best at.
  • 12:43 - 12:45
    I want to give you an example.
  • 12:45 - 12:49
    It now takes us about 15 minutes
    to generate a new medical diagnostic test
  • 12:49 - 12:51
    and I'll show you that in real time now,
  • 12:51 - 12:54
    but I've compressed it down
    to three minutes
  • 12:54 - 12:56
    by cutting some pieces out.
  • 12:56 - 12:59
    Rather than showing you
    creating a medical diagnostic test,
  • 12:59 - 13:01
    I'm going to show you
    a diagnostic test of car images,
  • 13:01 - 13:04
    because that's something
    we can all understand.
  • 13:04 - 13:07
    So here we're starting
    with about 1.5 million car images,
  • 13:07 - 13:10
    and I want to create something
    that can split them into the angle
  • 13:10 - 13:12
    of the photo that's being taken.
  • 13:12 - 13:16
    So these images are entirely unlabeled,
    so I have to start from scratch.
  • 13:16 - 13:18
    With our deep learning algorithm,
  • 13:18 - 13:22
    it can automatically identify
    areas of structure in these images.
  • 13:22 - 13:25
    So the nice thing is that the human
    and the computer can now work together.
  • 13:25 - 13:27
    So the human, as you can see here,
  • 13:27 - 13:30
    is telling the computer
    about areas of interest
  • 13:30 - 13:35
    which it wants the computer then
    to try and use to improve its algorithm.
  • 13:35 - 13:39
    Now, these deep learning systems actually
    are in 16,000-dimensional space,
  • 13:39 - 13:43
    so you can see here the computer
    rotating this through that space,
  • 13:43 - 13:45
    trying to find new areas of structure.
  • 13:45 - 13:46
    And when it does so successfully,
  • 13:46 - 13:50
    the human who is driving it can then
    point out the areas that are interesting.
  • 13:50 - 13:53
    So here, the computer
    has successfully found areas,
  • 13:53 - 13:55
    for example, angles.
  • 13:55 - 13:57
    So as we go through this process,
  • 13:57 - 13:59
    we're gradually telling
    the computer more and more
  • 13:59 - 14:02
    about the kinds of structures
    we're looking for.
  • 14:02 - 14:03
    You can imagine in a diagnostic test
  • 14:03 - 14:07
    this would be a pathologist identifying
    areas of pathosis, for example,
  • 14:07 - 14:12
    or a radiologist indicating
    potentially troublesome nodules.
  • 14:12 - 14:14
    And sometimes it can be difficult
    for the algorithm.
  • 14:14 - 14:16
    In this case, it got kind of confused.
  • 14:16 - 14:19
    The fronts and the backs
    of the cars are all mixed up.
  • 14:19 - 14:21
    So here we have to be a bit more careful,
  • 14:21 - 14:24
    manually selecting these fronts
    as opposed to the backs,
  • 14:24 - 14:30
    then telling the computer
    that this is a type of group
  • 14:30 - 14:31
    that we're interested in.
  • 14:31 - 14:34
    So we do that for a while,
    we skip over a little bit,
  • 14:34 - 14:36
    and then we train
    the machine learning algorithm
  • 14:36 - 14:38
    based on these couple of hundred things,
  • 14:38 - 14:40
    and we hope that it's gotten a lot better.
  • 14:40 - 14:43
    You can see, it's now started to fade
    some of these pictures out,
  • 14:43 - 14:48
    showing us that it already is recognizing
    how to understand some of these itself.
  • 14:48 - 14:51
    We can then use this concept
    of similar images,
  • 14:51 - 14:53
    and using similar images, you can now see,
  • 14:53 - 14:57
    the computer at this point is able
    to entirely find just the fronts of cars.
  • 14:57 - 15:00
    So at this point, the human
    can tell the computer,
  • 15:00 - 15:02
    okay, yes, you've done a good job of that.
  • 15:03 - 15:05
    Sometimes, of course, even at this point
  • 15:05 - 15:09
    it's still difficult
    to separate out groups.
  • 15:09 - 15:11
    In this case, even after we let
  • 15:11 - 15:14
    the computer
    try to rotate this for a while,
  • 15:14 - 15:16
    we still find that the left-side
    and the right-side pictures
  • 15:16 - 15:18
    are all mixed up together.
  • 15:18 - 15:20
    So we can again give
    the computer some hints,
  • 15:20 - 15:23
    and we say, okay, try and find
    a projection that separates out
  • 15:23 - 15:25
    the left sides and the right sides
    as much as possible
  • 15:25 - 15:28
    using this deep learning algorithm.
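The hint being given here — "find a projection that separates these two groups" — can be illustrated with a toy random search. This is an illustrative sketch only, assuming nothing about the real demo: synthetic 10-dimensional vectors stand in for the 16,000-dimensional deep-learning features, and all names are invented.

```python
import random

def project(v, x):
    # Dot product: the coordinate of point x along direction v.
    return sum(vi * xi for vi, xi in zip(v, x))

def best_projection(group_a, group_b, dim, tries=200, seed=0):
    """Try random directions and keep the one whose 1-D projection best
    separates the two groups (gap between means vs. total spread)."""
    rng = random.Random(seed)
    best_v, best_score = None, -1.0
    for _ in range(tries):
        v = [rng.gauss(0, 1) for _ in range(dim)]
        pa = [project(v, x) for x in group_a]
        pb = [project(v, x) for x in group_b]
        gap = abs(sum(pa) / len(pa) - sum(pb) / len(pb))
        spread = max(pa + pb) - min(pa + pb)
        score = gap / spread if spread else 0.0
        if score > best_score:
            best_score, best_v = score, v
    return best_v

# Synthetic stand-in for deep-learning features: two groups of 10-D points
# ("left sides" vs. "right sides") differing along one hidden axis.
rng = random.Random(42)
lefts  = [[rng.gauss(0, 0.1) for _ in range(10)] for _ in range(20)]
rights = [[rng.gauss(0, 0.1) for _ in range(10)] for _ in range(20)]
for p in rights:
    p[0] += 10.0  # the hidden difference between the groups
direction = best_projection(lefts, rights, dim=10)
```

Once a direction like this is found, projecting every image onto it pulls the two mixed-up groups apart on screen, which is what the rotation in the demo is doing.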
  • 15:28 - 15:31
    And giving it that hint --
    ah, okay, it's been successful.
  • 15:31 - 15:33
    It's managed to find a way
    of thinking about these objects
  • 15:33 - 15:36
    that separates them out.
  • 15:36 - 15:38
    So you get the idea here.
  • 15:38 - 15:44
    This is a case not where the human
    is being replaced by a computer,
  • 15:46 - 15:49
    but where they're working together.
  • 15:49 - 15:53
    What we're doing here is we're replacing
    something that used to take a team
  • 15:53 - 15:55
    of five or six people about seven years
  • 15:55 - 15:57
    and replacing it with something
    that takes 15 minutes
  • 15:57 - 16:00
    for one person acting alone.
  • 16:00 - 16:04
    So this process takes
    about four or five iterations.
  • 16:04 - 16:06
    You can see we now have 62 percent
  • 16:06 - 16:08
    of our 1.5 million images
    classified correctly.
  • 16:08 - 16:11
    And at this point,
    we can start to quite quickly
  • 16:11 - 16:12
    grab whole big sections,
  • 16:12 - 16:15
    check through them to make sure
    that there's no mistakes.
  • 16:15 - 16:19
    Where there are mistakes, we can let
    the computer know about them.
  • 16:19 - 16:22
    And using this kind of process
    for each of the different groups,
  • 16:22 - 16:25
    we are now up to an 80 percent
    success rate
  • 16:25 - 16:27
    in classifying the 1.5 million images.
  • 16:27 - 16:29
    And at this point, it's just a case
  • 16:29 - 16:33
    of finding the small number
    that aren't classified correctly,
  • 16:33 - 16:36
    and trying to understand why.
  • 16:36 - 16:37
    And using that approach,
  • 16:37 - 16:41
    by 15 minutes we get
    to 97 percent classification rates.
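The loop just described is essentially what the machine learning literature calls active learning: fit a model on the labels collected so far, ask the human only about the items the model is least sure of, and repeat. A self-contained toy sketch, with two synthetic point clusters standing in for the car images, a nearest-centroid classifier standing in for the deep learning model, and a ground-truth lookup standing in for the human; all names are invented:

```python
import random

def make_pool(n_per_class=100, seed=1):
    # Synthetic stand-in for the unlabeled images: two noisy clusters,
    # playing the role of car "fronts" and "backs".
    rng = random.Random(seed)
    pts = [((rng.gauss(0, 1), rng.gauss(0, 1)), "front") for _ in range(n_per_class)]
    pts += [((rng.gauss(4, 1), rng.gauss(4, 1)), "back") for _ in range(n_per_class)]
    return pts

def sqdist(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def centroids(labeled):
    # Mean position of each class among the points labeled so far.
    cents = {}
    for lbl in {l for _, l in labeled}:
        xs = [p for p, l in labeled if l == lbl]
        cents[lbl] = (sum(x for x, _ in xs) / len(xs),
                      sum(y for _, y in xs) / len(xs))
    return cents

def active_loop(pool, rounds=4, per_round=10):
    """Human-in-the-loop labeling: fit on labels so far, then ask the
    'human' (here, the ground truth) about the most ambiguous items."""
    truth = dict(pool)  # stands in for the human answering questions
    points = [p for p, _ in pool]
    # One seed label per cluster to get started.
    labeled = [(points[0], truth[points[0]]), (points[-1], truth[points[-1]])]
    for _ in range(rounds):
        cents = centroids(labeled)
        done = {p for p, _ in labeled}
        unlabeled = [p for p in points if p not in done]
        # Ambiguity: how nearly equidistant a point is from both centroids.
        unlabeled.sort(key=lambda p: abs(sqdist(p, cents["front"])
                                         - sqdist(p, cents["back"])))
        for p in unlabeled[:per_round]:
            labeled.append((p, truth[p]))  # the human labels only these
    cents = centroids(labeled)
    predict = lambda p: min(cents, key=lambda l: sqdist(p, cents[l]))
    return sum(predict(p) == truth[p] for p in points) / len(points)
```

With only a few dozen human answers, concentrated on the hard cases, the model classifies nearly the whole pool correctly — the same leverage that turns 1.5 million images into a 15-minute job.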
  • 16:41 - 16:46
    So this kind of technique
    could allow us to fix a major problem,
  • 16:46 - 16:49
    which is that there's a lack
    of medical expertise in the world.
  • 16:49 - 16:53
    The World Economic Forum says
    that there's between a 10x and a 20x
  • 16:53 - 16:55
    shortage of physicians
    in the developing world,
  • 16:55 - 16:57
    and it would take about 300 years
  • 16:57 - 17:00
    to train enough people
    to fix that problem.
  • 17:00 - 17:03
    So imagine if we can help
    enhance their efficiency
  • 17:03 - 17:06
    using these deep learning approaches?
  • 17:06 - 17:08
    So I'm very excited
    about the opportunities.
  • 17:08 - 17:11
    I'm also concerned about the problems.
  • 17:11 - 17:14
    The problem here
    is that every area in blue on this map
  • 17:14 - 17:18
    is somewhere where services
    are over 80 percent of employment.
  • 17:18 - 17:19
    What are services?
  • 17:19 - 17:21
    These are services.
  • 17:21 - 17:23
    These are also the exact things
  • 17:23 - 17:26
    that computers
    have just learned how to do.
  • 17:26 - 17:29
    So 80 percent of the world's employment
    in the developed world
  • 17:29 - 17:31
    is stuff that computers
    have just learned how to do.
  • 17:31 - 17:33
    What does that mean?
  • 17:33 - 17:35
    Well, it'll be fine.
    They'll be replaced by other jobs.
  • 17:35 - 17:38
    For example, there will be
    more jobs for data scientists.
  • 17:38 - 17:39
    Well, not really.
  • 17:39 - 17:42
    It doesn't take data scientists
    very long to build these things.
  • 17:42 - 17:45
    For example, these four algorithms
    were all built by the same guy.
  • 17:45 - 17:48
    So if you think, oh,
    it's all happened before,
  • 17:48 - 17:52
    we've seen the results in the past
    of when new things come along
  • 17:52 - 17:54
    and they get replaced by new jobs,
  • 17:54 - 17:56
    what are these new jobs going to be?
  • 17:56 - 17:58
    It's very hard for us to estimate this,
  • 17:58 - 18:01
    because human performance
    grows at this gradual rate,
  • 18:01 - 18:03
    but we now have a system, deep learning,
  • 18:03 - 18:06
    that we know actually grows
    in capability exponentially.
  • 18:06 - 18:08
    And we're here.
  • 18:08 - 18:10
    So currently, we see the things around us
  • 18:10 - 18:13
    and we say, "Oh, computers
    are still pretty dumb." Right?
  • 18:13 - 18:16
    But in five years' time,
    computers will be off this chart.
  • 18:16 - 18:20
    So we need to be starting to think
    about this capability right now.
  • 18:20 - 18:22
    We have seen this once before, of course.
  • 18:22 - 18:23
    In the Industrial Revolution,
  • 18:23 - 18:26
    we saw a step change
    in capability thanks to engines.
  • 18:27 - 18:30
    The thing is, though,
    that after a while, things flattened out.
  • 18:30 - 18:32
    There was social disruption,
  • 18:32 - 18:35
    but once engines were used
    to generate power in all situations,
  • 18:35 - 18:38
    things really settled down.
  • 18:38 - 18:39
    The Machine Learning Revolution
  • 18:39 - 18:42
    is going to be very different
    from the Industrial Revolution,
  • 18:42 - 18:45
    because the Machine Learning Revolution,
    it never settles down.
  • 18:45 - 18:48
    The better computers get
    at intellectual activities,
  • 18:48 - 18:52
    the more they can build better computers
    to be better at intellectual capabilities,
  • 18:52 - 18:54
    so this is going to be a kind of change
  • 18:54 - 18:57
    that the world has actually
    never experienced before,
  • 18:57 - 19:00
    so your previous understanding
    of what's possible is different.
  • 19:00 - 19:02
    This is already impacting us.
  • 19:02 - 19:06
    In the last 25 years,
    as capital productivity has increased,
  • 19:06 - 19:10
    labor productivity has been flat,
    in fact even a little bit down.
  • 19:11 - 19:14
    So I want us to start
    having this discussion now.
  • 19:14 - 19:17
    I know that when I often tell people
    about this situation,
  • 19:17 - 19:18
    people can be quite dismissive.
  • 19:18 - 19:20
    Well, computers can't really think,
  • 19:20 - 19:23
    they don't emote,
    they don't understand poetry,
  • 19:23 - 19:25
    we don't really understand how they work.
  • 19:25 - 19:27
    So what?
  • 19:27 - 19:29
    Computers right now can do the things
  • 19:29 - 19:31
    that humans spend
    most of their time being paid to do,
  • 19:31 - 19:35
    so now's the time to start thinking
    about how we're going to adjust
  • 19:35 - 19:37
    our social structures
    and economic structures
  • 19:37 - 19:39
    to be aware of this new reality.
  • 19:39 - 19:41
    Thank you.
  • 19:41 - 19:42
    (Applause)
Description:

This talk was given at a local TEDx event, produced independently of the TED Conferences. What happens when we teach a computer how to learn? Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis. (One deep learning tool, after watching hours of YouTube, taught itself the concept of “cats.”) Get caught up on a field that will change the way the computers around you behave… sooner than you probably think.

Video Language:
English
Team:
closed TED
Project:
TEDxTalks
Duration:
19:47
