
The wonderful and terrifying implications of computers that can learn

  • 0:01 - 0:05
    It used to be that if you wanted
    to get a computer to do something new,
  • 0:05 - 0:06
    you would have to program it.
  • 0:06 - 0:10
    Now, programming, for those of you here
    that haven't done it yourself,
  • 0:10 - 0:13
    requires laying out in excruciating detail
  • 0:13 - 0:17
    every single step that you want
    the computer to do
  • 0:17 - 0:19
    in order to achieve your goal.
  • 0:19 - 0:23
    Now, if you want to do something
    that you don't know how to do yourself,
  • 0:23 - 0:25
    then this is going
    to be a great challenge.
  • 0:25 - 0:28
    So this was the challenge faced
    by this man, Arthur Samuel.
  • 0:28 - 0:32
    In 1956, he wanted to get this computer
  • 0:32 - 0:35
    to be able to beat him at checkers.
  • 0:35 - 0:37
    How can you write a program,
  • 0:37 - 0:40
    lay out in excruciating detail,
    how to be better than you at checkers?
  • 0:40 - 0:42
    So he came up with an idea:
  • 0:42 - 0:46
    he had the computer play
    against itself thousands of times
  • 0:46 - 0:48
    and learn how to play checkers.
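
[Editor's note: Samuel's actual program combined game-tree search with a hand-tuned board-scoring function whose weights were adjusted through self-play; none of that detail is in the talk. Purely as a hedged illustration of the self-play idea, here is a minimal sketch for an invented toy game, where a value table is learned only from the outcomes of games the program plays against itself. The game and all names are made up for this example.]

```python
import random

# Toy "learn by self-play" sketch (not Samuel's program). Game: players
# alternately add 1 or 2 to a counter; whoever reaches exactly 10 wins.
# V[s] estimates how good it is to leave the counter at s on your move.
V = {}

def moves(state):
    return [state + step for step in (1, 2) if state + step <= 10]

def choose(state, eps=0.1):
    opts = moves(state)
    if random.random() < eps:                      # explore occasionally
        return random.choice(opts)
    return max(opts, key=lambda s: V.get(s, 0.5))  # otherwise play greedily

def self_play_episode(alpha=0.2):
    state, trail = 0, []
    while state < 10:
        state = choose(state)
        trail.append(state)
    # The player who reached 10 won. Walking backward, positions belong
    # alternately to the winner and the loser, so targets alternate 1/0.
    target = 1.0
    for s in reversed(trail):
        V[s] = V.get(s, 0.5) + alpha * (target - V.get(s, 0.5))
        target = 1.0 - target

for _ in range(10_000):   # "play against itself thousands of times"
    self_play_episode()

# Greedy play should now prefer leaving the counter at 1, 4, 7, 10,
# which is the provably optimal strategy for this toy game.
print({s: round(v, 2) for s, v in sorted(V.items())})
```
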
  • 0:48 - 0:52
    And indeed it worked,
    and in fact, by 1962,
  • 0:52 - 0:56
    this computer had beaten
    the Connecticut state champion.
  • 0:56 - 0:59
    So Arthur Samuel was
    the father of machine learning,
  • 0:59 - 1:00
    and I have a great debt to him,
  • 1:00 - 1:03
    because I am a machine
    learning practitioner.
  • 1:03 - 1:04
    I was the president of Kaggle,
  • 1:04 - 1:08
    a community of over 200,000
    machine learning practitioners.
  • 1:08 - 1:10
    Kaggle puts up competitions
  • 1:10 - 1:14
    to try and get them to solve
    previously unsolved problems,
  • 1:14 - 1:17
    and it's been successful
    hundreds of times.
  • 1:17 - 1:20
    So from this vantage point,
    I was able to find out
  • 1:20 - 1:24
    a lot about what machine learning
    can do in the past, can do today,
  • 1:24 - 1:26
    and what it could do in the future.
  • 1:26 - 1:31
    Perhaps the first big success of
    machine learning commercially was Google.
  • 1:31 - 1:34
    Google showed that it is
    possible to find information
  • 1:34 - 1:36
    by using a computer algorithm,
  • 1:36 - 1:38
    and this algorithm is based
    on machine learning.
  • 1:38 - 1:42
    Since that time, there have been many
    commercial successes of machine learning.
  • 1:42 - 1:44
    Companies like Amazon and Netflix
  • 1:44 - 1:48
    use machine learning to suggest
    products that you might like to buy,
  • 1:48 - 1:50
    movies that you might like to watch.
  • 1:50 - 1:52
    Sometimes, it's almost creepy.
  • 1:52 - 1:54
    Companies like LinkedIn and Facebook
  • 1:54 - 1:56
    sometimes will tell you about
    who your friends might be
  • 1:56 - 1:58
    and you have no idea how it did it,
  • 1:58 - 2:01
    and this is because it's using
    the power of machine learning.
  • 2:01 - 2:04
    These are algorithms that have
    learned how to do this from data
  • 2:04 - 2:07
    rather than being programmed by hand.
  • 2:07 - 2:10
    This is also how IBM was successful
  • 2:10 - 2:14
    in getting Watson to beat
    the two world champions at "Jeopardy,"
  • 2:14 - 2:17
    answering incredibly subtle
    and complex questions like this one.
  • 2:17 - 2:20
    ["The ancient 'Lion of Nimrud' went missing
    from this city's national museum in 2003
    (along with a lot of other stuff)"]
  • 2:20 - 2:23
    This is also why we are now able
    to see the first self-driving cars.
  • 2:23 - 2:26
    If you want to be able to tell
    the difference between, say,
  • 2:26 - 2:28
    a tree and a pedestrian,
    well, that's pretty important.
  • 2:28 - 2:31
    We don't know how to write
    those programs by hand,
  • 2:31 - 2:34
    but with machine learning,
    this is now possible.
  • 2:34 - 2:37
    And in fact, this car has driven
    over a million miles
  • 2:37 - 2:40
    without any accidents on regular roads.
  • 2:40 - 2:44
    So we now know that computers can learn,
  • 2:44 - 2:46
    and computers can learn to do things
  • 2:46 - 2:49
    that we actually sometimes
    don't know how to do ourselves,
  • 2:49 - 2:52
    or maybe can do them better than us.
  • 2:52 - 2:56
    One of the most amazing examples
    I've seen of machine learning
  • 2:56 - 2:58
    happened on a project that I ran at Kaggle
  • 2:58 - 3:02
    where a team run by a guy
    called Geoffrey Hinton
  • 3:02 - 3:03
    from the University of Toronto
  • 3:03 - 3:06
    won a competition for
    automatic drug discovery.
  • 3:06 - 3:09
    Now, what was extraordinary here
    is not just that they beat
  • 3:09 - 3:13
    all of the algorithms developed by Merck
    or the international academic community,
  • 3:13 - 3:18
    but nobody on the team had any background
    in chemistry or biology or life sciences,
  • 3:18 - 3:20
    and they did it in two weeks.
  • 3:20 - 3:22
    How did they do this?
  • 3:22 - 3:25
    They used an extraordinary algorithm
    called deep learning.
  • 3:25 - 3:28
    So important was this that in fact
    the success was covered
  • 3:28 - 3:31
    in The New York Times in a front page
    article a few weeks later.
  • 3:31 - 3:34
    This is Geoffrey Hinton
    here on the left-hand side.
  • 3:34 - 3:38
    Deep learning is an algorithm
    inspired by how the human brain works,
  • 3:38 - 3:40
    and as a result it's an algorithm
  • 3:40 - 3:44
    which has no theoretical limitations
    on what it can do.
  • 3:44 - 3:47
    The more data you give it and the more
    computation time you give it,
  • 3:47 - 3:48
    the better it gets.
  • 3:48 - 3:51
    The New York Times also
    showed in this article
  • 3:51 - 3:53
    another extraordinary
    result of deep learning
  • 3:53 - 3:56
    which I'm going to show you now.
  • 3:56 - 4:01
    It shows that computers
    can listen and understand.
  • 4:01 - 4:03
    (Video) Richard Rashid: Now, the last step
  • 4:03 - 4:06
    that I want to be able
    to take in this process
  • 4:06 - 4:11
    is to actually speak to you in Chinese.
  • 4:11 - 4:14
    Now the key thing there is,
  • 4:14 - 4:19
    we've been able to take a large amount
    of information from many Chinese speakers
  • 4:19 - 4:21
    and produce a text-to-speech system
  • 4:21 - 4:26
    that takes Chinese text
    and converts it into Chinese language,
  • 4:26 - 4:30
    and then we've taken
    an hour or so of my own voice
  • 4:30 - 4:32
    and we've used that to modulate
  • 4:32 - 4:36
    the standard text-to-speech system
    so that it would sound like me.
  • 4:36 - 4:39
    Again, the result's not perfect.
  • 4:39 - 4:42
    There are in fact quite a few errors.
  • 4:42 - 4:44
    (In Chinese)
  • 4:44 - 4:47
    (Applause)
  • 4:49 - 4:53
    There's much work to be done in this area.
  • 4:53 - 4:57
    (In Chinese)
  • 4:57 - 5:00
    (Applause)
  • 5:01 - 5:05
    Jeremy Howard: Well, that was at
    a machine learning conference in China.
  • 5:05 - 5:07
    It's not often, actually,
    at academic conferences
  • 5:07 - 5:09
    that you do hear spontaneous applause,
  • 5:09 - 5:13
    although of course sometimes
    at TEDx conferences, feel free.
  • 5:13 - 5:15
    Everything you saw there
    was happening with deep learning.
  • 5:15 - 5:17
    (Applause) Thank you.
  • 5:17 - 5:19
    The transcription in English
    was deep learning.
  • 5:19 - 5:23
    The translation to Chinese and the text
    in the top right, deep learning,
  • 5:23 - 5:26
    and the construction of the voice
    was deep learning as well.
  • 5:26 - 5:29
    So deep learning is
    this extraordinary thing.
  • 5:29 - 5:32
    It's a single algorithm that
    can seem to do almost anything,
  • 5:32 - 5:35
    and I discovered that a year earlier,
    it had also learned to see.
  • 5:35 - 5:38
    In this obscure competition from Germany
  • 5:38 - 5:40
    called the German Traffic Sign
    Recognition Benchmark,
  • 5:40 - 5:44
    deep learning had learned
    to recognize traffic signs like this one.
  • 5:44 - 5:46
    Not only could it
    recognize the traffic signs
  • 5:46 - 5:47
    better than any other algorithm,
  • 5:47 - 5:50
    the leaderboard actually showed
    it was better than people,
  • 5:50 - 5:52
    about twice as good as people.
  • 5:52 - 5:54
    So by 2011, we had the first example
  • 5:54 - 5:57
    of computers that can see
    better than people.
  • 5:57 - 5:59
    Since that time, a lot has happened.
  • 5:59 - 6:03
    In 2012, Google announced that
    they had a deep learning algorithm
  • 6:03 - 6:04
    watch YouTube videos
  • 6:04 - 6:08
    and crunch the data
    on 16,000 computers for a month,
  • 6:08 - 6:12
    and the computer independently learned
    about concepts such as people and cats
  • 6:12 - 6:14
    just by watching the videos.
  • 6:14 - 6:16
    This is much like the way
    that humans learn.
  • 6:16 - 6:19
    Humans don't learn
    by being told what they see,
  • 6:19 - 6:22
    but by learning for themselves
    what these things are.
  • 6:22 - 6:26
    Also in 2012, Geoffrey Hinton,
    who we saw earlier,
  • 6:26 - 6:29
    won the very popular ImageNet competition,
  • 6:29 - 6:33
    looking to try to figure out
    from one and a half million images
  • 6:33 - 6:34
    what they're pictures of.
  • 6:34 - 6:38
    As of 2014, we're now down
    to a six percent error rate
  • 6:38 - 6:39
    in image recognition.
  • 6:39 - 6:41
    This is better than people, again.
  • 6:41 - 6:45
    So machines really are doing
    an extraordinarily good job of this,
  • 6:45 - 6:47
    and it is now being used in industry.
  • 6:47 - 6:50
    For example, Google announced last year
  • 6:50 - 6:55
    that they had mapped every single
    location in France in two hours,
  • 6:55 - 6:58
    and the way they did it was
    that they fed street view images
  • 6:58 - 7:03
    into a deep learning algorithm
    to recognize and read street numbers.
  • 7:03 - 7:05
    Imagine how long
    it would have taken before:
  • 7:05 - 7:08
    dozens of people, many years.
  • 7:08 - 7:10
    This is also happening in China.
  • 7:10 - 7:14
    Baidu is kind of
    the Chinese Google, I guess,
  • 7:14 - 7:17
    and what you see here in the top left
  • 7:17 - 7:20
    is an example of a picture that I uploaded
    to Baidu's deep learning system,
  • 7:20 - 7:24
    and underneath you can see that the system
    has understood what that picture is
  • 7:24 - 7:26
    and found similar images.
  • 7:26 - 7:29
    The similar images actually
    have similar backgrounds,
  • 7:29 - 7:31
    similar directions of the faces,
  • 7:31 - 7:33
    even some with their tongue out.
  • 7:33 - 7:36
    This is clearly not looking
    at the text of a web page.
  • 7:36 - 7:37
    All I uploaded was an image.
  • 7:37 - 7:41
    So we now have computers which
    really understand what they see
  • 7:41 - 7:43
    and can therefore search databases
  • 7:43 - 7:46
    of hundreds of millions
    of images in real time.
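
[Editor's note: the talk doesn't say how Baidu's system works internally. A standard way to build this kind of search, sketched here purely as an assumption, is to take activations from a trained network as an embedding for each image and return nearest neighbours in that space; the one-layer `embed` below is an invented stand-in for a real trained model.]

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32 * 32 * 3))  # invented stand-in for a trained
                                            # network's penultimate layer

def embed(image):
    # In a real system this would be deep-network activations;
    # here it is just a random linear map of a 32x32 RGB image.
    return W @ np.asarray(image, dtype=float).ravel()

def build_index(images):
    # Embed every image once, offline, and L2-normalise the rows.
    E = np.stack([embed(im) for im in images])
    return E / np.linalg.norm(E, axis=1, keepdims=True)

def most_similar(index, query_image, k=5):
    q = embed(query_image)
    q /= np.linalg.norm(q)
    sims = index @ q                  # cosine similarity to every image
    return np.argsort(-sims)[:k]      # indices of the k nearest images

# Demo on random "images": the query should rank itself first.
images = rng.random((1000, 32, 32, 3))
index = build_index(images)
print(most_similar(index, images[42]))
```
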
  • 7:46 - 7:50
    So what does it mean
    now that computers can see?
  • 7:50 - 7:52
    Well, it's not just
    that computers can see.
  • 7:52 - 7:54
    In fact, deep learning
    has done more than that.
  • 7:54 - 7:57
    Complex, nuanced sentences like this one
  • 7:57 - 7:59
    are now understandable
    with deep learning algorithms.
  • 7:59 - 8:01
    As you can see here,
  • 8:01 - 8:03
    this Stanford-based system
    showing the red dot at the top
  • 8:03 - 8:07
    has figured out that this sentence
    is expressing negative sentiment.
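
[Editor's note: the system on the slide is Stanford's tree-structured sentiment model. The sketch below is not that model but a minimal modern equivalent, assuming the Hugging Face `transformers` library and its default English sentiment classifier, with an invented example sentence.]

```python
# Minimal sentiment-analysis sketch (a stand-in, not the Stanford system).
# Assumes: pip install transformers torch  -- downloads a default model.
from transformers import pipeline

classify = pipeline("sentiment-analysis")

sentence = ("The photography is occasionally stunning, "
            "but the story never earns our attention.")  # invented example
print(classify(sentence))
# Expected shape of output: [{'label': 'NEGATIVE', 'score': 0.99...}]
# (exact label names and scores depend on the default model)
```
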
  • 8:07 - 8:11
    Deep learning now in fact
    is near human performance
  • 8:11 - 8:16
    at understanding what sentences are about
    and what they are saying about those things.
  • 8:16 - 8:19
    Also, deep learning has
    been used to read Chinese,
  • 8:19 - 8:22
    again at about native
    Chinese speaker level.
  • 8:22 - 8:24
    This algorithm was developed
    in Switzerland
  • 8:24 - 8:27
    by people, none of whom speak
    or understand any Chinese.
  • 8:27 - 8:29
    As I say, this deep learning system
  • 8:29 - 8:32
    is about the best in the world
    for this task,
  • 8:32 - 8:37
    even compared to native
    human understanding.
  • 8:37 - 8:40
    This is a system that we
    put together at my company
  • 8:40 - 8:42
    which shows putting
    all this stuff together.
  • 8:42 - 8:44
    These are pictures which
    have no text attached,
  • 8:44 - 8:47
    and as I'm typing in here sentences,
  • 8:47 - 8:50
    in real time it's understanding
    these pictures
  • 8:50 - 8:51
    and figuring out what they're about
  • 8:51 - 8:54
    and finding pictures that are similar
    to the text that I'm writing.
  • 8:54 - 8:57
    So you can see, it's actually
    understanding my sentences
  • 8:57 - 8:59
    and actually understanding these pictures.
  • 8:59 - 9:02
    I know that you've seen
    something like this on Google,
  • 9:02 - 9:05
    where you can type in things
    and it will show you pictures,
  • 9:05 - 9:08
    but actually what it's doing is it's
    searching the webpage for the text.
  • 9:08 - 9:11
    This is very different from actually
    understanding the images.
  • 9:11 - 9:14
    This is something that computers
    have only been able to do
  • 9:14 - 9:17
    for the first time in the last few months.
  • 9:17 - 9:21
    So we can see now that computers
    can not only see but they can also read,
  • 9:21 - 9:25
    and, of course, we've shown that they
    can understand what they hear.
  • 9:25 - 9:28
    Perhaps not surprising now that
    I'm going to tell you they can write.
  • 9:28 - 9:33
    Here is some text that I generated
    using a deep learning algorithm yesterday.
  • 9:33 - 9:37
    And here is some text that an algorithm
    out of Stanford generated.
  • 9:37 - 9:39
    Each of these sentences was generated
  • 9:39 - 9:43
    by a deep learning algorithm
    to describe each of those pictures.
  • 9:43 - 9:48
    This algorithm has never before seen
    a man in a black shirt playing a guitar.
  • 9:48 - 9:50
    It's seen a man before,
    it's seen black before,
  • 9:50 - 9:51
    it's seen a guitar before,
  • 9:51 - 9:56
    but it has independently generated
    this novel description of this picture.
  • 9:56 - 9:59
    We're still not quite at human
    performance here, but we're close.
  • 9:59 - 10:03
    In tests, humans prefer
    the computer-generated caption
  • 10:03 - 10:05
    one out of four times.
  • 10:05 - 10:07
    Now, this system is only two weeks old,
  • 10:07 - 10:09
    so probably within the next year,
  • 10:09 - 10:12
    the computer algorithm will be
    well past human performance
  • 10:12 - 10:13
    at the rate things are going.
  • 10:13 - 10:16
    So computers can also write.
  • 10:16 - 10:20
    So we put all this together and it leads
    to very exciting opportunities.
  • 10:20 - 10:21
    For example, in medicine,
  • 10:21 - 10:24
    a team in Boston announced
    that they had discovered
  • 10:24 - 10:27
    dozens of new clinically relevant features
  • 10:27 - 10:31
    of tumors which help doctors
    make a prognosis of a cancer.
  • 10:32 - 10:35
    Very similarly, in Stanford,
  • 10:35 - 10:38
    a group there announced that,
    looking at tissues under magnification,
  • 10:38 - 10:41
    they've developed
    a machine learning-based system
  • 10:41 - 10:43
    which in fact is better
    than human pathologists
  • 10:43 - 10:48
    at predicting survival rates
    for cancer sufferers.
  • 10:48 - 10:51
    In both of these cases, not only
    were the predictions more accurate,
  • 10:51 - 10:53
    but they generated new insightful science.
  • 10:53 - 10:55
    In the radiology case,
  • 10:55 - 10:58
    these were new clinical indicators
    that humans can understand.
  • 10:58 - 11:00
    In this pathology case,
  • 11:00 - 11:04
    the computer system actually discovered
    that the cells around the cancer
  • 11:04 - 11:08
    are as important as
    the cancer cells themselves
  • 11:08 - 11:09
    in making a diagnosis.
  • 11:09 - 11:15
    This is the opposite of what pathologists
    had been taught for decades.
  • 11:15 - 11:18
    In each of those two cases,
    they were systems developed
  • 11:18 - 11:22
    by a combination of medical experts
    and machine learning experts,
  • 11:22 - 11:24
    but as of last year,
    we're now beyond that too.
  • 11:24 - 11:28
    This is an example of
    identifying cancerous areas
  • 11:28 - 11:30
    of human tissue under a microscope.
  • 11:30 - 11:35
    The system being shown here
    can identify those areas more accurately
  • 11:35 - 11:38
    than, or about as accurately as,
    human pathologists,
  • 11:38 - 11:41
    but was built entirely with deep learning
    using no medical expertise
  • 11:41 - 11:44
    by people who have
    no background in the field.
  • 11:45 - 11:47
    Similarly, here, this is neuron segmentation.
  • 11:47 - 11:51
    We can now segment neurons
    about as accurately as humans can,
  • 11:51 - 11:54
    but this system was developed
    with deep learning
  • 11:54 - 11:57
    by people with no previous
    background in medicine.
  • 11:57 - 12:00
    So myself, as somebody with
    no previous background in medicine,
  • 12:00 - 12:04
    I seem to be entirely well qualified
    to start a new medical company,
  • 12:04 - 12:06
    which I did.
  • 12:06 - 12:08
    I was kind of terrified of doing it,
  • 12:08 - 12:11
    but the theory seemed to suggest
    that it ought to be possible
  • 12:11 - 12:16
    to do very useful medicine
    using just these data analytic techniques.
  • 12:16 - 12:19
    And thankfully, the feedback
    has been fantastic,
  • 12:19 - 12:21
    not just from the media
    but from the medical community,
  • 12:21 - 12:23
    who have been very supportive.
  • 12:23 - 12:27
    The theory is that we can take
    the middle part of the medical process
  • 12:27 - 12:30
    and turn that into data analysis
    as much as possible,
  • 12:30 - 12:33
    leaving doctors to do
    what they're best at.
  • 12:33 - 12:35
    I want to give you an example.
  • 12:35 - 12:40
    It now takes us about 15 minutes
    to generate a new medical diagnostic test
  • 12:40 - 12:42
    and I'll show you that in real time now,
  • 12:42 - 12:45
    but I've compressed it down to
    three minutes by cutting some pieces out.
  • 12:45 - 12:48
    Rather than showing you
    creating a medical diagnostic test,
  • 12:48 - 12:52
    I'm going to show you
    a diagnostic test of car images,
  • 12:52 - 12:54
    because that's something
    we can all understand.
  • 12:54 - 12:57
    So here we're starting with
    about 1.5 million car images,
  • 12:57 - 13:00
    and I want to create something
    that can split them by the angle
  • 13:00 - 13:03
    from which the photo was taken.
  • 13:03 - 13:07
    These images are entirely unlabeled,
    so I have to start from scratch.
  • 13:07 - 13:08
    Our deep learning algorithm
  • 13:08 - 13:12
    can automatically identify
    areas of structure in these images.
  • 13:12 - 13:16
    So the nice thing is that the human
    and the computer can now work together.
  • 13:16 - 13:18
    So the human, as you can see here,
  • 13:18 - 13:21
    is telling the computer
    about areas of interest
  • 13:21 - 13:25
    which it wants the computer then
    to try and use to improve its algorithm.
  • 13:25 - 13:30
    Now, these deep learning systems actually
    are in 16,000-dimensional space,
  • 13:30 - 13:33
    so you can see here the computer
    rotating this through that space,
  • 13:33 - 13:35
    trying to find new areas of structure.
  • 13:35 - 13:37
    And when it does so successfully,
  • 13:37 - 13:41
    the human who is driving it can then
    point out the areas that are interesting.
  • 13:41 - 13:43
    So here, the computer has
    successfully found areas,
  • 13:43 - 13:46
    for example, angles.
  • 13:46 - 13:47
    So as we go through this process,
  • 13:47 - 13:50
    we're gradually telling
    the computer more and more
  • 13:50 - 13:52
    about the kinds of structures
    we're looking for.
  • 13:52 - 13:54
    You can imagine in a diagnostic test
  • 13:54 - 13:57
    this would be a pathologist identifying
    areas of pathosis, for example,
  • 13:57 - 14:02
    or a radiologist indicating
    potentially troublesome nodules.
  • 14:02 - 14:05
    And sometimes it can be
    difficult for the algorithm.
  • 14:05 - 14:07
    In this case, it got kind of confused.
  • 14:07 - 14:09
    The fronts and the backs
    of the cars are all mixed up.
  • 14:09 - 14:11
    So here we have to be a bit more careful,
  • 14:11 - 14:15
    manually selecting these fronts
    as opposed to the backs,
  • 14:15 - 14:20
    then telling the computer
    that this is a type of group
  • 14:20 - 14:22
    that we're interested in.
  • 14:22 - 14:24
    So we do that for a while,
    we skip over a little bit,
  • 14:24 - 14:26
    and then we train the
    machine learning algorithm
  • 14:26 - 14:28
    based on these couple of hundred things,
  • 14:28 - 14:30
    and we hope that it's gotten a lot better.
  • 14:30 - 14:34
    You can see, it's now started to fade
    some of these pictures out,
  • 14:34 - 14:38
    showing us that it already is recognizing
    how to understand some of these itself.
  • 14:38 - 14:41
    We can then use this concept
    of similar images,
  • 14:41 - 14:43
    and using similar images, you can now see,
  • 14:43 - 14:47
    the computer at this point is able to
    entirely find just the fronts of cars.
  • 14:47 - 14:50
    So at this point, the human
    can tell the computer,
  • 14:50 - 14:52
    okay, yes, you've done
    a good job of that.
  • 14:54 - 14:56
    Sometimes, of course, even at this point
  • 14:56 - 15:00
    it's still difficult
    to separate out groups.
  • 15:00 - 15:03
    In this case, even after we let the
    computer try to rotate this for a while,
  • 15:03 - 15:07
    we still find that the left-side
    and the right-side pictures
  • 15:07 - 15:08
    are all mixed up together.
  • 15:08 - 15:10
    So we can again give
    the computer some hints,
  • 15:10 - 15:13
    and we say, okay, try and find
    a projection that separates out
  • 15:13 - 15:16
    the left sides and the right sides
    as much as possible
  • 15:16 - 15:18
    using this deep learning algorithm.
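
[Editor's note: Howard's tool finds this projection with deep learning. As a simpler hedged stand-in, Fisher's linear discriminant is the classical way to "find a projection that separates two labeled groups as much as possible", shown below with invented data playing the role of left-side and right-side embeddings.]

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Invented stand-ins for embeddings of left-side and right-side car photos.
rng = np.random.default_rng(1)
lefts = rng.standard_normal((200, 64)) + 0.5
rights = rng.standard_normal((200, 64)) - 0.5
X = np.vstack([lefts, rights])
y = np.array([0] * 200 + [1] * 200)

# Fit the one axis along which the two groups are maximally separated.
lda = LinearDiscriminantAnalysis(n_components=1)
projected = lda.fit_transform(X, y)
print(projected[y == 0].mean(), projected[y == 1].mean())  # far apart
```
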
  • 15:18 - 15:21
    And giving it that hint --
    ah, okay, it's been successful.
  • 15:21 - 15:24
    It's managed to find a way
    of thinking about these objects
  • 15:24 - 15:26
    that's separated these groups out.
  • 15:26 - 15:29
    So you get the idea here.
  • 15:29 - 15:37
    This is a case not where the human
    is being replaced by a computer,
  • 15:37 - 15:40
    but where they're working together.
  • 15:40 - 15:43
    What we're doing here is we're replacing
    something that used to take a team
  • 15:43 - 15:45
    of five or six people about seven years
  • 15:45 - 15:48
    and replacing it with something
    that takes 15 minutes
  • 15:48 - 15:50
    for one person acting alone.
  • 15:50 - 15:54
    So this process takes about
    four or five iterations.
  • 15:54 - 15:56
    You can see we now have 62 percent
  • 15:56 - 15:59
    of our 1.5 million images
    classified correctly.
  • 15:59 - 16:01
    And at this point, we
    can start to quite quickly
  • 16:01 - 16:03
    grab whole big sections,
  • 16:03 - 16:06
    check through them to make sure
    that there are no mistakes.
  • 16:06 - 16:10
    Where there are mistakes, we can
    let the computer know about them.
  • 16:10 - 16:13
    And using this kind of process
    for each of the different groups,
  • 16:13 - 16:15
    we are now up to
    an 80 percent success rate
  • 16:15 - 16:18
    in classifying the 1.5 million images.
  • 16:18 - 16:20
    And at this point, it's just a case
  • 16:20 - 16:23
    of finding the small number
    that aren't classified correctly,
  • 16:23 - 16:26
    and trying to understand why.
  • 16:26 - 16:28
    And using that approach,
  • 16:28 - 16:32
    within 15 minutes we get
    to a 97 percent classification rate.
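
[Editor's note: the tool in the demo is not public; the sketch below is only a schematic of the loop described here: embed, let a human label a few points each round, retrain, and "fade out" the images the model is now confident about. The synthetic data and scikit-learn pieces are stand-ins.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic `features` play the role of the deep-learning embeddings
# ("16,000-dimensional" in the talk); `oracle` plays the human labeler.
rng = np.random.default_rng(0)
n, d = 2000, 64
true_labels = rng.integers(0, 2, n)          # e.g. fronts vs. backs of cars
shift = rng.standard_normal(d)
features = rng.standard_normal((n, d)) + 2.0 * true_labels[:, None] * shift

def oracle(i):          # stand-in for the person driving the tool
    return int(true_labels[i])

labeled = {}
clf = LogisticRegression(max_iter=1000)
for round_ in range(5):                      # "about four or five iterations"
    # The human labels a handful of points of interest each round.
    for i in rng.choice(n, size=40, replace=False):
        labeled[int(i)] = oracle(i)
    idx = np.fromiter(labeled, dtype=int)
    clf.fit(features[idx], [labeled[i] for i in idx])
    # "Fade out" images the model is now confident about, leaving the
    # hard ones for the human to review in the next round.
    confident = clf.predict_proba(features).max(axis=1) > 0.9
    accuracy = (clf.predict(features) == true_labels).mean()
    print(f"round {round_ + 1}: {confident.mean():.0%} confident, "
          f"{accuracy:.0%} of all images classified correctly")
```
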
  • 16:32 - 16:37
    So this kind of technique
    could allow us to fix a major problem,
  • 16:37 - 16:40
    which is that there's a lack
    of medical expertise in the world.
  • 16:40 - 16:43
    The World Economic Forum says
    that there's between a 10x and a 20x
  • 16:43 - 16:46
    shortage of physicians
    in the developing world,
  • 16:46 - 16:48
    and it would take about 300 years
  • 16:48 - 16:51
    to train enough people
    to fix that problem.
  • 16:51 - 16:54
    So imagine if we could help
    enhance their efficiency
  • 16:54 - 16:56
    using these deep learning approaches.
  • 16:56 - 16:59
    So I'm very excited
    about the opportunities.
  • 16:59 - 17:01
    I'm also concerned about the problems.
  • 17:01 - 17:04
    The problem here is that
    every area in blue on this map
  • 17:04 - 17:08
    is somewhere where services
    are over 80 percent of employment.
  • 17:08 - 17:10
    What are services?
  • 17:10 - 17:11
    These are services.
  • 17:11 - 17:16
    These are also the exact things that
    computers have just learned how to do.
  • 17:16 - 17:19
    So 80 percent of employment
    in the developed world
  • 17:19 - 17:22
    is stuff that computers
    have just learned how to do.
  • 17:22 - 17:23
    What does that mean?
  • 17:23 - 17:26
    Well, it'll be fine.
    They'll be replaced by other jobs.
  • 17:26 - 17:29
    For example, there will be
    more jobs for data scientists.
  • 17:29 - 17:30
    Well, not really.
  • 17:30 - 17:33
    It doesn't take data scientists
    very long to build these things.
  • 17:33 - 17:36
    For example, these four algorithms
    were all built by the same guy.
  • 17:36 - 17:38
    So if you think, oh,
    it's all happened before,
  • 17:38 - 17:42
    we've seen in the past that
    when new things come along,
  • 17:42 - 17:44
    old jobs get replaced by new ones,
  • 17:44 - 17:46
    what are these new jobs going to be?
  • 17:46 - 17:48
    It's very hard for us to estimate this,
  • 17:48 - 17:51
    because human performance
    grows at this gradual rate,
  • 17:51 - 17:54
    but we now have a system, deep learning,
  • 17:54 - 17:57
    that we know actually grows
    in capability exponentially.
  • 17:57 - 17:58
    And we're here.
  • 17:58 - 18:01
    So currently, we see the things around us
  • 18:01 - 18:03
    and we say, "Oh, computers
    are still pretty dumb." Right?
  • 18:03 - 18:07
    But in five years' time,
    computers will be off this chart.
  • 18:07 - 18:11
    So we need to be starting to think
    about this capability right now.
  • 18:11 - 18:13
    We have seen this once before, of course.
  • 18:13 - 18:14
    In the Industrial Revolution,
  • 18:14 - 18:17
    we saw a step change
    in capability thanks to engines.
  • 18:18 - 18:21
    The thing is, though,
    that after a while, things flattened out.
  • 18:21 - 18:23
    There was social disruption,
  • 18:23 - 18:26
    but once engines were used
    to generate power in all these situations,
  • 18:26 - 18:28
    things really settled down.
  • 18:28 - 18:30
    The Machine Learning Revolution
  • 18:30 - 18:33
    is going to be very different
    from the Industrial Revolution,
  • 18:33 - 18:36
    because the Machine Learning Revolution,
    it never settles down.
  • 18:36 - 18:39
    The better computers get
    at intellectual activities,
  • 18:39 - 18:43
    the more they can build better computers
    to be better at those same activities,
  • 18:43 - 18:45
    so this is going to be a kind of change
  • 18:45 - 18:47
    that the world has actually
    never experienced before,
  • 18:47 - 18:51
    so your previous understanding
    of what's possible is different.
  • 18:51 - 18:53
    This is already impacting us.
  • 18:53 - 18:56
    In the last 25 years,
    as capital productivity has increased,
  • 18:56 - 19:01
    labor productivity has been flat,
    in fact even a little bit down.
  • 19:01 - 19:04
    So I want us to start
    having this discussion now.
  • 19:04 - 19:07
    I know that when I often tell people
    about this situation,
  • 19:07 - 19:09
    people can be quite dismissive.
  • 19:09 - 19:10
    Well, computers can't really think,
  • 19:10 - 19:13
    they don't emote,
    they don't understand poetry,
  • 19:13 - 19:16
    we don't really understand how they work.
  • 19:16 - 19:17
    So what?
  • 19:17 - 19:19
    Computers right now can do the things
  • 19:19 - 19:22
    that humans spend most
    of their time being paid to do,
  • 19:22 - 19:24
    so now's the time to start thinking
  • 19:24 - 19:28
    about how we're going to adjust our
    social structures and economic structures
  • 19:28 - 19:30
    to be aware of this new reality.
  • 19:30 - 19:31
    Thank you.
  • 19:31 - 19:32
    (Applause)
Title:
The wonderful and terrifying implications of computers that can learn
Speaker:
Jeremy Howard
Description:

What happens when we teach a computer how to learn? Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis. (One deep learning tool, after watching hours of YouTube, taught itself the concept of “cats.”) Get caught up on a field that will change the way the computers around you behave … sooner than you probably think.

Video Language:
English
Duration:
19:45