The wonderful and terrifying implications of computers that can learn | Jeremy Howard | TEDxBrussels
-
0:10 - 0:13It used to be that if you wanted
to get a computer to do something new, -
0:13 - 0:15you would have to program it.
-
0:15 - 0:19Now, programming, for those of you here
that haven't done it yourself, -
0:19 - 0:22requires laying out in excruciating detail
-
0:22 - 0:25every single step that you want
the computer to do -
0:25 - 0:27in order to achieve your goal.
-
0:27 - 0:31Now, if you want to do something
that you don't know how to do yourself, -
0:31 - 0:33then this is going to be
a great challenge. -
0:33 - 0:37So this was the challenge faced
by this man, Arthur Samuel. -
0:37 - 0:43In 1956, he wanted to get this computer
to be able to beat him at checkers. -
0:43 - 0:45How can you write a program,
-
0:45 - 0:49lay out in excruciating detail,
how to be better than you at checkers? -
0:49 - 0:51So he came up with an idea:
-
0:51 - 0:54he had the computer play
against itself thousands of times -
0:54 - 0:57and learn how to play checkers.
-
0:57 - 1:00And indeed it worked,
and in fact, by 1962, -
1:00 - 1:03this computer had beaten
the Connecticut state champion. -
1:03 - 1:07So Arthur Samuel was
the father of machine learning, -
1:07 - 1:08and I have a great debt to him,
-
1:08 - 1:11because I am
a machine learning practitioner. -
1:11 - 1:13I was the president of Kaggle,
-
1:13 - 1:16a community of over 200,000
machine learning practitioners. -
1:16 - 1:18Kaggle puts up competitions
-
1:18 - 1:22to try and get them to solve
previously unsolved problems, -
1:22 - 1:25and it's been successful
hundreds of times. -
1:26 - 1:28So from this vantage point,
I was able to find out -
1:28 - 1:32a lot about what machine learning
can do in the past, can do today, -
1:32 - 1:34and what it could do in the future.
-
1:34 - 1:37Perhaps the first big success
-
1:37 - 1:39of machine learning commercially
was Google. -
1:39 - 1:42Google showed that it is possible
to find information -
1:42 - 1:44by using a computer algorithm,
-
1:44 - 1:47and this algorithm is based
on machine learning. -
1:47 - 1:51Since that time, there have been many
commercial successes of machine learning. -
1:51 - 1:53Companies like Amazon and Netflix
-
1:53 - 1:56use machine learning to suggest
products that you might like to buy, -
1:56 - 1:58movies that you might like to watch.
-
1:58 - 2:00Sometimes, it's almost creepy.
-
2:00 - 2:02Companies like LinkedIn and Facebook
-
2:02 - 2:04sometimes will tell you about
who your friends might be -
2:04 - 2:06and you have no idea how it did it,
-
2:06 - 2:09and this is because it's using
the power of machine learning. -
2:09 - 2:13These are algorithms that have learned
how to do this from data -
2:13 - 2:15rather than being programmed by hand.
-
2:16 - 2:18This is also how IBM was successful
-
2:18 - 2:21in getting Watson to beat
two world champions at "Jeopardy," -
2:21 - 2:24answering incredibly subtle
and complex questions like this one. -
2:24 - 2:29[The ancient "Lion of Nimrud" went missing
from this city's national museum in 2003] -
2:29 - 2:32This is also why we are now able
to see the first self-driving cars. -
2:33 - 2:35If you want to be able to tell
the difference between, say, -
2:35 - 2:38a tree and a pedestrian,
well, that's pretty important. -
2:38 - 2:41We don't know how to write
those programs by hand, -
2:41 - 2:44but with machine learning,
this is now possible. -
2:44 - 2:46And in fact, this car has driven
over a million miles -
2:46 - 2:49without any accidents on regular roads.
-
2:49 - 2:52So we now know that computers can learn,
-
2:52 - 2:55and computers can learn to do things
-
2:55 - 2:57that we actually sometimes
don't know how to do ourselves, -
2:57 - 3:00or maybe can do them better than us.
-
3:00 - 3:04One of the most amazing examples
I've seen of machine learning -
3:04 - 3:07happened on a project that I ran at Kaggle
-
3:07 - 3:11where a team run by a guy
called Geoffrey Hinton -
3:11 - 3:12from the University of Toronto
-
3:12 - 3:15won a competition
for automatic drug discovery. -
3:15 - 3:17Now, what was extraordinary here
is not just that they beat -
3:17 - 3:22all of the algorithms developed by Merck
or the international academic community, -
3:22 - 3:27but nobody on the team had any background
in chemistry or biology or life sciences, -
3:27 - 3:29and they did it in two weeks.
-
3:29 - 3:31How did they do this?
-
3:31 - 3:34They used an extraordinary algorithm
called deep learning. -
3:34 - 3:37So important was this that in fact
the success was covered -
3:37 - 3:40in The New York Times
in a front page article a few weeks later. -
3:40 - 3:43This is Geoffrey Hinton
here on the left-hand side. -
3:43 - 3:47Deep learning is an algorithm
inspired by how the human brain works, -
3:47 - 3:49and as a result it's an algorithm
-
3:49 - 3:52which has no theoretical limitations
on what it can do. -
3:52 - 3:56The more data you give it and the more
computation time you give it, -
3:56 - 3:57the better it gets.
-
3:57 - 3:59The New York Times
also showed in this article -
3:59 - 4:02another extraordinary result
of deep learning -
4:02 - 4:04which I'm going to show you now.
-
4:04 - 4:08It shows that computers
can listen and understand. -
4:09 - 4:11(Video) Richard Rashid: Now, the last step
-
4:11 - 4:14that I want to be able
to take in this process -
4:14 - 4:17is to actually speak to you in Chinese.
-
4:19 - 4:22Now the key thing there is,
-
4:22 - 4:27we've been able to take a large amount
of information from many Chinese speakers -
4:27 - 4:30and produce a text-to-speech system
-
4:30 - 4:34that takes Chinese text
and converts it into Chinese language, -
4:35 - 4:39and then we've taken
an hour or so of my own voice -
4:39 - 4:41and we've used that to modulate
-
4:41 - 4:45the standard text-to-speech system
so that it would sound like me. -
4:45 - 4:48Again, the result's not perfect.
-
4:48 - 4:51There are in fact quite a few errors.
-
4:51 - 4:53(In Chinese)
-
4:53 - 4:57(Applause)
-
4:58 - 5:01There's much work to be done in this area.
-
5:01 - 5:05(In Chinese)
-
5:05 - 5:08(Applause)
-
5:10 - 5:14Jeremy Howard: Well, that was
at a machine learning conference in China. -
5:14 - 5:17It's not often, actually,
at academic conferences -
5:17 - 5:19that you do hear spontaneous applause,
-
5:19 - 5:22although of course sometimes
at TEDx conferences, feel free. -
5:22 - 5:25Everything you saw there
was happening with deep learning. -
5:25 - 5:27(Applause) Thank you.
-
5:27 - 5:29The transcription in English
was deep learning. -
5:29 - 5:32The translation to Chinese and the text
in the top right, deep learning, -
5:32 - 5:36and the construction of the voice
was deep learning as well. -
5:36 - 5:39So deep learning
is this extraordinary thing. -
5:39 - 5:42It's a single algorithm
that can seem to do almost anything, -
5:42 - 5:45and I discovered that a year earlier,
it had also learned to see. -
5:45 - 5:47In this obscure competition from Germany
-
5:47 - 5:50called the German Traffic Sign
Recognition Benchmark, -
5:50 - 5:53deep learning had learned
to recognize traffic signs like this one. -
5:53 - 5:55Not only could it recognize
the traffic signs -
5:55 - 5:57better than any other algorithm,
-
5:57 - 6:00the leaderboard actually showed
it was better than people, -
6:00 - 6:02about twice as good as people.
-
6:02 - 6:04So by 2011, we had the first example
-
6:04 - 6:07of computers that can see
better than people. -
6:07 - 6:09Since that time, a lot has happened.
-
6:09 - 6:13In 2012, Google announced that they had
a deep learning algorithm -
6:13 - 6:14watch YouTube videos
-
6:14 - 6:17and crunched the data
on 16,000 computers for a month, -
6:17 - 6:22and the computer independently learned
about concepts such as people and cats -
6:22 - 6:24just by watching the videos.
-
6:24 - 6:26This is much like the way
that humans learn. -
6:26 - 6:29Humans don't learn
by being told what they see, -
6:29 - 6:32but by learning for themselves
what these things are. -
6:32 - 6:35Also in 2012, Geoffrey Hinton,
who we saw earlier, -
6:35 - 6:38won the very popular ImageNet competition,
-
6:38 - 6:42looking to try to figure out
from one and a half million images -
6:42 - 6:44what they're pictures of.
-
6:44 - 6:47As of 2014, we're now
down to a six percent error rate -
6:47 - 6:49in image recognition.
-
6:49 - 6:51This is better than people, again.
-
6:51 - 6:55So machines really are doing
an extraordinarily good job of this, -
6:55 - 6:57and it is now being used in industry.
-
6:57 - 7:00For example, Google announced last year
-
7:00 - 7:04that they had mapped every single location
in France in two hours, -
7:04 - 7:08and the way they did it
was that they fed Street View images -
7:08 - 7:12into a deep learning algorithm
to recognize and read street numbers. -
7:12 - 7:14Imagine how long
it would have taken before: -
7:14 - 7:18dozens of people, many years.
-
7:18 - 7:20This is also happening in China.
-
7:20 - 7:24Baidu is kind of
the Chinese Google, I guess, -
7:24 - 7:26and what you see here in the top left
-
7:26 - 7:30is an example of a picture that I uploaded
to Baidu's deep learning system, -
7:30 - 7:34and underneath you can see that the system
has understood what that picture is -
7:34 - 7:36and found similar images.
-
7:36 - 7:39The similar images
actually have similar backgrounds, -
7:39 - 7:42similar directions of the faces,
even some with their tongue out. -
7:42 - 7:45This is clearly not looking
at the text of a web page. -
7:45 - 7:47All I uploaded was an image.
-
7:47 - 7:51So we now have computers
which really understand what they see -
7:51 - 7:52and can therefore search databases
-
7:52 - 7:56of hundreds of millions
of images in real time. -
7:56 - 7:59So what does it mean
now that computers can see? -
7:59 - 8:01Well, it's not just
that computers can see. -
8:01 - 8:03In fact, deep learning
has done more than that. -
8:03 - 8:06Complex, nuanced sentences like this one
-
8:06 - 8:09are now understandable
with deep learning algorithms. -
8:09 - 8:10As you can see here,
-
8:10 - 8:13this Stanford-based system
showing the red dot at the top -
8:13 - 8:17has figured out that this sentence
is expressing negative sentiment. -
8:17 - 8:20Deep learning now in fact
is near human performance -
8:20 - 8:25at understanding what sentences are about
and what it is saying about those things. -
8:25 - 8:28Also, deep learning
has been used to read Chinese, -
8:28 - 8:31again at about
native Chinese speaker level. -
8:31 - 8:33This algorithm developed
out of Switzerland -
8:33 - 8:37by people, none of whom speak
or understand any Chinese. -
8:37 - 8:39As I say, this deep learning system
-
8:39 - 8:41is about the best
in the world for this, -
8:41 - 8:46even compared
to native human understanding. -
8:46 - 8:49This is a system that we put together
at my company -
8:49 - 8:51which shows putting
all this stuff together. -
8:51 - 8:54These are pictures which have
no text attached, -
8:54 - 8:56and as I'm typing in here sentences,
-
8:56 - 8:59in real time it's understanding
these pictures -
8:59 - 9:01and figuring out what they're about
-
9:01 - 9:04and finding pictures that are similar
to the text that I'm writing. -
9:04 - 9:07So you can see, it's actually
understanding my sentences -
9:07 - 9:09and actually understanding these pictures.
-
9:09 - 9:11I know that you've seen
something like this on Google, -
9:11 - 9:14where you can type in things
and it will show you pictures, -
9:14 - 9:18but actually what it's doing is it's
searching the webpage for the text. -
9:18 - 9:21This is very different from
actually understanding the images. -
9:21 - 9:23This is something that computers
have only been able to do -
9:23 - 9:27for the first time in the last few months.
-
9:27 - 9:31So we can see now that computers
cannot only see but they can also read, -
9:31 - 9:34and, of course, we've shown that they
can understand what they hear. -
9:34 - 9:38Perhaps not surprising now
that I'm going to tell you they can write. -
9:38 - 9:43Here is some text that I generated
using a deep learning algorithm yesterday. -
9:43 - 9:47And here is some text that an algorithm
out of Stanford generated. -
9:47 - 9:48Each of these sentences was generated
-
9:48 - 9:53by a deep learning algorithm
to describe each of those pictures. -
9:53 - 9:57This algorithm has never before seen
a man in a black shirt playing a guitar. -
9:57 - 9:59It's seen a man before,
it's seen black before, -
9:59 - 10:01it's seen a guitar before,
-
10:01 - 10:05but it has independently generated
this novel description of this picture. -
10:05 - 10:09We're still not quite at human performance
here, but we're close. -
10:09 - 10:13In tests, humans prefer
the computer-generated caption -
10:13 - 10:14one out of four times.
-
10:14 - 10:16Now, this system is only two weeks old,
-
10:16 - 10:18so probably within the next year,
-
10:18 - 10:21the computer algorithm will be
well past human performance -
10:21 - 10:23at the rate things are going.
-
10:23 - 10:26So computers can also write.
-
10:26 - 10:29So we put all this together and it leads
to very exciting opportunities. -
10:29 - 10:31For example, in medicine,
-
10:31 - 10:33a team in Boston announced
that they had discovered -
10:33 - 10:36dozens of new clinically relevant features
-
10:36 - 10:41of tumors which help doctors
make a prognosis of a cancer. -
10:42 - 10:44Very similarly, in Stanford,
-
10:44 - 10:48a group there announced that,
looking at tissues under magnification, -
10:48 - 10:50they've developed
a machine learning-based system -
10:50 - 10:53which in fact is better
than human pathologists -
10:53 - 10:56at predicting survival rates
for cancer sufferers. -
10:57 - 11:00In both of these cases, not only
were the predictions more accurate, -
11:00 - 11:03but they generated new insightful science.
-
11:03 - 11:04In the radiology case,
-
11:04 - 11:07they were new clinical indicators
that humans can understand. -
11:07 - 11:09In this pathology case,
-
11:09 - 11:14the computer system actually discovered
that the cells around the cancer -
11:14 - 11:17are as important
as the cancer cells themselves -
11:17 - 11:19in making a diagnosis.
-
11:19 - 11:24This is the opposite of what pathologists
had been taught for decades. -
11:24 - 11:27In each of those two cases,
they were systems developed -
11:27 - 11:31by a combination of medical experts
and machine learning experts, -
11:31 - 11:34but as of last year,
we're now beyond that too. -
11:34 - 11:37This is an example
of identifying cancerous areas -
11:37 - 11:40of human tissue under a microscope.
-
11:40 - 11:44The system being shown here
can identify those areas more accurately, -
11:44 - 11:47or about as accurately,
as human pathologists, -
11:47 - 11:51but was built entirely with deep learning
using no medical expertise -
11:51 - 11:53by people who have
no background in the field. -
11:54 - 11:57Similarly, here, this is neuron segmentation.
-
11:57 - 12:00We can now segment neurons
about as accurately as humans can, -
12:00 - 12:03but this system was developed
with deep learning -
12:03 - 12:06using people with no previous background
in medicine. -
12:06 - 12:10So myself, as somebody with
no previous background in medicine, -
12:10 - 12:13I seem to be entirely well qualified
to start a new medical company, -
12:13 - 12:16which I did.
-
12:16 - 12:17I was kind of terrified of doing it,
-
12:17 - 12:20but the theory seemed to suggest
that it ought to be possible -
12:20 - 12:26to do very useful medicine
using just these data analytic techniques. -
12:26 - 12:28And thankfully, the feedback
has been fantastic, -
12:28 - 12:31not just from the media
but from the medical community, -
12:31 - 12:33who have been very supportive.
-
12:33 - 12:37The theory is that we can take
the middle part of the medical process -
12:37 - 12:40and turn that into data analysis
as much as possible, -
12:40 - 12:43leaving doctors to do
what they're best at. -
12:43 - 12:45I want to give you an example.
-
12:45 - 12:49It now takes us about 15 minutes
to generate a new medical diagnostic test -
12:49 - 12:51and I'll show you that in real time now,
-
12:51 - 12:54but I've compressed it down
to three minutes -
12:54 - 12:56by cutting some pieces out.
-
12:56 - 12:59Rather than showing you
creating a medical diagnostic test, -
12:59 - 13:01I'm going to show you
a diagnostic test of car images, -
13:01 - 13:04because that's something
we can all understand. -
13:04 - 13:07So here we're starting
with about 1.5 million car images, -
13:07 - 13:10and I want to create something
that can split them into the angle -
13:10 - 13:12of the photo that's being taken.
-
13:12 - 13:16So these images are entirely unlabeled,
so I have to start from scratch. -
13:16 - 13:18With our deep learning algorithm,
-
13:18 - 13:22it can automatically identify
areas of structure in these images. -
13:22 - 13:25So the nice thing is that the human
and the computer can now work together. -
13:25 - 13:27So the human, as you can see here,
-
13:27 - 13:30is telling the computer
about areas of interest -
13:30 - 13:35which it wants the computer then
to try and use to improve its algorithm. -
13:35 - 13:39Now, these deep learning systems actually
are in 16,000-dimensional space, -
13:39 - 13:43so you can see here the computer
rotating this through that space, -
13:43 - 13:45trying to find new areas of structure.
-
13:45 - 13:46And when it does so successfully,
-
13:46 - 13:50the human who is driving it can then
point out the areas that are interesting. -
13:50 - 13:53So here, the computer
has successfully found areas, -
13:53 - 13:55for example, angles.
-
13:55 - 13:57So as we go through this process,
-
13:57 - 13:59we're gradually telling
the computer more and more -
13:59 - 14:02about the kinds of structures
we're looking for. -
14:02 - 14:03You can imagine in a diagnostic test
-
14:03 - 14:07this would be a pathologist identifying
areas of pathosis, for example, -
14:07 - 14:12or a radiologist indicating
potentially troublesome nodules. -
14:12 - 14:14And sometimes it can be difficult
for the algorithm. -
14:14 - 14:16In this case, it got kind of confused.
-
14:16 - 14:19The fronts and the backs
of the cars are all mixed up. -
14:19 - 14:21So here we have to be a bit more careful,
-
14:21 - 14:24manually selecting these fronts
as opposed to the backs, -
14:24 - 14:30then telling the computer
that this is a type of group -
14:30 - 14:31that we're interested in.
-
14:31 - 14:34So we do that for a while,
we skip over a little bit, -
14:34 - 14:36and then we train
the machine learning algorithm -
14:36 - 14:38based on these couple of hundred things,
-
14:38 - 14:40and we hope that it's gotten a lot better.
-
14:40 - 14:43You can see, it's now started to fade
some of these pictures out, -
14:43 - 14:48showing us that it already is recognizing
how to understand some of these itself. -
14:48 - 14:51We can then use this concept
of similar images, -
14:51 - 14:53and using similar images, you can now see,
-
14:53 - 14:57the computer at this point is able
to entirely find just the fronts of cars. -
14:57 - 15:00So at this point, the human
can tell the computer, -
15:00 - 15:02okay, yes, you've done a good job of that.
-
15:03 - 15:05Sometimes, of course, even at this point
-
15:05 - 15:09it's still difficult
to separate out groups. -
15:09 - 15:11In this case, even after we let
-
15:11 - 15:14the computer
try to rotate this for a while, -
15:14 - 15:16we still find that the left-side
and the right-side pictures -
15:16 - 15:18are all mixed up together.
-
15:18 - 15:20So we can again give
the computer some hints, -
15:20 - 15:23and we say, okay, try and find
a projection that separates out -
15:23 - 15:25the left sides and the right sides
as much as possible -
15:25 - 15:28using this deep learning algorithm.
-
15:28 - 15:31And giving it that hint --
ah, okay, it's been successful. -
15:31 - 15:33It's managed to find a way
of thinking about these objects -
15:33 - 15:36that separates these out.
-
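The "find a projection that separates the groups" hint can be illustrated with a toy stand-in. Nothing below comes from the talk's actual system: the features are simulated, and the difference-of-means direction used here is a bare-bones cousin of linear discriminant analysis, not the deep learning algorithm itself.

```python
# Toy sketch of a separating projection (all features simulated).
import numpy as np

rng = np.random.default_rng(1)

# Simulated high-dimensional features for two visually confusable groups,
# standing in for "left sides" and "right sides" of cars.
left = rng.normal(loc=0.0, scale=1.0, size=(100, 16)) + 0.8
right = rng.normal(loc=0.0, scale=1.0, size=(100, 16)) - 0.8

# Projection direction: from the mean of one group toward the other.
w = left.mean(axis=0) - right.mean(axis=0)
w /= np.linalg.norm(w)

# Project every image onto that direction; a threshold at the midpoint
# of the projected class means separates the two groups.
threshold = (left.mean(axis=0) @ w + right.mean(axis=0) @ w) / 2
pred_left = left @ w > threshold
pred_right = right @ w > threshold
accuracy = (pred_left.mean() + (1 - pred_right.mean())) / 2
print(f"separation accuracy: {accuracy:.0%}")
```

The point of the hint in the demo is the same: the human supplies a few labels, and the system searches for a direction in its high-dimensional feature space along which those labeled groups pull apart.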
15:36 - 15:38So you get the idea here.
-
15:38 - 15:44This is a case not where the human
is being replaced by a computer, -
15:46 - 15:49but where they're working together.
-
15:49 - 15:53What we're doing here is we're replacing
something that used to take a team -
15:53 - 15:55of five or six people about seven years
-
15:55 - 15:57and replacing it with something
that takes 15 minutes -
15:57 - 16:00for one person acting alone.
-
16:00 - 16:04So this process takes
about four or five iterations. -
16:04 - 16:06You can see we now have 62 percent
-
16:06 - 16:08of our 1.5 million images
classified correctly. -
16:08 - 16:11And at this point,
we can start to quite quickly -
16:11 - 16:12grab whole big sections,
-
16:12 - 16:15check through them to make sure
that there's no mistakes. -
16:15 - 16:19Where there are mistakes, we can let
the computer know about them. -
16:19 - 16:22And using this kind of process
for each of the different groups, -
16:22 - 16:25we are now up to an 80 percent
success rate -
16:25 - 16:27in classifying the 1.5 million images.
-
16:27 - 16:29And at this point, it's just a case
-
16:29 - 16:33of finding the small number
that aren't classified correctly, -
16:33 - 16:36and trying to understand why.
-
16:36 - 16:37And using that approach,
-
16:37 - 16:41within 15 minutes we get
to a 97 percent classification rate. -
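The loop just described (a human labels a handful of examples, the computer propagates those labels through its feature space, and the two iterate) can be sketched roughly as follows. Everything here is a made-up stand-in: simulated 2-D points replace deep-learning features of 1.5 million car images, and nearest-neighbor label propagation replaces the actual algorithm.

```python
# Toy sketch of the human-in-the-loop labeling loop (all data simulated).
import numpy as np

rng = np.random.default_rng(0)

# Simulated feature space: two clusters standing in for "fronts" and "backs".
fronts = rng.normal(loc=[0, 0], scale=0.5, size=(200, 2))
backs = rng.normal(loc=[4, 4], scale=0.5, size=(200, 2))
images = np.vstack([fronts, backs])  # the unlabeled pool
true_labels = np.array([0] * 200 + [1] * 200)

def propagate_labels(features, seed_idx, seed_labels):
    """Assign each point the label of its nearest human-labeled seed."""
    dists = np.linalg.norm(
        features[:, None, :] - features[seed_idx][None, :, :], axis=2)
    return seed_labels[np.argmin(dists, axis=1)]

# Each round, the "human" labels a few examples; the machine propagates
# those labels to everything else, and accuracy improves as we iterate.
labeled_idx, labeled_y = [], []
for round_ in range(3):
    picks = rng.choice(len(images), size=5, replace=False)
    labeled_idx.extend(picks.tolist())
    labeled_y.extend(true_labels[picks].tolist())  # the human's answers
    pred = propagate_labels(images, np.array(labeled_idx),
                            np.array(labeled_y))
    acc = (pred == true_labels).mean()
    print(f"round {round_ + 1}: {len(labeled_idx)} labels, accuracy {acc:.0%}")
```

A handful of labels is enough here because the two simulated groups are well separated in feature space, which is exactly what the deep learning step is buying in the demo: a representation in which a few human hints go a long way.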
16:41 - 16:46So this kind of technique
could allow us to fix a major problem, -
16:46 - 16:49which is that there's a lack
of medical expertise in the world. -
16:49 - 16:53The World Economic Forum says
that there's between a 10x and a 20x -
16:53 - 16:55shortage of physicians
in the developing world, -
16:55 - 16:57and it would take about 300 years
-
16:57 - 17:00to train enough people
to fix that problem. -
17:00 - 17:03So imagine if we could help
enhance their efficiency -
17:03 - 17:06using these deep learning approaches?
-
17:06 - 17:08So I'm very excited
about the opportunities. -
17:08 - 17:11I'm also concerned about the problems.
-
17:11 - 17:14The problem here
is that every area in blue on this map -
17:14 - 17:18is somewhere where services
are over 80 percent of employment. -
17:18 - 17:19What are services?
-
17:19 - 17:21These are services.
-
17:21 - 17:23These are also the exact things
-
17:23 - 17:26that computers
have just learned how to do. -
17:26 - 17:29So 80 percent of the world's employment
in the developed world -
17:29 - 17:31is stuff that computers
have just learned how to do. -
17:31 - 17:33What does that mean?
-
17:33 - 17:35Well, it'll be fine.
They'll be replaced by other jobs. -
17:35 - 17:38For example, there will be
more jobs for data scientists. -
17:38 - 17:39Well, not really.
-
17:39 - 17:42It doesn't take data scientists
very long to build these things. -
17:42 - 17:45For example, these four algorithms
were all built by the same guy. -
17:45 - 17:48So if you think, oh,
it's all happened before, -
17:48 - 17:52we've seen the results in the past
of when new things come along -
17:52 - 17:54and they get replaced by new jobs,
-
17:54 - 17:56what are these new jobs going to be?
-
17:56 - 17:58It's very hard for us to estimate this,
-
17:58 - 18:01because human performance
grows at this gradual rate, -
18:01 - 18:03but we now have a system, deep learning,
-
18:03 - 18:06that we know actually grows
in capability exponentially. -
18:06 - 18:08And we're here.
-
18:08 - 18:10So currently, we see the things around us
-
18:10 - 18:13and we say, "Oh, computers
are still pretty dumb." Right? -
18:13 - 18:16But in five years' time,
computers will be off this chart. -
18:16 - 18:20So we need to be starting to think
about this capability right now. -
18:20 - 18:22We have seen this once before, of course.
-
18:22 - 18:23In the Industrial Revolution,
-
18:23 - 18:26we saw a step change
in capability thanks to engines. -
18:27 - 18:30The thing is, though,
that after a while, things flattened out. -
18:30 - 18:32There was social disruption,
-
18:32 - 18:35but once engines were used
to generate power in all the situations, -
18:35 - 18:38things really settled down.
-
18:38 - 18:39The Machine Learning Revolution
-
18:39 - 18:42is going to be very different
from the Industrial Revolution, -
18:42 - 18:45because the Machine Learning Revolution,
it never settles down. -
18:45 - 18:48The better computers get
at intellectual activities, -
18:48 - 18:52the more they can build better computers
to be better at intellectual capabilities, -
18:52 - 18:54so this is going to be a kind of change
-
18:54 - 18:57that the world has actually
never experienced before, -
18:57 - 19:00so your previous understanding
of what's possible no longer applies. -
19:00 - 19:02This is already impacting us.
-
19:02 - 19:06In the last 25 years,
as capital productivity has increased, -
19:06 - 19:10labor productivity has been flat,
in fact even a little bit down. -
19:11 - 19:14So I want us to start
having this discussion now. -
19:14 - 19:17I know that when I often tell people
about this situation, -
19:17 - 19:18people can be quite dismissive.
-
19:18 - 19:20Well, computers can't really think,
-
19:20 - 19:23they don't emote,
they don't understand poetry, -
19:23 - 19:25we don't really understand how they work.
-
19:25 - 19:27So what?
-
19:27 - 19:29Computers right now can do the things
-
19:29 - 19:31that humans spend
most of their time being paid to do, -
19:31 - 19:35so now's the time to start thinking
about how we're going to adjust -
19:35 - 19:37our social structures
and economic structures -
19:37 - 19:39to be aware of this new reality.
-
19:39 - 19:41Thank you.
-
19:41 - 19:42(Applause)
-
Title:
The wonderful and terrifying implications of computers that can learn | Jeremy Howard | TEDxBrussels
Description:
This talk was given at a local TEDx event, produced independently of the TED Conferences. What happens when we teach a computer how to learn? Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis. (One deep learning tool, after watching hours of YouTube, taught itself the concept of “cats.”) Get caught up on a field that will change the way the computers around you behave… sooner than you probably think.
Video Language:
English
Team:
closed TED
Project:
TEDxTalks
Duration:
19:47