Office Hours 8 02mp4

  • 0:00 - 0:05
    (Sebastian) Okay, there's several questions in the online forum on particle filters, and they all boil down to,
  • 0:05 - 0:08
    "Can you be more precise, Sebastian, give us more precise examples?"
  • 0:08 - 0:15
    And it's a good criticism, in fact. I gave you some abstract description. I hope in the future we can do programming
  • 0:15 - 0:18
    assignments for this class, and then people can get source code from me for particle filters.
  • 0:18 - 0:23
    We're working on the infrastructure right now, but we haven't gotten around to doing it for this class.
  • 0:23 - 0:27
    Meanwhile, I hope that the robotics class clarifies some of the questions that people actually have.
  • 0:27 - 0:35
    So you're going to have lots of quizzes there. I can promise you if you do those quizzes right, then you've got it, you understand particle filters.
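
Since the forum questions asked for something more concrete, here is a bare-bones particle filter sketch in Python. It is not the course's or the robotics class's code; the one-dimensional robot, the Gaussian noise models, and all the constants are assumptions chosen only to keep the example short.

    import math
    import random

    N = 1000                  # number of particles
    MOTION_NOISE = 0.05       # assumed std. dev. of the motion model
    MEASUREMENT_NOISE = 0.2   # assumed std. dev. of the position sensor

    def gaussian(mu, sigma, x):
        # Probability density of x under a Gaussian with mean mu and std. dev. sigma.
        return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

    def particle_filter_step(particles, control, measurement):
        # 1. Prediction: move every particle by the control, plus motion noise.
        particles = [p + control + random.gauss(0.0, MOTION_NOISE) for p in particles]
        # 2. Weighting: how well does each particle explain the measurement?
        weights = [gaussian(p, MEASUREMENT_NOISE, measurement) for p in particles]
        # 3. Resampling: draw a new particle set in proportion to the weights.
        return random.choices(particles, weights=weights, k=len(particles))

    # Toy run: a robot on a line starts at 1.0, moves +0.1 per step,
    # and gets a noisy reading of its position after each move.
    true_position = 1.0
    particles = [random.uniform(0.0, 3.0) for _ in range(N)]
    for _ in range(20):
        true_position += 0.1
        measurement = true_position + random.gauss(0.0, MEASUREMENT_NOISE)
        particles = particle_filter_step(particles, 0.1, measurement)
    estimate = sum(particles) / len(particles)
    print("true:", round(true_position, 2), "estimated:", round(estimate, 2))

The predict-weight-resample loop is the whole algorithm; real implementations differ mainly in the motion and measurement models plugged into those three steps.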
  • 0:35 - 0:41
    (Peter) Okay. And here's one from Dan Smullen in your old home state of Pennsylvania.
  • 0:41 - 0:46
    (Sebastian) Hi, Pennsylvania. (Peter) "I noticed that the course doesn't discuss expert systems or other rule-based approaches.
  • 0:46 - 0:50
    "Are such techniques still considered part of A.I.?" So yes, they certainly are.
  • 0:50 - 0:58
    Expert systems are still a big part of A.I.; it's a multi-million-dollar business, and there are lots of expert systems running to this day.
  • 0:58 - 1:02
    But it seems to be less important than it was in the '80s and '90s.
  • 1:02 - 1:07
    And since we haven't covered that much in the course, let me just say a little bit about it:
  • 1:07 - 1:13
    An expert system is when you go out and interview an expert, and you say, "How do you do your task? What is it that you do?"
  • 1:13 - 1:21
    And then you try to encode that into a program. And rather than trying to encode it directly in Java or C,
  • 1:21 - 1:26
    you do it into a language that's specifically designed to capture this expert knowledge.
  • 1:26 - 1:29
    And sometimes that works great, but there are some difficulties.
  • 1:29 - 1:34
    One of them is that sometimes the experts don't know how it is that they do it.
  • 1:34 - 1:40
    So they know how to do the task, but they don't know how to explain it; they don't really understand
  • 1:40 - 1:45
    all the steps that they're taking, and so they can't explain it properly.
  • 1:45 - 1:52
    Another problem is that most of these languages were based on logic rather than on probability,
  • 1:52 - 1:59
    so if it's a clear, "A implies B," then it works well, but if there's uncertainty involved, then the languages weren't very good
  • 1:59 - 2:04
    at describing that uncertainty. And so they were limited in their application.
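
As a concrete illustration of the "A implies B" style of expert system described here, a toy rule-based system fits in a few lines of Python. The medical-sounding rules and facts are invented purely for illustration and are not taken from any real system.

    # Knowledge captured as if-then rules: (set of premises, conclusion).
    rules = [
        ({"fever", "cough"}, "flu"),
        ({"flu"}, "recommend_rest"),
        ({"rash"}, "refer_to_dermatologist"),
    ]

    def forward_chain(facts, rules):
        # Fire every rule whose premises are all known facts, until nothing new follows.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "cough"}, rules))
    # adds the derived facts 'flu' and 'recommend_rest' to the two input facts

Every rule here is all-or-nothing, which is exactly the limitation mentioned above: there is no way to say that fever and cough make flu only somewhat more likely.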
  • 2:04 - 2:08
    (Sebastian) Yeah, I guess today, the best (unintelligible) to do expert systems is Bayes Networks,
  • 2:08 - 2:12
    which really (unintelligible) uncertainties quite a bit. And they have their own problems, but they've been used
  • 2:12 - 2:15
    just like expert systems in larger applications today.
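
By contrast, a Bayes network stores conditional probabilities instead of hard rules and supports reasoning backwards under uncertainty. A minimal two-node example, with numbers made up for illustration:

    # Two-node network: Disease -> Symptom.
    P_DISEASE = 0.01                 # prior P(disease)
    P_SYMPTOM_GIVEN_DISEASE = 0.9    # P(symptom | disease)
    P_SYMPTOM_GIVEN_HEALTHY = 0.1    # P(symptom | no disease)

    # Bayes' rule: P(disease | symptom) = P(symptom | disease) * P(disease) / P(symptom)
    p_symptom = (P_SYMPTOM_GIVEN_DISEASE * P_DISEASE
                 + P_SYMPTOM_GIVEN_HEALTHY * (1 - P_DISEASE))
    p_disease_given_symptom = P_SYMPTOM_GIVEN_DISEASE * P_DISEASE / p_symptom
    print(round(p_disease_given_symptom, 3))  # about 0.083

The symptom raises the belief in the disease from 1% to roughly 8% rather than forcing an all-or-nothing conclusion, which is what makes this style of reasoning a better fit for uncertain domains.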
  • 2:15 - 2:22
    (Peter) Right. And I should also say that kind of the point of view of what computers are has changed since the '80s.
  • 2:22 - 2:28
    In the '80s you were dealing with large mainframes or even workstations, which were supposed to be smaller,
  • 2:28 - 2:35
    cheaper computers, but still cost $50,000 each. And so this was like the salary of an expert.
  • 2:35 - 2:38
    And so you wanted to replace this expert to get your cost back.
  • 2:38 - 2:44
    Today we think of computers as being helpers that are all around us. We have ones in our pockets,
  • 2:44 - 2:50
    on our desk, we're always carrying a couple with us. And so the canonical application of A.I. is not so much,
  • 2:50 - 2:54
    "Can we replace an expert?" but, "Can we help someone out in their daily life?"
  • 2:54 - 2:58
    (Sebastian) You know, I would love to be replaced by a computer. Honestly. So we could just go to the beach all the time
  • 2:58 - 3:06
    and have a good life. (Peter) I don't believe you, Sebastian. I think if you were replaced in your current job by a computer,
  • 3:06 - 3:10
    you'd find some other job to do. (Sebastian) Yeah, but how do the students out there know that we're real?
  • 3:10 - 3:16
    We could be robots. (Peter) We've done a really good job with our artificially intelligent robots here, and we are on the beach right now.
  • 3:16 - 3:22
    (Sebastian) I think sometimes I sound like a robot, speaking for myself. (robot sounds) All right, next question.
  • 3:22 - 3:27
    "What are some big open questions in artificial intelligence, specifically for someone pursuing a Ph.D.?
  • 3:27 - 3:33
    "What is the best way to prepare for a doctoral course? What should one study before doing this?
  • 3:33 - 3:37
    "And sub-question, how can A.I. and machine learning be differentiated?" This is by a student in Heidelberg, Germany.
  • 3:37 - 3:45
    Hi, Heidelberg. (Peter) So I would say don't worry about the big open questions in A.I.
  • 3:45 - 3:51
    Just worry about doing something. So go out and find a task that's interesting to you. If you're interested in robots
  • 3:51 - 3:57
    or computer vision or whatever, just go out and start doing something, and at some point you'll get to the point
  • 3:57 - 4:03
    where you can't download a package or read a paper that solves the next thing, and then you'll know,
  • 4:03 - 4:08
    there you have an open problem. Doesn't have to be a big open problem, but it's one that's yours and is relevant to you.
  • 4:08 - 4:14
    (Sebastian) Yeah, so one of the things I teach my students at Stanford is that up to a Ph.D., you're in a mode
  • 4:14 - 4:18
    where someone gives you a problem and you have to solve the problem.
  • 4:18 - 4:25
    And when you do a Ph.D., you have to invent the problem at the same time, and that's a skill that is not native to people.
  • 4:25 - 4:30
    So you work on something, and you think you're solving a problem, and you get stuck, or you do something interesting regardless,
  • 4:30 - 4:33
    and what you really found is that you worked on the wrong problem.
  • 4:33 - 4:38
    So while you're trying to find a solution, you have to also find the problem that you're trying to solve.
  • 4:38 - 4:46
    That sounds like a red herring--you cannot find a solution and a problem simultaneously--but that's what it is to get a Ph.D.
  • 4:46 - 4:54
    If you stick with one problem and it's unsolvable, you're stuck forever, and if it's trivially solvable, it's just not interesting.
  • 4:54 - 5:00
    So being open to new problems as you work is one of the key skills. So one way to do this is when you do something
  • 5:00 - 5:05
    and you come across an interesting problem, write it down so you don't lose it. And later on, when you solve your problem,
  • 5:05 - 5:10
    when you get bored, go back to these written down problems and see if there's a more interesting one in there.
  • 5:10 - 5:15
    Second thing I want to say is I think it's actually important to work on big open problems for society.
  • 5:15 - 5:21
    Not a (unintelligible) problem, you don't want to solve all of poverty or all of cancer, but as a visionary thing that guides you,
  • 5:21 - 5:27
    so that you can explain to yourself and your grandmother that if you solved the technical problems you're out to solve,
  • 5:27 - 5:33
    you'd have a positive impact on society. I find this really important because a lot of academic institutions miss this point,
  • 5:33 - 5:37
    and people get entrenched in a specific problem formulation. And sometimes they're good, but sometimes they're
  • 5:37 - 5:41
    really really bad, and even if you solve them, you haven't really changed a thing.
  • 5:41 - 5:50
    I really care about changing big things in society, like solve poverty, solve diseases, solve transportation, solve architecture.
  • 5:50 - 5:55
    There's amazing opportunities, I think, to solve big problems. Of course, take them as a guideline in your life,
  • 5:55 - 6:00
    not as the immediate problem you're trying to solve tomorrow morning. But do that, because it's really important.
  • 6:00 - 6:05
    (Peter) Another societal problem we're interested in is education, and so that's why we're here,
  • 6:05 - 6:08
    and we really appreciate that you're there and are participating in that experiment.
  • 6:08 - 6:11
    (Sebastian) And you should tell the students the story of the frame problem.
  • 6:11 - 6:20
    (Peter) Oh yeah. So we were just talking before about when we were in school, and if we had asked our advisors,
  • 6:20 - 6:27
    "What's the important problem to solve?" the list that was given then probably wouldn't be seen as so important now.
  • 6:27 - 6:32
    So one of the big problems then was known as the frame problem, and it was solved.
  • 6:32 - 6:36
    Ray Reiter was the one who did the most important work on that.
  • 6:36 - 6:40
    But once it was solved, we didn't really get to where we wanted to be.
  • 6:40 - 6:45
    It didn't really mean that we could solve more practical things in the real world.
  • 6:45 - 6:53
    We solved an artificial problem, technical problem within A.I., but it didn't lead to practical solutions in the world.
  • 6:53 - 6:57
    (Sebastian) Yeah. When I grew up, it was the biggest problem. It was, "Solve the frame problem and you're famous,
  • 6:57 - 7:02
    "you solved A.I." And then it was solved, and it was trivially solved, it turns out, it was really trivial.
  • 7:02 - 7:09
    The problem made no sense anymore, it was like an artifact of a specific way of using logic, the way logic shouldn't be used.
  • 7:09 - 7:13
    And nothing changed in the world, right? When it came out, people didn't even care about it.
  • 7:13 - 7:17
    And yet generations of students had written Ph.D. theses on how to solve it.
  • 7:17 - 7:22
    Today, generations of students have been writing Ph.D. theses on whether P equals NP.
  • 7:22 - 7:26
    And I venture to say, if you solve P equals NP, not much is going to change.
  • 7:26 - 7:31
    We're going to solve it, and we're going to say, "Oh, that's interesting," or maybe it isn't, but I don't think much will change.
  • 7:31 - 7:35
    (Peter) If P does equal NP, and the proof was constructive... (Sebastian) I doubt it.
  • 7:35 - 7:41
    I think if there were an efficient solution to exponentially hard problems, you would've found at least a couple of them so far.
  • 7:41 - 7:45
    And sometimes you find complexity classes that are theoretically equivalent
  • 7:45 - 7:49
    but practically have such vastly different constants that the difference doesn't matter.
  • 7:49 - 7:53
    So I don't think, I don't have any hope that this problem is interesting to work on, to be honest.
  • 7:53 - 7:57
    (Peter) Yeah. So I guess the conclusion from all this is don't listen to your advisors.
  • 7:57 - 8:01
    And so our advice to you is don't listen to us, right? (Sebastian) Yeah, that's true.
  • 8:01 - 8:05
    (Peter) Okay, so go out and solve some problems. Make up some problems on your own and solve them.
  • 8:05 - 8:07
    (Sebastian) Goodbye. (Peter) Goodbye, thanks. (Sebastian) Bye.
Title:
Office Hours 8 02mp4
Video Language:
English
Team:
Udacity
Project:
CS271 - Intro to Artificial Intelligence
Duration:
08:08