
04x-01 Office Hours Week 3

  • 0:00 - 0:05
    Hello and welcome to the third office hours. Let's jump right in. >>All right.
  • 0:05 - 0:10
    Many students, myself included, had some questions about the resampling wheel.
  • 0:10 - 0:15
    Specifically, when you draw a random number between 0 and twice W max,
  • 0:15 - 0:18
    where did that number, twice W max, come from?
  • 0:18 - 0:21
    I have the same question. >>I made it up.
  • 0:21 - 0:25
    I wanted to make sure that the wheel can jump over entire particles
  • 0:25 - 0:28
    that might be large, but if you make it really large then you have to search a lot.
  • 0:28 - 0:31
    I figured if it's twice W max it's going to be fine.
  • 0:31 - 0:35
    Does choosing this number bias the sample in any way?
  • 0:35 - 0:37
    I actually don't know.
  • 0:37 - 0:42
    I think there is certainly a correlation between adjacent particles in the selection of particles.
  • 0:42 - 0:45
    They're not independently drawn.
  • 0:45 - 0:50
    I just don't know what the effect on the particle filter will be, and I wish I did.
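
A minimal Python sketch of the resampling wheel being discussed, assuming p is a list of N particles and w is a parallel list of their importance weights (the names are just for illustration):

    import random

    def resample(p, w):
        # Resampling wheel: start at a random index, then step around the
        # wheel by a random amount of up to twice the maximum weight.
        N = len(p)
        new_p = []
        index = int(random.random() * N)
        beta = 0.0
        w_max = max(w)
        for _ in range(N):
            beta += random.random() * 2.0 * w_max   # the "twice W max" step
            while beta > w[index]:
                beta -= w[index]
                index = (index + 1) % N
            new_p.append(p[index])
        return new_p
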
  • 0:50 - 0:54
    The next question, or two questions I should say, come from George.
  • 0:54 - 0:58
    He wants to know if there are any rules of thumb that we should keep in mind
  • 0:58 - 1:01
    when we're choosing what filter to use for a given situation.
  • 1:01 - 1:05
    When do we use the particle filter? When do we use the Kalman filter?
  • 1:05 - 1:07
    Absolutely, yes.
  • 1:07 - 1:13
    The particle filter is the easiest to implement, but the complexity scales exponentially
  • 1:13 - 1:15
    with the number of dimensions.
  • 1:15 - 1:19
    That's usually a problem, because if you have a high-dimensional space, you just can't apply it.
  • 1:19 - 1:23
    The Kalman filter is the only filter that doesn't scale exponentially, so it's very nice.
  • 1:23 - 1:27
    If you have like a 15-dimensional space, you will usually use a Kalman filter,
  • 1:27 - 1:31
    but the problem with the Kalman filter is it's unimodal, so you can't really have multiple hypotheses.
  • 1:31 - 1:34
    There are extensions of the Kalman filter that do this,
  • 1:34 - 1:38
    called mixture-of-Gaussians Kalman filters and multi-hypothesis Kalman filters.
  • 1:38 - 1:42
    They address some of these problems.
  • 1:42 - 1:48
    The histogram filter is applicable in situations similar to the particle filter
  • 1:48 - 1:51
    where you have a global uncertainty, and it's more systematic.
  • 1:51 - 1:56
    In the particle filter, if you lose track of the correct hypothesis, you might never regain it.
  • 1:56 - 1:59
    In a grid-based filter you have a chance of regaining it.
  • 1:59 - 2:02
    Grids are easily supported in many programming frameworks.
  • 2:02 - 2:07
    Sometimes there are better ones to use, but they also have a generic limitation
  • 2:07 - 2:10
    in the accuracy, which is related to the resolution of the grid.
  • 2:10 - 2:16
    My recommendation is if you have a multimodal distribution use particle filters if you can.
  • 2:16 - 2:21
    If it's really a continuous space with a unimodal distribution use Kalman filters.
  • 2:21 - 2:27
    Okay. That's a great segue to this question about switching between filters on the fly.
  • 2:27 - 2:32
    For example, say our particle filter converges to a unimodal distribution.
  • 2:32 - 2:35
    Then can we switch to the Kalman filter?
  • 2:35 - 2:38
    People don't actually switch filters very often.
  • 2:38 - 2:41
    The reason is when you switch between filters,
  • 2:41 - 2:46
    you end up getting moments of increased uncertainty.
  • 2:46 - 2:49
    You can see this when you buy commercial GPS receivers.
  • 2:49 - 2:52
    They tend to run multiple Kalman filters, it turns out,
  • 2:52 - 2:55
    depending on whether it's in 2D or 3D navigation mode.
  • 2:55 - 2:59
    When they switch, the behavior becomes a little bit iffy, and often that is bad for robots,
  • 2:59 - 3:03
    because the estimate jumps a little bit when one filter says one thing and the other says something else.
  • 3:03 - 3:06
    They're not quite consistent.
  • 3:06 - 3:09
    There are ways to combine multiple filter types.
  • 3:09 - 3:12
    The most common one is called the Rao-Blackwellized filter--Rao-Blackwellized--
  • 3:12 - 3:14
    after Rao and Blackwell.
  • 3:14 - 3:21
    What they found is that in a particle domain, sometimes if we nail certain dimensions with particles,
  • 3:21 - 3:25
    everything else conditional on the particle becomes Gaussian or unimodal.
  • 3:25 - 3:31
    Then you can exploit the efficiency of a Kalman filter that is now attached to individual particles.
  • 3:31 - 3:36
    I'm not going to go into depth here, but there's an entire field of Rao-Blackwellized filters
  • 3:36 - 3:40
    that sometimes can estimate in the spaces of hundreds of dimensions.
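
As a very rough sketch of the idea, and not the algorithm from any particular paper: each particle samples the hard, non-Gaussian dimensions and carries its own small Gaussian (mean and covariance) over the dimensions that become conditionally Gaussian once you fix that sample. All of the names below are just for illustration:

    class RBParticle(object):
        # Hypothetical structure for a Rao-Blackwellized particle filter:
        # the sampled part is handled with particles, the conditional part
        # with a per-particle Kalman filter (mean and covariance).
        def __init__(self, sampled_state, mean, covariance):
            self.sampled_state = sampled_state   # e.g. a sampled robot pose
            self.mean = mean                     # Gaussian mean of the remaining dims
            self.covariance = covariance         # Gaussian covariance of those dims
            self.weight = 1.0                    # importance weight

    # Each update would move sampled_state with the motion model and then run
    # a standard Kalman measurement update on (mean, covariance), conditioned
    # on this particle's sampled_state.
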
  • 3:40 - 3:45
    Great. Thank you. That actually anticipated George's next question, so we'll move on.
  • 3:45 - 3:50
    Drew wanted to know about what happens to a particle filter when the motion model
  • 3:50 - 3:54
    moves a particle into an invalid place.
  • 3:54 - 3:58
    For example, in that corridor demonstration you gave, what if a particle gets moved into a wall?
  • 3:58 - 4:01
    Well, thanks Drew. Great question.
  • 4:01 - 4:03
    The obvious answer is you kill that particle.
  • 4:03 - 4:07
    The way to think about this is in the measurement model you've got to have
  • 4:07 - 4:11
    this kind of implicit sensor that says, "I'm just sitting inside a wall."
  • 4:11 - 4:15
    The truth is the robot never sits inside a wall, so that sensor would always say
  • 4:15 - 4:18
    with absolute certainty, "I'm not sitting in the middle of a wall."
  • 4:18 - 4:21
    This kind of hypothetical sensor would justify
  • 4:21 - 4:26
    that the particle that's in the middle of the wall would get weight 0, so you just kill it.
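
A minimal sketch of that implicit sensor, assuming hypothetical helpers is_inside_wall(particle, world_map) for the map lookup and measurement_prob(particle, measurement) for the usual sensor model:

    def particle_weight(particle, measurement, world_map):
        # Implicit "I'm not inside a wall" sensor: a particle that the motion
        # update pushed into a wall gets weight 0 and dies at resampling.
        if is_inside_wall(particle, world_map):          # hypothetical map check
            return 0.0
        return measurement_prob(particle, measurement)   # normal sensor model
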
  • 4:26 - 4:30
    Okay. A couple more questions from Drew.
  • 4:30 - 4:34
    What about dynamically adjusting our big N, our number of particles,
  • 4:34 - 4:39
    when we want to trade off between computational cost versus the accuracy of our filter?
  • 4:39 - 4:42
    Dynamically setting the number of particles has been done quite a bit,
  • 4:42 - 4:44
    and it's a good idea under certain circumstances.
  • 4:44 - 4:47
    Obviously, the fewer particles you have the faster you can compute.
  • 4:47 - 4:50
    If you're tracking really well and all the particles are centered in one location,
  • 4:50 - 4:54
    there isn't really a need to have as many around as when you're globally uncertain.
  • 4:54 - 4:57
    The way to set the number of particles is often by looking
  • 4:57 - 5:00
    at the total non-normalized importance weights.
  • 5:00 - 5:04
    If all your importance weights are really large, then you're probably doing a great job tracking,
  • 5:04 - 5:06
    and you don't need that many particles.
  • 5:06 - 5:09
    Whereas if your importance weights are all very small,
  • 5:09 - 5:13
    then chances are you're doing a really lousy job tracking, and you need more particles.
  • 5:13 - 5:17
    That isn't perfect. An unlikely measurement can cause weights to be small,
  • 5:17 - 5:20
    but a good heuristic would be to say, let's keep sampling particles
  • 5:20 - 5:23
    until our non-normalized importance weights reach a certain threshold,
  • 5:23 - 5:26
    and then let's stop sampling.
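
A rough sketch of that heuristic, assuming hypothetical sample_particle() and importance_weight() helpers; the threshold and the cap on the number of particles are made-up illustration values:

    def adaptive_resample(sample_particle, importance_weight,
                          weight_threshold=10.0, max_particles=10000):
        # Keep drawing particles until the sum of the non-normalized
        # importance weights reaches a threshold, then stop sampling.
        particles, weights = [], []
        total = 0.0
        while total < weight_threshold and len(particles) < max_particles:
            particle = sample_particle()           # draw from the motion model / proposal
            weight = importance_weight(particle)   # non-normalized weight from the sensor model
            particles.append(particle)
            weights.append(weight)
            total += weight
        return particles, weights
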
  • 5:26 - 5:29
    Now, truth be told, many of the systems we're dealing with are real-time systems,
  • 5:29 - 5:33
    and you can't really afford using many particles sometimes and few other times,
  • 5:33 - 5:36
    because there's a fixed amount of time in which you want to do your estimates.
  • 5:36 - 5:41
    Then yes, it's more tricky, but in principle using few particles when you track well
  • 5:41 - 5:43
    is a very viable solution.
  • 5:43 - 5:47
    Thank you. Taka also had a question.
  • 5:47 - 5:51
    He wants to know how to distinguish between moving landmarks and non-moving landmarks.
  • 5:51 - 5:56
    The maps that we dealt with in class were all static, and the landmarks were fixed.
  • 5:56 - 5:59
    How does the Google car distinguish between these moving vehicles,
  • 5:59 - 6:02
    moving people, and these static landmarks?
  • 6:02 - 6:04
    That's a wonderful question.
  • 6:04 - 6:07
    First of all, the Google car assumes that the ground map,
  • 6:07 - 6:11
    like the surface map of the street, is basically fixed.
  • 6:11 - 6:15
    If the lane markers moved a little bit, then the Google car would probably get confused--
  • 6:15 - 6:20
    a little secret here--so please don't repaint the lane markers when the Google car comes by.
  • 6:20 - 6:23
    That's treated differently than stuff that sticks out of the ground,
  • 6:23 - 6:25
    and even that is used as a landmark.
  • 6:25 - 6:32
    What we do is we have a probabilistic threshold that says what's the chance of this thing being mobile or not?
  • 6:32 - 6:36
    We do this by establishing correspondence, which means we take measurements a time step earlier
  • 6:36 - 6:39
    and measurements a time step later, find the most likely
  • 6:39 - 6:42
    correspondence between these two sets of measurements,
  • 6:42 - 6:44
    and then see if there's a motion vector in between.
  • 6:44 - 6:48
    Sometimes we can explain it away by just noise, but for cars and people and so on,
  • 6:48 - 6:51
    there is very often a very clear and strong motion
  • 6:51 - 6:54
    in which case we assume this thing is moving and we track it.
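
A toy sketch of that test, assuming the measurements are 2D points and using simple nearest-neighbor correspondence between the two time steps; the noise threshold is an invented illustration value:

    import math

    def classify_moving(prev_points, curr_points, noise_threshold=0.5):
        # Match each current measurement to its nearest neighbor from the
        # previous time step, then call it "moving" if the implied motion
        # vector is larger than sensor noise alone would explain.
        labels = []
        for (cx, cy) in curr_points:
            px, py = min(prev_points,
                         key=lambda q: (q[0] - cx) ** 2 + (q[1] - cy) ** 2)
            motion = math.sqrt((cx - px) ** 2 + (cy - py) ** 2)
            labels.append(motion > noise_threshold)   # True means likely moving
        return labels
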
  • 6:54 - 6:59
    It also turns out that the way we build our prior maps is sometimes we drive by multiple times.
  • 6:59 - 7:03
    We do differencing, and then our maps mostly capture the static things.
  • 7:03 - 7:07
    We happen to know that in the middle of a street, there tend to be no static things.
  • 7:07 - 7:09
    There tends to be just moving things.
  • 7:09 - 7:12
    You can bias the estimate toward saying, well, in the middle of the street
  • 7:12 - 7:15
    what we'll see is likely not static.
  • 7:15 - 7:18
    Putting this all together gives us a fairly good tracker.
  • 7:18 - 7:23
    Thanks a lot. That's all we had for this time. We'll see you all next week.
  • 7:23 - 7:25
    But I want to make a comment.
  • 7:25 - 7:30
    I hear that many people responded really positively to the homework assignment on particle filters,
  • 7:30 - 7:33
    which is great. It took me a while to make it.
  • 7:33 - 7:37
    On the discussion forum there is a posting of a graphical version of it,
  • 7:37 - 7:40
    and I downloaded it. It looks really great.
  • 7:40 - 7:45
    I couldn't get the cursor keys to work on my computer, but it's a really great basis to visualize.
  • 7:45 - 7:49
    I think particle filters are really hard to understand without visualization,
  • 7:49 - 7:52
    so I'm really sorry that our current programming environment doesn't provide visualization.
  • 7:52 - 7:56
    I hope in the next round, we'll fix that. I'm pretty sure we'll fix this.
  • 7:56 - 7:58
    So please play with it.
  • 7:58 - 8:02
    The other thing I noticed on the discussion boards and in the Facebook group--and I wanted to just call for feedback--
  • 8:02 - 8:05
    is that people want harder homework assignments.
  • 8:05 - 8:11
    I'm not sure that's true universally, so if you have an opinion, why don't you just post it.
  • 8:11 - 8:13
    I'd like to get a sense of it.
  • 8:13 - 8:17
    I'm thinking of making an assignment where we toss everything together
  • 8:17 - 8:20
    and really build like a mini version of an actual car.
  • 8:20 - 8:26
    That's going to be really involved, so let me know how you feel about it.
  • 8:26 - 8:29
    This course right now, I would argue, is really Stanford caliber.
  • 8:29 - 8:32
    What you guys are doing, and you girls are doing, is really
  • 8:32 - 8:36
    at a quality that I would expect from my best Stanford students.
  • 8:36 - 8:42
    The type of things you implement, certainly, is at the same pace I would teach at Stanford
  • 8:42 - 8:43
    and possibly faster.
  • 8:43 - 8:47
    But if I go to a general build-a-robot example, then I'm going to go beyond the Stanford pace.
  • 8:47 - 8:49
    It's up to you. Let us know.
  • 8:49 - 8:53
    All right. Keep us posted. I'll be reading the forums. I'm sure Sebastian will too.
  • 8:53 - 8:55
    Thank you very much and see you next week.
  • 8:55 -
    All right.