← Office Hours Week 2 - Artificial Intelligence for Robotics


Showing Revision 3 created 05/25/2016 by Udacity Robot.

  1. Welcome to the second office hours. I have Sebastian here and myself.
  2. We're going to do this the same way we did last time.
  3. First we're going to talk about some content,
  4. and then we're going to go into some applications,
  5. referring mostly to Kalman filters but also talking a little bit about Stanley and Junior.
  6. Okay, what's the first question?
  7. The first question--both of the content questions actually have to do with linear algebra.
  8. When we were talking about this state transition matrix f,
  9. we have this equation x' equals f times x plus U.
  10. You said that this U is the motion,
  11. but the motion seemed to be embedded in this f matrix in that
  12. the velocity was taken from the state with the f matrix.
  13. What exactly is going on with U?
  14. U allows you to apply a choice. When we just said
  15. the position is a function of velocity, that's describing physics,
  16. but it doesn't give you a choice.
  17. When you want to insert a choice into the system--for example,
  18. you might want to change acceleration, and that affects velocity and affects the state--
  19. that choice is expressed with the vector U.
  20. We didn't really use it because we were tracking things.
  21. We didn't know how they were being actuated, so we set U to zero in our example.
  22. But if you have control over the system you're trying to track
  23. and you can insert like a motion yourself, then you use the vector U.
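The role of the vector U in the prediction equation can be sketched as follows (a minimal illustration in NumPy rather than the course's own matrix class; the time step and control values are invented):

```python
import numpy as np

# Prediction step x' = F x + u for a state [position, velocity].
# F encodes the physics (position advances by velocity * dt);
# u injects a known control choice, e.g. a commanded velocity change.
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])

x = np.array([[0.0],    # position
              [1.0]])   # velocity

u_zero = np.array([[0.0], [0.0]])   # tracking: no known actuation
u_ctrl = np.array([[0.0], [0.5]])   # our own choice: speed up by 0.5

x_tracked = F @ x + u_zero   # position advances, velocity unchanged
x_steered = F @ x + u_ctrl   # position advances, velocity raised by 0.5
```

Setting u to zero reproduces what the class did for tracking; a nonzero u models actuation you yourself control.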
  24. Interesting--just one other piece of intuition we're hoping to get is
  25. a lot of the matrices we talked about either didn't have off-diagonal elements
  26. or we at least didn't initialize any off-diagonal elements.
  27. Specifically, in the covariance matrix and the r matrix,
  28. can you give us some intuition about what those off-diagonal elements would represent?
  29. We actually ran into an example that did have off-diagonal elements.
  30. We didn't initialize it, but then the velocity became correlated with the position,
  31. and we realized the faster we moved, the further we were to the right.
  32. That was expressed by an off-diagonal element.
  33. They turn into correlations, so the larger these elements are,
  34. the more the two variables are actually correlated.
  35. The more you know about one variable, the more it tells you about the other.
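How those off-diagonal terms appear can be seen in a tiny sketch of the covariance prediction P' = F P Fᵀ (the numbers are illustrative, not from the class):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])

# Start with independent position and velocity: zero off-diagonals.
P = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# One prediction step, P' = F P F^T, couples them:
P = F @ P @ F.T
# P is now [[2., 1.],
#           [1., 1.]] -- the off-diagonal 1.0 says position and
# velocity have become correlated through the motion model.
```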
  36. Now, let's get into some of the applications that the students seem especially interested in.
  37. Specifically, this r matrix. Where does it come from? How do we get it in real life?
  38. What are we doing with our sensors?
  39. So the noise matrices express how noisy your sensor is,
  40. and at first approximation you'd say let's just measure
  41. what the variation of the measurement is and then plug it in.
  42. But because these filters, a very subtle thing, assume conditional independence,
  43. they assume that noise is independent from one time step to the next, whereas in reality it isn't.
  44. Typically you start with a very large value, and you look at the result,
  45. and if the result looks good to you, you leave it that way.
  46. Unfortunately, there is no good science for it.
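What tuning that noise value does can be seen in one dimension with a direct measurement, where the Kalman gain is K = P / (P + R) (a toy sketch of the trade-off, not the tuning procedure used on the cars):

```python
# 1-D Kalman gain with a direct measurement (H = 1): K = P / (P + R).
# K near 1 means the update trusts the sensor; K near 0 means it
# mostly keeps the prediction.
def kalman_gain(P, R):
    return P / (P + R)

P = 4.0                          # current state variance
small_R = kalman_gain(P, 0.1)    # ~0.98: low noise value, trust the sensor
large_R = kalman_gain(P, 100.0)  # ~0.04: large noise value, lean on the prediction
```

Starting with a very large R, as described above, makes the filter conservative; you then shrink it until the result looks good.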
  47. A while back, together with Andrew Ng, I published a paper
  48. on how to learn the noise matrix, but that goes way beyond this class.
  49. You mentioned that Stanley and Junior both use laser and radar.
  50. Does each one offer something unique, or are they both just doing the same thing?
  51. Either way, how do we incorporate measurements from 2 sources into our Kalman filter?
  52. These are two questions.
  53. First of all, laser and radar at first approximation do the same thing.
  54. They measure how far things are away.
  55. Radar also measures how fast they're moving via the Doppler effect,
  56. but their characteristics are very different. They measure different things.
  57. There are certain things that can only be measured by laser, others just by radar.
  58. In fact, laser tends to have much higher spatial resolution,
  59. but when it becomes foggy,
  60. the wavelength of light tends not to be as good as a radar wavelength.
  61. So they're somewhat complementary.
  62. To incorporate both, Bayes rule allows you to incorporate sensor measurements one after another.
  63. If you have a laser and a radar, you just multiply both of them in, and that's just fine.
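Incorporating the two sensors one after another can be sketched with 1-D Gaussian measurement updates (the range readings and noise values below are invented; real laser and radar models are richer):

```python
# Bayes rule applied twice: each 1-D Gaussian measurement update
# multiplies one sensor in and shrinks the variance.
def update(mean, var, z, r):
    new_mean = (var * z + r * mean) / (var + r)
    new_var = 1.0 / (1.0 / var + 1.0 / r)
    return new_mean, new_var

mean, var = 10.0, 4.0                      # prior belief about a range
mean, var = update(mean, var, 12.0, 1.0)   # laser: precise distance
mean, var = update(mean, var, 11.0, 9.0)   # radar: noisier distance
# var has now shrunk below what either sensor achieves alone:
# the fused estimate is more confident than either measurement.
```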
  64. Okay, great.
  65. The top-rated question, actually, was about
  66. the programming languages used in Stanley and Junior.
  67. Was it Python? Was it something else? >>It was not Python, honestly. >>Okay.
  68. At the time, we started out with C, but then C++ became very popular.
  69. And it was the better choice, so almost all the code is written in C++.
  70. How do you make that decision when you're starting a project this big? Is there debate?
  71. There is always debate.
  72. The beauty of C++ is that it is very efficient in execution,
  73. and when you use it right and don't abuse it, it can be very powerful.
  74. It has way too many things built in, but some of them, like the classes, are just really, really good.
  75. Then you hire people, and you work with students, and some of them like Java,
  76. so they write their code in Java, and others like C++ or Python or Ruby on Rails.
  77. Then you just bring all the stuff together.
  78. Okay--thinking of Stanley and Junior,
  79. what were the major hardware and software differences between the two vehicles?
  80. Stanley had about 6 embedded processors.
  81. Junior had 2 PCs with quad cores so there was a more integrated system.
  82. The biggest difference in the hardware was really the sensors.
  83. Stanley had very, very cheap and simple laser-range finders as the main sensor.
  84. We did have radars. We didn't use them much.
  85. Whereas Junior had a much wider-ranging sensor suite.
  86. Junior could look in all directions--we call this "surround sensing"--
  87. whereas Stanley could only look straight ahead.
  88. So that thing we saw spinning on the top of Stanley, that was the laser-range finder?
  89. What you saw spinning on the top of Stanley was actually on Junior.
  90. That was on Junior--okay.
  91. Yeah, you never saw anything on Stanley, because there's nothing spinning there.
  92. That was on Junior, and that was a laser-range finder.
  93. Right, and that one spins because it looks in all directions.
  94. That was important for city driving because what's behind you actually matters in cities
  95. whereas if you drive in desert terrain there is no traffic. You don't have to look back.
  96. That's a good segue into the next question, which is what's the next big challenge?
  97. We've done deserts. We've done urban driving.
  98. The next challenge is to take this over to our cars.
  99. Basically get this technology into every single car and make sure driving is safe.
  100. Every person has a special button--like I explained, my little chauffeur button.
  101. I want to just drive automatically,
  102. and then I'm just going to get home without having to pay attention.
  103. Finally, how comfortable is it?
  104. Do these cars drive similar to the way you or I would drive?
  105. When we first started out, I would say the driving is effective but not elegant.
  106. You would get in the car, and you'd know exactly what I mean.
  107. The steering wheel would go like this all the time, and it would make a lot of noise.
  108. It was pretty clear you were inside a robot.
  109. On the outside it looked pretty great, but on the inside it didn't.
  110. But as things moved on, if you get into a Google car right now,
  111. you won't be able to distinguish it from a human driver. It's really rock solid.
  112. The steering wheel stays like this, but when it turns, it confidently drags it around,
  113. moves in the right direction, comes back. It's actually come a long way.
  114. To get from what you said to where we are now--was it low-pass filtering?
  115. We will have a class on control,
  116. and the control techniques ended up being very sophisticated but also very, very good.
  117. All the motions of the steering wheel were related to inaccuracies.
  118. They came from multiple sources.
  119. Some of it was that we weren't yet processing our GPS data and our map data well enough.
  120. Some of it was the map resolution--like if you have a 10-15 cm grid cell
  121. and your estimate jumps from 1 grid cell to the next or your particle filter jumps a little bit around.
  122. They might not look dramatic on a screen, but if you turn that into steering motion
  123. in your steering wheel, it goes by 2 or 3 or 4 degrees. That's really bad.
  124. We had a sensor that tracked the angle of the steering wheel, and its resolution was about a degree.
  125. That means you couldn't quite know what the angle was,
  126. so you would drive a little blind--like up to a degree.
  127. A degree of steering wheel doesn't sound like much, but it's actually a lot.
  128. You can try this out.
  129. If you drive a car and you only move it by a degree, you feel a noticeable effect.
  130. You'd find out after a while we're actually pulling in this direction. Let's drag it back.
  131. And all that stuff we kind of fixed.
  132. So we'll learn about that in, I think, Unit 5? PID controllers? >>Yep.
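As a preview of the PID controllers mentioned here, a minimal sketch (the gains are arbitrary; the actual controllers on the cars were considerably more sophisticated):

```python
# Minimal PID controller: output = Kp*e + Ki*sum(e*dt) + Kd*de/dt.
# The proportional term reacts to the current error, the integral
# term removes steady drift (like a car pulling to one side), and
# the derivative term damps overshoot.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt=1.0):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.2, ki=0.01, kd=0.5)
correction = pid.step(1.0)   # first step on an error of 1.0
```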
  133. Excellent. I can't wait. >> All right. >>Thanks a lot. >>All right. Take care.