## 06-38 Implementing SLAM

Showing Revision 1 created 06/29/2012 by Amara Bot.

1. So now we've learned all about Linear GraphSLAM,
2. and that's quite a bit--and it's really simple.
3. Every time there's a constraint--
4. Initial Position, Motion or Measurement--
5. we take this constraint and add something to Omega, Xi.
6. And what we add is the constraint itself,
7. but multiplied by a strength factor.
8. There's nothing else but 1 over sigma--
9. the uncertainty in Motion or in Measurements.
10. And then when we're done with this adding--
11. we simply compute mu = Omega^-1 * Xi,
12. and out comes our best possible PATH,
13. along with the MAP of all the landmarks.
14. Isn't that something? Isn't that really cool?
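The update just described can be sketched in a few lines of Python. This is a 1-D toy with illustrative names (`add_constraint` is my own helper, not the course's starter code): every constraint adds 1/sigma terms to Omega and Xi, and the best estimate falls out of mu = Omega^-1 * Xi.

```python
import numpy as np

def add_constraint(Omega, Xi, i, j, d, sigma):
    """Constrain x_j - x_i = d, weighted by a strength factor 1/sigma."""
    w = 1.0 / sigma
    Omega[i, i] += w
    Omega[j, j] += w
    Omega[i, j] -= w
    Omega[j, i] -= w
    Xi[i] -= w * d
    Xi[j] += w * d

# Two poses and one landmark: indices 0 and 1 are poses, index 2 is the landmark.
Omega = np.zeros((3, 3))
Xi = np.zeros(3)
Omega[0, 0] += 1.0                          # initial-position constraint: x0 = 0
add_constraint(Omega, Xi, 0, 1, 5.0, 1.0)   # motion: x1 - x0 = 5
add_constraint(Omega, Xi, 0, 2, 9.0, 1.0)   # measurement: L - x0 = 9
add_constraint(Omega, Xi, 1, 2, 4.0, 1.0)   # measurement: L - x1 = 4
mu = np.linalg.inv(Omega) @ Xi              # best path and map: [0, 5, 9]
```

Because the three constraints here are mutually consistent, the solve recovers them exactly; with noisy, conflicting constraints, mu is the weighted compromise.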
15. So let's dive in and have you program your own real robot example.
16. This is a fairly complicated generalization of what we just saw.
17. I'm giving you an environment where you can specify
18. the number of landmarks that exist,
19. the number of time steps you want the robot to run,
20. the world_size, the measurement_range--that is
21. the range at which a robot might be able to see a landmark--
22. if it's further away than this--it just won't see it;
23. a motion_noise, a measurement_noise,
24. and a distance parameter.
25. The distance specifies how fast a robot moves in each step.
26. And then I'm giving you a routine which makes the data.
27. It takes all these parameters and it outputs a data field
28. that contains a sequence of motions and a sequence of measurements.
29. The code comments on the exact format of what data looks like.
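As a rough illustration of that layout (the authoritative description is in the provided code's comments, and the numbers below are made up): each entry of `data` pairs the measurements taken at a time step with the motion executed afterward.

```python
# Hypothetical data layout: data[t] = [measurements, motion], where
# measurements is a list of [landmark_index, dx, dy] for every landmark
# within measurement_range, and motion is the [dx, dy] the robot then drove.
data = [
    [[[0, 2.1, -3.4], [3, -1.0, 0.7]],  # measurements at time 0
     [1.0, 2.0]],                        # motion after time 0
    [[[0, 1.2, -5.3]],                   # measurements at time 1 (landmark 3 out of range)
     [1.0, 2.0]],                        # motion after time 1
]
measurements, motion = data[0]
```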
30. Now I want you to program the function, SLAM,
31. that inputs the data and various important parameters
32. and it outputs my result--a sequence of estimated poses,
33. the robot PATH, and estimated landmark positions.
34. This is really challenging to program.
35. It's based on the math I just gave you.
36. The robot coordinates are now x and y coordinates.
37. The measurements are differences in x and y--
38. so you have to duplicate things for x and things for y.
39. I, myself, put them all into one big matrix,
40. but you could have them in 2 separate matrices, if you so wish.
41. You have to apply everything we learned so far,
42. including the weights of 1 over measurement_noise
43. and 1 over motion_noise.
44. These happen to be equivalent, in this case--but they might be different.
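The duplication for x and y in one big matrix might be organized like this. The interleaved indexing and the helper name `add_2d_constraint` are my own choices for the sketch, not the course's code; two separate matrices would work just as well.

```python
import numpy as np

# Pose t occupies rows/columns 2*t (x) and 2*t + 1 (y); landmark j occupies
# rows/columns 2*(N + j) and 2*(N + j) + 1, for N poses in total.
N, num_landmarks = 3, 2
dim = 2 * (N + num_landmarks)
Omega = np.zeros((dim, dim))
Xi = np.zeros(dim)

def add_2d_constraint(Omega, Xi, i, j, dx, dy, noise):
    """Constrain (x_j - x_i, y_j - y_i) = (dx, dy), weighted by 1/noise."""
    w = 1.0 / noise
    for b, d in [(0, dx), (1, dy)]:     # b = 0 handles x, b = 1 handles y
        Omega[2*i + b, 2*i + b] += w
        Omega[2*j + b, 2*j + b] += w
        Omega[2*i + b, 2*j + b] -= w
        Omega[2*j + b, 2*i + b] -= w
        Xi[2*i + b] -= w * d
        Xi[2*j + b] += w * d

add_2d_constraint(Omega, Xi, 0, 1, 1.0, 2.0, 1.0)       # motion, pose 0 -> pose 1
add_2d_constraint(Omega, Xi, 0, N + 0, 2.1, -3.4, 1.0)  # measurement of landmark 0
```

The same helper serves both motions and measurements; only the noise parameter (and hence the weight) differs between the two constraint types.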
45. And then you have to run SLAM
46. and return back to me a result data structure.
47. I'm also supplying you with the print_result routine
48. so you can go in and see what the result has to look like.
49. There's an example routine--that doesn't work--
50. that outputs all the correct formats,
51. but it doesn't implement the estimation that I want you to implement.
52. You have to bring this to life
53. and turn this into an amazing SLAM routine
54. so that when you run it, you get the same results that I do
55. for the examples here,
56. where there's an estimated PATH
57. and estimated landmark positions.
58. There's one last thing I want you to know:
59. I assume the initial robot position
60. is going to be in the center of the world.
61. So with a world_size of 100,
62. it's going to be (50, 50)--or here it's printed as 49.999,
63. but this is effectively the same as 50.
64. So you have to put in a constraint
65. that sets the initial robot pose
66. to the center of the world.
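That anchoring constraint might be set up as follows. This is a sketch under the indexing assumption above (pose 0's x and y live in rows 0 and 1); the sizes are illustrative.

```python
import numpy as np

world_size = 100.0
N, num_landmarks = 20, 5
dim = 2 * (N + num_landmarks)
Omega = np.zeros((dim, dim))
Xi = np.zeros(dim)

# Initial-position constraint pinning pose 0 to the center of the world,
# i.e. x0 = world_size / 2 and y0 = world_size / 2.
Omega[0, 0] += 1.0
Omega[1, 1] += 1.0
Xi[0] += world_size / 2.0
Xi[1] += world_size / 2.0
```

Without this constraint the system is underdetermined: motions and measurements only fix relative positions, so the whole path and map could slide anywhere in the world.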