0:00:00.000,0:00:05.000
In Kalman filters we iterate between measurement and motion.
0:00:05.000,0:00:08.000
This is often called a "measurement update,"
0:00:08.000,0:00:10.000
and this is often called "prediction."
0:00:10.000,0:00:17.000
In this update we'll use Bayes' rule, which is nothing else but a product, or a multiplication.
0:00:17.000,0:00:24.000
In this update we'll use total probability, which is a convolution,
0:00:24.000,0:00:27.000
or simply an addition.
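As a rough sketch of these two steps for 1-D Gaussians (the function names `update` and `predict` are my own, not from the lecture): multiplying two Gaussians gives a new mean weighted by the variances, and convolving two Gaussians simply adds their means and variances.

```python
def update(mean1, var1, mean2, var2):
    """Measurement update: Bayes' rule, i.e. the product of two Gaussians."""
    # The new mean is a variance-weighted average; the new variance shrinks.
    new_mean = (var2 * mean1 + var1 * mean2) / (var1 + var2)
    new_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return new_mean, new_var

def predict(mean1, var1, mean2, var2):
    """Prediction: total probability, i.e. a convolution of Gaussians."""
    # For Gaussians the convolution reduces to adding means and variances.
    return mean1 + mean2, var1 + var2
```

Note that `update` always produces a smaller variance than either input (we gain information), while `predict` grows the variance (motion adds uncertainty).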
0:00:27.000,0:00:35.000
Let's talk first about the measurement cycle and then the prediction cycle,
0:00:35.000,0:00:44.000
using our great, great, great Gaussians for implementing those steps.
0:00:44.000,0:00:47.000
Suppose you're localizing another vehicle,
0:00:47.000,0:00:50.000
and you have a prior distribution that looks as follows.
0:00:50.000,0:00:55.000
It's a very wide Gaussian with the mean over here.
0:00:55.000,0:00:58.000
Now, say we get a measurement that tells us something about
0:00:58.000,0:01:03.000
the localization of the vehicle, and it comes in like this.
0:01:03.000,0:01:07.000
It has a mean over here called "mu,"
0:01:07.000,0:01:11.000
and in this example the measurement has a much smaller covariance.
0:01:11.000,0:01:16.000
This is an example where in our prior we were fairly uncertain about a location,
0:01:16.000,0:01:21.000
but the measurement told us quite a bit as to where the vehicle is.
0:01:21.000,0:01:23.000
Here's a quiz for you.
0:01:23.000,0:01:36.000
Will the new mean of the subsequent Gaussian be over here, over here, or over here?
0:01:36.000,9:59:59.000
Check one of these three boxes.