## 29-04 Mean_MLE



Showing Revision 2 created 11/02/2015 by Udacity Robot.

1. It's proof time again! 3 AM in the recording studio. And as always, this is entirely optional.
2. This goes way beyond what will ever be taught in an introduction to statistics class.
3. It's only for those who love hard-to-prove challenges.
4. And as previously, we're going to prove the correctness of
5. the maximum likelihood estimator, or MLE, but this time for Gaussians.
6. Our Xi's can now be arbitrary values, not just 0s or 1s.
7. And the mean and the variance are computed using these two now-familiar formulas.
8. How can we prove this to be correct? Again, last chance to skip.
9. As before, the proof is on the left side, with a number of blanks, five in total.
10. And there are seven expressions that you can put in,
11. and the way you put them in is: if, for example, the seventh expression
12. in your opinion fits over here, you put the number 7 in here.
13. You only use each expression on the right side once,
14. and there are obviously more expressions on the right side than there are holes on the left side.
15. Let me quickly go over the proof.
16. In maximizing the probability of the data, we first state the probability of these data items
17. and then expand it over here using a normal distribution with parameters µ and σ².
18. But that left something unknown on the left that you have to pick from the right side.
19. Then we take the logarithm of this, and here's the logarithm with two gaps inside.
20. We take the first derivative of the logarithm that you have to compute
21. by picking the appropriate term on the right
22. and set it to 0, which is where the maximum is obtained.
23. From there, we can transform the expression into this expression over here, where I left out a term,
24. and it finally gives you the desired proof for the maximum likelihood estimator for the mean.
25. Good luck!
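The derivation walked through above can be written out in full. This is the standard argument (notation assumes n samples x₁, …, xₙ; it is a sketch of the steps the video describes, not the exact on-screen layout):

```latex
% Log-likelihood of n samples under a normal distribution N(mu, sigma^2)
\log \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}}
     \exp\!\left(-\frac{(x_i-\mu)^2}{2\sigma^2}\right)
  = -\frac{n}{2}\log\!\left(2\pi\sigma^2\right)
    - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2

% Differentiate with respect to mu and set to zero (the maximum)
\frac{\partial}{\partial \mu}\,\log L
  = \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i-\mu) = 0
\quad\Longrightarrow\quad
\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i
```

Setting the derivative to zero and solving gives exactly the familiar sample-mean formula, which is the claim being proved.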
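As a numeric sanity check of the result, a short script can confirm that the sample mean really does maximize the Gaussian log-likelihood: the likelihood at the sample mean beats the likelihood at nearby candidate means. This is a sketch using only the standard library; the function name and the simulated data are illustrative, not from the video.

```python
import math
import random

def gaussian_log_likelihood(data, mu, sigma2):
    """Log-likelihood of the data under a Normal(mu, sigma2) model."""
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma2))

# Simulate data from a known Gaussian (illustrative parameters).
random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(1000)]

# The MLE from the proof: the plain sample mean and sample variance.
mle_mean = sum(data) / len(data)
sigma2 = sum((x - mle_mean) ** 2 for x in data) / len(data)

# The log-likelihood at the sample mean should beat nearby candidates.
for offset in (-0.5, -0.1, 0.1, 0.5):
    assert (gaussian_log_likelihood(data, mle_mean, sigma2)
            > gaussian_log_likelihood(data, mle_mean + offset, sigma2))
```

Because the log-likelihood is a downward-opening parabola in µ, any shift away from the sample mean strictly lowers it, which is what the assertions verify.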