Let's learn a little bit about the law of large numbers, which is, on many levels, one of the most intuitive laws in mathematics and in probability theory. But because it's so applicable to so many things, it's often misused, or sometimes slightly misunderstood.

So, just to be a little bit formal in our mathematics, let me define it for you first, and then we'll talk a little bit about the intuition. Let's say I have a random variable, X, and we know its expected value, or its population mean. The law of large numbers just says that if we take a sample of n observations of our random variable and average all of those observations, we get another variable; let's call it x sub n with a line on top of it. This is the mean of n observations of our random variable. So literally, this is my first observation: you can say I run the experiment once and I get this observation, then I run it again and get that observation, and I keep running it n times. Then I divide by my number of observations. So this is my sample mean, the mean of all the observations I've made.

The law of large numbers just tells us that my sample mean will approach the expected value of the random variable. Or I could also write it as: my sample mean will approach my population mean as n approaches infinity. I'll be a little informal here about what "approach" or "convergence" actually means.
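Written out compactly (this is just a restatement of the definition above, using x̄_n for the sample mean and μ for the population mean):

```latex
\bar{x}_n = \frac{x_1 + x_2 + \cdots + x_n}{n},
\qquad
\bar{x}_n \;\longrightarrow\; E[X] = \mu
\quad \text{as } n \to \infty .
```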
But I think you have the general intuitive sense that if I take a large enough sample here, I'm going to end up getting the expected value of the population as a whole. And I think to a lot of us that's kind of intuitive: if I do enough trials, then over large samples the trials would give me roughly the numbers I would expect, given the expected value and the probability and all that. But I think it's often a little bit misunderstood in terms of why that happens. Before I go into that, let me give you a particular example.

Let's say I have a random variable X equal to the number of heads after 100 tosses (or flips) of a fair coin. First of all, we know what the expected value of this random variable is. It's the number of tosses, the number of trials, times the probability of success on any trial: 100 times 0.5, so that's equal to 50.

So the law of large numbers just says: suppose I average a sample of a bunch of these trials. The first time I run the trial I flip 100 coins, or I have 100 coins in a shoe box, I shake the shoe box, I count the number of heads, and I get 55. So that would be x1. Then I shake the box again and I get 65. Then I shake the box again and I get 45. And I do this n times, and then I divide by the number of times I did it.
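As a quick illustration, here is a minimal sketch (my own, not from the lesson; it assumes NumPy is available, and the variable names are made up) that simulates this shoe-box experiment many times and checks that the sample mean lands near 50:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

num_trials = 10_000       # how many times we "shake the shoe box"
flips_per_trial = 100     # 100 fair coin flips per trial

# Each entry is one observation of X: the number of heads in 100 flips.
heads_per_trial = rng.binomial(n=flips_per_trial, p=0.5, size=num_trials)

print("Expected value of X:", flips_per_trial * 0.5)          # 50.0
print("Sample mean of the trials:", heads_per_trial.mean())   # close to 50
```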
The law of large numbers just tells us that this average, the average of all of my observations, is going to converge to 50 as n approaches infinity.

And I want to talk a little bit about why this happens, or intuitively why this is. A lot of people feel that, oh, this means that if after 100 trials I'm above the average, then somehow the laws of probability are going to give me more heads or fewer heads to make up the difference. That's not quite what's going to happen. That's often called the gambler's fallacy. Let me differentiate, and I'll use this example.

So let me make a graph, and I'll switch colors. My x-axis is n, the number of trials I take. And my y-axis, let me make that the sample mean. We know the expected value of this random variable is 50, so let me draw that here: this is 50.

So, just going back to the example I did. So when n is equal to... let me just [INAUDIBLE] here. On my first trial I got 55, so that was my average; I only had one data point. Then after two trials, let's see, I have 65. So my average is going to be (65 plus 55) divided by 2, which is 60. So then my average went up a little bit.
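This running average plotted against n is exactly the picture to keep in mind, and it's easy to generate yourself. Here is a small sketch of that plot (my own, not from the lesson; it assumes NumPy and Matplotlib and simulates a much longer run than the three trials walked through here):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)

num_trials = 500
heads_per_trial = rng.binomial(n=100, p=0.5, size=num_trials)

# Running sample mean after 1, 2, ..., num_trials trials.
running_mean = np.cumsum(heads_per_trial) / np.arange(1, num_trials + 1)

plt.plot(running_mean, label="sample mean after n trials")
plt.axhline(50, color="red", linestyle="--", label="expected value = 50")
plt.xlabel("n (number of trials)")
plt.ylabel("sample mean")
plt.legend()
plt.show()
```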
Then I had a 45, which will bring my average down a little bit. I won't plot the 45 itself here; now I have to average all of these out. Let me actually just get the number, just so you get the point: 55 plus 65 is 120, plus 45 is 165, divided by 3. And 165 divided by 3 is 55. So the average goes back down to 55. And we could keep doing these trials.

So after we've done three trials, our average is there. A lot of people think that somehow the gods of probability are going to make it more likely that we get fewer heads in the future, that somehow the next couple of trials are going to have to be down here in order to bring our average down. And that's not necessarily the case. Going forward, the probabilities are always the same: the probability is always 50% that I'm going to get heads. It's not like, if I had a bunch of heads to start off with, or more than I would have expected to start off with, that all of a sudden things would be made up and I would get more tails. That would be the gambler's fallacy: the idea that if you have a long streak of heads, or a disproportionate number of heads, then at some point you have a higher likelihood of getting a disproportionate number of tails. And that's not quite true.
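You can check this empirically. The sketch below (again my own, assuming NumPy) simulates a long sequence of fair flips, finds every spot where five heads in a row just occurred, and looks at the very next flip; the fraction of heads after such streaks stays close to 0.5 rather than dipping to "make up" for the streak:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

flips = rng.integers(0, 2, size=1_000_000)   # 1 = heads, 0 = tails

streak = 5
next_flips = []
for i in range(streak, len(flips)):
    # Were the previous `streak` flips all heads?
    if flips[i - streak:i].all():
        next_flips.append(flips[i])

print("Flips following", streak, "heads in a row:", len(next_flips))
print("Fraction of heads on the next flip:", np.mean(next_flips))  # about 0.5
```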
What the law of large numbers tells us is that it doesn't care what happened over any finite stretch. Let's say that after some finite number of trials your average is actually up here, at 70; there's a low probability of this happening, but it's possible. You're like, wow, we really diverged a good bit from the expected value. But the law of large numbers says: I don't care how many trials this is. We have an infinite number of trials left, and the expected value for that infinite number of trials, especially in this type of situation, is going to be this. So when you combine a finite stretch that averages out to some high number with an essentially infinite stretch that's going to converge to this, then over time you converge back to the expected value. That was a very informal way of describing it, but that's what the law of large numbers tells you. And it's an important thing. It's not telling you that if you get a bunch of heads, the probability of getting tails is somehow going to increase to make up for the heads. What it's telling you is that no matter what happened over a finite number of trials, no matter what the average is over a finite number of trials, you have an infinite number of trials left; and if you do enough of them, the average is going to converge back to your expected value. And this is an important thing to think about.
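Here is the arithmetic behind that informal argument (a heuristic, taking for granted that the remaining trials behave typically and average out near 50, which is what the law itself guarantees). If the first k trials averaged 70, then after n total trials the sample mean is a weighted combination of the early stretch and everything after it:

```latex
\bar{x}_n
\;=\; \frac{k}{n}\cdot 70 \;+\; \frac{n-k}{n}\cdot(\text{average of the remaining } n-k \text{ trials})
\;\approx\; \frac{k}{n}\cdot 70 \;+\; \frac{n-k}{n}\cdot 50
\;\longrightarrow\; 50 \quad \text{as } n \to \infty ,
```

because k/n goes to 0 while (n − k)/n goes to 1. Nothing about the later flips compensates for the early ones; the early deviation simply gets diluted.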
And this is used in practice every day by the lottery and by casinos, because they know that with large enough samples (and we could even calculate, for large enough samples, the probability that things deviate significantly) the outcomes become very predictable. Casinos and the lottery operate every day on this principle: if you take enough people, then sure, in the short term, or with a few samples, a couple of people might beat the house; but over the long term the house is always going to win, because of the parameters of the games they're making you play.

Anyway, this is an important thing in probability, and I think it's fairly intuitive, although sometimes when you see it formally explained like this, with the random variables and so on, it's a little bit confusing. All it's saying is that as you take more and more samples, the average of the sample is going to approximate the true average. Or, I should be a little bit more particular: the mean of your sample is going to converge to the true mean of the population, or to the expected value of the random variable. Anyway, see you in the next video.