std error or std deviation?
stevee-bee commented
Course: Statistical Inference
Lesson: Asymptotics
In the dice-rolls section, the lesson says we divide "by the standard error":
|================ | 22%
| To show the CLT in action consider this figure from the slides. It presents 3
| histograms of 1000 averages of dice rolls with sample sizes of 10, 20 and 30
| respectively. Each average of n dice rolls (n=10,20,30) has been normalized by
| subtracting off the mean (3.5) then dividing by the standard error, sqrt(2.92/n).
| The normalization has made each histogram look like a standard normal, i.e., mean
| 0 and standard deviation 1.
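The normalization the lesson describes can be checked with a quick simulation. This is a minimal sketch in Python/numpy rather than the course's R (the swirl source isn't quoted here), using the lesson's numbers: mean 3.5 and variance 2.92 for a single die.

```python
import numpy as np

rng = np.random.default_rng(42)
n, trials = 10, 1000

# 1000 samples of n dice rolls each; average each sample
rolls = rng.integers(1, 7, size=(trials, n))
means = rolls.mean(axis=1)

# Normalize: subtract the mean (3.5), divide by the standard error sqrt(2.92/n)
z = (means - 3.5) / np.sqrt(2.92 / n)

# The normalized averages should look standard normal: mean ~0, sd ~1
print(z.mean(), z.std())
```

Repeating with n = 20 and n = 30 gives the other two histograms from the slide; in every case the normalized values come out with mean near 0 and standard deviation near 1, as the quoted text says.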
But then, when doing the coin flips, we divide "by the std deviation", even though sqrt(p(1-p)/n) is the same kind of quantity as sqrt(2.92/n) above, i.e., the standard error of the sample mean. One of the two terms should be changed so the lessons are consistent:
|================== | 25%
| Recall that if the probability of a head (call it 1) is p, then the probability of
| a tail (0) is 1-p. The expected value then is p and the variance is p-p^2 or
| p(1-p). Suppose we do n coin flips and let p' represent the average of these n
| flips. We normalize p' by subtracting the mean p and dividing by the std deviation
| sqrt(p(1-p)/n). Let's do this for 1000 trials and plot the resulting histogram.
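The coin-flip normalization can be checked the same way. Again a hedged sketch in Python/numpy rather than the lesson's R, with an assumed fair coin (p = 0.5) and an assumed sample size n = 40, since the quoted text doesn't fix either value:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, p = 40, 1000, 0.5

# 1000 trials of n coin flips each (1 = head, 0 = tail); p' is each trial's average
flips = rng.binomial(1, p, size=(trials, n))
p_hat = flips.mean(axis=1)

# Normalize: subtract the mean p, divide by sqrt(p(1-p)/n) --
# which is the standard error of p', whatever the lesson calls it
z = (p_hat - p) / np.sqrt(p * (1 - p) / n)

print(z.mean(), z.std())
```

Whichever term the course settles on, the quantity divided by is the same in both lessons: the standard deviation of the *sampling distribution* of the mean, i.e., the standard error.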