Tutorial: The Central Limit Theorem
An interesting and useful statistical phenomenon is the following. First, assume that the distribution of a random variable (time to failure, diameter of a ball bearing, percent of voters who want more gun control laws, etc.) has a finite mean (μ) and finite variance (σ²). Then, we measure a parameter of interest by taking the mean of measurements on a sample of size N (N times to failure, ball bearings, voters polled, etc.) and repeat this for many samples, each of size N. We would find that the sample means tend to be distributed normally, regardless of the shape of the distribution of the random variable, and that the fit to a normal improves as the sample size N (the number of measurements in one sample, not the number of samples taken) increases. Further, the mean of this normal distribution of sample means equals the mean of the distribution of the random variable, and the variance of the normal distribution of sample means equals σ²/N, the variance of the distribution of the random variable divided by the sample size (again, the number of measurements in one sample). This phenomenon is called the central limit theorem, and it permits us to estimate parameters for a distribution of interest using sample data.
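The theorem can be observed directly by simulation. The sketch below (plain standard-library Python; the variable names are my own) draws many samples of size N from a strongly skewed, decidedly non-normal parent distribution — an exponential with μ = 1 and σ² = 1 — and checks that the sample means cluster around μ with variance σ²/N.

```python
import random
import statistics

random.seed(42)

# Parent distribution: exponential with mean 1, so mu = 1 and sigma^2 = 1.
# It is strongly skewed -- nothing like a normal curve.
mu, var = 1.0, 1.0
N = 50                # measurements in one sample
num_samples = 20000   # how many samples of size N we take

sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(N))
    for _ in range(num_samples)
]

# The central limit theorem predicts: mean of sample means close to mu,
# variance of sample means close to sigma^2 / N = 1/50 = 0.02.
print(statistics.fmean(sample_means))     # close to 1.0
print(statistics.variance(sample_means))  # close to 0.02
```

Increasing N tightens the distribution of sample means (variance shrinks as 1/N), which is exactly the trade-off exploited later in this tutorial.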
Analysis of normal distributions is facilitated because any normal distribution can be converted to a standard normal distribution, in which the mean (μ) = 0 and σ, the standard deviation (the square root of the variance), = 1. Conversion is accomplished using Equation 1:

z = (x - μ)/σ     (Equation 1)
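The conversion is a one-line computation. A minimal sketch (the function name standardize is my own label, not from the tutorial):

```python
def standardize(x, mu, sigma):
    """Convert a value x from a normal distribution with mean mu and
    standard deviation sigma to its standard normal equivalent z."""
    return (x - mu) / sigma

# A measurement of 110 from a distribution with mu = 100, sigma = 5
# lies two standard deviations above the mean:
print(standardize(110, 100, 5))  # 2.0
```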
For any value (z) of the standard normal distribution, we can obtain the percent of the area of the curve from - ∞ to z, from z to ∞, from 0 to z, or from -z to z. These are listed in tables widely available. Table 1 contains excerpted data from a table of the standard normal distribution.
Table 1: Standard Normal Data
Value of z    Area between -z and z    Area from z to ∞
1.645         0.90                     0.05
1.960         0.95                     0.025
2.576         0.99                     0.005
The standard normal tables give us the means to determine confidence in quality measured from samples (or in opinion polling; the same methods apply).
For example, suppose we take a sample from a lot of parts and find a certain proportion defective. We know the size of the sample (N) and the proportion defective of the sample (p'). What we want to determine is the proportion defective in the entire lot (p), or more specifically, a range of values that will contain the true value of p with a known probability (the predetermined confidence).
Since each part is either defective or not, the number of defectives in a sample follows a binomial distribution. However, p' is a sample mean (the mean of N observations that are each 1 for a defective part and 0 otherwise) and so, by the central limit theorem, is distributed approximately normally, with the mean of the distribution = p. To convert our measured value of p' to a standard normal value, z, we note (Equation 2):

z = (p' - p)/σ     (Equation 2)
where σ is the standard deviation of the distribution of sample means. σ is the square root of the variance of that distribution, which, per the central limit theorem, equals the variance of the parent distribution divided by the sample size. Since the variance of a single 0-or-1 (Bernoulli) observation is p(1-p), the variance of the distribution of the sample means is p(1-p)/N and σ is the square root of p(1-p)/N. Hence:

z = (p' - p)/sqrt(p(1-p)/N)     (Equation 3)
Since we do not know p, we cannot determine the standard deviation (the denominator of Equation 3). However, we can estimate it by using the measured value p' for p. Doing so yields an expression called the standard error, and Equation 3 becomes:

z = (p' - p)/sqrt(p'(1-p')/N)     (Equation 4)
Rearranging the terms results in Equation 5:

p = p' - z × sqrt(p'(1-p')/N)     (Equation 5)
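The standard error in Equation 4 is the quantity everything that follows is built on, so it is worth a tiny sketch (the helper name std_error is mine):

```python
import math

def std_error(p_prime, n):
    """Standard error of a sample proportion p' measured on n items:
    sqrt(p'(1 - p')/N), the estimated denominator from Equation 4."""
    return math.sqrt(p_prime * (1 - p_prime) / n)

# 500 defectives found in a sample of 1000 parts:
print(round(std_error(0.5, 1000), 6))  # 0.015811
```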
From Table 1, we find that 90% of the area under the standard normal curve lies between the values z = -1.645 and z = 1.645. This means there is a 90% probability that a randomly selected value of z would fall between -1.645 and 1.645. This, in turn, implies that it is 90% probable that p lies in the range defined by Equations 6 and 7:

p ≤ p' + 1.645 × sqrt(p'(1-p')/N)     (Equation 6)
p ≥ p' - 1.645 × sqrt(p'(1-p')/N)     (Equation 7)
For example, if we took a sample of 1000 parts and found 500 defective, p' = .5, and we would be 90% sure that p, the proportion defective in the parent population, was actually in the range:

.5 - 1.645 × sqrt(.5(.5)/1000) ≤ p ≤ .5 + 1.645 × sqrt(.5(.5)/1000), or .474 ≤ p ≤ .526
Thus, a sample of 1000 parts allows us to determine the value of p to be .5 plus or minus .026, which is less than a 3% error. Note that this is 3% of the possible range of values for p, not 3% of the estimated value of p. If, instead of counting defectives, we were polling voters and found 500 out of 1000 polled in favor of some government action, we would report the results as 50% in favor with a margin of error of 3%.
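The worked example can be checked directly. This sketch applies Equations 6 and 7 with the 90% z value from Table 1 (the function name confidence_interval is my own):

```python
import math

def confidence_interval(p_prime, n, z=1.645):
    """Range containing the true proportion p with the confidence
    implied by z (Equations 6 and 7); z = 1.645 gives 90%."""
    margin = z * math.sqrt(p_prime * (1 - p_prime) / n)
    return p_prime - margin, p_prime + margin

# 500 defectives in a sample of 1000 parts, 90% confidence:
low, high = confidence_interval(0.5, 1000)
print(round(low, 3), round(high, 3))  # 0.474 0.526
```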
While the measured value p' affects the margin of error, the biggest influence on it is the sample size. A sample size of 100 with p' measured at .5 results in a margin of error, obtained using Equations 6 and 7, of .08225, or just over 8%. Similarly, a sample size of 10,000 reduces the margin of error to .008225, or less than 1%, for 90% confidence. The user can trade off confidence, margin of error, and sample size, based on a normal distribution of sample means, no matter what the underlying distribution of the parameter of interest, thanks to the central limit theorem.
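The sample-size trade-off above can be tabulated with the same margin formula, 1.645 × sqrt(p'(1-p')/N), held at p' = .5 for the three sample sizes mentioned:

```python
import math

Z_90 = 1.645  # z value for 90% confidence, from Table 1

# Margin of error at p' = 0.5 for each sample size discussed above.
for n in (100, 1000, 10000):
    margin = Z_90 * math.sqrt(0.5 * 0.5 / n)
    print(n, round(margin, 6))
```

Because the margin shrinks as 1/sqrt(N), cutting the margin of error in half costs four times the sample size.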