Statistical Questions from the Classroom

by J. Michael Shaughnessy and Beth Chance

I was talking with a mathematically astute friend of mine and told him I lacked confidence in my grasp of statistics. (Insert clever joke about confidence intervals here.) So he recommended this 88-page book, published by the National Council of Teachers of Mathematics, as a good refresher on some basic concepts. The book consists of 11 chapters, each one addressing a question frequently asked by statistics students.

What is a margin of error?

“Margin of error measures the ‘error’ due to variability in random sampling. It is really not an ‘error’ at all; rather, it stems from sampling variability. Margin of error does not measure any errors caused by a poorly collected sample, poor wording of the question, dishonest answers, and so on. It simply provides an interval within which the population value plausibly lies.”

As an example, the authors use a population of grade point averages with a mean of 3.23 and a standard deviation of 0.276. The sampling distribution is based on 500 samples of size 25. “The normal distribution predicted by the central limit theorem is superimposed, with mean 3.23 and standard deviation [of the sampling distribution] 0.276 / √25 = 0.055.”

“Using the normal distribution, we can apply the empirical rule: 95 percent of the observations should fall within 2 standard deviations of the mean. Here the observations are sample means, the mean of the distribution of the sample means is 3.23, and the standard deviation is 0.055. So approximately 95 percent of sample means should fall inside the interval (3.23 – 2(0.055), 3.23 + 2(0.055)) = (3.12, 3.34), depicted in figure 8.2.”

Figure 8.2. 95% confidence interval (p.62)

“For our actual simulated sample means, 480 of the 500 samples, or 96%, are within this interval and within 2 standard deviations (2 x 0.055 = 0.11) of the population mean. This value, 0.11, is called the ‘margin of error.’ The margin of error indicates how far we can expect our sample results to be from the population result, taking into account the inevitable variability that occurs from random sample to random sample… Since this statement is accurate 95 percent of the time, the best we can say is that we are 95 percent confident that the population mean is within 0.11 point of the sample mean we obtain.”
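
To convince myself the numbers work out, here is a rough simulation along the same lines. This is my own sketch, not the book's code, and it assumes a normal population with the stated mean and standard deviation, whereas the authors work from an actual population of grade point averages.

```python
# Rough sketch (mine, not the book's): assumes a normal population with
# mean 3.23 and SD 0.276, then draws 500 samples of size 25.
import numpy as np

rng = np.random.default_rng(0)
pop_mean, pop_sd = 3.23, 0.276
n, num_samples = 25, 500

se = pop_sd / np.sqrt(n)                 # 0.276 / sqrt(25) = 0.055
margin_of_error = 2 * se                 # 2 * 0.055 = 0.11
low, high = pop_mean - margin_of_error, pop_mean + margin_of_error  # (3.12, 3.34)

# Compute each sample's mean and count how many land inside the interval.
sample_means = rng.normal(pop_mean, pop_sd, size=(num_samples, n)).mean(axis=1)
inside = np.sum((sample_means >= low) & (sample_means <= high))

print(f"standard error = {se:.3f}, margin of error = {margin_of_error:.2f}")
print(f"{inside} of {num_samples} sample means fall in ({low:.2f}, {high:.2f})")  # ~95%
```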

But what if your statistics are categorical rather than numerical, such as an opinion survey? The authors use an example of a Gallup poll of 1017 American adults. “The maximum value for the margin of error is 1 / √n when we estimate a population proportion.” So, the margin of error of this poll can be approximated as 1 / √1017 ≈ .03. “Thus, if we were to take many random samples of 1017 adult Americans, we would expect 95 percent of the sample proportions to fall within .03 of the true population proportion. The conclusion from the Gallup survey is that we are 95 percent confident that the true proportion of adult Americans who consider drugs to be a very or extremely serious problem is between .71 – .03 and .71 + .03, or in the interval (.68, .74).”
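
The arithmetic is simple enough to check directly. Here is a small sketch of mine, not the book's, applying the 1 / √n approximation to the Gallup numbers.

```python
# Sketch (mine): the 1/sqrt(n) approximation for a sample proportion's
# margin of error, applied to the Gallup poll's numbers.
import math

n = 1017
p_hat = 0.71                   # sample proportion reported by the poll
moe = 1 / math.sqrt(n)         # maximum margin of error, roughly .03
print(f"margin of error ~ {moe:.3f}")
print(f"95% interval: ({p_hat - moe:.2f}, {p_hat + moe:.2f})")   # (.68, .74)
```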

What is a p-value?

“A p-value is a probability, and it measures the strength of evidence against some hypothesis. The smaller the p-value, the stronger the evidence is against the hypothesis. By convention, a p-value below .05 reflects evidence strong enough to reject the corresponding hypothesis.”

Example: “Null hypothesis: The coin is fair; flips are independent. Observed data: In 10 flips, all landed heads. p-value: … = .001. With such a small p-value, we have strong evidence that this is not a fair coin when flipped… The logic here is ‘either the null hypothesis is false or something extremely unlikely has occurred.’”
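
The .001 is easy to reproduce: under the null hypothesis each flip lands heads with probability 1/2, so ten heads in a row has probability (1/2)^10. This is my own quick check, not code from the book.

```python
# Quick check (mine): probability of 10 heads in 10 flips of a fair coin.
p_value = 0.5 ** 10
print(f"p-value = {p_value:.4f}")   # about .001 -> strong evidence against fairness
```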

What are degrees of freedom?

“Consider the following analogy from algebra that mirrors degrees of freedom in statistics. Suppose we are given a linear equation in three variables of the form 3x + 4y – 2z = 100. If the values of any two of the variables in this equation are known, the third value is automatically also known. For example, if x = 4 and y = 2, then z would have to be -40… Any two of the variables in the equation 3x + 4y – 2z = 100 are ‘free to roam,’ but the third is determined by the other two… In statistical language, there are ‘two degrees of freedom’ in this linear equation in three variables.”
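
The analogy translates directly into code. This tiny sketch of mine shows z being forced once x and y are chosen freely.

```python
# Sketch (mine) of the book's algebra analogy: in 3x + 4y - 2z = 100,
# once x and y are chosen freely, z is determined.
def forced_z(x, y):
    return (3 * x + 4 * y - 100) / 2

print(forced_z(4, 2))   # -40.0, matching the book's example
```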

“In simple linear regression, we estimate the slope and the intercept, so the associated degrees of freedom are n-2 for n observations in a linear regression.”
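
To see where the n – 2 shows up in practice, here is a small regression sketch with made-up data. It is my own illustration using SciPy; the book does not include code.

```python
# Sketch (mine, with made-up data): simple linear regression estimates a slope
# and an intercept, leaving n - 2 degrees of freedom for the residuals.
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8])

fit = stats.linregress(x, y)     # estimates slope and intercept
df = len(x) - 2                  # degrees of freedom for inference on the slope
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}, df = {df}")
```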

What is the difference between an experiment and an observational study?

“In an observational study, we compare groups of individuals where the group distinction is ‘built-in’—for example, examining the life spans of those who had a diet rich in blueberries and comparing them to individuals who did not have a diet rich in blueberries. The key is that it was up to those individuals to determine which group they were in. We are just gathering information about them ‘after the fact.’”

“In an experiment, the individuals whom we compare start out as a single group and the researchers are responsible for dividing them into subgroups. We could find some volunteers to tell half of them to add blueberries to their diet and the other half to avoid blueberries. The researchers are imposing the diet (the explanatory variable) on the individuals; it is not their own choice.”

“In 1912, Dr. Charles Mayo… was comparing two groups, and he thought he was using an appropriate control group (the adults), but his two groups had not been determined using a chance device, and Mayo was misled by a confounding variable. What no one realized at the time was that in normal, healthy people, the thymus gets smaller as one gets older. The difference in size that Mayo had observed was due simply to the differences in age, and it had nothing to do with the respiratory problems of the children.”

“If we want to be able to attribute the longer life spans to blueberries, we would have to carry out a randomized comparative experiment. This may or may not be feasible depending on the cost and time frame allotted for our research. If we relied on the information from an observational study, we might see an association between eating blueberries and longer life spans, but we would not be able to isolate the blueberry-rich diet as the cause. Observational studies definitely have their purpose; we just need to be careful that we don’t forget about the potential confounding variables and overstate a causal connection between the variables… With randomized experiments, however, even potential factors not considered prior to the start of the study should be relatively balanced by the randomization process.”
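
In code, the “chance device” the authors keep emphasizing can be as simple as a shuffle. Here is a hypothetical sketch of mine, with made-up participant names, of randomly assigning volunteers to the two diet groups.

```python
# Hypothetical sketch (mine): random assignment of volunteers to groups,
# the step that distinguishes an experiment from an observational study.
import random

volunteers = [f"volunteer_{i}" for i in range(20)]   # made-up participants
random.shuffle(volunteers)                           # the chance device
blueberry_group = volunteers[:10]    # told to add blueberries to their diet
control_group = volunteers[10:]      # told to avoid blueberries
print(blueberry_group)
print(control_group)
```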

Additional questions addressed by this book include:

  • What can we learn from the shape of the data?
  • What do r and r² tell us?
  • Why are deviations squared?
  • What is independence?
  • Why use random samples?
  • What are the differences among the distribution of a population, the distribution of a single sample, and the sampling distribution?
  • Why do we divide by n-1 instead of n?

I still find statistics confusing, but this book did make some things clearer. The book was peer-reviewed by a panel of college and high school statistics instructors.


Shaughnessy, J. Michael, and Beth L. Chance. Statistical Questions from the Classroom. Reston, Virginia, USA: National Council of Teachers of Mathematics, 2005. Buy from Amazon.com


Disclosure: As an Amazon Associate I earn from qualifying purchases.