Re-Thinking Statistics

I came across this paper on teaching statistics last week, and it changed the way I think about the subject. Here is my summary of what the paper says.

Our approach to statistics mirrors the move from a geocentric view of the solar system to a heliocentric view. The problem with the geocentric view was that it did not perfectly predict the paths of the planets. In particular, planets would sometimes engage in retrograde motion, where the planet would move in the “wrong” direction in the sky from what the geocentric theory predicted. This was “solved” by adding epicycles. The epicycles solved some of the problems, but not all. In all, about 80 epicycles were added to help explain why the pure geocentric model did not fit the data perfectly. Of course, no epicycles are needed for the heliocentric model, so it is the better model.

Cobb, the author of the above paper, says that statistics is similar. If we take a sample, we want the sampling distribution of the sample mean to be roughly normal (which it can be, thanks to the Central Limit Theorem). The first reason for a statistical “epicycle” is that we are required to know the population standard deviation, which is normally not something we can know. We can estimate it with the sample standard deviation, but this change means that our distribution is no longer normal. The statistical epicycle that “fixes” this is that we use Student’s t-distribution instead. If we want to compare numerical data from two populations and the samples have different standard deviations, then we no longer have an exact t-distribution. So now we add another statistical epicycle and introduce an approximation for the degrees of freedom (etc.).
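Here is a minimal simulation sketch of that first epicycle (my own, not from Cobb's paper, and assuming NumPy and SciPy are available): standardizing sample means with the sample standard deviation gives Student's t-distribution rather than a standard normal.

```python
# Sketch of the first "epicycle": standardizing with the sample standard
# deviation gives Student's t, not a standard normal. Assumes NumPy and SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 5, 100_000
samples = rng.normal(loc=0.0, scale=1.0, size=(trials, n))

# Standardize each sample mean using its own sample standard deviation.
t_stats = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

# The tail beyond 2 matches the t-distribution with n-1 degrees of freedom,
# not the standard normal.
print("simulated P(T > 2):", np.mean(t_stats > 2))
print("t, df=4   P(T > 2):", stats.t.sf(2, df=n - 1))
print("normal    P(Z > 2):", stats.norm.sf(2))
```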

Cobb argues that all of these statistical epicycles distract students from good statistical thinking—the students dwell on the details of the statistical epicycles rather than thinking about what inference actually means. He proposes using simulation as a way out of this (along with direct calculation of some small examples so that students know what the simulation is doing). This used to be prohibitively computationally expensive, but no longer. He quotes Fisher from 1936: “the statistician does not carry out this very simple and very tedious process, but his conclusions have no justification beyond the fact that they agree with those which could have been arrived at by this elementary method.”

Here is the example that Cobb gives for how to teach inference. Suppose that you want to determine if some new medical intervention decreases the recovery time for surgery patients. You have a control group and a treatment group with 3 and 4 patients, respectively. Cobb proposes that instead of doing anything with normal distributions, you simply do a permutation test: you assume that the intervention made no difference, then look at all of the 7-choose-3 ways that the seven patients could have been divided up into control and treatment groups. You calculate the statistic for all 7-choose-3 divisions, and then you figure out how many of those are at least as extreme as the data you actually got—that is your p-value. With really large samples, you would just simulate doing this a bunch of times, and figure out what percentage of the simulations are at least as extreme as the data from your study.
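Here is a rough sketch of that permutation test (my own code with made-up recovery times, not Cobb's): first enumerating all 7-choose-3 splits, then doing the shuffle-based simulation you would use for larger studies.

```python
# Permutation test sketch with hypothetical recovery times (days).
# With 3 control and 4 treatment patients there are only C(7,3) = 35 splits,
# so we can enumerate them all; the Monte Carlo version at the end is what
# you would do with larger groups.
from itertools import combinations
import random

control = [22, 25, 30]        # hypothetical recovery times
treatment = [18, 19, 21, 24]  # hypothetical recovery times
observed = sum(treatment) / len(treatment) - sum(control) / len(control)

everyone = control + treatment
n_control = len(control)

# Exhaustive permutation test: re-split the 7 patients every possible way.
splits = list(combinations(range(len(everyone)), n_control))
count = 0
for idx in splits:
    c = [everyone[i] for i in idx]
    t = [everyone[i] for i in range(len(everyone)) if i not in idx]
    diff = sum(t) / len(t) - sum(c) / len(c)
    if diff <= observed:  # at least as extreme: treatment faster by this much or more
        count += 1
print("exact p-value:", count / len(splits))

# Monte Carlo version for large samples: shuffle and re-split many times.
random.seed(0)
reps, extreme = 10_000, 0
for _ in range(reps):
    random.shuffle(everyone)
    c, t = everyone[:n_control], everyone[n_control:]
    diff = sum(t) / len(t) - sum(c) / len(c)
    if diff <= observed:
        extreme += 1
print("simulated p-value:", extreme / reps)
```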

I have been on board with doing simulations for a couple of years now, but this paper gave me a new insight: I had been thinking about simulation as being a way for students to understand things related to the normal distribution, but Cobb is saying that the focus should be on statistical thinking rather than the normal distributions that I was focused on.

One big barrier for me to realize this earlier is that I am a mathematician teaching statistics—this point may be completely obvious to a statistician (or a more competent mathematician), but I was too ignorant of statistics to realize that there is an underlying statistical thinking that I should be focusing on (maybe I was aware of it, but I didn’t think about it enough). I had tunnel-vision about what I was “supposed” to teach, and didn’t realize that I should be teaching something more foundational. I will try to remember these statistical blinders of mine when I see a student thinking the same way about, say, calculus (“I just want to plug stuff into the formula.”).

P.S. Feel free to correct any mistakes I made in describing the statistics above. Leave a comment.

4 Responses to “Re-Thinking Statistics”

  1. Andy Rundquist Says:

    Love this. I have felt the same way about presenting data with error bars. Why not just print tiny histograms? Is there a good text for this approach?

  2. gasstationwithoutpumps Says:

    Permutation tests are a standard technique that is widely used these days—they should definitely be taught, though sampling permutations is much more common than exhaustive enumeration (anything small enough to enumerate exhaustively is probably too small to be useful).

    One limitation of your approach is the focus on p-values, which are valuable for single-hypothesis tests, but misleading when there are many hypotheses being explored. Looking at E-values with multiple hypotheses is less likely to lead to p-hacking.
