Background: My son plays soccer, and I want him to avoid concussions if possible. Virginia Tech has done a lot of research into how effective various types of headgear are at preventing concussions. My son has been wearing a headband to reduce the risk of concussion for the last couple of years.
He recently started complaining about the headband, saying that it seems like it might be doing some damage—he gets an unpleasant vibrating sensation when he heads the ball. I decided to look into the data a little more to figure out if I could feel good about him not wearing a headband. Here is what I found:
- The Virginia Tech studies are done with mannequins in a lab. This is a good start, but I wanted better evidence that it would work on humans.
- Delaney et al. have a 2007 paper that says the relative risk of concussion for wearing headgear is about 0.4, with a ridiculously low p-value. They showed that 14 of 52 players who wore headgear got a concussion during the season (about 27%), while 114 of 216 (about 52%) who didn't wear headgear got a concussion (the code sketch after this list reproduces this arithmetic). These rates are unusually high (my teams have had about 1 concussion for every 15 players over the last several years), and I think they are a result of this being a retrospective survey. This isn't convincing to me.
- McGuine et al. did a prospective survey of about 3000 players, about half of whom wore headgear (1505 players, versus 1545 who did not), which is mostly convincing. They found a risk ratio of 0.98 (p=0.946), which indicates that headgear likely does not help.
Moreover, the risk ratio for males (474 with headgear, 546 without) was 1.83 (p=0.286), which suggests to me that the headgear might make things worse (the risk ratio was 0.9 for females). I concluded that my son doesn’t need to wear a headband.
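For anyone who wants to check the Delaney arithmetic from the first bullet, here is a minimal sketch (Python with SciPy assumed; this is my own illustration, not code from either paper) that computes the crude risk ratio and a chi-square p-value straight from the raw counts. The crude ratio from the raw counts comes out closer to 0.5, so the paper's ~0.4 is presumably an adjusted estimate.

```python
# Minimal sketch (my own, not from either paper): crude risk ratio and
# chi-square p-value from the raw Delaney et al. (2007) counts quoted above.
# Assumes SciPy is installed; the paper's ~0.4 figure is presumably adjusted,
# so the crude ratio here will not match it exactly.
from scipy.stats import chi2_contingency

headgear_concussed, headgear_total = 14, 52
no_headgear_concussed, no_headgear_total = 114, 216

risk_headgear = headgear_concussed / headgear_total            # ~0.27
risk_no_headgear = no_headgear_concussed / no_headgear_total   # ~0.53
crude_risk_ratio = risk_headgear / risk_no_headgear            # ~0.51

# 2x2 table: rows = headgear / no headgear, columns = concussed / not concussed
table = [
    [headgear_concussed, headgear_total - headgear_concussed],
    [no_headgear_concussed, no_headgear_total - no_headgear_concussed],
]
chi2, p_value, dof, expected = chi2_contingency(table)

print(f"crude risk ratio: {crude_risk_ratio:.2f}")
print(f"chi-square p-value: {p_value:.4f}")
```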
I regularly teach introductory statistics, and I am pleased that I will be able to tell my students that I used my knowledge of statistics in a way that helped me in my non-work life. This is something they will be able to do in the future (I had to Google "risk ratio," so it will be okay when they do that, too).
There were some quirks about that paper that made me pause, though. First, the authors state that only 13 players did not comply with the study protocol: 6 of 1545 players (0.39%) used headgear when they weren't supposed to, and 7 of 1505 (0.47%) didn't use headgear when they were supposed to. They were working with teenagers, and this seems like a remarkable rate of compliance. My son forgets to wear his headband more often than 0.5% of the time (that works out to once every 200 games/practices), and he is reasonable and committed to wearing it.
Stranger yet: of the 7 who were supposed to use headgear but didn't, all 7 experienced concussions. The overall concussion rate was 4.3% (which matches my experience as a coach, and which is why I do not find Delaney's paper convincing), so a 100% concussion rate for these 7 seems strange.
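To put a number on how strange, here is a quick back-of-the-envelope sketch (again Python, my own illustration) of how likely a 7-for-7 run of concussions would be if those players simply carried the overall 4.3% risk and their outcomes were independent, which is of course a simplifying assumption.

```python
# Back-of-the-envelope check: if each of the 7 non-compliant players
# independently had the overall 4.3% concussion risk, the chance that all
# 7 end up concussed is 0.043**7. (Independence and a uniform rate are
# simplifying assumptions on my part, not claims about the study.)
p_concussion = 0.043
n_players = 7

prob_all_concussed = p_concussion ** n_players
print(f"P(all {n_players} concussed) = {prob_all_concussed:.2e}")  # ~2.7e-10
```

Even granting that concussion risk surely varies from player to player, a number that small is hard to explain by chance alone.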
If you run the statistics on what the players actually did (an "as-treated" analysis) rather than what they were assigned to do, the results change quite a bit: the overall risk ratio changes from 0.98 (as mentioned above) to 0.63 (p=0.091). The authors of the paper were kind enough to include this, although they did not mention that all 7 of those students got concussions; I had to sift through the numbers to figure this out on my own.
As a father, I would be very interested in a risk ratio of 0.63, even if the p-value is north of 5% (9.1% is still fairly small, so there is some evidence against the null hypothesis that the headgear doesn't help). This basically means that if, say, 5 of every 100 non-headgear-wearing players got a concussion during the season, only 3.15 of every 100 headgear-wearing players would. It doesn't quite cut the risk in half, but it is close.
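To make that concrete, here is a small sketch (Python again, illustrative only) that applies the 0.63 risk ratio to a few baseline rates that come up in this post; the baselines are not estimates from the McGuine paper itself.

```python
# Apply the as-worn risk ratio of 0.63 to a few illustrative baseline
# concussion rates (per 100 players per season). The baselines come from
# numbers quoted elsewhere in this post, not from the McGuine paper.
risk_ratio = 0.63

baselines = {
    "hypothetical 5 per 100": 5.0,
    "study's overall rate (4.3%)": 4.3,
    "my teams (~1 in 15)": 100 / 15,
}

for label, per_100 in baselines.items():
    with_headgear = per_100 * risk_ratio
    print(f"{label}: {per_100:.2f} -> {with_headgear:.2f} concussions per 100 players")
```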
The drawback is that this analysis weakens the validity of the study, since the comparison is no longer based on the randomized assignment (although only 13 of the 3050 players "defected").
Ultimately, I will still let my son stop wearing the headband. This is mainly because, while I like the effect size of 0.63 at a p-value of 9.1%, almost all of the benefit goes to the females: disaggregated by gender, the as-treated risk ratio is 0.93 (p=0.909) for boys and 0.64 (p=0.094) for girls. Since I have a son and the benefit goes to the girls in both analyses, it seems like he shouldn't have to wear it (although I might make my daughter wear one if she played soccer).
I emailed the corresponding author of the paper to find out if I was mistaken about the 7 players, but he has not yet responded. I would appreciate it if any statistics-savvy people out there could let me know whether I am thinking about any of this incorrectly.