**Medical "Research"**

Reuters has an interesting news article up:

"Tetanus jab may curb multiple sclerosis risk," but it turns out to be yet another example of how bad medical research is. The sample size they're looking at is so small that they can't draw any useful conclusions, and taking a harder look at it, I think the conclusion they're pushing is false.

Since MS is an autoimmune disease (by the leading theories), the article immediately caught my interest. One of the reasons we worry about the vaccine schedule for kids is that it stimulates only one part of the immune system, potentially causing an imbalance that leads to autoimmune disorders (yes, this is speculation). The researchers have some analogous speculations:

> The biologic mechanism by which the tetanus vaccination may protect against MS is unclear, according to the authors. They note, however, that vaccination with tetanus toxoid may shift the T helper cell immune response from a proinflammatory Th1 response to an anti-inflammatory Th2 response.

Then I caught another bit on my second read through the article:

> Analyses centered on a total of 963 MS cases and 3126 controls. They found that a history of having been immunized against tetanus was associated with a 33 percent decrease in risk of MS.

As far as I can tell, they're saying the controls, all of whom had tetanus shots, were less likely to get MS. [Or that the MS cases were less likely to have been vaccinated against tetanus, but that's unlikely to produce a nice round 33%.] So what they're saying is, "We got two cases when we should have expected three." A quick Google search showed that

> the incidence of MS in the USA is 0.91 per 1000 people.

Let's drop this into Excel. The study asserts that, for their group of tetanus vaccine recipients, the odds of getting MS were lower than for the general population. I can simulate their study by putting a random function in 3126 cells, each with a 0.091% probability of "getting MS" (that's 0.91 per 1000). So I ran this 100 times (i.e., repeated their experiment 100 times) and counted how many MS cases I got in each run. That gives me this graph:

The first thing to notice is that 100 trials isn't enough to produce a smooth distribution. The mean (average) came out a little low: 2.6 instead of the 2.8 it should be (3126 × 0.00091 ≈ 2.84). But that's close enough to give us a feel for it. Note how tall that "2" column is. The prediction of three cases comes from multiplying the incidence by the number of people, but when you do that in real life you're going to get a few samples with lots of cases and many with fewer to balance them out. That's part of the problem with using a sample size this small: the expected result presses up against zero and gives you a skewed curve instead of a nice, easily analyzed bell curve.
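If you'd rather not set up 3126 cells of random functions, the same Monte Carlo experiment is easy to reproduce in a few lines of Python. This is my own sketch of the setup described above (the variable names and the fixed seed are mine, not anything from the study):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible; any seed shows the same shape

N_CONTROLS = 3126   # size of the control group in the study
P_MS = 0.00091      # assumed MS incidence: 0.91 per 1000 people
N_RUNS = 100        # repeat the whole "study" 100 times

# For each run, count how many of the 3126 simulated people "get MS".
case_counts = [
    sum(1 for _ in range(N_CONTROLS) if random.random() < P_MS)
    for _ in range(N_RUNS)
]

# Tally the histogram: how many runs produced 0 cases, 1 case, 2 cases, ...
histogram = {}
for c in case_counts:
    histogram[c] = histogram.get(c, 0) + 1

print("mean cases per run:", sum(case_counts) / N_RUNS)
for k in sorted(histogram):
    print(f"{k} cases: {'#' * histogram[k]}")
```

The text histogram this prints has the same lumpy, zero-hugging shape as the Excel graph: a tall bar at 2, and a mean that wanders around 2.8 from one batch of 100 runs to the next.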

So what we're getting is that two cases is both the median (the middle result) and the mode (the most common result) of the distribution. Getting two cases in their sample isn't a "33% improvement"; it's WHAT THEY SHOULD EXPECT. The study doesn't show that the tetanus vaccine changed the probability of getting MS. If the probability had dropped dramatically, they should have seen one case or none in the group. But even that wouldn't be proof: there's roughly a 1-in-4 chance of seeing one case or fewer even at the normal MS rate.
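You don't even need the simulation for this part; the exact binomial distribution makes the same point. A sketch using the same assumed numbers as above (group size 3126, incidence 0.91 per 1000):

```python
from math import comb

n, p = 3126, 0.00091   # group size and assumed MS incidence (0.91 per 1000)

def binom_pmf(k: int) -> float:
    """Probability of exactly k MS cases among n people, each with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

expected = n * p                        # expected number of cases, ~2.84
mode = max(range(10), key=binom_pmf)    # the single most likely case count
p_at_most_1 = binom_pmf(0) + binom_pmf(1)

print(f"expected cases: {expected:.2f}")      # ~2.84
print(f"most likely count: {mode}")           # 2
print(f"P(0 or 1 cases): {p_at_most_1:.2f}")  # ~0.22, the "1 in 4" above
```

So two cases really is the single most likely outcome at the *normal* MS rate, and even a dramatic-looking result of zero or one cases would happen about 22% of the time with no effect at all.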

So the conclusions to draw are:

1. Don't draw conclusions from small sample sizes.

2. Tetanus vaccination has no apparent impact on your odds of getting MS.

There's also roughly a 1-in-3 chance (the distribution is skewed, so the high tail is fatter than the low one) that they'd have seen four or more cases, i.e. results suggesting that tetanus vaccination *increases* the chance of getting MS. I wonder whether they would have published that.

In defense of Dr. Hernan and his colleagues, they're using a larger sample size than any of the nine studies in their meta-analysis. And I'm working from a Reuters article, so this may really be a story about the innumeracy of science reporters rather than of medical researchers. But it's a good example of why I'm very wary of widely publicized medical research showing something is (or isn't) safe.

**Current Mood:** *cynical*