Precision and normal variation and stuff

One of the things that I do at one of my jobs is help nurses triage patients walking in to be seen. It never ceases to amaze me how many people look at their temperature and say that they “run cold” or “run hot” if it is less than or greater than 98.6 degrees Fahrenheit. Just the other day, on a very cold day, a woman said that her 97.9 degree temperature was “too cold” for her. She then added that she must be sick.

Yeah, because that’s how the body works, I guess.

So where did we come up with 98.6 degrees Fahrenheit as the “normal” body temperature for humans? Dr. Robert Schmerling of Beth Israel Deaconess Medical Center explains:

“It’s a fact that is still taught daily to schoolchildren all over the world: Normal human body temperature is 98.6 degrees Fahrenheit. But as with most measurements, “normal” has a range. With current technology, don’t be surprised if your actual measured temperature is rarely 98.6. That’s because “normal” temperature was based on the average temperature of hundreds of people using oral, mercury thermometers. Current thermometers are not only much faster, they are much more accurate and document two things known even in the days of mercury thermometers: There is variability in body temperature over the course of the day, and there is variability between different people.”

That’s right, folks. We got 98.6 a long time ago and with mercury thermometers. Dr. Philip Mackowiak investigated further:

“The 98.6 temperature myth goes back 150 years to the work of Dr. Carl Wunderlich, a German physician who recorded the temperatures of thousands of patients.

Wunderlich published many findings regarding human body temperature, but the finding that made its way into medical literature in Germany and the United States was that mean temperature of the human body is 98.6.

Mackowiak knows that, in fact, Wunderlich never suggested that there was one normal temperature, that his mean temperature reading of 98.6 just took hold in medical minds.

Mackowiak and two colleagues collected 700 temperatures from 148 healthy adults and found their readings ranged from 96 to 100.8. When they figured the average, they found it to be 98.2 degrees with just 8 percent of their 700 temperature readings coming up 98.6.

In 1992, in an article in the Journal of the American Medical Association, Mackowiak and his colleagues concluded that 98.6 should be abandoned.

Mackowiak kept up his research and had the good fortune of locating one of Wunderlich’s thermometers in a museum in Philadelphia.

Mackowiak took the thermometer to Baltimore, where he discovered it was calibrated a degree-and-a-half centigrade higher than the ones used today. Wunderlich also measured temperatures by taking them in the armpit, which should have made them lower than oral temperatures.

This further buttressed the findings from Mackowiak’s earlier study.

Mackowiak has said that the important message isn’t that 98.6 is wrong but that normal temperature depends on the person and can be influenced by age, gender, race and time of day.

In the JAMA article, Mackowiak and his colleagues suggested an oral temperature higher than 99 in the early morning and 100 degrees in the early evening can be called fever.”

That’s the thing about measurements; they come with a lot of variation. Even if you sampled people completely at random, and tested them all with the most precise thermometer around, and somehow figured out a way to test them all at exactly the same moment in their daily circadian rhythm, and you were 100% sure that none of those people had any condition that would affect their body temperature, you would still not have the true “average” temperature of a human being. You’d be closer, but you wouldn’t be there.

Furthermore, when reporting your findings, you’re better off reporting more than just the “average”. Reporting a range and a median along with the average (“mean”) is a better way to describe scientific findings like body temperature. In fact, the median is less affected by big outliers. Because we don’t have any technology that is 100% precise, we must be careful in how we present measurements of any kind.
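To make that concrete, here’s a quick Python sketch (the readings are made up for illustration, not from any study mentioned here) showing how a single outlier drags the mean upward but barely touches the median:

```python
# Sketch: why reporting only the "average" can mislead.
# Hypothetical temperature readings in degrees Fahrenheit; the 103.1
# is a made-up feverish outlier, not data from any real study.
import statistics

readings = [97.2, 97.8, 98.0, 98.1, 98.3, 98.4, 98.6, 103.1]

mean_temp = statistics.mean(readings)      # pulled upward by the outlier
median_temp = statistics.median(readings)  # barely moved by it
value_range = (min(readings), max(readings))

print(f"mean:   {mean_temp:.2f}")   # about 98.69
print(f"median: {median_temp:.2f}") # 98.20
print(f"range:  {value_range[0]} to {value_range[1]}")
```

One feverish reading pushes the mean above every “healthy” value in the list, while the median stays put. That robustness is exactly why the median belongs in the report alongside the mean and the range.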

So what happens when you can’t measure everything? In the body temperature example, we cannot possibly measure everyone’s temperature at the same time with the same device, making sure that everyone is not having their body temperature affected by a disease process. Scientists measure a subset of a population and make estimates based on those measurements. This is where the Central Limit Theorem comes into play, and it’s a really, really big deal in statistics in general and biostatistics in particular.
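For a rough feel of the theorem, here’s a small simulation sketch using a made-up, deliberately skewed population: even though individual values are all over the place, the means of random samples cluster tightly around the true population mean.

```python
# Sketch of the Central Limit Theorem: individual measurements can come
# from a lumpy, skewed population, but the *means* of random samples
# pile up in a tight, roughly normal bell around the population mean.
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# A deliberately skewed, made-up "population" of 100,000 values.
population = [random.expovariate(1.0) for _ in range(100_000)]
true_mean = statistics.mean(population)

# Draw many random samples of 50 and record each sample's mean.
sample_means = [
    statistics.mean(random.sample(population, 50))
    for _ in range(2_000)
]

print(f"population mean:        {true_mean:.3f}")
print(f"mean of sample means:   {statistics.mean(sample_means):.3f}")
print(f"spread of sample means: {statistics.stdev(sample_means):.3f}")
```

The average of the sample means lands essentially on top of the population mean, and the spread of those sample means shrinks as the sample size grows. That is the engine that lets a well-drawn sample stand in for a whole population.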

The sample size is critical. It is also critical to draw those samples at random. Do those two things correctly, and you’re well on your way to making an inference about the population based on the sample measurements. This kind of makes sense if you think about it for a little bit. If I go to a sporting event in Mexico City and take the temperature of the 22 soccer players on the field, I cannot possibly state that their average temperature is the average temperature of all humans on the planet. Even if I tested all the people at Azteca Stadium — which has a capacity of about 110,000 — I could not extrapolate that beyond Mexico City residents, or even all Mexicans.

Canadians in February might have something to say about average temperature of humans based on measurements of Mexicans in Mexico City at a soccer game in August.

But here’s the thing… If I report my findings and give a range of measurements, and also give a 95% confidence interval, I will give far more information as to what the “normal” body temperature is, regardless of what group of people I test so long as the sample size is big (which 110,000 is) and the sampling is random (e.g. I drew Azteca Stadium out of a box holding the names of all soccer stadia of similar size in the world). I could report something like:

“Based on the measurement of all the people present at the soccer stadium that day, we can report that the average human body temperature is XX.X degrees Celsius, with a 95% confidence interval of XX.X to XX.X degrees Centigrade.”

In that statement, the 95% confidence interval is my way of telling you that I am 95% confident that the true average body temperature is in that range. Also note that I used Celsius (or Centigrade) because metric is how all science is done… Or it should be done. Notice that even in that statement I am giving you some measurement of how precise or imprecise my measurements are.
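If you’re curious how a 95% confidence interval like that gets computed, here’s a minimal sketch with made-up Celsius readings. It uses the usual 1.96 z-approximation; for a sample this small, a t critical value would be more appropriate, but the mechanics are the same.

```python
# Sketch: a 95% confidence interval for a mean temperature.
# The readings below are invented for illustration (degrees Celsius).
import math
import statistics

readings_c = [36.4, 36.6, 36.5, 36.9, 36.7, 36.8, 36.3, 37.0, 36.6, 36.5]

n = len(readings_c)
mean_c = statistics.mean(readings_c)
# Standard error of the mean = sample standard deviation / sqrt(n)
sem = statistics.stdev(readings_c) / math.sqrt(n)

# 1.96 is the z critical value for 95% confidence; with n = 10, a
# t critical value (~2.26 for 9 degrees of freedom) would be stricter.
margin = 1.96 * sem
low, high = mean_c - margin, mean_c + margin

print(f"mean {mean_c:.2f} C, 95% CI {low:.2f} to {high:.2f} C")
```

The interval narrows as the sample grows (the square root of n sits in the denominator), which is one concrete reason the 110,000-person stadium sample beats the 22 players on the field.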

So why am I telling you all this?

I’m telling you all this because there will be people in the world that will try to pull one over on you with bogus statistics, or “statistically-sounding” verbiage. They may or may not be doing it on purpose, but they are trying to say something without saying it. For example, one antivaxxer once wrote in an “infographic” the following:

“25% [of parents] believe vaccines cause autism and these U.S. parents statistically have collegial education.”

Where did this come from? It came from a survey of parents reported back in 2010:

“One in four U.S. parents believes some vaccines cause autism in healthy children, but even many of those worried about vaccine risks think their children should be vaccinated. Most parents continue to follow the advice of their children’s doctors, according to a study based on a survey of 1,552 parents. Extensive research has found no connection between autism and vaccines. “Nine out of 10 parents believe that vaccination is a good way to prevent diseases for their children,” said lead author Dr. Gary Freed of the University of Michigan. “Luckily their concerns don’t outweigh their decision to get vaccines so their children can be protected from life-threatening illnesses.””

Funny how the antivaxxer in question didn’t mention that parents continue to follow the advice of qualified medical professionals (and not random infographics on anti-vaccine websites) and that 9 out of 10 parents believe vaccines prevent diseases. (That’s just belief, mind you. The reality is closer to 100% that vaccines prevent diseases. Reality doesn’t care what we believe.)

Can you decipher what the antivaxxer means by “these U.S. parents statistically have collegial education”? I can’t.

Anyway, the paper by Freed et al doesn’t just say “1 in 4” and leave it at that. Here’s the abstract:

“Objective: Vaccine safety concerns can diminish parents’ willingness to vaccinate their children. The objective of this study was to characterize the current prevalence of parental vaccine refusal and specific vaccine safety concerns and to determine whether such concerns were more common in specific population groups.

Methods: In January 2009, as part of a larger study of parents and nonparents, 2521 online surveys were sent to a nationally representative sample of parents of children who were aged ≤17 years. The main outcome measures were parental opinions on vaccine safety and whether the parent had ever refused a vaccine that a doctor recommended for his or her child.

Results: The response rate was 62%. Most parents agreed that vaccines protect their child(ren) from diseases; however, more than half of the respondents also expressed concerns regarding serious adverse effects. Overall, 11.5% of the parents had refused at least 1 recommended vaccine. Women were more likely to be concerned about serious adverse effects, to believe that some vaccines cause autism, and to have ever refused a vaccine for their child(ren). Hispanic parents were more likely than white or black parents to report that they generally follow their doctor’s recommendations about vaccines for their children and less likely to have ever refused a vaccine. Hispanic parents were also more likely to be concerned about serious adverse effects of vaccines and to believe that some vaccines cause autism.

Conclusions: Although parents overwhelmingly share the belief that vaccines are a good way to protect their children from disease, these same parents express concerns regarding the potential adverse effects and especially seem to question the safety of newer vaccines. Although information is available to address many vaccine safety concerns, such information is not reaching many parents in an effective or convincing manner.”

The thing about surveys is that they are subject to different answers based on how the questions are asked. (See this post in the “Epi Night School” for a better explanation of this phenomenon.) Table 1 in the paper tells us that 61% of the parents surveyed had at least some college. If you read the abstract and the paper, you’ll see that only 62% of those surveyed responded. These two things raise three red flags in my mind.

First, nowhere near 61% of the population of the United States has a college education. The true percentage is high and getting higher, but it’s not that high. But they only sampled parents, right? So it stands to reason that maybe 61% of all parents in the United States have at least some college. Alright, I’ll give them that. But… Second, that 61% is only 61% of the 62% who responded. If none of the non-responders had any college, the proportion with some college in the total sample drops to about 38%, which is yet another source of imprecision.
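That worst-case back-of-the-envelope calculation is easy to check with the numbers quoted above (2,521 surveys sent, a 62% response rate, 61% of responders with at least some college):

```python
# Worst-case sketch using the numbers quoted from the Freed et al study:
# 2,521 surveys sent, 62% response rate, 61% of *responders* with at
# least some college.
surveys_sent = 2521
responders = round(surveys_sent * 0.62)        # about 1,563 responders
college_responders = round(responders * 0.61)  # about 953 with some college

# If every non-responder lacked a college education, the share of the
# whole sample with some college would be:
worst_case_share = college_responders / surveys_sent
print(f"worst case: {worst_case_share:.0%} of the full sample")
```

The point is not that 38% is the real number; it is that the 61% headline only describes the people who chose to answer, and non-response can move the true figure a long way.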

Finally, there’s this funny thing called “bias” that is pervasive in all scientific studies, especially surveys. Tell me, are you more or less likely to leave a comment at a restaurant or hotel if your service was bad or if your service was good? Bad, right? That kind of self-selection, called response bias, is a huge problem in survey studies. It turns out that people who have had bad experiences (real or perceived) with vaccines and other pharmaceutical products are more likely to respond to surveys about their opinions of those products. It’s also the core limitation of VAERS, the passive reporting system for vaccine adverse events: you’re only looking at reports from people who had, or feel that they had, a bad reaction to a vaccine.

Table 2 in the Freed et al paper tells us again the story of imprecision. When asked if they agree with the statement “Some vaccines cause autism in healthy children,” 25% of the 62% of parents who answered the survey said they “agree” or “strongly agree” with the statement. Contrary to what the antivaxxer tells us in the infographic, the authors do not tell us how many college-educated vs. non-college-educated parents “agree” or “strongly agree” that vaccines cause autism.
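Incidentally, even that 25% figure carries sampling uncertainty of its own. Here’s a quick sketch, assuming roughly the 1,552 responding parents quoted earlier and the simple Wald approximation for a proportion:

```python
# Sketch: sampling uncertainty around a survey percentage.
# Numbers come from the article quoted above: roughly 1,552 responding
# parents, 25% agreeing that "some vaccines cause autism."
import math

n = 1552
p = 0.25

# Wald 95% confidence interval for a proportion (a common approximation;
# reasonable here because n is large and p is not near 0 or 1).
se = math.sqrt(p * (1 - p) / n)
margin = 1.96 * se

print(f"{p:.0%} plus or minus {margin * 100:.1f} percentage points")
```

Even with more than 1,500 respondents, the estimate comes with a margin of error of a couple of percentage points, and that is before any response bias enters the picture.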

That’s the type of wordplay that I am warning you about.

When someone tells you a statistic of an average measurement, don’t always take it at face value. Get to know what the statistic means. Ask yourself how big the sample size was and whether it was drawn at random. Think of the design of the study and remember that there is a hierarchy of evidence. Ask for the range of values, the median, and the confidence interval. (When comparing two percentages via a “relative risk,” “risk ratio,” or “odds ratio,” check whether the 95% confidence interval includes 1.0; if it does, you cannot rule out that the groups being compared have the same risk.) Look at the questions in a survey and see if they are leading or if they could be interpreted in more than one way.
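Here is a sketch of that “does the confidence interval include 1.0” check, using a made-up 2×2 table (the counts are hypothetical, purely for illustration):

```python
# Sketch: relative risk from a hypothetical 2x2 table, with a 95%
# confidence interval computed on the log scale. If the interval
# includes 1.0, the data are consistent with no difference in risk.
import math

# Made-up counts: exposed group, 30 of 200 with the outcome;
# unexposed group, 20 of 200 with the outcome.
a, n1 = 30, 200   # events, total in the exposed group
b, n2 = 20, 200   # events, total in the unexposed group

rr = (a / n1) / (b / n2)
# Standard error of log(RR) for two independent proportions
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
low = math.exp(math.log(rr) - 1.96 * se_log_rr)
high = math.exp(math.log(rr) + 1.96 * se_log_rr)

crosses_one = low <= 1.0 <= high
print(f"RR {rr:.2f}, 95% CI {low:.2f} to {high:.2f}; includes 1.0: {crosses_one}")
```

With these made-up counts the relative risk is 1.5, which sounds alarming on its own, yet the interval stretches from below 1.0 to well above it. A headline could trumpet “50% higher risk!” while the data are perfectly consistent with no difference at all. That is exactly the kind of context a bare percentage hides.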

The Freed et al study is not a bad study. It’s a good jumping-off point to understanding how and why parents are choosing to skip vaccines and put their children and the children in their community at risk for serious diseases. It’s also a good way to begin to understand who out there is still worried about vaccines and autism even with the wealth of evidence disproving the lies that antivaxxers spread. After all, to diagnose the disease, you have to look at the signs and symptoms. A simple temperature reading won’t do, especially if you don’t know the range of “normal” temperatures in humans.
