More bad epidemiology from Dr. Brian S. Hooker and friends

Remember Dr. Brian S. Hooker? The PhD who published a seriously flawed (and now retracted) study in which he said that the MMR vaccine caused autism in African American boys? Yeah, that one. Well, it appears that he has been busy trying to build a case for the legal action he has pending in the vaccine court. This time, he and a group of friends went on one heck of a fishing expedition into the Vaccine Safety Datalink project to see what would come up. What did come up was yet another seriously flawed “study.” I put the word “study” in quotes because, as an epidemiologist, I cannot call this a study.

I really can’t.


So let’s analyze “A Dose-Response Relationship between Organic Mercury Exposure from Thimerosal-Containing Vaccines and Neurodevelopmental Disorders” by Geier et al. (including Hooker), published in the International Journal of Environmental Research and Public Health. Like Jack the Ripper would, let’s take this apart one piece at a time.

First, the conflicts of interest.

When you read a paper, you don’t usually look at the end of the paper right off the bat. But, because of who the authors of this paper are, and what I know about them, I decided to skip to the end and read about their conflicts of interest. You should also immediately recognize the Geiers, father and son, in the list of authors. In case you’ve forgotten, let Todd W. over at Harpocrates Speaks remind us of who they are:

“For those who don’t know, Dr. Mark Geier is half of the father-son team that developed the “Lupron Protocol” for treating autism. Put simply, Geier and his son came up with the scientifically unsupported idea that testosterone and mercury bind together in humans, allegedly causing autism. His treatment for this involves dosing children with leuprolide, followed by chelation. Leuprolide (also known by the brand name Lupron) is legitimately used for treatment of precocious puberty and as part of IVF treatment. It is also used off-label to chemically castrate sex offenders.

Dr. Geier, through his Institute of Chronic Illness and Genetic Centers of America, misdiagnosed autistic children with precocious puberty so he could claim that he was using Lupron on label, rather than for an unapproved, experimental indication (i.e., autism). This also allowed him to bill insurance companies for the lupron. His actions got him into hot water with various state medical boards, starting with his medical license in Maryland being suspended on April 27, 2011. Since then, one by one, 11 of his 12 medical licenses were suspended, an application for a thirteenth license in Ohio was denied, and some of those suspensions became complete revocations. The last actions I wrote about were the revocation of his license in Missouri and suspension of his Illinois license. At the time, the only state left in which Dr. Geier could practice was Hawaii.

As of April 11, 2013, that is no longer the case.”

Great. So, right off the bat, we have two authors in this paper who are true believers of the vaccine-autism link. Of course, we know the story of Dr. Brian S. Hooker. The third author, Janet Kern, apparently also works with the Geiers at the Institute of Chronic Illnesses, Inc., which is comfortably located in a suburban neighborhood just north of Washington, DC. The fourth and fifth authors, Paul King and Lisa Sykes, list their affiliation as CoMeD, Inc., which — get this — has the same address as the Institute of Chronic Illnesses, Inc. (That’s one busy household.) Pro tip to all of these folks: When you list a suburban house at the end of a very posh cul-de-sac as the headquarters of an “institute,” you kind of raise some red flags about your legitimacy.

Just saying.


Then I read the “Acknowledgements” section:

“This study was supported by the non-profit Institute of Chronic Illnesses, Inc., and the non-profit CoMeD, Inc. This study was also supported the Seltz Foundation and the Dwoskin Family Foundation, but they were not involved in the design and conducting of the study, in the collection analysis, in the interpretation of the data, in the preparation, in the review nor in the approval of the manuscript.”

Why, you don’t say? Your study, whose authors are in one way or another affiliated with the “Institute” and with “CoMeD,” was funded by the “Institute” and by “CoMeD”? I’m shocked. Next thing you’ll tell me is that you found some sort of spurious association between vaccines and autism.

Oh, wait.

Before I jump into the design and the findings, let’s look at that acknowledgement one more time. First, the “Dwoskin Family Foundation” is explained by our good friend Todd:

“The Dwoskin Family Foundation is a philanthropic vehicle for Albert and Lisa Claire Dwoskin. They established it as a 501(c)3 non-profit foundation in 2001. The sole contributions to the foundation are from the Dwoskins themselves (not unusual for a family foundation) to the tune of $600,000 in 2010 and $750,000 in 2011. In addition, a significant portion of the foundation’s assets are held in off-shore accounts and cash investments. The foundation’s 990 form for 2011 (the latest available via GuideStar.com, free registration and login required to view) lists net assets at $3.5 million. Needless to say, they have a lot of purchase power, as it were.

Claire Dwoskin is a board member of the anti-vaccine group National Vaccination Information Center. Her husband, Albert, is president and CEO of A.J. Dwoskin & Associates, Inc. Through their foundation, they funded The Greater Good Movie, giving $25,000 to the project in 2010. Two years ago, they made two donations to the American Foundation for University of British Columbia, academic home to Shaw and Tomljenovic. One contribution, for $10,000, was just for “general expenses”. The more significant donation was for lab costs for the “Aluminum Toxicity Project”, for which they donated $125,000. This is in addition to approximately $200,000 for NVIC.”

That’s a lot of cash. No wonder all that anti-vaccine propaganda can be put up in all sorts of places. And they call me the shill? I wonder what else you could buy with that kind of money? As for the Seltz Foundation? I have nothing on it. I couldn’t find anything online. So I leave it up to my wonderful readers to go seek the truth and bring it forth. I suggest googling “Seltz Foundation” along with “antivaccine”.

Finally, one thing you need to note about all the authors is that none of them, not one, has any training in epidemiology or biostatistics, and they all have an anti-vaccine bone to pick with “the man.” All of which will become painfully clear as we dive into the paper. Get your galoshes on, because we’re about to walk on some crap.


Let’s look at the data sources.

So they took children listed in the Vaccine Safety Datalink (VSD) and found their “cases,” children with neurodevelopmental disorders, by looking at ICD-9 codes. Then they state this:

“Additionally, to allow for a potential cause and effect relationship between exposure and outcome, only individuals diagnosed with the outcomes examined following administration of the vaccines under study were included in the present analyses as cases.”

You may think that this is sound reasoning. After all, it looks like the child was diagnosed after they were immunized, right? So it would stand to reason that the disorder came after the vaccine, right? Well, no. Only the diagnosis was made after the vaccination. We have no information, none, on the onset of symptoms of the neurodevelopmental disorder. As with Brian S. Hooker’s other paper, children may have been immunized in greater proportion if they had a disorder, as part of their early education/intervention program, or simply because children with such disorders are at much higher risk of complications from influenza and other vaccine-preventable diseases.

If you look at the first table of data, you will see that some cases were diagnosed very, very early because the conditions they are diagnosed with manifest themselves very early, like “failure to thrive” or “cerebral degeneration.” But in that same table we see one of the biggest problems in this analysis, something I will discuss in the methods analysis. There are huge differences in the proportions of males and females in some categories of disorders, while in others the male-to-female ratio is almost 1:1. Yes, there is some evidence that some of these disorders affect males more than females, and the authors are going to have to adjust for this in the analysis. Will they? Remember that the Geiers made a lot of money treating kids with autism based on a flawed theory that testosterone (a hormone found in males in greater concentrations than in females) binds to mercury. So we’ll see.

For their controls, they picked children in the database who were as old as the average age of the case children, plus two standard deviations. That would have been great, except that they treated a couple of groups differently:

“The only exceptions were for the specific diagnoses of tic disorder and hyperkinetic syndrome of childhood, where controls had to have been continuously enrolled from birth until the mean age of initial diagnosis of the specific diagnosis being assessed plus the standard deviation of the mean age of initial diagnosis of that specific diagnosis. The length of follow-up was shortened for those two outcomes, because the mean ages of initial diagnosis for tic disorder (5.1 years-old) and hyperkinetic syndrome of childhood (5.7 years-old) were so long that, because of the time limitations on the years of records that were available for study, requiring continuous enrollment from birth until the mean age of initial diagnosis of the specific diagnosis being assessed plus twice the standard deviation of mean age of initial diagnosis would have resulted in virtually no controls available for comparison to those cases.”

They did this to themselves by chasing causality. Had they been looking for simple associations, as you should when you’re giving the data a first go-around, they wouldn’t have needed to be so specific about the age of diagnosis. But whatever.

One last thing in the choosing of cases and controls… They only chose cases and controls who were in the database from 1991 to 2000. If this seems weird to you, it should. We know that thimerosal was removed from childhood vaccines beginning around 2001–2002. If these authors were really interested in the vaccine-everything link, they would have pulled up cases and controls post-2000 and then adjusted for date of vaccination (pre-2000 vs. post-2000) to see if their results hold water. It’s a pretty simple thing to do, especially since they already had the data available to them.
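If you want a sense of how trivial that check would have been, here is a minimal sketch, on simulated data with entirely hypothetical column names, of the kind of era-stratified logistic model I mean:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for a VSD pull; every column name and number is made up.
rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({
    "hg_dose": rng.choice([0.0, 12.5, 25.0, 37.5], n),  # µg organic Hg, first 6 months
    "post2000": rng.integers(0, 2, n),                  # 1 = record from the post-thimerosal era
})
df["case"] = rng.integers(0, 2, n)                      # placeholder outcome

# The interaction term asks whether the dose-response differs between eras.
# A real thimerosal effect should collapse toward OR = 1 in the post-2000 stratum.
model = smf.logit("case ~ hg_dose + post2000 + hg_dose:post2000", data=df).fit(disp=0)
print(np.exp(model.params))   # ORs; hg_dose:post2000 is the era-modification term
```

One extra covariate and one interaction term; that’s the whole robustness check they skipped.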

How did they determine the exposure?

“The vaccine file for cases and controls was then reviewed to determine the exact dates of HBV administration. Those cases and controls receiving no HBVs were also included in the present study.”

As I’m sitting here writing this, it just occurred to me that their obsession with causality has introduced a bias into the analysis, one that has probably ruined the whole thing a priori. What happens when you exclude children who were diagnosed before being vaccinated? How would those observations affect the results? I can’t tell you without the data in front of me, but it is not beyond the realm of possibility that the association between the Hep B vaccine and the cases or controls gets terribly confounded by doing this. Suppose, for example, that many children were diagnosed before being vaccinated; that would suggest the vaccine had nothing to do with their diagnosis. But by doing what these researchers did, making sure the diagnosis was made after the vaccine, they have pretty much assured that almost EVERYONE who is a case was vaccinated. And you can see it in table 2. The smallest groups of cases are among the non-vaccinated.
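To make the problem concrete, here is a minimal simulation, with every number invented, in which the vaccine truly has no effect, yet applying the “diagnosis must follow vaccination” rule to cases only pushes the estimated odds ratio above 1:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500_000

# Age at first HBV dose (years): most children early, some late, some never.
u = rng.random(n)
vax_age = np.where(u < 0.80, rng.uniform(0.0, 0.5, n),
          np.where(u < 0.95, rng.uniform(0.5, 6.0, n), np.inf))
exposed = vax_age < 0.5                     # thimerosal-HBV in the first 6 months

# Diagnosis is generated independently of exposure, so the TRUE OR is 1.0.
diagnosed = rng.random(n) < 0.02
dx_age = rng.uniform(0.1, 6.0, n)

# Geier-style rule: a diagnosed child counts as a case only if the diagnosis
# came after vaccination (never-vaccinated cases are kept). Controls are
# never filtered this way.
case = diagnosed & ((dx_age > vax_age) | np.isinf(vax_age))
control = ~diagnosed

a, b = (case & exposed).sum(), (case & ~exposed).sum()
c, d = (control & exposed).sum(), (control & ~exposed).sum()
print(f"estimated OR = {(a * d) / (b * c):.2f}  (true OR = 1.00)")
```

The rule mostly throws out would-be unexposed cases (children vaccinated late, after their diagnosis), so the exposed-to-unexposed ratio among cases gets inflated.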


They didn’t do this for the controls. Remember, they picked controls based on age and not having any of the disorders. Not picking your cases and controls at random, or with the same rules, introduces more bias. I mean, seriously, at this point I feel like I should just stop. The design of the study is, in my most humble opinion, flawed.

Like, really flawed.

What were the methods?

Alright, so they collected all these data and ran some fancy biostatistics on them. Here’s what they did:

“The logistic regression function was employed for each of the case versus control comparisons examined to determine the odds ratio (OR) per µg organic-Hg from T-HBVs administered within the first 6 months of life. Additionally, the data were separated by gender, and the logistic regression function was employed for each of the case versus control comparisons examined to determine the OR per µg organic-Hg from T-HBVs administered within the first 6 months of life.”

You know what? I’m going to stop right there. I’m going to stop because my head is going to explode. Why? Really simple. First, they looked at exposure to the Hep B vaccine within the first six months of life. But they picked their cases and controls at all ages (because of diagnosis, I guess). This is what we in the biz call immortal person-time. They picked cases as old as 5 or 6, judging by their data in Table 1. Why, if they were only looking for effects of the pre-6-months vaccines? Well, because that is the only way you can find more cases. In essence, if a child is going to be diagnosed with a neurodevelopmental disorder, they will be more likely to have been diagnosed the older they are.
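A toy calculation, with an invented diagnosis rate, shows why that matters: under any constant rate of diagnosis, the probability of having been diagnosed at all climbs with follow-up time, so children followed longer mechanically contribute more cases:

```python
import numpy as np

hazard = 0.05   # hypothetical: diagnoses per child-year
for years in (1, 3, 6):
    print(f"followed {years} yr: P(ever diagnosed) = {1 - np.exp(-hazard * years):.1%}")
# followed 1 yr: 4.9%; 3 yr: 13.9%; 6 yr: 25.9%
```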

Alright, I won’t stop. I’ll keep going a little longer.


Second, they didn’t need to separate the data by gender… if they were doing a proper logistic regression. The thing about logistic regression is that it gives you adjusted odds ratios (or relative risks, or risk differences, with other models and study designs) if you do it properly. Why they separated the data by gender and then ran the regression again is beyond me… And it’s beyond a couple of people who do biostats for a living.
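For what it’s worth, here is a minimal sketch, on simulated data with entirely made-up names and numbers, of how you handle sex inside a single model instead of re-running the regression on split data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated toy data; every name and number is invented for illustration.
rng = np.random.default_rng(7)
n = 10_000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "hg_dose": rng.choice([0.0, 12.5, 25.0, 37.5], n),   # µg organic Hg
})
# The outcome depends on sex but NOT on dose, so the adjusted dose OR should be ~1.
p = 1 / (1 + np.exp(-(-3.0 + 0.8 * df["male"])))
df["case"] = (rng.random(n) < p).astype(int)

# One model: "male" adjusts for sex, and hg_dose:male is the formal test of
# whether sex modifies the dose effect. No splitting of the data required.
model = smf.logit("case ~ hg_dose + male + hg_dose:male", data=df).fit(disp=0)
print(np.exp(model.params))    # adjusted ORs, including the interaction term
```

If the interaction term is significant, you report sex-specific ORs from that one model; if not, you report the adjusted OR and move on. But that was not all: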

“Furthermore, the Fisher’s exact statistical test was utilized for each of the ND cases versus control comparisons examined to determine the discrete OR for exposure to 12.5 µg organic-Hg, 25 µg organic-Hg or 37.5 µg organic-Hg from T-HBVs in comparison to 0 µg organic-Hg from HBV or no HBVs administered within the first 6 months of life.”

Why do Fisher’s when you’re already doing logistic regression? You don’t need to.
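For the curious, this is everything Fisher’s exact test buys you on a collapsed two-by-two table (counts invented): one crude odds ratio and a p-value, nothing the regression wasn’t already providing, and with no adjustment for anything:

```python
from scipy.stats import fisher_exact

# Hypothetical counts: one dose level (25 µg) versus 0 µg / no HBV.
table = [[40, 160],    # cases:    exposed, unexposed
         [30, 170]]    # controls: exposed, unexposed
print(fisher_exact(table))   # a single crude OR and its p-value
```

But, still, they did, and what did they find?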

Spoiler alert! They found an association!

When you look at table 3, the thing that must stand out to you is the very, very, very tight confidence intervals around their odds ratios. Even if the design of the study were sound, even if there weren’t so much bias in how they picked cases and controls, even with all that… confidence intervals that tight should raise red flags.

Confidence intervals that tight tell you that either your sample size was huge or something in your statistical analysis was off. Can it happen? Yes, it can happen. But look at how close to 1.0 and how tight these things are: 1.05-1.09 and 1.04-1.05. And isn’t it funny that running the regression without females raises the odds ratio for males so much but leaves it statistically non-significant for females? That would tell us that gender was an effect modifier of the association between thimerosal exposure and neurodevelopmental disorders… IF THE DESIGN WERE EPIDEMIOLOGICALLY SOUND TO BEGIN WITH.
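As a sanity check, here is the standard Woolf (log) method for a 95% confidence interval around an odds ratio, run on invented counts; to get an interval as narrow as 1.04-1.05 around an OR barely above 1.0, the cell counts have to be enormous:

```python
import numpy as np

def or_with_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-scale) 95% confidence interval."""
    or_est = (a * d) / (b * c)
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    return or_est, np.exp(np.log(or_est) - z * se), np.exp(np.log(or_est) + z * se)

# Same OR of 1.10; the cells of the second table are 100 times larger.
print(or_with_ci(110, 100, 100, 100))          # wide CI, crosses 1.0
print(or_with_ci(11000, 10000, 10000, 10000))  # razor-thin CI around 1.10
```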

What does it really mean?

What it means is that this study, like the last study of Brian S. Hooker’s that I reviewed, is not epidemiologically sound, in my opinion. It means that authors who have severe conflicts of interest — with everything to lose if the vaccine-autism link is, as it has been, proven to be false — came up with different ways of picking cases and controls. You never do that. Your cases and controls must be selected by the same rules, differing in nothing except their disease state. I think that their apparent obsession with causality poisoned the whole thing.

Of course, I could be wrong.

I’m willing to admit it if I’m wrong, and I invite any of the authors to prove me wrong. Show me that there was no bias in the design, and tell me how you justify those tight confidence intervals. Heck, tell me why you did logistic regression AND Fisher’s exact test. But, most of all, tell me why you keep finding spurious associations and seemingly torturing the data until they tell you what you want to hear.


I'm a doctoral candidate in the Doctor of Public Health program at the Johns Hopkins University Bloomberg School of Public Health. All opinions posted here are my own, of course, and they do not necessarily reflect the opinions of my school, employers, friends, family, etc. Feel free to follow me on Twitter: @EpiRen
