I’m on my way right now — literally on a plane as I write this — to Minneapolis for a gathering of people who want to do disease surveillance a little differently. Seriously, the guest list reads like a who’s who of the epidemiological world… Well, the world I’m connected to. The “old guard” of epidemiology is not on that list, and I’m not surprised. The way we want to do epidemiology deviates a lot in its methods from the way epidemiology is done now, and that’s not a bad thing.
Epidemiological surveillance is based on several tenets. First, you systematically collect and analyze data. You can do this a number of ways, but the “old guard” demands that the data be validated, standardized, and that it meet a set of criteria established a while ago. This in itself is not a bad thing. You want data to be of good quality; otherwise you end up drawing the wrong conclusions from bad sources. That gets you in trouble.
Another tenet is that you disseminate the information and the interpretations of the information to everyone who needs to know, or to everyone that needs to act… Or just to everyone. This is why you see flu reports every week posted on the websites of a lot of public health agencies. They’re trying to tell us what the influenza situation is and what we need to do. There are more complex reports that go to decision-makers, like the reports that tell them how many ventilators are available for people with severe pneumonia, or how many ambulances are available to go get people.
The final tenet is that you need to do something with the data. There needs to be some action that improves the situation being assessed in your surveillance, or you work on getting markers to look better in future surveillance reports. Without action, all of that data collection, analysis and dissemination is a waste of time. Believe me when I tell you that there were plenty of times when we discovered something doing epidemiological surveillance and the decision-makers (mostly politicians) chose the wrong course of action, or did nothing at all.
The conference I am on my way to is being billed as “EpiHack”:
“EpiHacks have so far brought together over 176 individuals (and counting!) from all corners of the world — Asia, North America, Europe, Africa, Latin America and beyond — to Southeast Asia. Experts from diverse backgrounds have been invited to work across sectors, fusing the fields of animal, human and environmental health with technology and digital solutions. EpiHacks are non-conventional events that require all participants to collaborate, co-create, unfold and explore ideas and be action-oriented. At every EpiHack each participant works closely with the team and holds his or her own unique, special role.”
Basically, we’re going to brainstorm for a couple of days on new ways to do influenza surveillance.
When we didn’t know what caused influenza, all we had to go on was syndromic surveillance, or looking for and counting the number of people with a specific set of signs and symptoms of influenza. Laboratory tests for influenza came along, and we started to rely on them to call cases “confirmed” versus “suspect” or “possible.” Then the lab tests became “rapid” tests — those that you can do in a medical office or in an emergency room — and counting cases became even more accurate.
However, with these systems, you’re relying on people with influenza to have some sort of contact with healthcare. You don’t count cases where the person stayed home or didn’t have access to healthcare… Or the “crunchy” people who don’t believe in medicine and go to traditional healers, naturopaths, chiropractors, and/or other sorts of quacks. How do you count them?
To try to answer that question, a group of epidemiologists in Australia came up with “FluTracking.org”, a web-based system of surveying people every week, asking them if they had signs and symptoms consistent with influenza. The system was a success. It was able to detect increases and decreases in influenza activity at the beginning and end of the influenza season, respectively. What was even better was that the influenza indicators rose and fell sooner than what was being seen in traditional systems. It makes sense, because you get sub-clinical influenza before you get clinical influenza… You feel bad before you go to the doctor, not the other way around.
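To make the idea concrete, here is a minimal sketch of what that kind of participatory signal looks like: weekly counts of symptomatic respondents turned into a rate, with a crude flag for weeks where activity jumps sharply. The numbers and the 1.5× threshold are entirely made up for illustration; this is not FluTracking’s actual method.

```python
# Illustrative sketch of a participatory ILI signal: each week we have
# (respondents reporting symptoms, total respondents), and we flag weeks
# where the rate jumps sharply over the prior week. Data and threshold
# are hypothetical.

def ili_rate(symptomatic, respondents):
    """Fraction of survey respondents reporting ILI symptoms that week."""
    return symptomatic / respondents if respondents else 0.0

def flag_rises(weekly_counts, threshold=1.5):
    """Return indices of weeks where the ILI rate rose by at least
    `threshold` times the previous week's rate -- a crude early warning."""
    rates = [ili_rate(s, n) for s, n in weekly_counts]
    return [i for i in range(1, len(rates))
            if rates[i - 1] > 0 and rates[i] / rates[i - 1] >= threshold]

# (symptomatic, respondents) per week -- hypothetical season ramp-up
weeks = [(12, 1000), (14, 980), (30, 1010), (55, 990), (52, 1000)]
print(flag_rises(weeks))
```

Because respondents report symptoms before (or instead of) seeing a doctor, a signal like this can lead clinical surveillance by days, which is the point the paragraph above makes.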
When I started working at the Maryland Department of Health (and Mental Hygiene), one of my first tasks was to get the influenza surveillance system in working order. It wasn’t bad before I got there, but it only relied on physicians reporting cases of influenza-like illness (ILI) and on the influenza tests carried out by the state public health laboratory. My first idea was to use rapid influenza testing to boost the number and proportion of true influenza cases that we were detecting. I recruited a number of emergency departments and urgent care centers to report the number of rapid influenza tests that they performed from week to week, and how many of those were positive.
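The weekly tally described above boils down to a simple aggregation: each reporting site sends the number of rapid tests performed and the number positive, and you compute a percent-positive across sites. This sketch uses hypothetical sites and numbers, not the department’s actual reporting format.

```python
# Minimal sketch of aggregating weekly rapid-test reports into a single
# percent-positive figure. Each tuple is (tests performed, tests positive)
# from one reporting site; all figures are hypothetical.

def percent_positive(reports):
    """Aggregate (performed, positive) tuples into a percent-positive."""
    total = sum(performed for performed, _ in reports)
    positive = sum(pos for _, pos in reports)
    return 100.0 * positive / total if total else 0.0

week_reports = [
    (120, 18),  # hypothetical emergency department
    (45, 9),    # hypothetical urgent care center
    (80, 4),    # hypothetical clinic
]
print(round(percent_positive(week_reports), 1))
```

Tracking that one number week over week is what lets you see the proportion of true influenza among tested patients rise as the season takes off.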
The following influenza season, I heard about the Australian project and decided to contact them. A few (very expensive) phone calls later, I had enough information on their project, and I convinced my bosses at the health department to let me try something like it in Maryland. So we created the “Maryland Resident Influenza Tracking Survey” or MRITS. It came just in time because the 2009 influenza pandemic came that flu season, and we used the MRITS as a model to keep track of who was sick but not sick enough to go to the doctor. (The answer to that was the older age groups. The younger age groups got sick and really felt it while the older adults didn’t get sick or didn’t get too sick if they did.)
Shortly after all that happened, Google came up with “Google Flu Trends,” which looked at influenza-related queries on Google to determine whether there was excess influenza activity in a geographic location. Like other alternative systems, it worked pretty well. Shortly after that, “Flu Near You” came online. That system was a lot like the MRITS in Maryland, but it was nationwide in the United States. While MRITS has remained pretty much the same, the people who run “Flu Near You” have a good source of funding and plenty of people working on it. So they have expanded to looking at other syndromes, using a mobile app, and running a very functional website.
Other ideas that came and went included using Facebook and Twitter status updates to determine whether something was happening. For example, if a lot of people started tweeting or posting on Facebook that they were feeling sick, had a fever, etc., the system would alert epidemiologists that something was happening. Then we could take some action, though what that action would be, exactly, was up in the air.
The EpiHack conference is going to be about more than figuring out new ways to do surveillance for influenza and other respiratory diseases like MERS. We’re also going to talk about taking action… At least I hope we do. Otherwise, as I’ve mentioned, the whole thing is useless. So I’ll let you all know how it goes.
(We’ve started our descent.)
René F. Najera, DrPH
I'm a Doctor of Public Health, having studied at the Johns Hopkins University Bloomberg School of Public Health.
All opinions are my own and in no way represent anyone else or any of the organizations for which I work.
About History of Vaccines: I am the editor of the History of Vaccines site, a project of the College of Physicians of Philadelphia. Please read the About page on the site for more information.
About Epidemiological: I am the sole contributor to Epidemiological, my personal blog to discuss all sorts of issues. It also has an About page you should check out.