Latest News From Health Monitoring
Keeping you up to date on recent initiatives, software enhancements, and the national conversation about public health
Andrew Walsh, PhD, is Health Monitoring Systems’ resident expert in public health. A Johns Hopkins graduate, he attended this year’s annual International Society of Disease Surveillance conference and has this tongue-in-cheek report. — kjh
Who doesn’t love a good tag cloud these days? If there had been a tag cloud for the 2010 ISDS conference, the largest words would certainly be “meaningful use”, “social network”, and “stakeholder” in some order. So because no one demanded it, here are my musings on these topics, plus a few more that I thought should have been given higher billing.
Meaningful use – Maybe it’s just the timing of the conference, but these conversations reminded me a little of my kids when the toy catalogs come at Christmastime. There was a sense of “You mean we get to ask for stuff and someone will actually give it to us?!?”-level excitement, a belief that they’d get everything on their list, and very little discussion of whether they’d be better off if they actually did.
Out of all the discussion, I thought the most interesting comment was that many providers and facilities are being encouraged to go after public health reporting options with the expectation that public health won’t be able to receive the data, an outcome that lets the senders off the hook. That seems like an important issue, but it was essentially a throwaway comment; there was no discussion on how to be ready to receive the data. Now maybe that’s because the various health departments are confident that they are ready, but I still thought it would have come up more.
Also, after two talks from the ISDS Meaningful Use Workgroup, I’m still confused about what the purpose of their document is. As I understand things, it can’t become part of the federal requirements, at least for Stage I. So is it meant to be a guideline for states when they are deciding what they will actually accept? Or is it just to give providers and facilities a notion of what syndromic surveillance is all about so they know what will be done with the data and what data actually needs to be sent? I would love to be enlightened.
Social Network – It turns out that this meant different things to different people, which led to an amusing panel discussion on “Harnessing social networks for public health surveillance” that wound up being something of a non sequitur since not all the panelists had the same interpretation. There’s the original notion of a social network as a set of people and the physical contacts that exist between them, which can be used to understand the spread of certain diseases (generally less contagious diseases which require significant contact that can actually be quantified). This overlaps somewhat with the second notion of a social network like Facebook where the connections now exist in a virtual realm, but might also give some information about who interacts with whom in physical space. But then some people are interested in Facebook and Twitter because they are a place where people talk about being sick, which might be another indicator of disease prevalence. And finally, there were social networking platforms like Facebook but specifically set up for people to post data on their own health and talk to other people about specific health-related issues. Talk about overloaded terminology; maybe next year there will be a panel discussion about vectors.
Stakeholder – This word was used constantly, and yet not one presenter made the obvious Buffy the Vampire Slayer joke; I’m not sure if I’m pleased or disappointed.
And now, EpiSanta, I’ve been a good boy, so here is my list of things I’d like to hear more about next year.
Validation – A lot of people were building quantitative, predictive models from data, and many of them paid lip service to validation as a good thing to do and something they hoped to get around to, but very few actually did anything about it. When everyone works in their own corner on their own dataset, overfitting is a major concern. There was even a prime example of it at last year’s conference – someone from Google Flu gave a plenary talk in which they revealed that their trend line showed no signal from H1N1 flu in the Spring of ’09. Why? They didn’t say in exactly these terms, but basically they had overfit to keywords related to “normal” flu patterns. If we’re going to be in the business of making predictions, we need to pay more than just lip service to seeing if those predictions bear any resemblance to reality.
Standard data sets – Ostensibly, much of the research in syndromic surveillance is on detection algorithms. Algorithms are the domain of computer science, and in computer science they compare algorithms on the same data so that they actually have some basis for making comparisons and judgments. If we’re going to focus on algorithms, perhaps we should borrow more ideas from the folks who lead that field. I heard several pleas from public health practitioners for help in assessing the value of using one algorithm over another; such assessments will never be possible until we start making apples-to-apples comparisons.
Something other than the flu! – Everywhere I looked, someone was doing something with the flu – modeling it, detecting it, predicting it, DiSTRIBuTing it. And all of that is perfectly understandable since it is a major public health concern. But it seems like any time a new data stream is created/found/summoned from the ether, we see if it predicts the flu, and when it does we declare victory and move on. And that just makes me wonder – Why do we need yet another data source that shows the same trends? And if everything predicts the flu, what does that tell us about the bar we’ve set?
Case in point – everyone was agog over a research talk about using Twitter to track the flu season (Zut alors, surely such a thing cannot be done!). Given the response, you would have thought we had seen lead turned to gold before our very eyes, and yet all they had was one year of data that was so heavily smoothed that it could have been approximated quite nicely with a second-order polynomial. Then they showed their fitted curve, which was clearly a higher-order polynomial that had more structure than their data (overfitting, anyone?) and declared victory without quantifying the fit in any way or doing any serious validation.
(And no, I’m not bitter at all that everyone thought Mr. Twitter was both brilliant and hilarious, while my abstract was summarily rejected for lack of rigor.)
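For anyone who wants to see the overfitting complaint made concrete, here is a toy sketch. Everything below is synthetic data invented for illustration (it has nothing to do with the actual Twitter talk): a model that simply memorizes its training weeks scores a perfect zero error in-sample, while its error on held-out weeks tells the real story.

```python
# Toy illustration of why held-out validation matters.
# All "flu counts" here are synthetic: a smooth seasonal bump plus noise.
import math
import random

random.seed(1)
weeks = list(range(52))
counts = [100 + 80 * math.exp(-((w - 10) ** 2) / 50) + random.gauss(0, 10)
          for w in weeks]

train = weeks[0::2]  # "fit" on even weeks...
test = weeks[1::2]   # ...validate on the held-out odd weeks

def memorize(w):
    """An 'overfit' model: just return the nearest training observation."""
    nearest = min(train, key=lambda t: abs(t - w))
    return counts[nearest]

def smooth(w, radius=4):
    """A simple smoother: average the training points in a local window."""
    window = [counts[t] for t in train if abs(t - w) <= radius]
    return sum(window) / len(window)

def rmse(model, subset):
    """Root-mean-square error of a model over a set of weeks."""
    return math.sqrt(sum((model(w) - counts[w]) ** 2 for w in subset) / len(subset))

print(f"memorize  train RMSE: {rmse(memorize, train):5.1f}  test RMSE: {rmse(memorize, test):5.1f}")
print(f"smoothed  train RMSE: {rmse(smooth, train):5.1f}  test RMSE: {rmse(smooth, test):5.1f}")
```

The memorizing model looks flawless if you only report in-sample fit, which is exactly what happens when a fitted curve is shown against its own training data with no quantified validation.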
We were discussing meaningful use in the office today and hit upon the perfect description for what hospitals want to hear from public health: “Easy Check Off.” They want a quick win, a clear victory, a slam dunk. How can we make that happen?
Here are some initial thoughts on what can be done.
First, public health needs to cut through any confusion. Public health should work with the hospital association to let healthcare providers know what its strategy is for syndromic surveillance, laboratory reporting, and immunization registries. This can be a little tricky since some data collection programs are run by state health departments and others by county health departments. Clearly outlining how this works in a letter to each healthcare organization can go a long way.
The next obvious issue they will need to address is, “What do I have to do on my part?” I can’t answer for other systems, but for the EpiCenter system this is pretty straightforward: contact Health Monitoring Systems, implement a standard HL7 feed (or file transfer) over a secure connection (most likely a VPN), and the process is done. Anecdotally, this can take as little as four hours.
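For the technically curious, here is a rough sketch of what such a feed looks like at the message level. The segment layout, field values, and the “EPICENTER”/“HMS” receiver names below are all illustrative assumptions, not EpiCenter’s actual interface specification; real feeds follow the implementation guide worked out during onboarding. HL7 v2 messages are pipe-delimited segments, typically wrapped in MLLP framing bytes for transport over the VPN.

```python
# Hypothetical sketch of an HL7 v2 registration message with MLLP framing.
# Segment contents and receiver names are invented for illustration only.
MLLP_START = b"\x0b"    # vertical tab opens an MLLP frame
MLLP_END = b"\x1c\x0d"  # file separator + carriage return closes it

def build_adt_message(facility, patient_id, chief_complaint):
    """Assemble a bare-bones ADT^A04 (patient registration) message."""
    segments = [
        f"MSH|^~\\&|{facility}|{facility}|EPICENTER|HMS|20101215120000||ADT^A04|MSG0001|P|2.3.1",
        f"PID|1||{patient_id}",
        "PV1|1|E",  # E = emergency encounter
        f"DG1|1||^{chief_complaint}",
    ]
    # HL7 v2 separates segments with carriage returns
    return "\r".join(segments)

def mllp_frame(message):
    """Wrap an HL7 message in MLLP start/end bytes for socket transmission."""
    return MLLP_START + message.encode("ascii") + MLLP_END

framed = mllp_frame(build_adt_message("GENHOSP", "12345", "influenza-like illness"))
```

In practice the hospital’s interface engine emits messages like this from its ADT system, so the work on the facility side is largely routing an existing feed to a new endpoint, which is consistent with the four-hour anecdote above.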
We like to cite the time we visited a facility in northwest Ohio. The morning meeting went very well. By the time we were back in our Pittsburgh office the facility’s IT staff had contacted us and were ready to test.
The last question is going to be, “How do I know I will get credit for this?” That applies to organizations currently providing data as well as ones considering it. I recommend that either public health or the vendor supply a letter to the facility indicating that it is “in compliance,” “actively sending data,” or “successfully sent a test message.” This gives the hospital documentation showing it is in compliance.
Hospitals are evaluating their strategies on how to achieve meaningful use criteria. Part of this evaluation is the three optional criteria geared toward public health. The question for public health to answer is how to get hospitals and other eligible providers to select these criteria.
The introduction of meaningful use, the impact of HITECH, and the push for health information exchange have made healthcare IT a dynamic, challenging environment. IT leaders are looking for strategies to comply with these initiatives, understand the impact on clinical care, and ensure full reimbursement. No small feat. Talking to healthcare IT personnel over the last few months yielded a few insights that benefit public health.
The single biggest challenge is bridging the communication gap. For most of the meaningful use criteria, healthcare has to demonstrate that their system complies. This means using a certified software system or implementing functionality that can be certified and approved. Working with a public health data system doesn’t follow the same process. That process needs to be made clear to healthcare IT if they are to select the optional criteria.
–kjh
As reported by Modern Healthcare, RWJF has issued its 2010 update on emergency health preparedness. In contrast to the Modern Healthcare piece, RWJF reports the highest ever scores, but progress is threatened by budget cuts. Judge for yourself. –kjh
In the eighth annual Ready or Not? Protecting the Public from Diseases, Disasters, and Bioterrorism report, 14 states scored nine or higher on 10 key indicators of public health preparedness. Three states (Arkansas, North Dakota, and Washington State) scored 10 out of 10. Another 25 states and Washington, D.C. scored in the 7 to 8 range. No state scored lower than a five.
The scores reflect nearly 10 years of progress to improve how the nation prevents, identifies, and contains new disease outbreaks and bioterrorism threats and responds to the aftermath of natural disasters in the wake of the September 11th and anthrax tragedies. In addition, the real-world experience responding to the H1N1 flu pandemic—supported by emergency supplemental funding—also helped bring preparedness to the next level.
However, the Ready or Not? report, released today by the Trust for America’s Health (TFAH) and the Robert Wood Johnson Foundation, notes that the almost decade of gains is in real jeopardy due to severe budget cuts by federal, state, and local governments. The economic recession has led to cuts in public health staffing and eroded the basic capabilities of state and local health departments, which are needed to successfully respond to crises. Thirty-three states and Washington, D.C., cut public health funding from fiscal years (FY) 2008-09 to 2009-10, with 18 of these states cutting funding for the second year in a row. The report also notes that just eight states raised funding for two or more consecutive years. The Center on Budget and Policy Priorities has found that states have experienced overall budgetary shortfalls of $425 billion since FY 2009.
In addition to state cuts, federal support for public health preparedness has been cut by 27 percent since FY 2005 (adjusted for inflation). Local public health departments report losing 23,000 jobs—totaling 15 percent of the local public health workforce—since January 2008. The impact of the recession was not as drastically felt by the public health workforce until more recently because supplemental funds received to support the H1N1 pandemic flu response and from the American Recovery and Reinvestment Act have almost entirely been used.
Continue reading here…
Get the full text of the report here…
A recently published report by the Robert Wood Johnson Foundation finds that what public health contends with every day, limited budgets and resources, has slowed the adoption of public health data exchange. — kjh
by Paul Barr
The electronic exchange of health information is targeted as needing improvement in a new public health preparedness report from Trust for America’s Health, funded by the Robert Wood Johnson Foundation.
In a state-by-state analysis looking at 10 indicators of emergency preparedness, seven states’ health departments were identified as not being able to send and receive health information electronically to providers and community health centers. The 52-page report, “Ready or Not? Protecting the Public’s Health from Diseases, Disasters and Bioterrorism,” notes that as seen during the H1N1 influenza outbreak, “this type of communication is crucial to ensure public health departments have an accurate picture of the on-ground events and that healthcare practitioners are given the most up-to-date, accurate information.”
In addition, 10 states do not have a health department that has an electronic syndromic surveillance system—which uses data that precede diagnosis—that can report and exchange information. Better public health data collection and management also was the subject of a recent report from the Institute of Medicine.
The report notes that budget cuts at federal, state and local levels are threatening the country’s ability to respond to public health emergencies.
Read original here…
“What impact will meaningful use have on the EpiCenter application? Has there been exploration to determine if the system will meet the criteria for meaningful use?”
— Julie, Kane County Illinois
EpiCenter is a syndromic surveillance system used by state and local public health departments. The optional meaningful use criteria leave it to public health to determine if and how syndromic surveillance is conducted in their region.
Certification applies to electronic health records (EHR). There is a lot of discussion about this topic, as well as the process and meaning of certification. There are multiple stages of certification that apply to EHRs and those stages align with the implementation of Meaningful Use criteria.
Since EpiCenter is not an EHR, the Certification Commission for Health Information Technology (CCHIT) certification process does not apply to it.
Our mission: Provide services that focus healthcare resources on existing and emergent threats to community health.
Our customers: State and local public health departments and health systems. We currently serve Connecticut, New Jersey, Pennsylvania, Ohio, Wyoming, and several counties in California, covering a total of more than 40 million people.
What we do: Monitor real-time health-related data for community health indicators. We collect data from nearly 600 hospitals and 3,600 ambulatory systems.
Support email:
support@health-monitoring.com
Emergency support: 1 (844) 231-5776
Additional guidance:
EpiCenter User Manual
700 River Ave., Suite 130
Pittsburgh, PA 15212
Corporate office: 1 (412) 231-2020
General calls: 1 (844) 231-5774
Emergency support: 1 (844) 231-5776