Tapping big data for early identification of preventable conditions

By Roger Foster

The cost to the U.S. healthcare system from preventable conditions and avoidable care has been estimated in the range of $25-50 billion annually. Preventable conditions are a significant component of the estimated $600-850 billion in excess healthcare spending, ultimately increasing costs and decreasing the overall quality of public health.

Timely access to outpatient services can prevent individuals with long-term medical problems from needing immediate acute care (usually through emergency room services). These preventable conditions include uncontrolled diabetes, dehydration in elderly patients, asthma, and inappropriately managed hypertension.

[Part 5: How the big data tools ACA, HITECH enable will improve public health.]

Often, preventable conditions are not managed properly, and patients are not always clear on the health consequences of their behavior. Educating a population of more than 120 million covered lives on government-backed health insurance programs about managing their health is a national challenge. Enabling medical professionals to track and change patient behavior is critical to the long-term improvement of healthcare delivery. This is where big data can serve as a major contributor to improving health and avoiding preventable conditions by tracking patient-level data.

The patient of the future
For individuals with chronic disease (or those simply interested in maintaining their own health), there is a growing movement toward digital self-quantification. These patients are taking their medical care into their own hands.

Take Larry Smarr, for instance: the Internet pioneer tracks and analyzes his daily steps, sleep patterns, and heart rate during exercise. Home health services are providing similar data links for real-time vital-sign monitoring of patients. Collecting and analyzing this data from millions of people is a big data challenge.
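
As a rough illustration of the first step in that challenge, the sketch below rolls a stream of self-tracked vital-sign readings up into per-patient daily summaries. The record format and field names are assumptions invented for illustration; at population scale, the same rollup would run on a distributed framework rather than in memory on one machine.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Hypothetical record format: (patient_id, ISO timestamp, metric, value).
# Real feeds from home-monitoring devices would arrive as a continuous stream.
readings = [
    ("p001", "2013-04-01T07:30:00", "heart_rate", 62),
    ("p001", "2013-04-01T18:10:00", "heart_rate", 118),
    ("p001", "2013-04-01T23:00:00", "sleep_hours", 6.5),
    ("p002", "2013-04-01T08:05:00", "heart_rate", 71),
]

def daily_summaries(records):
    """Roll raw readings up to (patient, day, metric) min/mean/max."""
    buckets = defaultdict(list)
    for patient_id, ts, metric, value in records:
        day = datetime.fromisoformat(ts).date()
        buckets[(patient_id, day, metric)].append(value)
    return {
        key: {"min": min(vals), "mean": round(mean(vals), 1), "max": max(vals)}
        for key, vals in buckets.items()
    }

for key, summary in daily_summaries(readings).items():
    print(key, summary)
```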

The Centers for Disease Control and Prevention (CDC) is using big data and electronic health records to focus on biosurveillance and disease outbreak prevention.

CDC’s BioSense program was launched in 2003 to establish an integrated national public health surveillance system for early detection and rapid assessment of potential bioterrorism-related illness. The latest version, BioSense 2.0, integrates current health data shared by health departments to provide insight on the health of communities and the country. By getting more information from data faster, local, state, and federal public health partners will be able to detect and respond to outbreaks and health events more quickly.
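
To make "detect and respond more quickly" concrete, here is a minimal sketch of the statistical aberration detection that underlies syndromic surveillance, modeled loosely on the CDC's EARS C2 method: compare today's count against a recent baseline, separated by a short guard band, and flag large deviations. The counts, window sizes, and threshold are illustrative assumptions, not BioSense's actual configuration.

```python
from statistics import mean, stdev

# Illustrative daily counts of, say, ER visits with influenza-like illness.
daily_counts = [12, 9, 14, 11, 10, 13, 12, 11, 10, 38]

BASELINE_DAYS = 7   # days used to estimate the expected count
GUARD_DAYS = 2      # gap between baseline and today (EARS C2-style lag)
THRESHOLD = 3.0     # flag counts more than 3 standard deviations high

def flag_aberrations(counts):
    """Yield (day_index, count, z_score) for days exceeding the threshold."""
    window = BASELINE_DAYS + GUARD_DAYS
    for today in range(window, len(counts)):
        baseline = counts[today - window : today - GUARD_DAYS]
        mu, sigma = mean(baseline), stdev(baseline)
        z = (counts[today] - mu) / max(sigma, 1e-9)  # avoid divide-by-zero
        if z > THRESHOLD:
            yield today, counts[today], round(z, 1)

for day, count, z in flag_aberrations(daily_counts):
    print(f"day {day}: count {count} is {z} SDs above baseline -- investigate")
```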

[Feature: A new age of CDC biosurveillance is upon us.]

Big data is beginning to play an expanded role in adverse-event monitoring for drugs and medical devices. The Food and Drug Administration (FDA) and National Institutes of Health (NIH) are focusing on the scientific research and the pre-market and post-market surveillance of promising new drugs and devices. In its “FDA Science and Mission at Risk” report, the FDA anticipated many of these challenges, detailing new data sources from emerging digital sciences, including the use of molecular data for medicine, wireless healthcare, nanotechnology, medical imaging, telemedicine platforms, electronic health records and more. These new digital data sources present a tidal wave of data and new challenges.

In addition, the FDA faces mounting pressure to react and make decisions under increasingly tight time constraints. This includes near real-time response and interactions with other agencies on a global scale to address food safety, imported products, and pesticide issues that may present serious health risks.

The FDA is stepping into the big data arena with two new projects: Janus and Sentinel. Janus focuses on collecting pre-market assessment data from drugs and medical devices, while Sentinel will focus on post-market surveillance of these products. Deploying these big data tools will help the FDA meet its expanded obligations to make quicker regulatory review decisions and effectively track product safety.
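
Sentinel's actual methods run distributed queries across partners' claims data, but the flavor of post-market signal detection can be shown with a classic disproportionality screen. The sketch below computes a proportional reporting ratio (PRR) from assumed adverse-event report counts; the numbers and the flagging rule are illustrative, not FDA policy.

```python
from math import exp, log, sqrt

# Illustrative counts from a spontaneous adverse-event report database.
# a: reports pairing the drug of interest with the event of interest;
# b: that drug, other events; c: other drugs, this event; d: the rest.
a, b, c, d = 42, 1958, 380, 97620

# PRR: how over-represented is this event among this drug's reports,
# relative to the same event's share among all other drugs' reports?
prr = (a / (a + b)) / (c / (c + d))

# Rough 95% confidence interval, computed on the log scale.
se = sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
low, high = exp(log(prr) - 1.96 * se), exp(log(prr) + 1.96 * se)

print(f"PRR = {prr:.1f} (95% CI {low:.1f}-{high:.1f})")

# A common screening rule flags PRR >= 2 with at least 3 reports as a
# signal for clinical review -- a lead to investigate, not proof of harm.
if prr >= 2 and a >= 3:
    print("signal flagged for review")
```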

Wanted: Data scientists and analytical support
This is the final article in this series on big data and its impact on public health. My goal was to show the range of big data problems facing public health agencies and to highlight some of the government agencies leading the way with projects that are, or will soon be, big data-class.

I also want to point out that there are major challenges in implementing these big data projects. Big data usually means having access to, and using, ALL the available data. Often, interesting and unexpected correlations can emerge from the analysis. But simply boiling the ocean of data in the hope of seeing what is there is usually not a successful approach. Big data analytics requires balancing a hypothesis-driven approach, which extracts meaningful results from the data, with an open exploration approach, which finds out what the data are telling us.
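
A toy example of that balance, using synthetic numbers invented for illustration: first test one pre-specified hypothesis (non-adherent patients are readmitted more often), then run an open scan for correlated features, treating anything it surfaces as a lead to validate rather than a finding.

```python
from math import erfc, sqrt
from statistics import correlation  # requires Python 3.10+

# Synthetic patient-level data (assumptions for illustration only).
adherent_readmits, adherent_n = 18, 200
nonadherent_readmits, nonadherent_n = 41, 200

# 1) Hypothesis-driven: a pre-specified two-proportion z-test of
#    "non-adherent patients are readmitted more often."
p1 = adherent_readmits / adherent_n
p2 = nonadherent_readmits / nonadherent_n
p = (adherent_readmits + nonadherent_readmits) / (adherent_n + nonadherent_n)
z = (p2 - p1) / sqrt(p * (1 - p) * (1 / adherent_n + 1 / nonadherent_n))
p_value = erfc(abs(z) / sqrt(2))  # two-sided
print(f"adherence vs. readmission: z={z:.2f}, p={p_value:.4f}")

# 2) Open exploration: scan every feature for correlation with the outcome.
#    Scanning many features inflates the false-discovery rate, so strong
#    correlations here are hypotheses for follow-up, not conclusions.
readmitted = [0, 0, 1, 0, 1, 1, 0, 1, 0, 1]
features = {
    "age":                   [44, 51, 79, 38, 82, 75, 49, 80, 42, 77],
    "prior_er_visits":       [0, 1, 3, 0, 4, 2, 1, 3, 0, 2],
    "distance_to_clinic_km": [5, 12, 8, 3, 40, 7, 9, 35, 6, 22],
}
for name, values in features.items():
    r = correlation(values, readmitted)
    if abs(r) > 0.5:
        print(f"lead: {name} correlates with readmission (r={r:.2f})")
```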

[Big data and public health, part 4: Reducing administrative inefficiencies with big data tools.]

Ownership of data is a critical issue. As detailed performance data become more open and transparent, there are bound to be some very disturbing findings. Some will be correct; others will not. Major hospitals are worried about the impact this data could have on their reputations. And as data are integrated from disparate sources, questions about data integrity and validity will arise.

Data privacy is equally critical. HIPAA and other health data regulations apply and must be followed. Large population data sets must be curated so that patient-identifiable information is de-identified, avoiding potential privacy issues.
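
As a minimal sketch of what that curation step can look like, the code below applies Safe Harbor-style transformations to a made-up patient record: pseudonymize the identifier with a keyed hash, drop names, truncate the ZIP code, and cap reported ages. The field names and key handling are assumptions, and real de-identification requires a formal HIPAA compliance review, not this sketch.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-this-key-securely"  # assumption: kept in a key vault

def deidentify(record, key=SECRET_KEY):
    """Apply Safe Harbor-style transforms to one patient record (a dict)."""
    out = dict(record)
    # Replace the direct identifier with a keyed pseudonym so records can
    # still be linked longitudinally without exposing the real ID.
    out["patient_id"] = hmac.new(key, record["patient_id"].encode(),
                                 hashlib.sha256).hexdigest()[:16]
    out.pop("name", None)                  # drop names entirely
    out["zip"] = record["zip"][:3] + "00"  # keep only the 3-digit ZIP prefix
    if record["age"] > 89:                 # ages over 89 are aggregated
        out["age"] = "90+"
    return out

record = {"patient_id": "MRN-00417", "name": "Jane Doe",
          "zip": "20147", "age": 93, "dx": "uncontrolled diabetes"}
print(deidentify(record))
```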

Right now, understanding these problems requires skilled analytical talent.

Unfortunately, most big data projects have a shortage of experienced data analysts. We will need a corps of “data ninjas” trained in big data analysis to cultivate real insight from mountains of information. Fully harnessing the power of big data to improve public health will be a long journey – and government health agency early adopters should be applauded for taking the first steps.

The first three pieces in this six-part series:

Part 3: Top 9 fraud and abuse areas big data tools can target.

Big data and public health, part 2: Reducing unwarranted services

How to harness Big Data for improving public health

Roger Foster is a Senior Director at DRC’s High Performance Technologies Group and advisory board member of the Technology Management program at George Mason University. He has over 20 years of leadership experience in strategy, technology management and operations support for government agencies and commercial businesses. He has worked big data problems for scientific computing in fields ranging from large astrophysical data sets to health information technology. He has a master’s degree in Management of Technology from the Massachusetts Institute of Technology and a doctorate in Astronomy from the University of California, Berkeley. He can be reached at rfoster@drc.com, and followed on Twitter at @foster_roger.
