How to harness big data for improving public health

By Roger Foster

Today, healthcare in the U.S. is a $2.6 trillion market. According to recent reports, at least $600 billion, and perhaps as much as $850 billion, of that spending goes to embedded inefficiencies that increase costs and decrease the overall quality of public health.

These inefficiencies include the unwarranted use of healthcare services, criminal fraud and abuse, administrative inefficiencies, provider inefficiencies including medical errors, lack of overall coordinated care, and preventable conditions/avoidable care.

As they attempt to address these challenges, government agencies are being flooded by a tidal wave of biomedical information. It is not unusual to see hospitals and hospital chains working with petabyte-scale (10¹⁵ bytes) data sets when they review all of their electronic records.

[Related: Obama plunks $200 million down on big data research.]

According to Graham Hughes, CMO of the SAS Center for Health Analytics and Insights, U.S. healthcare data sets reached 150 exabytes (10¹⁸ bytes) in 2011. For context, 5 exabytes of data would contain all words ever spoken by human beings. At this rate, Big Data will soon reach the zettabyte (10²¹ bytes) scale, and a yottabyte (10²⁴ bytes) won’t be far behind.

If properly managed, modeled and shared, however, this same wave of health data, or “Big Data,” flooding government agencies will also be the key to improving care outcomes and, ultimately, population health.

A McKinsey Global Health Institute study projected that the application of Big Data health analytics could remove $200 billion to $300 billion in cost inefficiencies from the U.S. healthcare system. In other words, creative and effective use of Big Data could reduce national healthcare expenditures by roughly 8 percent.
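
A quick back-of-the-envelope check shows how those figures relate. The snippet below is a reader's sketch against the $2.6 trillion total cited above, not part of the McKinsey analysis:

```python
# Back-of-the-envelope check: projected savings as a share of total U.S. healthcare spending.
total_spending = 2.6e12                    # $2.6 trillion market cited above
savings_low, savings_high = 200e9, 300e9   # McKinsey's projected $200-$300 billion

print(f"{savings_low / total_spending:.1%} to {savings_high / total_spending:.1%}")
# Prints "7.7% to 11.5%"; the article's 8 percent corresponds to the low end of the range.
```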

The collective actions and shared goals of government agencies have a large impact on the healthcare system. Between government investments in healthcare and the insured lives of Medicare/Medicaid recipients, federal and state workers, active duty military personnel and veterans, government agencies influence nearly 40 percent of all healthcare spending in the U.S. How government agencies approach managing their huge stores of healthcare data will be critical to improving public health and delivering better quality care with improved outcomes and lower overall cost.

[Related: ONC to stand up public-private NwHIN-Exchange as non-profit HIE in October.]

To capture even a small part of these savings, government agencies need to adopt a comprehensive approach to Big Data and health information technologies. Approaches they could use to reduce redundancies and unnecessary costs include:

  1. Unwarranted use: The Centers for Medicare & Medicaid Services (CMS) and the Agency for Healthcare Research and Quality (AHRQ) are both interested in making sure medical services actually provide benefits. Fee-for-service incentives drive behaviors that encourage multiple visits, higher fees and lower-quality service. New business models will need to financially support pay-for-performance. Big Data analytical tools are needed to build these models and show performance outcome measures.
  2. Fraud, waste & abuse: CMS faces significant challenges in tracking organized criminal gangs that are defrauding the Medicare and Medicaid systems for services never rendered. New Big Data analytical algorithms need to be deployed on the CMS claims data repository to identify fraud on a real-time or near real-time basis; a minimal illustrative sketch of this idea appears after this list.
  3. Administrative costs: Administrative inefficiencies present huge challenges for the Department of Veterans Affairs (VA), the Military Health System (MHS), TRICARE, and other government-sponsored provider/payer systems. Existing health record and billing system processes significantly drive costs for providers, insurers and employees. Big Data analysis can be applied to health records administration and billing processes to reduce the cost of bookkeeping for providers, payers, and purchasers.
  4. Provider inefficiencies: The VA and MHS, as major government providers, are interested in reducing the diagnostic and prescription errors driven by healthcare delivery systems with huge process and performance variances across sites. Clinical decision support systems need to be broadly deployed to improve care and reduce medical errors. These systems need to use population data to predict risk and personalize care; a simple risk-scoring sketch also follows this list.
  5. Lack of coordinated care: The inability to easily share medical records across care providers and institutions causes redundant costs. Effective information sharing will require medical record interoperability not only between the VA and MHS, but also with all the major third-party commercial systems. Tools are needed to allow for greater electronic record interoperability across different electronic record systems. Population data can be used to proactively predict risk so that resources can be efficiently applied to improve individual care. Additionally, patients need access to their personal health records so that they can personally participate in their healthcare treatment and buying decisions.
  6. Preventable conditions: The Centers for Disease Control and Prevention (CDC) is moving toward using Big Data and electronic health records to focus on bio-surveillance and disease outbreak prevention. The Food and Drug Administration (FDA) and National Institutes of Health (NIH) are focusing on scientific research and pre-market and post-market surveillance of promising new drugs and devices. The entire population of government-insured users of healthcare needs to be educated in improving their own healthcare management. Often, preventable conditions are not managed properly, and patients are not always clear on the health consequences of their behavior. Enabling medical professionals to track and change behavior is critical to the long-term improvement of healthcare delivery.
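
To make item 2 concrete, here is a minimal, hypothetical sketch of one simple screening idea behind such algorithms: flag providers whose billing deviates sharply from their peers. The provider IDs, dollar amounts and the two-standard-deviation threshold are invented for illustration; this is not CMS's actual methodology, and production fraud analytics layer far richer features, link analysis and real-time scoring on top of ideas like this.

```python
# A minimal illustrative sketch (not CMS's actual method): flag providers whose
# total billing deviates sharply from their specialty peer group, using a z-score.
# Provider IDs and dollar amounts are hypothetical.
from statistics import mean, stdev

# Hypothetical total billed per cardiology provider over one month.
billed = {
    "P001": 1200.0, "P002": 1150.0, "P003": 1300.0,
    "P004": 1250.0, "P005": 1180.0, "P006": 1220.0,
    "P007": 9800.0,  # an unusually high biller
}

mu, sigma = mean(billed.values()), stdev(billed.values())

# Anything more than 2 standard deviations above the peer mean is flagged for
# human review; this is a screening threshold, not an accusation of fraud.
for provider, amount in billed.items():
    if sigma > 0 and (amount - mu) / sigma > 2:
        print(f"Review {provider}: billed ${amount:,.0f} vs. peer mean ${mu:,.0f}")
```

Similarly, for the risk prediction mentioned in items 4 and 5, a population-derived risk score can be used to rank patients for proactive outreach. The sketch below uses a hand-weighted logistic score over made-up features; it is not a validated clinical model, and in practice the weights would be learned from large population data sets.

```python
# A minimal illustrative sketch of population-based risk scoring (not a validated
# clinical model): a hand-weighted logistic score over a few hypothetical features.
import math

def readmission_risk(age, chronic_conditions, prior_admissions):
    """Return a 0-1 risk score from a simple logistic model with made-up weights."""
    z = -4.0 + 0.03 * age + 0.5 * chronic_conditions + 0.8 * prior_admissions
    return 1 / (1 + math.exp(-z))

# Hypothetical patients: (age, number of chronic conditions, admissions last year).
patients = {"A": (82, 4, 2), "B": (45, 0, 0), "C": (67, 2, 1)}

# Rank patients so care-coordination resources go to the highest-risk cases first.
for pid, features in sorted(patients.items(),
                            key=lambda kv: readmission_risk(*kv[1]),
                            reverse=True):
    print(f"Patient {pid}: predicted risk {readmission_risk(*features):.2f}")
```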

Big Data tools — if properly implemented and managed — can help government agencies address each of these challenges in the healthcare systems.

 

Roger Foster is a Senior Director at DRC’s High Performance Technologies Group and an advisory board member of the Technology Management program at George Mason University. He has over 20 years of leadership experience in strategy, technology management and operations support for government agencies and commercial businesses. He has worked on big data problems for scientific computing in fields ranging from large astrophysical data sets to health information technology. He has a master’s degree in Management of Technology from the Massachusetts Institute of Technology and a doctorate in Astronomy from the University of California, Berkeley. He can be reached at rfoster@drc.com.
 
