AI’s future in healthcare is not entirely rosy

Numerous challenges will hinder AI growth in healthcare unless the industry improves, a new scientific report finds.
By Benjamin Harris

A new report examines how, beneath the veneer of process improvement and radical new discoveries in medicine, the industry has too many entrenched systems to fully realize the advances artificial intelligence promises.

Writing for Nature partner journal Digital Medicine, Trishan Panch, Heather Mattie and Leo Anthony Celi outline the obstacles healthcare faces in implementing AI solutions.

Unlikely to take root

Business models and incentives remain tightly aligned with regional variations in common practices, making any sweeping “discoveries” AI might offer unlikely to take root in a meaningful, systematic way. Even more elemental is the tension between healthcare data and machine learning: the latter needs free and open access to large quantities of good data, while the former remains fragmented and tightly guarded.

While AI may have the potential to discover new treatment methods, the report finds strongly entrenched “ways of working” in the healthcare industry that are resistant to change. The authors warn that “simply adding AI applications to a fragmented system will not create sustainable change.”

As long as financial incentives exist to preserve the status quo, physicians will continue to do battle with EHRs, and the large organizations that dominate cloud computing and data management will continue to dictate the methods of practice.

Trust in how healthcare networks amass and handle data needs to be earned as well, the authors find. Plagued by numerous high-profile breaches, the industry’s use of health data has been eyed warily by the public and regulators.

Responding to threats

Before there can be greater buy-in to cross-organizational information sharing, or broad support for any potential discoveries, hospitals need to build security into their data operations and do a better job of finding and responding to threats.

Which data is used, along with its quality and any inherent biases it may contain, presents significant problems too. The report sees “islands of aggregated data,” each locked in its own provider universe.

Some data sets are of a higher quality than others, meaning AI’s “learning” may be influenced by a specific patient population just because the data was better labeled. Similarly, problems stemming from upstream factors can also arise.

The authors note that “an algorithm trained on mostly Caucasian patients is not expected to have the same accuracy when applied to minorities.” To avoid biased results, researchers must be vigilant in detecting and compensating for bias in the data they use.
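One common way to stay vigilant, as the authors urge, is to break a model's performance out by patient subgroup rather than reporting a single overall accuracy. The sketch below is illustrative only: the group names, predictions and labels are hypothetical stand-ins, not data from the report.

```python
# Minimal sketch: auditing a model's accuracy by patient subgroup.
# All records below are hypothetical; a real audit would use held-out
# clinical data with demographic annotations.

def accuracy_by_group(records):
    """Return accuracy per group from (group, prediction, label) triples."""
    totals, correct = {}, {}
    for group, pred, label in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

# A model trained mostly on one population may look strong overall
# while underperforming on an underrepresented group.
records = [
    ("majority", 1, 1), ("majority", 0, 0), ("majority", 1, 1), ("majority", 0, 0),
    ("minority", 1, 0), ("minority", 0, 0), ("minority", 1, 1), ("minority", 0, 1),
]
print(accuracy_by_group(records))  # {'majority': 1.0, 'minority': 0.5}
```

The gap between the two groups, invisible in a single aggregate accuracy figure (0.75 here), is exactly the kind of upstream bias the report warns about.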

AI and interoperability

How data gets exchanged is yet another hurdle. Healthcare’s ongoing fight for interoperability applies to AI as well: if data isn’t commonly structured or easily shareable, it substantially limits an AI’s ability to amass a greater store of information to learn from. Data ownership raises concerns too, about what say individual patients have in how their data is used for research purposes, or whether they want it used at all.

The report acknowledges the promise of AI in healthcare and believes that as computing power becomes more available, the possibilities increase.

However, health systems need to readjust their expectations for short-term AI gains and devote significant resources to building trust, expanding interoperability and developing more open models of data ownership. With the support of standards and a common regulatory framework, as well as an industrywide push to focus on these larger issues, the report finds, the opportunities AI presents are far more likely to be realized.

Benjamin Harris is a Maine-based freelance writer and former new media producer for HIMSS Media.
Twitter: @BenzoHarris.
