AI can improve EHRs and analytics, but physician input is a must

Stanford researchers see risks in improperly deployed artificial intelligence but say it can have big benefits when used as a partner in care delivery.
By Mike Miliard

Artificial intelligence is starting to make its mark on healthcare in a big way. That has a lot of people excited – and more than a few a little nervous.

For all the promise AI holds, after all – the chance to reshape clinical workflows and enable big advances in data analytics and precision medicine, to name just a few – some experts are raising concerns that such automation will start to edge out the human beings who are so crucial to care delivery.

[Also: As AI spreads through healthcare, ethical questions arise]

A report from Infosys earlier this year, for instance, suggested that as health systems forge ahead with their AI deployments, they should also be careful to "establish ethical standards and obligations for the organization" as some people are inevitably displaced from their healthcare roles by automation.

A new study in the Journal of the American Medical Association by medical researchers at Stanford takes a close look at how AI could lead to big improvements in the way information and technology are leveraged in healthcare, specifically with regard to electronic medical records. But it also makes the case that humans have an indispensable role to play along the way.

"The redundancy of the notes, the burden of alerts, and the overflowing inbox has led to the '4000 keystroke a day' problem and has contributed to, and perhaps even accelerated, physician reports of symptoms of burnout," researchers wrote. "Even though the EMR may serve as an efficient administrative business and billing tool, and even as a powerful research warehouse for clinical data, most EMRs serve their front-line users quite poorly."

[Also: Machine learning will replace human radiologists, pathologists, maybe soon]

Moreover, as has often been noted, the arrival of technology in the exam room and radiology suite has sometimes led to "unanticipated consequences," such as the depersonalization that comes from a physician hunched over a computer screen instead of facing the patient, and the loss of "important social rituals" such as conversations among clinicians to discuss potential diagnoses.

The authors warn about other unintended consequences as AI and machine learning continue to make inroads across healthcare. Both offer big opportunities to help providers "process and creatively use the vast amounts of data being generated," but care must be taken to do it right.

AI's ability to automate the repetitive tasks that occupy so much of clinicians' time would be welcome, the Stanford researchers write, noting that "automated charting using speech recognition during a patient visit would be valuable and could free clinicians to return to facing the patient rather than spending almost twice as much time on the 'iPatient'—the patient file in the EMR."

But automation has its perils too, of course. For instance, when developing predictive models, "there is nothing more critical than the data. Bad data (such as from the EMR) can be amplified into worse models." Simply allowing machine learning to run without human intervention presents clear risks.

"Instead, clinicians should seek a partnership in which the machine predicts (at a demonstrably higher accuracy), and the human explains and decides on action," the authors write. "The 2 cultures – computer and the physician – must work together."

Despite very real apprehension about how unfettered AI, improperly applied, could distort data models, and "legitimate concerns that artificial intelligence applications might jeopardize critical social interactions between colleagues and with the patient," the authors are skeptical of some other oft-voiced worries about AI.

"Concerns about physician 'unemployment' and 'de-skilling' are overblown," they said.

Instead, when well-deployed, AI could "bring back meaning and purpose in the practice of medicine while providing new levels of efficiency and accuracy." But as they embrace the technology, physicians must "proactively guide, oversee, and monitor the adoption of artificial intelligence as a partner in patient care." 

The JAMA article, "What This Computer Needs Is a Physician: Humanism and Artificial Intelligence," is by Abraham Verghese, MD; Nigam H. Shah; and Robert A. Harrington, MD, all of Stanford University School of Medicine.


Twitter: @MikeMiliardHITN
Email the writer: mike.miliard@himssmedia.com
