Yale experts offer strategies for successful clinical AI rollouts

As regulatory oversight for predictive decision support evolves, certain best practices can help health systems get the most from machine learning-powered CDS models.
By Mike Miliard

Update: HIMSS20 has been canceled due to the coronavirus.

At HIMSS20 this March, two clinicians from Yale School of Medicine will offer their perspective and advice for safe and effective implementation of artificial intelligence-powered clinical decision support.

In Orlando, Dr. Wade L. Schulz, director of informatics at Yale School of Medicine, and Dr. Harlan Krumholz, a cardiologist and researcher at Yale University and Yale-New Haven Hospital, will offer an overview of the shifting regulatory landscape for AI and machine learning algorithms in the clinical setting – and suggest some best practices for hospitals hoping to harness these technologies as the rules governing them evolve.

AI and machine learning are fundamentally changing decision support tools, with new predictive algorithms built from real-world electronic health record and imaging data reshaping the calculus for how care choices are made.

But there's plenty still to be ironed out when it comes to regulatory oversight of these fast-changing technologies, with federal agencies such as the FDA and FTC still honing their approaches to safety and efficacy rules.

In their session, Schulz and Krumholz will provide an overview of various approaches to regulating AI and clinical decision support, and offer some use cases they say are promising for successful implementation of predictive algorithms while formal rule frameworks are still being devised.

They'll spotlight some limitations of real-world data, describe efforts to reduce bias and error in machine learning models and suggest a few best practices for implementation and local validation of these advanced CDS tools.

It's critical to "make sure we have high-quality models when we’re looking at deploying algorithms or clinical decision support that’s provider or patient-facing," said Schulz.

He is a clinical pathologist who specializes in transfusion medicine, but most of his time these days is spent researching the computational healthcare space, he said: "the tools that we use to acquire data, data integration, all of that," including advanced analytics, predictive models, neural networks and more.

"We don’t want to be a block to things that are new and exciting and can have good outcomes. We’ve just seen too many times things have gone badly because we haven’t looked closely at the things that go into it."

Dr. Wade L. Schulz, Yale School of Medicine

During his years as a clinical informaticist, Schulz says he's seen plenty of vendors pitch decision support products they say can help.

"What’s been a little concerning to me is really the lack of validation that goes into predictive modeling or clinical decision support a lot of the time," he said.

"The corollary for me, a lot of the time, is the clinical side – the lab," he added. "If we have a vendor come in with a new laboratory test, that has to be FDA-cleared or approved. When I have somebody come in with a new algorithm, it’s really just a lab test on the computer. So you have similar issues of: 'Did you validate the data quality coming in, did you validate your connections to make sure it’s the right patient?'

"There’s a lot of steps that correlate very closely to what we already do in the clinical lab and what we’ve been doing for decades," he explained. "They just haven’t really been applied on the informatics side, and that’s often because there's nothing to compare it to, it’s a totally new field."

Informed decisions, safer deployments

Schulz says his and Krumholz's HIMSS20 presentation is aimed partly at CIOs and CMIOs: "the people who are often pitched these tools by outside companies."

Oftentimes, he points out, "you’re lucky if they even have been vetted by more than one institution. So if you have a vendor come in, how can you really evaluate what they’ve done to make sure that tool will be safe and effective in a clinical setting for you? What are the steps that you can do to help validate that algorithm?"

The right approach, of course, is, "rather than saying this is a really slick tool that’s advertised with outcomes, let’s integrate it into our EHR next week and push it out to physicians, let’s see what it looks like for validating and getting that back out to the clinical frontline to make sure it is safe and effective."
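The kind of local check Schulz describes can start simply: score the model's predictions against retrospective outcomes from your own patients before anything reaches the front line. The sketch below is a hypothetical illustration, not material from the session; it assumes you can export paired risk scores and observed outcomes for a local cohort (synthetic data stands in here) and reports discrimination and a coarse calibration table.

```python
# Hypothetical local-validation sketch: check a vendor risk model's predictions
# against retrospective outcomes from your own institution before deployment.
# The synthetic arrays below stand in for a real local export of
# (risk_score, observed_outcome) pairs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Stand-in for a local retrospective cohort.
n = 5000
outcomes = rng.binomial(1, 0.1, size=n)                       # observed events (0/1)
risk_scores = np.clip(outcomes * 0.3 + rng.normal(0.2, 0.15, size=n), 0, 1)

# Discrimination: does the model rank local patients sensibly?
auc = roc_auc_score(outcomes, risk_scores)
print(f"Local AUC: {auc:.3f}")

# Coarse calibration: compare predicted vs. observed event rates by score decile.
edges = np.quantile(risk_scores, np.linspace(0, 1, 11))
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (risk_scores >= lo) & (risk_scores <= hi)
    if mask.any():
        print(f"predicted {risk_scores[mask].mean():.2f}  "
              f"observed {outcomes[mask].mean():.2f}  n={mask.sum()}")
```

In practice the same report, run on each site's own data, is the kind of validation record a health system could keep on file before pushing a tool out to clinicians.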

The talk is also directed at an analytics and data science audience, said Schulz, and will offer some "details on those analytic steps, especially if you’re building your own algorithm rather than something off the shelf from a vendor – what can you build out so you have on file all of that data to support the validation, and if you wanted to you could move forward for regulatory submission."

The challenge at the moment, he explained, is that for these AI-powered CDS tools, there's "currently no regulatory pathway other than 510(k) clearance, which is not required. So the concern is that there really isn’t a good regulatory pathway right now, so a lot of these algorithms just deploy without having been seen by regulators, which is kind of concerning.

"You can’t go in with the assumption that a vendor or researcher has gone through this effort," he added. "So, as physicians, as the stewards of what goes into healthcare IT, what can you do as a CIO or a CMIO, despite not necessarily having a regulatory framework, to ensure what you’re doing is safe before you implement it? You want to be able to get these up quickly, but you have to do it safely or it’s going to become a lot more burdensome if medical errors start happening because of these algorithms.

Schulz emphasizes that the point of the talk is not to be naysayers of these emerging technologies – only to offer a note of caution.

"We don’t want to be a block to things that are new and exciting and can have good outcomes," he said. "We’ve just seen too many times things have gone badly because we haven’t looked closely at the things that go into it. We won’t necessarily catch every error but if we can think of efficient ways to catch the majority of errors, things will improve for patients fairly quickly."

Dr. Wade L. Schulz and Dr. Harlan Krumholz will discuss these emerging technologies in their HIMSS20 session "Validation and Regulatory Oversight of Clinical AI Tools." It's scheduled for Tuesday, March 10, from 10:30-11:30 a.m. in room W230A.

Schulz was interviewed for this story by Jonah Comstock.
 

Twitter: @MikeMiliardHITN
Email the writer: mike.miliard@himssmedia.com

Healthcare IT News is a publication of HIMSS Media.
