Building the machine learning model isn't enough

Evaluating an algorithm's efficacy often takes far more effort than building it, as Johns Hopkins Machine Learning and Healthcare Lab Director Suchi Saria explained, offering tips, at the HIMSS Machine Learning and AI for Healthcare Forum.

Suchi Saria at the HIMSS Machine Learning and AI for Healthcare Forum (Photo: HIMSS Media)

Clinical and operational machine learning models are gaining ground at hospitals and health systems throughout the country, and new ones are evolving rapidly.

But at this point, the challenge is not so much developing new models as effectively evaluating their use, said panelists at the HIMSS Machine Learning and AI for Healthcare Forum this week. (HIMSS is the parent company of Healthcare IT News.)

"As there has been an explosion of data, an explosion of off-the-shelf software you can download, there have been more and more teams just downloading model learning tools to be able to build preliminary models," said Suchi Saria, director of the Machine Learning and Healthcare Lab at Johns Hopkins University, during a fireside chat session Tuesday with STAT News' Casey Ross.  

"What we're seeing is people come up with a preliminary model, they don't know how to evaluate it – because they have a model, they think it's one-and-done," said Saria.  

In the absence of evaluation, she said, teams are trying to put models into practice without an understanding of what success looks like.  

Speaking from her experience as a researcher, Saria said it can often take astronomically more effort to understand whether and how a model works in a real-world scenario.  

"You need a lot of infrastructure and a rubric to be able to do that," she said.  

Saria's work on sepsis identification, for instance, exposed her to common pitfalls when it comes to detecting signs of the condition. Earlier this year her company, Bayesian Health, released the results of a five-site study showing its sepsis module's high sensitivity and precision.  
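
For context on those two metrics: sensitivity is the share of true cases a model catches, and precision is the share of the model's alerts that turn out to be real cases. The sketch below is purely illustrative, with made-up labels and a hypothetical function name; it is not drawn from the Bayesian Health study.

```python
# Illustrative only: computing sensitivity and precision for a binary
# sepsis-alert model. Labels and function name are hypothetical.

def sensitivity_and_precision(y_true, y_pred):
    """Return (sensitivity, precision) for binary labels (1 = sepsis)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # true cases caught
    precision = tp / (tp + fp) if (tp + fp) else 0.0    # alerts that were right
    return sensitivity, precision

# Toy example: 1 = sepsis, 0 = no sepsis
y_true = [1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1, 0, 0, 1]
sens, prec = sensitivity_and_precision(y_true, y_pred)
print(f"sensitivity={sens:.2f}, precision={prec:.2f}")  # both 0.75 here
```

The two numbers pull against each other in practice: flagging more patients raises sensitivity but tends to lower precision, which is why a credible evaluation reports both.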

"In any one of these areas, when you have to go in and measure whether something is working, you have to bring back the scientific perspective," she said.  

"With Bayesian, it was very much about the real world," she added.  

Saria noted that the technology for developing models is mature, and the technology for deploying them is on its way there, too. And when it comes to evaluating algorithms, she said, "These tools have to be maintained; they have to be continuously monitored."
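
As one illustration of what continuous monitoring might look like in practice, here is a minimal sketch that tracks a deployed model's recent alert precision and flags when it drifts below a floor. The class name, window size, and threshold are hypothetical choices, not anything Saria or Bayesian Health described.

```python
# A minimal monitoring sketch, assuming clinicians confirm or dismiss each
# alert. All names and thresholds here are hypothetical.
from collections import deque

class PrecisionMonitor:
    def __init__(self, window=500, min_precision=0.6):
        self.outcomes = deque(maxlen=window)  # True if an alert was confirmed
        self.min_precision = min_precision

    def record_alert(self, confirmed: bool):
        """Log whether the latest alert turned out to be a real case."""
        self.outcomes.append(confirmed)

    def needs_review(self) -> bool:
        """Flag the model for review if recent precision drops too low."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough recent alerts to judge
        precision = sum(self.outcomes) / len(self.outcomes)
        return precision < self.min_precision
```

The point of a check like this is not the specific threshold but the habit: a deployed model's performance is measured continuously against real outcomes, rather than assumed from its original validation.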

According to Saria, everyone should be engaged in oversight of the technology – including, but not limited to, federal agencies.  

"My strong encouragement is: Have system leaders start learning about this. Partner with experts who understand it," she said. "In addition to that, my hope is the FDA also comes in and takes on a role."  

She also pointed to the importance of being aware of biases when building models from diverse data sets or data drawn from a wide range of sources.  

"It's doable," she said. "But there's just more depth, and science, and expertise needed to do it well."  

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.
