AMIA suggests some fine-tuning for FDA's AI and machine learning regs
The American Medical Informatics Association has offered a list of suggestions to the U.S. Food and Drug Administration in hopes of improving its framework of proposed regulations for artificial intelligence, machine learning and software-as-a-medical-device, or SaMD.
WHY IT MATTERS
With an eye toward improving the effectiveness, efficiency, safety and security of clinical AI and SaMD, AMIA wants the FDA to fine-tune its policies around machine learning tools – specifically those able to self-update their algorithms based on new data.
In its comments, submitted to FDA Acting Commissioner Dr. Norman Sharpless, AMIA noted that there's a significant difference between AI technologies with so-called locked algorithms and those that are continuously learning.
While the agency's existing framework "acknowledges the two different kinds of algorithms," AMIA said it's concerned that the regs are "rooted in a concept that both locked and continuously learning SaMD provides opportunity for periodic, intentional updates."
Instead, "our members’ experience is that many AI/ML-based SaMD are intended to perform continuous updates based on real-time or near-real-time data and that the algorithms will constantly adapt as a result."
That means such AI is especially vulnerable to "learning from poor or biased data and it may not be able to provide a cogent explanation for any decision it offers," according to AMIA. "This can thus result in a user of SaMD not knowing whether the device reasonably applies to his or her population."
As such, FDA should require a review process whenever such a machine learning tool learns from data drawn from "population(s) different from its training population," according to the informatics group.
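To make that idea concrete, here is a minimal, hypothetical sketch – not from AMIA's letter or any FDA guidance – of how a continuously learning SaMD might flag incoming data that appears to come from a population different from its training population, so a human review could be triggered before the algorithm adapts. The threshold and the use of a two-sample Kolmogorov-Smirnov test are illustrative assumptions.

```python
# Hypothetical sketch: flag incoming data whose distribution differs from the
# training population, so the update is held for review instead of applied.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # assumed threshold; a real device would validate this


def needs_population_review(training_feature: np.ndarray,
                            incoming_feature: np.ndarray) -> bool:
    """Return True if the incoming data distribution differs enough from
    the training distribution to warrant review before any model update."""
    _statistic, p_value = ks_2samp(training_feature, incoming_feature)
    return p_value < DRIFT_P_VALUE


# Example: training data centered at 0, incoming data shifted to 1.5
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=5_000)
incoming = rng.normal(1.5, 1.0, size=500)
print(needs_population_review(train, incoming))  # True -> hold update for review
```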
AMIA also encouraged the FDA to keep cybersecurity risks top of mind, since hacking or other data manipulation could adversely influence the output of a given algorithm. It suggested, for instance, that rules be developed around "specific types of error detection geared towards preventing a system adaptation to an erroneous signal."
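One way to picture that kind of error detection is a gate in front of the model's update step. The sketch below is purely illustrative – the physiologic bounds, the checksum scheme and the function names are assumptions, not anything specified by AMIA or the FDA – but it shows the general pattern of rejecting implausible or tampered signals rather than learning from them.

```python
# Hypothetical sketch: error detection that gates a continuously learning
# model's update step, so erroneous or manipulated signals are not learned from.
import hashlib
from dataclasses import dataclass

PLAUSIBLE_HEART_RATE = (20.0, 250.0)  # assumed physiologic bounds, in bpm


@dataclass
class SensorReading:
    heart_rate_bpm: float
    payload: bytes
    sha256: str  # integrity digest attached by the sensing device


def reading_is_trustworthy(r: SensorReading) -> bool:
    """Reject readings that fail an integrity check or fall outside
    physiologically plausible bounds before they reach the learner."""
    if hashlib.sha256(r.payload).hexdigest() != r.sha256:
        return False  # possible tampering or transmission corruption
    low, high = PLAUSIBLE_HEART_RATE
    return low <= r.heart_rate_bpm <= high


def maybe_update_model(model_update_fn, reading: SensorReading) -> bool:
    """Only adapt the model when the incoming signal passes error detection."""
    if not reading_is_trustworthy(reading):
        return False  # quarantine the reading for audit instead of learning from it
    model_update_fn(reading)
    return True
```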
It also recommended that the FDA include language in the framework to protect against algorithmic bias that could impact persons of "particular ethnicities, genders, ages, socioeconomic backgrounds, and physical and cognitive abilities," and clarify how developers of SaMD technology could test their products to detect that bias and adjust algorithms accordingly.
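Such bias testing often takes the form of a subgroup-performance audit. The following sketch is a hypothetical example of that idea, not a method prescribed by AMIA or the FDA: the group labels, the disparity threshold and the choice of per-group sensitivity as the metric are all illustrative assumptions.

```python
# Hypothetical sketch: audit per-group sensitivity and flag large disparities.
from collections import defaultdict

MAX_TPR_GAP = 0.10  # assumed acceptable gap in sensitivity between groups


def subgroup_sensitivity(records):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    tp = defaultdict(int)
    pos = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}


def flag_bias(records):
    """Return (biased?, per-group sensitivity) based on the largest gap."""
    rates = subgroup_sensitivity(records)
    if len(rates) < 2:
        return False, rates
    gap = max(rates.values()) - min(rates.values())
    return gap > MAX_TPR_GAP, rates


# Example with two illustrative groups, "A" and "B"
data = [("A", 1, 1), ("A", 1, 1), ("A", 1, 0),
        ("B", 1, 0), ("B", 1, 0), ("B", 1, 1)]
print(flag_bias(data))  # (True, {'A': ~0.67, 'B': ~0.33})
```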
THE LARGER TREND
As machine learning continues to make inroads across healthcare, the FDA has approved more and more AI-based tools in recent months. (And it's even using AI in its own work.)
But those approved devices use locked algorithms that don't continually learn. Self-updating technologies are a whole different ballgame, with their own unique characteristics and risks. As the agency hones its regulatory approach and the technology evolves apace, it will need to keep those risks in mind.
In its letter, AMIA noted the need to "view the performance of SaMD differently based on source of data inputs. For example, data inputs that come from data manually entered by clinicians, patients, or family members/caregivers pose different issues for SaMD outputs from inputs that come from fully embedded and automated devices that cannot have settings altered (e.g. entirely closed loop and based on sensed data)."
It pointed out that "a further critical question concerns the extent to which an AI-based SaMD should be able to furnish explanatory reasoning for any decision it provides or supports. In the classical form of AI, where existing expertise has been encoded, it is possible to have a chain of reasoning back to principles or data. Machine learning algorithms, however, may function in a 'black box' mode, with inputs modifying the implicit circuitry with no clear traceability. It is thus vital to consider under what circumstances an AI-based SaMD should provide explanation of any decision it offers."
ON THE RECORD
"Properly regulating AI and Machine Learning-based SaMD will require ongoing dialogue between FDA and stakeholders," said AMIA President and CEO Dr. Douglas B. Fridsma, in a statement.
"This draft Framework is only the beginning of a vital conversation to improve both patient safety and innovation," he said. "We certainly look forward to continuing it."
Twitter: @MikeMiliardHITN
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a publication of HIMSS Media.