FDA offers new draft guidance to developers of AI-enabled medical devices
Image: RDNE/Pexels
On Tuesday, to support the continued development and marketing of safe and effective medical devices enhanced by artificial intelligence, the U.S. Food and Drug Administration will offer marketing submission recommendations, including the documentation and information needed throughout devices' total product life cycles for regulatory oversight of their safety and efficacy.
WHY IT MATTERS
Following its release last month of the final predetermined change control plan guidance for AI and machine learning submissions – defining what is required to maintain AI/ML components and submit those changes for regulatory review without triggering an entirely new marketing submission – the FDA is providing medical device developers with key product design, development and documentation recommendations for initial submissions.
The guidance, which will be published in the Federal Register on January 7, would be the first to provide total product life cycle recommendations for AI-enabled devices, tying together all design, development, maintenance and documentation recommendations, if and when finalized, the FDA said in its announcement Monday.
Overall, the agency said, it encourages developers and innovators to engage early and often to guide activities throughout device life cycles – planning, development, testing and ongoing monitoring.
After authorizing more than 1,000 AI-enabled devices through established premarket pathways, the FDA has compiled those requirements, along with the agency's shared learnings, to serve as the "first point-of-reference for specific recommendations that apply to these devices, from the earliest stages of development through the device's entire life cycle," Troy Tazbaz, director of the Digital Health Center of Excellence within the FDA's Center for Devices and Radiological Health, said in a statement.
The agency said the new guidance will also address strategies for transparency and bias, with specific advice on demonstrating bias risk management and suggestions for thoughtful AI design and evaluation.
The FDA said it will accept public comments on the draft guidelines through April 7 and is specifically requesting comments on AI life cycle alignment, the adequacy of its generative AI recommendations, the approach to performance monitoring and the types of information that should be conveyed to AI medical device users.
CDRH said it will also host webinars on February 18 to discuss its new regulatory proposal and on January 14 to discuss its final PCCP guidance issued in December.
THE LARGER TREND
In a blog post Tazbaz cowrote last year with John Nicol, a digital health specialist within the FDA's Digital Health Center of Excellence, the two said life cycle management principles can help developers navigate the complexities and risks associated with AI software in healthcare.
Because AI can continuously learn and adapt in real-world settings, that adaptability poses significant risks, "such as exacerbating biases in data or algorithms, potentially harming patients and further disadvantaging underrepresented populations," they wrote.
To address these evolving risks in the regulation of AI-enabled medical devices, the FDA first sought to establish PCCPs for AI/ML devices.
"The approach FDA is proposing in this draft guidance would ensure that important performance considerations, including with respect to race, ethnicity, disease severity, gender, age and geographical considerations, are addressed in the ongoing development, validation, implementation and monitoring of AI/ML-enabled devices," the center's then Deputy Director Brendan O'Leary had said.
ON THE RECORD
"As we continue to see exciting developments in this field, it's important to recognize that there are specific considerations unique to AI-enabled devices," Tazbaz said in a statement.
Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a HIMSS Media publication.