NIST unveils new open source platform for AI safety assessments
The freely downloadable tool, called Dioptra, is designed to help artificial intelligence developers understand some unique data risks to AI models, and to help them "mitigate those risks while supporting innovation," says NIST's director.
Nearly a year since the Biden Administration issued its executive order on Safe, Secure and Trustworthy Development of AI, the National Institute of Standards and Technology has made available a new open source tool to help test the safety and security of AI and machine learning models.
WHY IT MATTERS
The new platform, known as Dioptra, advances an imperative in the White House EO, which stipulates that NIST will take an active role in helping with algorithm testing.
"One of the vulnerabilities of an AI system is the model at its core," NIST researchers explain. "By exposing a model to large amounts of training data, it learns to make decisions. But if adversaries poison the training data with inaccuracies – for example, by introducing data that can cause the model to misidentify stop signs as speed limit signs – the model can make incorrect, potentially disastrous decisions."
The goal is to help healthcare and other organizations better understand their AI software, and assess how well it fares in the face of a "variety of adversarial attacks," according to NIST.
The open source tool could help healthcare providers, other businesses and government agencies evaluate and verify AI developers' claims about how their models perform.
"Dioptra does this by allowing a user to determine what sorts of attacks would make the model perform less effectively and quantifying the performance reduction so that the user can learn how often and under what circumstances the system would fail."
THE LARGER TREND
Beyond unveiling the Dioptra platform, NIST's AI Safety Institute this past week also released new draft guidance on Managing Misuse Risk for Dual-Use Foundation Models.
Such models – known as dual-use because they hold "potential for both benefit and harm" – could pose risks to safety when used in the wrong ways or by the wrong people. The proposed guidance describes "seven key approaches for mitigating the risks that models will be misused, along with recommendations for how to implement them and how to be transparent about their implementation."
NIST also published three finalized documents on AI safety, focused on mitigating the risks of generative AI, reducing threats to the data used to train AI systems, and promoting global engagement on AI standards.
Beyond the executive order on AI, there has been considerable effort at the federal level recently to establish safeguards for AI in healthcare and elsewhere.
This includes a major reshuffling of agencies in the Department of Health and Human Services, designed to consolidate "mission-focused technology, data, and AI policies and activities."
The White House has also promulgated new rules for AI use in federal agencies, including at the CDC and VA hospitals.
Meanwhile, NIST has also been hard at work on other AI and security initiatives, such as privacy protection guidance for AI-driven research and a major recent update to its landmark Cybersecurity Framework.
ON THE RECORD
"For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software," said NIST Director Laurie E. Locascio, in a statement. "These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation."
"AI is the defining technology of our generation, so we are running fast to keep pace and help ensure the safe development and deployment of AI," added U.S. Secretary of Commerce Gina Raimondo. "[These] announcements demonstrate our commitment to giving AI developers, deployers, and users the tools they need to safely harness the potential of AI, while minimizing its associated risks. We've made great progress, but have a lot of work ahead."
Mike Miliard is executive editor of Healthcare IT News
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.
The HIMSS AI in Healthcare Forum is scheduled to take place Sept. 5-6 in Boston.