
AI Risk Management Framework. Released on Jan. 26, 2023, NIST’s AI RMF was developed to better manage risks to individuals, organizations, and society associated with AI. “What we’re trying to do with the AI Risk Management Framework is understand how we trust AI, which operates in many ways differently in some of these tasks that we know very well,” particularly regarding how high-impact applications affect cybersecurity, Martin Stanley, principal researcher for AI and cybersecurity at NIST, said at the workshop.
Center for AI Standards and Innovation (CAISI). NIST’s CAISI serves as the “industry’s primary point of contact within the US government to facilitate testing and collaborative research related to harnessing and securing the potential of commercial AI systems,” said Maia Hamin, a member of CAISI’s technical staff. The center develops best practices and standards for improving AI security and collaboration. It also “leads evaluations and assessments of US and adversary AI systems, including adoption of foreign models, potential security vulnerabilities, or potential for foreign influence,” she told workshop attendees.
NIST AI 100-2 E2025, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. This NIST report, published in March 2025, provides a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). “Adversarial machine learning or adversarial AI is the field that studies attacks on AI systems that exploit the statistical and data-driven nature of this technology,” NIST research team supervisor Apostol Vassilev said at the workshop. “Hijacking, prompt injection, indirect prompt injection, data poisoning, all these things are part of the field of study of adversarial AI,” he clarified.
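The attack classes Vassilev lists exploit the fact that a model’s behavior is shaped by its training data. As a minimal, invented illustration (not drawn from the NIST report; all names and data here are hypothetical), the sketch below shows a label-flipping data-poisoning attack against a toy nearest-centroid classifier: a few mislabeled points injected into the training set drag one class centroid across the decision boundary and degrade test accuracy.

```python
# Illustrative sketch only: a label-flipping data-poisoning attack on a
# toy 1-D nearest-centroid classifier. Not from NIST AI 100-2.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(data):
    """data: list of (x, label) pairs, labels 0/1. Returns the two class centroids."""
    c0 = centroid([x for x, y in data if y == 0])
    c1 = centroid([x for x, y in data if y == 1])
    return c0, c1

def predict(model, x):
    """Assign x to the nearer class centroid."""
    c0, c1 = model
    return 0 if abs(x - c0) <= abs(x - c1) else 1

def accuracy(model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

# Clean training set: class 0 clusters near 0, class 1 near 10.
clean = [(0.0, 0), (1.0, 0), (2.0, 0), (9.0, 1), (10.0, 1), (11.0, 1)]
test = [(0.5, 0), (1.5, 0), (9.5, 1), (10.5, 1)]

# Poison: a few injected points carry label 1 but sit far inside (and beyond)
# class 0's region, dragging the class-1 centroid across the boundary.
poison = [(-8.0, 1), (-9.0, 1), (-10.0, 1)]

clean_model = train(clean)
poisoned_model = train(clean + poison)

print(accuracy(clean_model, test))     # 1.0
print(accuracy(poisoned_model, test))  # 0.25
```

Three poisoned points out of nine are enough to collapse accuracy here; mitigations cataloged in the AML literature (e.g., data sanitization and outlier filtering before training) target exactly this kind of influence.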
