AI Risk and Threat Taxonomy - NIST Computer Security Resource Center Follow the NIST AI RMF to establish robust governance structures in the enterprise Threats that allow the attacker to repurpose the systems’ intended use to achieve own objectives Generally, these are not model features but harms that manifest themselves in the context of model use
NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI . . . Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2), is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice
Evaluation of Comprehensive Taxonomies for Information Technology Threats Comprehensive threat taxonomies fit into risk assessments, like NIST SP 800-30, to present decision-makers with a risk comparison across all of the threats This research found several methods for categorizing all of the possible threats to information technology
Adversarial Machine Learning: A Taxonomy and Terminology of . . . - NIST 50 adversarial machine learning (AML) The taxonomy is built on survey of the AML literature and is 52 attacker goals and objectives, and attacker capabilities and knowledge of the learning process The 54 and points out relevant open challenges to take into account in the lifecycle of AI systems The 57 non-expert readers
The NIST Cybersecurity Framework (CSF) 2. 0 - NIST Computer Security . . . The NIST Cybersecurity Framework (CSF) 2 0 provides guidance to industry, government agencies, and other organizations to manage cybersecurity risks It offers a taxonomy of high-level cybersecurity outcomes that can be used by any organization — regardless of its size, sector, or maturity — to See full abstract
Appendix A List of Acronyms — NIST SP 1800-30 documentation In this practice guide, the NCCoE identified a threat taxonomy for the entire system Threats may manifest differently to the system depending on the domain in which they appear
NIST Releases Final Report on AI ML Cybersecurity Threats and . . . Titled AI 100-2 E2025, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, the report outlines how AI ML systems—particularly predictive AI (PredAI) and generative AI (GenAI)—are uniquely vulnerable to attacks that target every stage of the machine learning lifecycle, from training to deployment These
AI Risk Management Framework | NIST - National Institute of Standards . . . The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems
TBM Taxonomy NIST - TBM Council This TBM Taxonomy NIST Cyber Security guide is one of several assets delivered by the TBM Council Standards Committee workgroup commissioned to develop a complete mapping of the TBM Taxonomy to the NIST framework and inform best practice use
NIST AI 100-2 E2025 Adversarial Machine Learning: A Taxonomy and . . . The 2025 NIST report greatly enhances its adversarial ML attack taxonomy, providing expanded definitions and clear categorization It specifically details advanced generative AI (GenAI) threats, including misuse and prompt injection attacks, clearly delineating between various types of attacks affecting integrity, availability, and privacy