The report is intended as a step toward developing a taxonomy and terminology for adversarial machine learning (AML), which in turn may aid in securing AI applications against adversarial manipulation.
The NIST Trustworthy and Responsible AI report, co-authored by Robust Intelligence and Northeastern University, develops a taxonomy of AML concepts and defines terminology for the field. The taxonomy is built on a survey of the AML literature and is arranged in a conceptual hierarchy covering key types of ML methods, the lifecycle stages at which attacks occur, attacker goals and objectives, and attacker capabilities and knowledge of the learning process. In this report, you’ll find:
- Attacks on predictive and generative AI systems, including evasion, data and model poisoning, privacy attacks on data and models, and abuse/misuse (a minimal evasion-attack sketch follows this list)
- Corresponding methods for mitigating and managing the consequences of attacks
- Open challenges to account for across the lifecycle of AI systems
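The report itself defines taxonomy and terminology rather than code, but to make the first bullet concrete, here is a minimal sketch of one classic evasion attack, the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015): the attacker nudges an input in the direction of the loss gradient's sign so a model misclassifies it while the change stays small. The toy model, input shape, and `epsilon` value below are placeholder assumptions for illustration, not anything prescribed by the report.

```python
import torch
import torch.nn as nn


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarial version of x crafted with FGSM.

    Takes one step of size epsilon along the sign of the loss gradient
    with respect to the input, then clamps back to the valid range.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    # Toy stand-in classifier and input; real attacks target real models.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()
    x = torch.rand(1, 1, 28, 28)   # placeholder "image"
    y = torch.tensor([3])          # its supposed label
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

In this framing, `epsilon` captures the attacker's capability constraint from the taxonomy: a larger budget makes evasion easier but the perturbation more detectable.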