AI Security and Safety Taxonomy

Understand the generative AI threat landscape with definitions, mitigations, and standards classifications.

A holistic approach to AI risk mitigation

We’re pleased to provide the first AI threat taxonomy that combines security and safety risks. AI security is concerned with protecting sensitive data and computing resources from unauthorized access or attack, whereas AI safety is concerned with preventing harms that arise as unintended consequences of an AI application’s design. Both present business risks that can result in financial, reputational, and legal ramifications. Mitigating these threats requires a novel, comprehensive approach to AI application security.

Robust Intelligence addresses AI security and safety risks with our automated, end-to-end platform: AI Validation detects and assesses model vulnerabilities, and AI Protection enforces the guardrails necessary to deploy applications safely. We developed this taxonomy to help the AI and cybersecurity communities navigate a comprehensive set of security and safety risks, complete with descriptions, examples, and mitigation techniques. We also map each threat to AI security standards we helped co-develop alongside NIST, MITRE ATLAS, and the OWASP Top 10 for LLM Applications.

We’re continuously updating this taxonomy. Please reach out to us with any questions or comments.

The AI security and safety taxonomy