LLM Security: Practical Protection for AI Developers

With thousands of open-source LLMs available on Hugging Face, AI developers have a wealth of resources at their disposal. But as developers harness these models to power innovative applications, they may inadvertently expose their companies to security risks. It’s not sufficient to rely on the guardrails that LLM providers have baked into their models. The stakes are too high, especially when proprietary data is made available to models through fine-tuning or retrieval-augmented generation (RAG), and even internal apps remain vulnerable to adversarial attacks. So how can developers deploy LLMs both painlessly and securely?

In this talk from Databricks' Data+AI Summit 2024, we review the top LLM security risks using real-world examples and explore what’s required to meet emerging standards from OWASP, NIST, and MITRE. We also share vendor-agnostic secure LLM reference architectures and a comprehensive taxonomy of security and safety threats.
