June 12, 2024 - 5 minute read

Automate AI vulnerability testing with Robust Intelligence and MLflow

MLflow, the open-source platform developed by Databricks, has emerged as one of the leading MLOps solutions, offering a comprehensive set of tools designed to help teams across the entire AI lifecycle. Its components simplify a complex development process and provide better traceability, consistency, and management at scale.

MLOps solutions like MLflow streamline machine learning experiment tracking, model development, and production readiness. In the era of generative AI, DevSecOps practices that validate AI systems for safety and security vulnerabilities are increasingly important. That’s why Robust Intelligence integrates with the MLflow Model Registry to bring seamless validation to machine learning workflows and help developers create more secure AI applications.

Let’s take a closer look at the integration between Robust Intelligence and MLflow, and discuss how teams can take advantage of automated red teaming to identify model vulnerabilities.

What are MLflow and the Model Registry?

MLflow is an open-source platform developed by Databricks that comprises a variety of tools to help simplify and support the complete AI lifecycle. These core components include Tracking, Evaluation, Prompt Engineering, and the Model Registry.

The MLflow Model Registry is the most widely used model registry today. It serves as a centralized model repository that facilitates better organization, management, collaboration, and documentation between developers and broader teams. Model registries provide significant operational benefits in terms of quality, consistency, and scalability, which is why so many enterprise machine learning teams have come to rely on them.
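For context, registering a model with the MLflow Model Registry takes only a few lines of code. Below is a minimal sketch assuming a scikit-learn model; the model name and training data are illustrative placeholders rather than part of any specific workflow.

```python
# Minimal sketch: train a toy scikit-learn model, log it to MLflow,
# and register it in the Model Registry in one step.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

with mlflow.start_run():
    mlflow.sklearn.log_model(
        sk_model=model,
        artifact_path="model",
        # Registers the model; adds a new version if the name already exists.
        registered_model_name="credit-scoring-model",
    )
```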

What is model validation, and why is it important?

Model validation is the process of identifying and addressing safety and security vulnerabilities in a model through file scanning and rigorous testing. It is a foundational practice of AI security and an explicit regulatory requirement outlined in policies like the EU AI Act and the White House Executive Order on AI.

Validation is also important for alignment with leading AI security standards. Popular standards include the OWASP Top 10 for LLM Applications, for which six of the ten highlighted security concerns can be measured or mitigated during pre-deployment validation. Additionally, the Databricks AI Security Framework contains sections on model evaluation (DASF 45), comparing LLM outputs (DASF 47), automating LLM evaluation (DASF 49), platform compliance (DASF 50), and source code control (DASF 52), critical steps that model validation addresses.

Ultimately, companies need to trust that their models are secure and safe. AI security is concerned with protecting sensitive data and computing resources from unauthorized access or attack, such as prompt injection and data poisoning. AI safety is concerned with preventing harms caused by an AI application that its designer did not intend, such as hallucinations and toxic content. Both present business risk and require mitigation.

This applies to the hundreds of thousands of open-source models on Hugging Face, as well as to closed-source models from AI providers and proprietary models. Testing should occur every time the model changes, as well as at regular intervals. Because new vulnerabilities can emerge throughout model development and in production, model validation must be seen as a continuous and necessary initiative to protect the business, align security measures across stakeholders, and demonstrate compliance with internal standards and regulatory requirements.

AI application development is often hampered by security concerns. Model validation is a first step in decoupling AI development from AI security, which can help unblock a company’s AI transformation.

Integrating the Robust Intelligence platform with MLflow

Integrating Robust Intelligence’s AI Validation offering with the MLflow Model Registry makes the important step of model validation an automatic and unobtrusive part of machine learning development.

Adopting this shift-left approach to testing enables developers to surface the security and safety vulnerabilities of a given model early on, identify malicious inclusions in their supply chain, and better understand how actions like fine-tuning can adversely impact alignment. It also streamlines communication with other teams, such as security and compliance, who can then independently verify that models have undergone rigorous validation and comply with relevant standards.

“Companies need to ensure that the models they rely on have undergone rigorous validation for safety and security,” said Omar Khawaja, VP Security and Field CISO at Databricks. “Not only does the integration between Robust Intelligence and MLflow make security a built-in piece of the model development lifecycle, it also improves communication between AI stakeholders across data science, security, and GRC teams.”

When you integrate both platforms, the registration of a new model in the MLflow Model Registry automatically and discreetly initiates Robust Intelligence AI Validation testing. Our algorithmic red teaming evaluates model susceptibility to over 150 security and safety categories across the four primary failure types listed below. This identifies vulnerabilities both to malicious actions, such as prompt injection and data poisoning, and to unintentional failure modes. If left unchecked, these vulnerabilities can jeopardize your users, sensitive data, and ultimately the security of your organization.

  • Abuse failures such as toxicity, bias, hate speech, and malicious code.
  • Privacy failures such as PII leakage, data loss, model info leakage, and privacy infringement.
  • Integrity failures such as factual inconsistencies, hallucination, off-topic, and off-policy responses.
  • Availability failures such as denial of service and increased resource or cost consumption.

These tests can be repeated periodically to identify new vulnerabilities that emerge in development, after instances of fine-tuning, or after the model is deployed in a production AI application.
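To make the trigger concrete, here is a rough sketch of one way to kick off validation whenever a new version lands in the registry, using a simple polling loop. The MLflow calls are standard; the model name and the run_ai_validation function are hypothetical placeholders for whatever client the Robust Intelligence integration exposes, and a registry webhook would replace the loop in a managed setup.

```python
# Sketch: watch the MLflow Model Registry and submit each new model
# version for validation. run_ai_validation() is a hypothetical placeholder.
import time
from mlflow import MlflowClient

client = MlflowClient()
model_name = "credit-scoring-model"  # illustrative registered model name
seen_versions = set()

def run_ai_validation(model_uri: str) -> None:
    # Placeholder: hand the model off to automated red teaming here.
    print(f"Submitting {model_uri} for AI Validation")

while True:
    for mv in client.search_model_versions(f"name='{model_name}'"):
        if mv.version not in seen_versions:
            seen_versions.add(mv.version)
            run_ai_validation(f"models:/{model_name}/{mv.version}")
    time.sleep(60)  # poll once a minute; a webhook would remove the need for this loop
```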

Historical results are available for review in the Robust Intelligence platform and within MLflow as model artifacts, providing centralized and continuous assurance that models in use have undergone thorough testing and adhere to any internal standards and regulatory requirements. Results from AI Validation map directly to leading AI security standards from NIST, MITRE, and OWASP, making it easier to measure and demonstrate compliance. We also auto-generate model cards that translate the test results into an easy-to-read report that is mapped to industry and regulatory standards.
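If a validation report is stored as an artifact on the run backing a registered model version, it can be pulled back down with standard MLflow calls. The sketch below assumes an artifact named ai_validation/report.json, which is purely illustrative.

```python
# Sketch: fetch a validation report attached to a registered model's run.
# The artifact path "ai_validation/report.json" is an assumed, illustrative name.
import json

import mlflow
from mlflow import MlflowClient

client = MlflowClient()
mv = client.get_model_version(name="credit-scoring-model", version="1")

local_path = mlflow.artifacts.download_artifacts(
    run_id=mv.run_id,
    artifact_path="ai_validation/report.json",
)
with open(local_path) as f:
    report = json.load(f)
print(report.get("summary", "no summary available"))
```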

Automating model validation saves teams a tremendous amount of time and reduces the expertise otherwise required to test models manually and routinely. Manual, ad hoc testing often falls short in coverage and consistency, as it is performed at individual discretion in an arbitrary, unstandardized manner.

After identifying the vulnerability profile of your model, AI Validation automatically recommends custom guardrails when used in conjunction with the Robust Intelligence AI Firewall. This ensures that protections are precisely tailored to cover the gaps that exist in your AI applications before bad actors can exploit them.

Ready to bring seamless, automated model testing to your AI development process? You can learn more about AI Validation and our integration with the MLflow Model Registry here.
