November 27, 2023 - 3 minute read

Robust Intelligence Partners with Datadog to Extend AI Firewall Observability

Before deploying generative AI-powered applications, companies need to ensure that their models and data are protected. This is not a trivial task. Attacks on AI systems are increasing in frequency and sophistication. Additionally, models will inevitably generate undesired responses due to both malicious and inadvertent user actions. This exposes companies to a wide range of security, ethical, and operational risks.

Our AI Firewall mitigates these risks by wrapping a protective layer around your models that blocks malicious inputs and validates model outputs in real time. On the input side, it blocks threats such as prompt injection, prompt extraction, and inputs containing sensitive information. On the output side, it scans model responses to ensure they are free of sensitive information, hallucinations, and otherwise harmful content. Responses that fall outside your organization’s standards are blocked before they reach the application, including sensitive data leaked from fine-tuning or from connected databases used for retrieval-augmented generation.
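In pseudocode, the flow looks roughly like the sketch below. The actual AI Firewall SDK and API are not shown in this post, so the helper names here (screen_input, validate_output, call_model) are hypothetical placeholders that only illustrate the wrap-and-validate pattern.

```python
# Illustrative only: screen_input, validate_output, and call_model are
# hypothetical placeholders, not the AI Firewall API. The sketch shows the
# general pattern of real-time input screening and output validation
# around a model call.

def screen_input(prompt: str) -> bool:
    """Hypothetical check for prompt injection, prompt extraction, or PII."""
    blocked_markers = ["ignore previous instructions"]  # placeholder heuristic
    return not any(marker in prompt.lower() for marker in blocked_markers)

def validate_output(response: str) -> bool:
    """Hypothetical check for PII, hallucinations, or harmful content."""
    return "ssn:" not in response.lower()  # placeholder heuristic

def call_model(prompt: str) -> str:
    """Stand-in for the underlying LLM call."""
    return "model response"

def guarded_completion(prompt: str) -> str:
    if not screen_input(prompt):       # block malicious inputs before the model sees them
        return "Request blocked by firewall policy."
    response = call_model(prompt)
    if not validate_output(response):  # block undesired outputs before the user sees them
        return "Response blocked by firewall policy."
    return response
```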

This real-time AI security measure is essential to deploying models in production. Our integration with Datadog enables customers to seamlessly monitor real-time AI Firewall security events with out-of-the-box dashboards in their preferred observability platform. This gives security and data science teams the critical information they need to stay informed about the state of their production models. Datadog’s customizable and actionable alerting and notifications can also raise issues within existing workflows for prompt analysis and action.
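For example, alerting on a spike in flagged requests can be wired up with Datadog’s official Python client. This is a minimal sketch, and the metric name in the query is an assumption for illustration; substitute the metric that your integration actually emits.

```python
# Minimal sketch: create a Datadog metric-alert monitor with datadog-api-client.
# The metric name "ai_firewall.requests.flagged" is an assumption for illustration.
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.monitors_api import MonitorsApi
from datadog_api_client.v1.model.monitor import Monitor
from datadog_api_client.v1.model.monitor_type import MonitorType

body = Monitor(
    name="AI Firewall: spike in flagged requests",
    type=MonitorType("metric alert"),
    query="sum(last_5m):sum:ai_firewall.requests.flagged{*} > 50",  # hypothetical metric name
    message="Flagged request volume is unusually high. @slack-security-oncall",
)

configuration = Configuration()  # reads DD_API_KEY / DD_APP_KEY from the environment
with ApiClient(configuration) as api_client:
    monitor = MonitorsApi(api_client).create_monitor(body=body)
    print(f"Created monitor {monitor.id}")
```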

The integration monitors AI Firewall results through the Datadog Agent, providing metrics for allowed and blocked data points, as well as why each data point was blocked. Our pre-configured Datadog dashboard enables teams to see a number of key metrics, including:

  • Flagged requests percent: the percentage of inputs that AI Firewall has flagged as malicious requests
  • Tests flagged on input: the number of inputs that were deemed malicious or found to contain PII
  • Tests flagged on output: the number of model outputs flagged as undesired responses
  • Histogram of flagged tests: reasons why inputs and outputs were flagged, including factual inconsistency, prompt injection, toxicity, and PII detection
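The same numbers can also be pulled programmatically, for example to feed a custom report, through the Datadog API. The sketch below uses the official Python client and is illustrative only; the metric name is an assumption, so use the metric names shown in the integration’s dashboard.

```python
# Minimal sketch: query an AI Firewall metric over the last hour with
# datadog-api-client. The metric name "ai_firewall.tests.flagged" is an
# assumption for illustration.
import time

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.metrics_api import MetricsApi

configuration = Configuration()  # reads DD_API_KEY / DD_APP_KEY from the environment
with ApiClient(configuration) as api_client:
    metrics_api = MetricsApi(api_client)
    now = int(time.time())
    result = metrics_api.query_metrics(
        _from=now - 3600,  # last hour
        to=now,
        query="sum:ai_firewall.tests.flagged{stage:input}.as_count()",  # hypothetical metric
    )
    for series in getattr(result, "series", []):
        print(series.metric, series.pointlist)
```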

For existing AI Firewall customers, integration with Datadog is simple.

  1. Click "install" on the integration tile in your Datadog account.
  2. Set up a Datadog Agent in your AI Firewall cluster and install the Robust Intelligence AI Firewall integration on it.
  3. Add an annotation to the AI Firewall pod to enable autodiscovery by the Datadog Agent, or deploy the AI Firewall with the parameter enable_datadog_integration=true (see the annotation sketch after these steps).
  4. AI Firewall results populate in the dashboard in your Datadog account.
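For step 3, the Datadog Agent picks up the check through standard Autodiscovery annotations on the AI Firewall pod. Below is a minimal sketch of applying those annotations with the Kubernetes Python client; the deployment name, namespace, container name, check name, and instance configuration are assumptions for illustration, so use the values provided with the Robust Intelligence integration.

```python
# Minimal sketch: add Datadog Autodiscovery annotations to the AI Firewall
# deployment using the Kubernetes Python client. The names and instance
# configuration below are assumptions for illustration.
import json

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

container = "ai-firewall"  # assumed AI Firewall container name
annotations = {
    f"ad.datadoghq.com/{container}.check_names": json.dumps(["robust_intelligence_ai_firewall"]),  # assumed check name
    f"ad.datadoghq.com/{container}.init_configs": json.dumps([{}]),
    f"ad.datadoghq.com/{container}.instances": json.dumps(
        [{"firewall_endpoint": "http://%%host%%:%%port%%"}]  # assumed instance config
    ),
}

apps = client.AppsV1Api()
apps.patch_namespaced_deployment(
    name="ai-firewall",               # assumed deployment name
    namespace="robust-intelligence",  # assumed namespace
    body={"spec": {"template": {"metadata": {"annotations": annotations}}}},
)
```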

If you’re new to AI Firewall, sign up here to get started.
