February 15, 2023 - 7 minute read

Infusing Security into MLOps

WannaCry. Heartbleed. Shellshock. Logjam.

Even the uninitiated will recognize the names of infamous security vulnerabilities that have emerged in widely used software packages. History is replete with examples of software vulnerabilities that compromise the organizations that rely on them. Little explanation is needed to argue that the software systems powering businesses and consumers must be secured.

But what of machine learning (ML)? Fundamentally, ML systems are also software systems, and their failures can impact companies, customers and communities. Who is responsible for ML security? Where, for ML systems, are the well-exercised security muscles that we have developed for software?

Where ML library vulnerabilities end, a new class of ML-specific vulnerabilities begins that attackers can exploit for financial gain. For example, in our jointly published work, colleagues at Norton Research Group disclosed how savvy attackers tricked out phishing webpages to evade ML-based phishing detectors. In 2018, two individuals in China evaded live facial recognition authentication to gain access to the tax system of a Chinese municipal government and collect $77M through fraudulent tax invoices. Similarly, in 2022 a New Jersey man fooled face-matching software at ID.me to obtain multiple “verified” accounts and file false unemployment claims totaling $900,000.

The fact remains that the rate of AI adoption is outpacing our ability to secure it. Today, ML development and deployment pipelines represent “unmanaged risk” to corporations and consumers. But, this need not be the case. Building on the foundation of secure software development, we can also infuse security discipline into every phase of the ML development lifecycle. This requires, in part, a “shift-left” mentality that MLSecOps brings.

MLSecOps for the ML Model Pipeline

To reduce the risk of vulnerabilities during software development, organizations have adopted DevSecOps to integrate security into every stage of the software development life cycle. With DevSecOps, security teams work alongside development and operations teams to identify and address security risks before they become critical issues. The key lesson of DevSecOps is that security can’t merely be brushed on. It must be baked in.

In 2017, colleague Eugene Neelou—who joined Zoltan Balazs and me in organizing the 2022 edition of the ML Security Evasion Competition that, ironically, featured algorithmic evasions of antiphishing and facial recognition models—coined the term MLSecOps. MLSecOps aims to ensure that machine learning models are secure, reliable, and trustworthy, from model training to deployment and management. The transition to MLSecOps is a response to the increasing use of machine learning in sensitive and critical applications. This includes ensuring that data used to train models are secure and protected, as well as ensuring that models are tested and validated to prevent security vulnerabilities, unintentional failures or intentional tampering.

What does this look like in practice? For a detailed look, I refer you to Eugene’s work. How can one get started? As your organization matures, you should begin implementing CI/CD pipelines that test the robustness of ML models and fail when it is insufficient, just as system integration tests do for traditional software. You should fill the ML security gaps that traditional security tooling doesn’t cover, for example, with respect to pickle file vulnerabilities in ML model files. Mature organizations can implement AI Red Teaming exercises against pre-production and production models, such as the work my former team at Microsoft has done.
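To give a concrete flavor of what such a CI gate might look like, below is a minimal sketch in Python. The scikit-learn model, the synthetic data, the Gaussian noise perturbation, and the 10-point accuracy budget are all illustrative assumptions standing in for whatever robustness criteria and test suites your organization defines; the only point is that the check runs in CI and fails the build, just like any other integration test.

```python
# A minimal sketch of a CI robustness gate, intended to be collected by pytest.
# Model, data, perturbation, and thresholds are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression


def accuracy(model, X, y):
    return float(np.mean(model.predict(X) == y))


def test_noise_robustness():
    # Synthetic data and a simple model stand in for your real pipeline.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
    X_eval, y_eval = X[1500:], y[1500:]

    clean_acc = accuracy(model, X_eval, y_eval)

    # Gaussian noise is a crude stand-in for more principled perturbations.
    rng = np.random.default_rng(0)
    noisy_acc = accuracy(model, X_eval + rng.normal(0.0, 0.3, X_eval.shape), y_eval)

    # Fail the CI pipeline when robustness is insufficient, just as a failing
    # integration test blocks a software release.
    assert clean_acc - noisy_acc < 0.10, (
        f"Accuracy dropped {clean_acc - noisy_acc:.2%} under input noise"
    )
```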

Vulnerabilities in the ML Supply Chain

MLSecOps has become important as corporations begin to rely more on models from third-party sources. Since ML models run on software, ML inherits the vulnerabilities of traditional software systems. These include vulnerabilities in the software tools required for model training or inference and arbitrary code execution in the files that store model weights.

The traditional software code that operates ML models can be analyzed by developers or automated tools so that offending lines of software can be corrected. This is a key reason for regular security updates and software patches. Although safer alternatives exist, most popular ML models are still persisted via fundamentally insecure storage formats such as pickle or YAML. Most practitioners simply ignore the risks inherent in ingesting files that can lead to arbitrary code execution and more. But, organizations serious about security should incorporate rigorous measures to reduce the risk posed by these file formats.
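To make the pickle risk concrete, the short sketch below shows how unpickling an untrusted “model” file can execute attacker-chosen code. The payload class is invented for illustration; the only real claim is that pickle runs whatever the file tells it to run, which is why weight-only formats that carry no executable payload are the safer direction.

```python
# Why pickle-based model files are risky: unpickling runs arbitrary code
# chosen by whoever produced the file. This payload merely echoes a message,
# but it could do anything the loading process is allowed to do.
import os
import pickle


class MaliciousPayload:
    def __reduce__(self):
        # Called automatically when the "model" file is unpickled.
        return (os.system, ("echo code execution on model load",))


blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # prints the message -- no model weights required
```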

Additionally, third-party models themselves must also be tested. Developers of third-party models often report performance metrics only for the dataset or task they were developed for. Even the few that come with security tests should be verified by an independent source. Since ML models are not written explicitly by humans, careful inspection of the model weights cannot easily reveal their vulnerabilities the way a code scanner can for source code. And even if model vulnerabilities are discovered, there are no editing tools to surgically correct them. Where a software engineer can isolate and correct a few lines of code, today’s machine learning engineer can’t force a model to unlearn a backdoor or poisoning vulnerability. In essence, ML’s bugs can’t be patched.

But, they can be detected.

Mitigating Vulnerabilities in the ML Supply Chain

A comprehensive set of security measures can dramatically reduce the risks inherent in the ML model supply chain:

  1. Scan the software dependencies of a model for known vulnerabilities. This can be handled by existing software security tooling.
  2. Verify that the file format that encodes the model weights does not permit unnecessary or unsafe code execution. Today, this is not handled by traditional software security tooling (see the sketch after this list).
  3. Complete an independent assessment of the performance, fairness and security of the ML model on your own data. These scans amount to dynamic analysis of the ML model, uncovering algorithmic vulnerabilities latent in the model.
  4. Include post-deployment protection and monitoring of models against unintentional and intentional failure modes. As with the antiphishing and facial recognition examples, models can be tampered with post-deployment. Logging, monitoring and firewalling these assets is good security practice.
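As an illustration of what the second measure can involve, here is a simplified sketch that statically inspects the opcodes of a pickle-based model file using Python’s standard pickletools module. The opcode list, the helper function, and the example file path are illustrative assumptions, not a complete or production-grade scanner; purpose-built model-scanning tools cover many more cases and formats.

```python
# A rough sketch of a static check on pickle-based model files: walk the
# pickle opcodes and flag anything that can import or call code.
import pickletools

# Opcodes that can pull in modules or invoke callables during unpickling.
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}


def flag_unsafe_opcodes(path):
    """Return (offset, opcode, argument) tuples for suspicious opcodes."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPS:
                findings.append((pos, opcode.name, arg))
    return findings


# Example usage (the path is hypothetical):
# for pos, name, arg in flag_unsafe_opcodes("model.pkl"):
#     print(f"offset {pos}: {name} {arg!r}")
```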

ML Security is a Process

As with DevSecOps, MLSecOps doesn't aim to turn data scientists and ML engineers into security experts, but rather to educate them in best practices that promote more secure development processes. It promotes secure ML standards and provides automated, repeatable testing. It promotes practices that continuously monitor the environment for security threats and provides visible governance metrics for both security teams and data science organizations.

With the increasing use of machine learning models in sensitive and critical applications, it is imperative that organizations implement MLSecOps to ensure the security, reliability, and trustworthiness of their machine learning models. By incorporating security into every stage of the machine learning development process, organizations can minimize the risk of security vulnerabilities and attacks. In the same way that DevSecOps has become an integral part of software development, MLSecOps must become an integral part of machine learning development. Organizations that embrace MLSecOps will be better prepared to protect themselves against security threats and ensure the security and privacy of their data and applications.

To learn more about new vulnerabilities that ML brings, check out Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them by Ram Shankar Siva Kumar and Hyrum Anderson.  All author proceeds are donated to charities Black in AI and Bountiful Children’s Foundation.
