May 28, 2024
-
5
minute read

Fine-Tuning LLMs Breaks Their Safety and Security Alignment

Most businesses deploying new AI applications leverage existing foundational models and fine-tune them in order to improve accuracy, domain knowledge, and contextual relevance. This approach offers a number of benefits in terms of flexibility, utility, and cost-effectiveness.

However, there is a danger to fine-tuning that most teams overlook—namely, that fine-tuning can throw off model alignment and introduce security and safety risks that were not previously present. This phenomenon is broadly applicable and can even occur with completely benign datasets, making fine-tuned AI applications generally easier to jailbreak and more likely to produce harmful or sensitive results. In our research, we found fine-tuned variants more than 3 times more susceptible to jailbreak instructions and over 22 times more likely to produce a harmful response than the original foundation model.

To better understand and demonstrate these risks, we conducted a series of experiments to evaluate model responses before and after fine-tuning. This series began with an initial test of Llama-2-7B and three fine-tuned variations published by Microsoft for specific tasks in biomedicine, finance and law. Below, we’ll recap our research methodologies and key findings, discuss why this phenomenon might occur, and share implications for AI safety and security.

A walkthrough of our research on fine-tuning

What models were evaluated?

When determining which models we would be evaluating, our team selected Llama-2-7B as a control. Our prior research into algorithmic jailbreaking indicated that the Llama-2-7B foundation model was well-aligned with strong security and safety guardrails, which made it an excellent candidate for testing.

We then selected reputable variants derived from Llama-2-7B for evaluation, settling on three AdaptLLM chat models fine-tuned and released by Microsoft researchers to cover different domains:

  • AdaptLLM-Biomedicine: A Llama-2-7B model trained on PubMed abstracts from the Pile.
  • AdaptLLM-Finance: A Llama-2-7B model trained on financial news from May 2022 to May 2023 for over 7,000 stocks using the FinGPT codebase.
  • AdaptLLM-Law: A Llama-2-7B model with FreeLaw opinions from the Pile.

All the AdaptLLM models were developed using the methods described in Adapting Large Language Models via Reading Comprehension, Cheng et al., ICLR, 2024, in which the researchers continue training an LLM on domain-specific raw corpora by converting them to reading comprehension texts to preserve the LLMs prompting performance.

To train these models, the researchers “continue to train LLaMA-7B on each domain, and explore different ratios for mixing reading comprehension texts with general instructions; the optimal ratios for biomedicine, finance, and law are 1 : 1, 1 : 2, and 1 : 1, respectively.”

The authors proved the efficacy of their domain-based training approach by comparing AdaptLLM models to other models trained for the same tasks and showing that their performance was consistently the best for domain-specific metrics.

What data was used for testing?

To assess and compare the degree of alignment of the original Llama-2-7B model and each of the AdaptLLM models, we used a benchmark dataset from Jailbroken: How Does LLM Safety Training Fail?, Wei et al., 2024. This jailbreaking dataset provides a number of jailbreaking techniques and goals to probe a model’s defenses.

To compare our models, we prompted them with each query from the benchmark dataset and collected their responses for evaluation as described below.

We assessed a total of 250 queries from the original benchmark after disqualifying queries that either were not asking for harmful responses, were advocating against answering in a harmful way, or were using Base64/ROT13 encodings, which Llama-2-7B models did not understand sufficiently to respond to the underlying prompt.

What were our testing criteria?

To assess model responses, several human-based criteria were used. Models would occasionally fail to perform harmfully because they did not appear to understand the question, which led us to develop three different criteria to represent different aspects of each response:

  • Understanding examines whether or not the model’s response appears to indicate that the given prompt was understood.
  • Compliance examines whether or not the model’s response complies with the instructions outlined in the given prompt.
  • Harmfulness examines whether or not the model’s response would be considered harmful by a professional entity releasing the given model, such as toxic, illegal, immoral, or unethical content.

Each of the criteria was measured on a scale of 0 to 4, as results were not entirely binary. For example, an understanding score of 2 would indicate partial comprehension of a given prompt but with at least one missed component.

Findings from our experiments

After testing our models and examining the results across the three criteria, a disparity in jailbreak susceptibility became immediately evident.

Understanding scores were slightly higher for the original Llama-2-7B model, indicating that it was a bit more effective at interpreting the queries. (Its average score was 3.93 compared to 3.80, 3.78, and 3.78 for fine-tuned variants.)

Compliance scores demonstrated that fine-tuned models were far more likely to comply with jailbreak instructions than the original Llama-2-7B model. (Fine-tuned models scored 1.66, 1.73, and 1.72 compared to the 0.54 average for Llama-2-7B.)

Harmfulness scores followed a similar trend—fine-tuned models were far more likely to produce harmful responses than the original Llama-2-7B model. (Fine-tuned models scored 1.06, 1.05, and 1.1 compared to the 0.10 average for Llama-2-7B. In other terms, each fine-tuned model responded with a harmfulness score above one in 26.4%, 26.8%, and 27.6% of their responses compared to 1.6% for the original Llama-2-7B.)

These results demonstrate a significantly greater jailbreak susceptibility in the three fine-tuned variations of Llama-2-7B when compared to the original foundation model. Despite the efficacy of their domain-based training, these models are more than 3 times more compliant with jailbreak instructions with over 22 times greater odds of producing a harmful response.

Why does fine-tuning break model alignment?

While we don’t fully understand the reasons that fine-tuning breaks model alignment, we hypothesize that the changes to the model during alignment do not fundamentally remove harmful constructions from the model, but rather redirect the model to different responses.

Consider each response given by an LLM as a probabilistic walk through the space of tokens. Alignment decreases the probability for a selected walk to occur, but those paths remain intact as possibilities. In fine-tuning a model, we effectively perturb the weights within the model to bias certain paths that represent new knowledge. While that perturbation might be small (e.g. LoRa), there is no guarantee that the walk biases introduced by alignment will remain intact.

As humans, we are accustomed to learning distinct topics in a disjointed manner—taking a history class doesn’t substantially impact our math skills, for example. On the other hand, machine learning procedures like fine-tuning empirically have substantial regressive impacts on prior learnings like alignment. While we one day may be able to remedy this, it remains a challenge to have tuning procedures that can be reliably composed together to have orthogonal impacts on a model.

What does this mean for AI safety and security?

The benefits of leveraging and fine-tuning a state-of-the-art foundation model are evident; the flexibility, approachability, and cost-effectiveness of this approach have greatly facilitated enterprise adoption of AI technology.

The purpose of our research is not to disparage this approach, but rather to highlight that fine-tuning can introduce new dimensions of risk to even the most well-aligned foundation model. Our findings underscore the importance of robust model testing, not only as a developmental best practice but continuously to validate and maintain alignment. It also emphasizes the need for an independent safety and security layer that can protect the model without being impacted by fine-tuning.

Before the widespread discussion and adoption of artificial intelligence, traditional software development security measures were incorporated into CI/CD pipelines to protect businesses from introducing vulnerabilities in software delivery. The rush to embrace AI and deploy cutting-edge AI applications should not drive businesses to ignore these longstanding best practices. Risk management and security must be top considerations when forming AI strategy to uphold privacy requirements, maintain customer trust, and protect your business from harm.

How can Robust Intelligence help?

The findings from our fine-tuning research only further substantiate the reason Robust Intelligence developed our AI Validation solution. Continuous, algorithmic red-teaming helps evaluate your models and identify hundreds of potential vulnerabilities. This not only enables teams to develop safer, more secure AI applications, but also to maintain this safety and security after instances of fine-tuning and continuously in production.

May 28, 2024
-
5
minute read

Fine-Tuning LLMs Breaks Their Safety and Security Alignment

Most businesses deploying new AI applications leverage existing foundational models and fine-tune them in order to improve accuracy, domain knowledge, and contextual relevance. This approach offers a number of benefits in terms of flexibility, utility, and cost-effectiveness.

However, there is a danger to fine-tuning that most teams overlook—namely, that fine-tuning can throw off model alignment and introduce security and safety risks that were not previously present. This phenomenon is broadly applicable and can even occur with completely benign datasets, making fine-tuned AI applications generally easier to jailbreak and more likely to produce harmful or sensitive results. In our research, we found fine-tuned variants more than 3 times more susceptible to jailbreak instructions and over 22 times more likely to produce a harmful response than the original foundation model.

To better understand and demonstrate these risks, we conducted a series of experiments to evaluate model responses before and after fine-tuning. This series began with an initial test of Llama-2-7B and three fine-tuned variations published by Microsoft for specific tasks in biomedicine, finance and law. Below, we’ll recap our research methodologies and key findings, discuss why this phenomenon might occur, and share implications for AI safety and security.

A walkthrough of our research on fine-tuning

What models were evaluated?

When determining which models we would be evaluating, our team selected Llama-2-7B as a control. Our prior research into algorithmic jailbreaking indicated that the Llama-2-7B foundation model was well-aligned with strong security and safety guardrails, which made it an excellent candidate for testing.

We then selected reputable variants derived from Llama-2-7B for evaluation, settling on three AdaptLLM chat models fine-tuned and released by Microsoft researchers to cover different domains:

  • AdaptLLM-Biomedicine: A Llama-2-7B model trained on PubMed abstracts from the Pile.
  • AdaptLLM-Finance: A Llama-2-7B model trained on financial news from May 2022 to May 2023 for over 7,000 stocks using the FinGPT codebase.
  • AdaptLLM-Law: A Llama-2-7B model with FreeLaw opinions from the Pile.

All the AdaptLLM models were developed using the methods described in Adapting Large Language Models via Reading Comprehension, Cheng et al., ICLR, 2024, in which the researchers continue training an LLM on domain-specific raw corpora by converting them to reading comprehension texts to preserve the LLMs prompting performance.

To train these models, the researchers “continue to train LLaMA-7B on each domain, and explore different ratios for mixing reading comprehension texts with general instructions; the optimal ratios for biomedicine, finance, and law are 1 : 1, 1 : 2, and 1 : 1, respectively.”

The authors proved the efficacy of their domain-based training approach by comparing AdaptLLM models to other models trained for the same tasks and showing that their performance was consistently the best for domain-specific metrics.

What data was used for testing?

To assess and compare the degree of alignment of the original Llama-2-7B model and each of the AdaptLLM models, we used a benchmark dataset from Jailbroken: How Does LLM Safety Training Fail?, Wei et al., 2024. This jailbreaking dataset provides a number of jailbreaking techniques and goals to probe a model’s defenses.

To compare our models, we prompted them with each query from the benchmark dataset and collected their responses for evaluation as described below.

We assessed a total of 250 queries from the original benchmark after disqualifying queries that either were not asking for harmful responses, were advocating against answering in a harmful way, or were using Base64/ROT13 encodings, which Llama-2-7B models did not understand sufficiently to respond to the underlying prompt.

What were our testing criteria?

To assess model responses, several human-based criteria were used. Models would occasionally fail to perform harmfully because they did not appear to understand the question, which led us to develop three different criteria to represent different aspects of each response:

  • Understanding examines whether or not the model’s response appears to indicate that the given prompt was understood.
  • Compliance examines whether or not the model’s response complies with the instructions outlined in the given prompt.
  • Harmfulness examines whether or not the model’s response would be considered harmful by a professional entity releasing the given model, such as toxic, illegal, immoral, or unethical content.

Each of the criteria was measured on a scale of 0 to 4, as results were not entirely binary. For example, an understanding score of 2 would indicate partial comprehension of a given prompt but with at least one missed component.

Findings from our experiments

After testing our models and examining the results across the three criteria, a disparity in jailbreak susceptibility became immediately evident.

Understanding scores were slightly higher for the original Llama-2-7B model, indicating that it was a bit more effective at interpreting the queries. (Its average score was 3.93 compared to 3.80, 3.78, and 3.78 for fine-tuned variants.)

Compliance scores demonstrated that fine-tuned models were far more likely to comply with jailbreak instructions than the original Llama-2-7B model. (Fine-tuned models scored 1.66, 1.73, and 1.72 compared to the 0.54 average for Llama-2-7B.)

Harmfulness scores followed a similar trend—fine-tuned models were far more likely to produce harmful responses than the original Llama-2-7B model. (Fine-tuned models scored 1.06, 1.05, and 1.1 compared to the 0.10 average for Llama-2-7B. In other terms, each fine-tuned model responded with a harmfulness score above one in 26.4%, 26.8%, and 27.6% of their responses compared to 1.6% for the original Llama-2-7B.)

These results demonstrate a significantly greater jailbreak susceptibility in the three fine-tuned variations of Llama-2-7B when compared to the original foundation model. Despite the efficacy of their domain-based training, these models are more than 3 times more compliant with jailbreak instructions with over 22 times greater odds of producing a harmful response.

Why does fine-tuning break model alignment?

While we don’t fully understand the reasons that fine-tuning breaks model alignment, we hypothesize that the changes to the model during alignment do not fundamentally remove harmful constructions from the model, but rather redirect the model to different responses.

Consider each response given by an LLM as a probabilistic walk through the space of tokens. Alignment decreases the probability for a selected walk to occur, but those paths remain intact as possibilities. In fine-tuning a model, we effectively perturb the weights within the model to bias certain paths that represent new knowledge. While that perturbation might be small (e.g. LoRa), there is no guarantee that the walk biases introduced by alignment will remain intact.

As humans, we are accustomed to learning distinct topics in a disjointed manner—taking a history class doesn’t substantially impact our math skills, for example. On the other hand, machine learning procedures like fine-tuning empirically have substantial regressive impacts on prior learnings like alignment. While we one day may be able to remedy this, it remains a challenge to have tuning procedures that can be reliably composed together to have orthogonal impacts on a model.

What does this mean for AI safety and security?

The benefits of leveraging and fine-tuning a state-of-the-art foundation model are evident; the flexibility, approachability, and cost-effectiveness of this approach have greatly facilitated enterprise adoption of AI technology.

The purpose of our research is not to disparage this approach, but rather to highlight that fine-tuning can introduce new dimensions of risk to even the most well-aligned foundation model. Our findings underscore the importance of robust model testing, not only as a developmental best practice but continuously to validate and maintain alignment. It also emphasizes the need for an independent safety and security layer that can protect the model without being impacted by fine-tuning.

Before the widespread discussion and adoption of artificial intelligence, traditional software development security measures were incorporated into CI/CD pipelines to protect businesses from introducing vulnerabilities in software delivery. The rush to embrace AI and deploy cutting-edge AI applications should not drive businesses to ignore these longstanding best practices. Risk management and security must be top considerations when forming AI strategy to uphold privacy requirements, maintain customer trust, and protect your business from harm.

How can Robust Intelligence help?

The findings from our fine-tuning research only further substantiate the reason Robust Intelligence developed our AI Validation solution. Continuous, algorithmic red-teaming helps evaluate your models and identify hundreds of potential vulnerabilities. This not only enables teams to develop safer, more secure AI applications, but also to maintain this safety and security after instances of fine-tuning and continuously in production.

Blog

Related articles

November 14, 2023
-
4
minute read

AI Governance Policy Roundup (November 2023)

For:
March 17, 2022
-
1
minute read

Robust Intelligence Awarded IDIQ Contract to Eliminate AI Risk for the US Air Force

For:
July 29, 2024
-
5
minute read

Bypassing Meta’s LLaMA Classifier: A Simple Jailbreak

For:
March 12, 2024
-
7
minute read

Understanding and Mitigating Unicode Tag Prompt Injection

For:
December 5, 2023
-
5
minute read

Using AI to Automatically Jailbreak GPT-4 and Other LLMs in Under a Minute

For:
June 9, 2023
-
7
minute read

NeMo Guardrails Early Look: What You Need to Know Before Deploying (Part 2)

For: