Few Tokens, Big Leverage: Preserving Safety Alignment by Constraining Safety Tokens during Fine-tuning
arXiv:2603.07445v1. Abstract: Large language models (LLMs) often require fine-tuning (FT) to perform well on downstream tasks, but FT can induce safety-alignment drift even when the training dataset contains only benign data. Prior work shows that introducing a small fraction of harmful data can substantially compromise LLM refusal behavior, causing LLMs to comply with harmful requests. Existing defense methods often rely on model-wide interventions, such as restricting which parameters are updated or injecting additional safety data, which can limit generality and degrade downstream task performance. To address these limitations, we propose a fine-tuning framework called Preserving Safety Alignment via Constrained Tokens (PACT), which stabilizes the model's confidence on safety tokens. Our approach is motivated by the empirical observation that safety-aligned behavior is reflected in the model's token-level output confidence and is often concentrated on a small subset of safety-related tokens. During downstream fine-tuning, we regularize the fine-tuned model to match the aligned reference model's confidence on safety-related tokens at each response step, while leaving non-safety tokens largely unconstrained to allow effective task adaptation. This targeted constraint prevents alignment drift without imposing global restrictions that typically trade off with model utility.
Executive Summary
The paper proposes Preserving Safety Alignment via Constrained Tokens (PACT), a fine-tuning framework that addresses safety-alignment drift in large language models (LLMs). PACT is motivated by the empirical observation that safety-aligned behavior is reflected in the model's token-level output confidence and is often concentrated on a small subset of safety-related tokens. During downstream fine-tuning, it regularizes the fine-tuned model to match an aligned reference model's confidence on those safety-related tokens at each response step, while leaving non-safety tokens largely unconstrained for task adaptation. This targeted constraint prevents alignment drift without the global restrictions that typically trade off against model utility, and could improve the safety and reliability of LLMs in high-stakes applications such as healthcare and finance.
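The described mechanism lends itself to a compact formulation: a standard task loss plus a masked divergence penalty that ties the fine-tuned model's next-token distribution to a frozen aligned reference at safety-token positions. The PyTorch sketch below is our illustration under stated assumptions, not the paper's released code: `model` and `ref_model` are assumed to be Hugging Face style causal LMs, and the weight `lam` and the `safety_mask` input (how safety tokens are identified) are placeholders.

```python
# Minimal sketch of a PACT-style objective, assuming Hugging Face style causal
# LMs. "ref_model" is the frozen safety-aligned reference, "safety_mask" marks
# safety-related token positions (its construction is an assumption here, not
# taken from the paper), and "lam" is a hypothetical regularization weight.
import torch
import torch.nn.functional as F

def pact_style_loss(model, ref_model, input_ids, labels, safety_mask, lam=1.0):
    """Task cross-entropy plus a confidence-matching penalty on safety tokens.

    safety_mask: bool tensor [batch, seq_len], True at safety-token positions.
    labels: next-token targets with -100 at positions to ignore.
    """
    logits = model(input_ids=input_ids).logits          # [B, T, V]
    with torch.no_grad():                               # reference stays frozen
        ref_logits = ref_model(input_ids=input_ids).logits

    # Standard next-token cross-entropy over the whole response.
    task_loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )

    # KL(reference || fine-tuned) over the next-token distribution, applied
    # only at positions whose target token is safety-related.
    log_p = F.log_softmax(logits[:, :-1], dim=-1)
    ref_p = F.softmax(ref_logits[:, :-1], dim=-1)
    kl = (ref_p * (ref_p.clamp_min(1e-8).log() - log_p)).sum(-1)  # [B, T-1]
    mask = safety_mask[:, 1:].float()
    reg = (kl * mask).sum() / mask.sum().clamp_min(1.0)

    return task_loss + lam * reg
```

Because the reference distribution is matched only where `safety_mask` is set, updates at all other positions remain driven purely by the task loss, which is what allows the method to avoid the global utility trade-off the abstract describes.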
Key Points
- ▸ PACT is a novel fine-tuning framework that constrains safety tokens to prevent safety-alignment drift
- ▸ The approach regularizes the fine-tuned model to match the aligned reference model's confidence on safety-related tokens
- ▸ PACT targets a small subset of safety-related tokens to preserve safety alignment without imposing global restrictions (one hypothetical way to mark such tokens is sketched after this list)
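The abstract does not state how the safety-token subset is identified, so the following is purely hypothetical: one simple way to produce the `safety_mask` consumed by the loss sketch above is to flag tokens drawn from common refusal phrases.

```python
# Hypothetical construction of the safety_mask used above. The paper does not
# specify its selection criterion in this abstract; flagging tokens drawn from
# a hand-written list of refusal phrases is purely illustrative.
import torch

REFUSAL_PHRASES = ["I cannot", "I can't", "I'm sorry", "As an AI"]  # assumption

def build_safety_mask(tokenizer, input_ids):
    """Return a bool mask [batch, seq_len]: True where the token id appears in
    any tokenized refusal phrase."""
    safety_ids = set()
    for phrase in REFUSAL_PHRASES:
        safety_ids.update(tokenizer(phrase, add_special_tokens=False)["input_ids"])
    id_tensor = torch.tensor(sorted(safety_ids), device=input_ids.device)
    return torch.isin(input_ids, id_tensor)
```

A real implementation would presumably identify the token subset empirically, for example from the reference model's confidence patterns, rather than from a fixed phrase list.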
Merits
- Strength: The proposed framework addresses a critical issue in LLMs, namely safety-alignment drift, and provides a targeted solution that preserves model utility.
- Strength: The approach is motivated by empirical observations and has the potential to improve the safety and reliability of LLMs.
Demerits
- Limitation: The proposed framework may not be effective in scenarios where safety-alignment drift is caused by factors other than token-level confidence.
- Limitation: The approach may require significant computational resources and expertise to implement and fine-tune.
Expert Commentary
PACT is a significant contribution to the study of safe fine-tuning: safety-alignment drift has far-reaching implications for the reliability of deployed LLMs, and the token-level framing is well motivated by the paper's empirical observations. Two caveats temper the result. First, the defense presupposes that alignment drift is visible in token-level confidence; drift driven by other mechanisms would escape the constraint. Second, the method requires comparing against an aligned reference model during fine-tuning, which adds compute and memory overhead and some implementation complexity. Even so, the article offers valuable insight into where safety alignment is encoded in a model and highlights the need for further research on targeted preservation methods.
Recommendations
- ✓ Further research is needed to explore the effectiveness of PACT in scenarios where safety-alignment drift is caused by factors other than token-level confidence
- ✓ The proposed framework should be evaluated in high-stakes applications such as healthcare and finance to assess its practical implications