Why Is RLHF Alignment Shallow? A Gradient Analysis
arXiv:2603.04851v1 Abstract: Why is safety alignment in LLMs shallow? We prove that gradient-based alignment inherently concentrates on positions where harm is decided and vanishes beyond them. Using a martingale decomposition of sequence-level harm, we derive an exact characterization of alignment gradients. The gradient at position $t$ equals the covariance between the conditional expected harm and the score function. This implies that positions beyond the harm horizon, where the output's harmfulness is already determined, receive zero gradient signal during training. This explains empirical observations that the KL divergence between aligned and base models concentrates on early tokens. Consequently, standard alignment objectives cannot produce deep alignment, regardless of optimization quality. We introduce the concept of harm information $I_t$, which quantifies each position's influence on harm, and prove that equilibrium KL divergence tracks this quantity. Finally, we derive an objective based on recovery penalties that creates gradient signal at all positions, providing theoretical grounding for empirically successful data augmentation techniques.
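In symbols (the notation below is reconstructed from the abstract and is not necessarily the paper's): writing $H$ for the sequence-level harm, $y_{<t}$ for the generated prefix, and $\pi_\theta$ for the policy, the identity reads

$$\nabla_{\theta}^{(t)}\,\mathbb{E}[H] \;=\; \operatorname{Cov}\!\big(\,\mathbb{E}[H \mid y_{\le t}],\; \nabla_\theta \log \pi_\theta(y_t \mid y_{<t})\,\big),$$

where the covariance is taken over the token $y_t$ sampled at position $t$. Once the prefix already determines the harm, $\mathbb{E}[H \mid y_{\le t}]$ is constant in $y_t$, the covariance vanishes, and position $t$ receives no gradient.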
Executive Summary
The article analyzes the limitations of Reinforcement Learning from Human Feedback (RLHF) alignment in large language models, proving that gradient-based alignment is inherently shallow: the gradient signal concentrates on positions where harm is decided and vanishes beyond them. The authors propose a new objective based on recovery penalties that creates gradient signal at all positions, providing a theoretical foundation for empirically successful data augmentation techniques. This result has significant implications for building safer and more reliable language models.
Key Points
- ▸ Gradient-based alignment is shallow and focuses on positions where harm is decided
- ▸ The gradient at position $t$ equals the covariance between the conditional expected harm and the score function (a toy numerical check of this identity appears after this list)
- ▸ Standard alignment objectives cannot produce deep alignment, regardless of optimization quality
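To see why the covariance identity makes alignment shallow, here is a minimal numerical sketch. Every modeling choice in it is an assumption made for illustration (independent binary tokens, harm decided entirely by the first token), not the paper's setting: it compares a finite-difference gradient of expected harm against the covariance form and shows that both vanish at every position past the harm horizon.

```python
import numpy as np
from itertools import product

# Toy stand-in for the setting in the abstract (illustrative assumptions only):
# T binary "tokens", each drawn independently with P(y_t = 1) = sigmoid(theta[t]),
# and sequence-level harm decided entirely by the first token, H(y) = y[0],
# so the harm horizon is position 0.
T = 4
rng = np.random.default_rng(0)
theta = rng.normal(size=T)

def probs(theta):
    return 1.0 / (1.0 + np.exp(-theta))          # P(y_t = 1) per position

def harm(y):
    return float(y[0])                            # harm is fixed by the first token

seqs = [np.array(y) for y in product([0, 1], repeat=T)]

def seq_prob(y, theta):
    p = probs(theta)
    return np.prod(np.where(y == 1, p, 1.0 - p))

def expected_harm(theta):
    return sum(seq_prob(y, theta) * harm(y) for y in seqs)

# (1) Ground-truth gradient of E[H] by central finite differences.
eps = 1e-5
fd_grad = np.array([
    (expected_harm(theta + eps * np.eye(T)[t])
     - expected_harm(theta - eps * np.eye(T)[t])) / (2 * eps)
    for t in range(T)
])

# (2) The covariance form: at each position t, the covariance between the
# conditional expected harm E[H | y_{<=t}] and the score d/dtheta_t log P(y_t),
# which for a Bernoulli-sigmoid model is (y_t - p_t).
p = probs(theta)
cov_grad = np.zeros(T)
for y in seqs:
    w = seq_prob(y, theta)
    cond_harm = harm(y)                           # E[H | y_{<=t}] = y_0 for every t
    cov_grad += w * cond_harm * (y - p)           # E[score] = 0, so this is the covariance

print("finite-difference gradient:", np.round(fd_grad, 5))
print("covariance-form gradient:  ", np.round(cov_grad, 5))
# Both agree, and both are exactly zero at positions 1..T-1: once harm is
# determined by the prefix, the conditional expected harm no longer covaries
# with the token chosen there, so alignment training sends no signal to those
# positions -- the "shallow alignment" the paper formalizes.
```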
Merits
Theoretical Rigor
The article provides a rigorous mathematical analysis of the limitations of RLHF alignment, offering a clear understanding of the underlying issues.
Demerits
Limited Scope
The research focuses primarily on the limitations of RLHF alignment, without exploring other potential alignment methods or techniques.
Expert Commentary
The article provides a significant contribution to the field of natural language processing, highlighting the limitations of current alignment methods and proposing a novel solution. The authors' use of mathematical analysis to understand the underlying issues is commendable, and their proposed objective based on recovery penalties shows promise for improving language model safety. However, further research is needed to fully explore the potential of this approach and to address the broader challenges of language model alignment.
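As a concrete, purely hypothetical illustration of what a recovery-penalty objective might look like in code, the sketch below pairs a harmful prefix with a safe recovery continuation and places the training loss on the recovery tokens, i.e., on positions after the point where a standard alignment objective stops providing gradient. The function name, data layout, and toy usage are assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of a "recovery penalty"-style loss (names, data layout, and
# the toy usage are illustrative assumptions, not the paper's actual objective).
# Idea: pair a harmful prefix with a safe recovery continuation and place the
# training loss on the recovery tokens, i.e. on positions *after* the point where
# a standard alignment objective would stop providing gradient signal.

def recovery_loss(logits: torch.Tensor, tokens: torch.Tensor, prefix_len: int) -> torch.Tensor:
    """Cross-entropy on the recovery segment only.

    logits:     (seq_len, vocab) next-token logits from any autoregressive model
    tokens:     (seq_len,) full sequence = harmful prefix followed by safe recovery
    prefix_len: length of the harmful prefix; loss applies to everything after it
    """
    pred = logits[prefix_len - 1 : -1]   # positions that predict recovery tokens
    target = tokens[prefix_len:]         # the safe recovery continuation
    return F.cross_entropy(pred, target)

# Toy usage with random logits standing in for a language model's output.
vocab, seq_len, prefix_len = 50, 12, 5
logits = torch.randn(seq_len, vocab, requires_grad=True)
tokens = torch.randint(vocab, (seq_len,))
recovery_loss(logits, tokens, prefix_len).backward()

# Gradient magnitude per position: zero inside the harmful prefix, nonzero on the
# positions that predict recovery tokens -- signal beyond the harm horizon.
print(logits.grad.abs().sum(dim=-1))
```

In spirit this mirrors the data-augmentation recipes the abstract says the objective grounds: teach the model to recover after a harmful prefix, so later positions also receive gradient.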
Recommendations
- ✓ Further research should be conducted to explore the effectiveness of the proposed objective based on recovery penalties
- ✓ Developers of language models should consider the limitations of current alignment methods and explore alternative techniques to improve model safety