Stabilizing Reinforcement Learning for Diffusion Language Models

arXiv:2603.06743v1 Announce Type: new Abstract: Group Relative Policy Optimization (GRPO) is highly effective for post-training autoregressive (AR) language models, yet its direct application to diffusion large language models (dLLMs) often triggers reward collapse. We identify two sources of incompatibility. First, GRPO relies on importance ratios defined by sequence probabilities, which are intractable in dLLMs and must be estimated (e.g., via ELBO-based or mean-field likelihood proxies), yielding inherently noisy ratios. Second, standard GRPO's formulation is not designed for estimated ratios: its conditional clipping can be anomalously bypassed by model-agnostic estimation noise, producing gradient spikes, while its fixed group-size normalization amplifies gradient-magnitude fluctuations under high-variance ratio estimates. We show these effects form a self-reinforcing instability loop that drives policy drift and further increases ratio variance. To break this loop, we propose StableDRL, a reformulation of GRPO tailored for dLLMs that uses (i) unconditional clipping to suppress outlier-induced spikes and (ii) self-normalization to constrain updates within the convex hull of per-sample gradients. We further extend StableDRL to block-wise diffusion models via a staircase attention mechanism.

Executive Summary

The article proposes StableDRL, a reformulation of Group Relative Policy Optimization (GRPO) tailored for diffusion large language models (dLLMs). The authors identify two sources of incompatibility between GRPO and dLLMs: sequence probabilities are intractable in dLLMs and must be estimated with noisy likelihood proxies, and standard GRPO's formulation (conditional clipping and fixed group-size normalization) was not designed for such estimated ratios. They show that these effects form a self-reinforcing instability loop in which noisy ratios produce gradient spikes, the resulting policy drift further inflates ratio variance, and training collapses. StableDRL breaks this loop with unconditional clipping, which suppresses outlier-induced spikes, and self-normalization, which constrains updates to the convex hull of per-sample gradients. The authors also extend StableDRL to block-wise diffusion models via a staircase attention mechanism, a step toward making reinforcement-learning post-training practical for dLLMs.
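
To make the failure mode concrete, here is a minimal sketch of the standard PPO/GRPO-style surrogate (pure Python; the function name, group values, and clip range are illustrative assumptions, not the paper's code). Its clipping is conditional: the `min` only binds the clip when that lowers the objective, so a noise-inflated ratio paired with a negative advantage passes through unclipped.

```python
def grpo_surrogate(ratios, advantages, eps=0.2):
    """Standard GRPO/PPO-style surrogate with *conditional* clipping:
    min(rho*A, clip(rho)*A) only applies the clip when it lowers the
    objective, so an estimation-noise outlier can bypass it."""
    terms = []
    for rho, a in zip(ratios, advantages):
        clipped_rho = min(max(rho, 1 - eps), 1 + eps)
        terms.append(min(rho * a, clipped_rho * a))
    # Fixed group-size normalization: divide by G no matter how
    # large the (noisy) ratio estimates are.
    return sum(terms) / len(terms)

# Group of G=4 samples; advantages are group-standardized rewards.
adv = [1.0, -1.0, 0.5, -0.5]
# A noisy likelihood-proxy ratio (10.0) paired with a negative
# advantage slips past the clip: min(10*(-1), 1.2*(-1)) = -10.
ratios = [1.05, 10.0, 0.95, 1.02]
print(grpo_surrogate(ratios, adv))  # one outlier dominates the group
```

With the hypothetical numbers above, the single outlier contributes a term of −10 while every well-behaved sample contributes roughly ±1, which is exactly the gradient-spike mechanism the abstract describes.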

Key Points

  • GRPO is highly effective for post-training autoregressive language models, but applying it directly to dLLMs often triggers reward collapse.
  • Two sources of incompatibility are identified: sequence probabilities in dLLMs are intractable and must be replaced with noisy proxy estimates (e.g., ELBO-based or mean-field), and standard GRPO's conditional clipping and fixed group-size normalization are not robust to such estimated ratios.
  • StableDRL replaces conditional clipping with unconditional clipping to suppress outlier-induced gradient spikes, and replaces fixed group-size normalization with self-normalization to constrain updates within the convex hull of per-sample gradients.
  • StableDRL is further extended to block-wise diffusion models via a staircase attention mechanism.
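
The two modifications can be sketched as follows (this is a reconstruction from the abstract's description, not the authors' code; the function name and values are illustrative): every ratio is clipped unconditionally, and the clipped ratios are renormalized to sum to one, so the update is a convex combination of per-sample terms.

```python
def stabledrl_surrogate(ratios, advantages, eps=0.2):
    """Sketch of the two fixes described in the abstract (the exact
    formulation is an assumption, not the authors' code):
    (i)  unconditional clipping: every ratio is clipped, so no
         outlier bypasses the trust region;
    (ii) self-normalization: clipped ratios are renormalized to sum
         to 1, making the update a convex combination of per-sample
         terms instead of dividing by a fixed group size."""
    clipped = [min(max(r, 1 - eps), 1 + eps) for r in ratios]  # (i)
    total = sum(clipped)
    weights = [c / total for c in clipped]                     # (ii)
    return sum(w * a for w, a in zip(weights, advantages))

# Same noisy group as before: the 10.0 outlier is clipped to 1.2,
# and the result is bounded by the advantages themselves.
adv = [1.0, -1.0, 0.5, -0.5]
ratios = [1.05, 10.0, 0.95, 1.02]
print(stabledrl_surrogate(ratios, adv))
```

Because the weights are non-negative and sum to one, the surrogate (and, by linearity, its gradient) stays inside the convex hull of the per-sample terms regardless of how badly the likelihood proxies misestimate any single ratio.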

Merits

Strength

StableDRL targets the identified failure mode directly: unconditional clipping removes the path by which estimation noise bypasses the trust region, and self-normalization bounds update magnitudes, breaking the self-reinforcing instability loop that drives policy drift and inflates ratio variance.

Demerits

Limitation

Because sequence probabilities remain intractable in dLLMs, StableDRL still relies on noisy likelihood proxies (e.g., ELBO-based or mean-field estimates); it stabilizes training under that noise rather than eliminating it. Implementing the method may also require significant computational resources and expertise in both reinforcement learning and large language models.

Expert Commentary

StableDRL is a meaningful advance toward stable reinforcement learning for dLLMs. Rather than patching symptoms, it reformulates GRPO around the actual source of instability: estimated, high-variance importance ratios. Unconditional clipping and self-normalization are simple, principled modifications, and the diagnosis of a self-reinforcing instability loop is itself a useful contribution for practitioners debugging reward collapse in dLLM post-training. The extension to block-wise diffusion models via a staircase attention mechanism is a novel and promising direction for future research. The main open questions are the computational overhead of the likelihood-proxy estimates and how far the approach generalizes beyond the models studied.

Recommendations

  • Future research should focus on further extending StableDRL to other types of large language models and exploring its applications in natural language processing and AI research.
  • Developers and researchers should carefully evaluate the computational resources and expertise required for implementing StableDRL and consider potential trade-offs between stability and efficiency.

Sources