Advances in GRPO for Generation Models: A Survey

Zexiang Liu, Xianglong He, Yangguang Li

arXiv:2603.06623v1 — Abstract: Large-scale flow matching models have achieved strong performance across generative tasks such as text-to-image, video, 3D, and speech synthesis. However, aligning their outputs with human preferences and task-specific objectives remains challenging. Flow-GRPO extends Group Relative Policy Optimization (GRPO) to generation models, enabling stable reinforcement learning alignment for generative systems. Since its introduction, Flow-GRPO has triggered rapid research growth, spanning methodological refinements and diverse application domains. This survey provides a comprehensive review of Flow-GRPO and its subsequent developments. We organize existing work along two primary dimensions. First, we analyze methodological advances beyond the original framework, including reward signal design, credit assignment, sampling efficiency, diversity preservation, reward hacking mitigation, and reward model construction. Second, we examine extensions of GRPO-based alignment across generative paradigms and modalities, including text-to-image, video generation, image editing, speech and audio, 3D modeling, embodied vision-language-action systems, unified multimodal models, autoregressive and masked diffusion models, and restoration tasks. By synthesizing theoretical insights and practical adaptations, this survey highlights Flow-GRPO as a general alignment framework for modern generative models and outlines key open challenges for scalable and robust reinforcement-based generation.

Executive Summary

This article surveys Flow-GRPO, a framework for aligning generative models with human preferences and task-specific objectives through reinforcement learning. The authors analyze methodological advances beyond the original framework and trace extensions of GRPO-based alignment across generative paradigms and modalities. By synthesizing theoretical insights with practical adaptations, the survey positions Flow-GRPO as a general alignment framework for modern generative models and identifies key open challenges for scalable and robust reinforcement-based generation. Its findings carry significant implications for building more effective and efficient generative models across a wide range of applications.

Key Points

  • Flow-GRPO extends Group Relative Policy Optimization (GRPO) to generation models, enabling stable reinforcement learning alignment for generative systems.
  • The framework has triggered rapid research growth, spanning methodological refinements and diverse application domains.
  • The survey organizes existing work along two primary dimensions: methodological advances and extensions of GRPO-based alignment across generative paradigms and modalities.
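The points above center on Group Relative Policy Optimization, whose defining idea is to replace a learned value baseline with statistics computed over a group of rollouts for the same prompt, combined with a PPO-style clipped surrogate. Below is a minimal, illustrative sketch of those two pieces; the function names are ours, the reward values are made up, and this is not the paper's implementation.

```python
import math

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO's core trick: each rollout's advantage is its reward
    standardized against the mean and std of its own group,
    so no separate value network is needed."""
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))
    return [(r - mean) / (std + eps) for r in rewards]

def clipped_surrogate(ratio, advantage, clip_eps=0.2):
    """PPO-style clipped objective for one sample (to be maximized).
    `ratio` is the new-policy / old-policy likelihood ratio."""
    clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps)
    return min(ratio * advantage, clipped * advantage)

# Four generations for the same prompt, scored by a reward model (toy values):
rewards = [0.2, 0.8, 0.5, 0.5]
advs = group_relative_advantages(rewards)
# Advantages sum to ~0: above-average samples are pushed up, below-average down.
```

In Flow-GRPO this recipe is applied to flow matching samplers, whose deterministic generation is made stochastic so that likelihood ratios (and hence the clipped surrogate) are well defined.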

Merits

Comprehensive coverage

Comprehensive review of Flow-GRPO and its subsequent developments, providing a thorough understanding of the framework's capabilities and limitations.

Interdisciplinary approach

The survey synthesizes insights from both theoretical and practical perspectives, demonstrating the framework's potential for real-world applications.

Clear organization

The authors organize existing work along two primary dimensions, making it easy to navigate and understand the various methodological advances and extensions of GRPO-based alignment.

Demerits

Narrow scope

The survey's focus on Flow-GRPO may lead to a narrow perspective on the broader challenges of aligning generative models with human objectives.

Technical complexity

The survey assumes a high level of technical expertise in reinforcement learning and generative models, which may limit its accessibility to non-experts.

Lack of empirical evaluation

The survey primarily focuses on theoretical insights and methodological advances, leaving room for future empirical evaluations to validate the effectiveness of Flow-GRPO in various applications.

Expert Commentary

The survey delivers a thorough review of Flow-GRPO and its subsequent developments, making a strong case for the framework as a general approach to aligning generative models with human objectives. Two caveats temper this contribution: concentrating on a single framework risks a narrow view of generative-model alignment as a whole, and the presentation presumes substantial expertise in reinforcement learning and generative modeling, which limits accessibility for non-experts. Even so, the findings matter for building more effective and efficient generative models across applications, and the survey's recommendations offer a valuable starting point for future research.

Recommendations

  • Future research should focus on empirical evaluations of Flow-GRPO in various applications to validate its effectiveness and identify potential limitations.
  • The development of more effective generative models requires a multidisciplinary approach, incorporating insights from both theoretical and practical perspectives.