Controllable Exploration in Hybrid-Policy RLVR for Multi-Modal Reasoning
arXiv:2602.20197v1 Announce Type: new Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a primary learning paradigm for enhancing the reasoning capabilities of multi-modal large language models (MLLMs). However, during RL training, the enormous state space of MLLMs and sparse rewards often lead to entropy collapse, policy degradation, or over-exploitation of suboptimal behaviors. This necessitates an exploration strategy that maintains productive stochasticity while avoiding the inefficiency of uncontrolled random sampling. In this paper, we propose CalibRL, a hybrid-policy RLVR framework that supports controllable exploration with expert guidance, enabled by two key mechanisms. First, a distribution-aware advantage weighting scales updates by group rareness to calibrate the distribution, thereby preserving exploration. Second, an asymmetric activation function (LeakyReLU) leverages expert knowledge as a calibration baseline to moderate overconfident updates while preserving their corrective direction. CalibRL increases policy entropy in a guided manner and clarifies the target distribution by estimating the on-policy distribution through online sampling. Updates are driven by these informative behaviors, avoiding convergence to erroneous patterns. Importantly, these designs alleviate the distributional mismatch between the model's policy and the expert trajectories, thereby achieving a more stable balance between exploration and exploitation. Extensive experiments across eight benchmarks, covering both in-domain and out-of-domain settings, demonstrate consistent improvements, validating the effectiveness of controllable hybrid-policy RLVR training. Code is available at https://github.com/zhh6425/CalibRL.
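The abstract does not give the exact formulation of the distribution-aware advantage weighting, but the idea of scaling group-relative updates by outcome rareness can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: it assumes GRPO-style group-normalized advantages and uses within-group reward frequency as a stand-in for "group rareness".

```python
import numpy as np

def rareness_weighted_advantages(group_rewards):
    """Illustrative sketch (NOT the paper's exact method): compute
    group-normalized advantages, then upweight samples whose reward
    outcome is rare within the rollout group, so infrequent but
    informative behaviors drive larger updates."""
    r = np.asarray(group_rewards, dtype=float)
    # GRPO-style group-relative advantage
    adv = (r - r.mean()) / (r.std() + 1e-8)
    # within-group frequency of each reward outcome
    _, inverse, counts = np.unique(r, return_inverse=True, return_counts=True)
    freq = counts[inverse] / len(r)
    # rarer outcomes receive larger weight; renormalize to keep the
    # overall update magnitude comparable
    weight = 1.0 / freq
    weight /= weight.mean()
    return adv * weight
```

For example, with rewards `[1, 0, 0, 0]` the single correct rollout is the rare outcome in its group, so its (positive) advantage is amplified relative to the three common failures, which matches the stated goal of preserving exploration of rare, productive behaviors.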
Executive Summary
The article proposes CalibRL, a hybrid-policy Reinforcement Learning with Verifiable Rewards (RLVR) framework that enables controllable exploration with expert guidance. CalibRL addresses entropy collapse, policy degradation, and over-exploitation in multi-modal large language models (MLLMs) by introducing distribution-aware advantage weighting and an asymmetric (LeakyReLU) activation on expert-guided updates. The framework achieves a more stable balance between exploration and exploitation, yielding consistent improvements across eight in-domain and out-of-domain benchmarks. The study contributes to the development of more effective RLVR methods for MLLMs, with potential applications in natural language processing and decision-making systems.
Key Points
- ▸ Introduction of CalibRL, a hybrid-policy RLVR framework
- ▸ Use of distribution-aware advantage weighting and asymmetric activation functions for controllable exploration
- ▸ Evaluation of CalibRL across eight benchmarks, demonstrating consistent improvements
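The second mechanism listed above, the asymmetric LeakyReLU activation, is described only at a high level in the abstract. A minimal sketch of the idea, under the assumption that the expert's reward serves as a calibration baseline and the policy-vs-expert gap is passed through LeakyReLU, might look like this (the function name and signature are hypothetical):

```python
def leaky_relu_calibrated_advantage(policy_reward, expert_reward, negative_slope=0.1):
    """Illustrative sketch (assumed formulation, not the paper's exact one):
    treat the expert's reward as a calibration baseline and pass the gap
    through LeakyReLU. Gains over the expert pass through at full strength,
    while shortfalls are damped by `negative_slope` rather than zeroed,
    moderating overconfident updates but preserving their corrective sign."""
    gap = policy_reward - expert_reward
    return gap if gap >= 0 else negative_slope * gap
```

The asymmetry is the point: a plain ReLU would discard the negative gap entirely and lose the corrective direction, whereas the leaky negative slope keeps a small, stabilizing push toward the expert baseline.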
Merits
Effective Exploration-Exploitation Balance
CalibRL's design enables a stable balance between exploration and exploitation, addressing a key challenge in RLVR training.
Demerits
Limited Generalizability
The study's focus on MLLMs may limit the generalizability of CalibRL to other domains or applications.
Expert Commentary
The article presents a significant contribution to the field of RLVR, addressing the long-standing challenge of balancing exploration and exploitation in MLLMs. CalibRL's innovative design, leveraging distribution-aware advantage weighting and asymmetric activation functions, demonstrates a promising approach to achieving controllable exploration. The evaluation across multiple benchmarks provides strong evidence for the framework's effectiveness. However, further research is necessary to explore the generalizability of CalibRL to other domains and applications.
Recommendations
- ✓ Further evaluation of CalibRL in diverse domains and applications
- ✓ Investigation of the potential integration of CalibRL with other RLVR methods