Know What You Know: Metacognitive Entropy Calibration for Verifiable RL Reasoning

Abstract (arXiv:2602.22751v1): Large reasoning models (LRMs) have emerged as a powerful paradigm for solving complex real-world tasks. In practice, these models are predominantly trained via Reinforcement Learning with Verifiable Rewards (RLVR), yet most existing outcome-only RLVR pipelines rely almost exclusively on a binary correctness signal and largely ignore the model's intrinsic uncertainty. We term this discrepancy the uncertainty-reward mismatch, under which high- and low-uncertainty solutions are treated equivalently, preventing the policy from "Know What You Know" and impeding the shift from optimizing for correct answers to optimizing effective reasoning paths. This limitation is especially critical in reasoning-centric tasks such as mathematics and question answering, where performance hinges on the quality of the model's internal reasoning process rather than mere memorization of final answers. To address this, we propose EGPO, a metacognitive entropy calibration framework that explicitly integrates intrinsic uncertainty into RLVR for enhancing LRMs. EGPO estimates per-sample uncertainty using a zero-overhead entropy proxy derived from token-level likelihoods and aligns it with extrinsic correctness through an asymmetric calibration mechanism that preserves correct reasoning while selectively regulating overconfident failures, thereby enabling stable and uncertainty-aware policy optimization. Moreover, EGPO recovers informative learning signals from otherwise degenerate group-based rollouts without modifying the verifier or reward definition. Extensive experiments across multiple benchmarks demonstrate that the proposed EGPO leads to substantial and consistent improvements in reasoning performance, establishing a principled path for advancing LRMs through metacognitive entropy calibration.
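
The abstract does not spell out the proxy's exact form; below is a minimal sketch, assuming it is the mean negative log-likelihood of the sampled tokens. Since those log-probs are already produced during the RLVR rollout, the estimate adds no extra forward passes, which is one plausible reading of "zero-overhead". The function name and example values are illustrative.

```python
def entropy_proxy(token_logprobs: list[float]) -> float:
    """Per-sample uncertainty proxy: mean negative log-likelihood over
    the generated tokens. Reuses the log-probs already computed during
    the RLVR rollout, so no additional model calls are needed."""
    if not token_logprobs:
        return 0.0
    return -sum(token_logprobs) / len(token_logprobs)

# Log-probs of the sampled tokens, as emitted by the rollout itself.
logprobs = [-0.02, -0.51, -0.13, -1.20, -0.07]
print(entropy_proxy(logprobs))  # 0.386 -- higher means more uncertain
```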

Executive Summary

The article proposes EGPO, a metacognitive entropy calibration framework for Reinforcement Learning with Verifiable Rewards (RLVR) aimed at enhancing Large Reasoning Models (LRMs). EGPO addresses the uncertainty-reward mismatch, in which outcome-only RLVR pipelines treat high- and low-uncertainty solutions identically, by integrating the model's intrinsic uncertainty into policy optimization. It estimates per-sample uncertainty with a zero-overhead entropy proxy derived from token-level likelihoods and aligns that uncertainty with extrinsic correctness through an asymmetric calibration mechanism that preserves correct reasoning while selectively regulating overconfident failures. Across multiple benchmarks, the authors report substantial and consistent improvements in reasoning performance.
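
One way to read the asymmetric calibration mechanism, sketched here under assumptions since the abstract gives no formula: correct solutions keep their full reward regardless of uncertainty, while failures are penalized in proportion to how confident the model was. The hyperparameters `lam` and `tau` below are hypothetical, not values from the paper.

```python
def calibrated_reward(correct: bool, uncertainty: float,
                      lam: float = 0.1, tau: float = 0.5) -> float:
    """Asymmetric calibration sketch: preserve correct reasoning,
    selectively regulate overconfident failures."""
    if correct:
        return 1.0  # correct solutions are left untouched
    # For failures, low uncertainty (high confidence) draws an extra
    # penalty on top of the zero correctness reward; `tau` marks the
    # confidence level below which no extra penalty applies.
    overconfidence = max(0.0, tau - uncertainty)
    return -lam * overconfidence
```

The one-sided shaping mirrors the design described in the abstract: rewards for correct reasoning are never reduced, and only overconfident failures draw the additional regularization, which leaves the verifier and the reward definition for successes unchanged.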

Key Points

  • EGPO addresses the uncertainty-reward mismatch in outcome-only RLVR pipelines.
  • The framework integrates the model's intrinsic uncertainty into RLVR to enhance LRMs.
  • EGPO's metacognitive entropy calibration mechanism enables stable and uncertainty-aware policy optimization.
  • EGPO recovers informative learning signals from otherwise degenerate group-based rollouts without modifying the verifier or reward definition (see the sketch after this list).
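
A sketch of how entropy calibration can recover signal from degenerate groups, assuming a group-normalized advantage in the style of GRPO (the fallback rule and names below are illustrative, not the paper's exact mechanism): when every rollout in a group receives the same verifier reward, the normalized advantages collapse to zero and the group contributes no gradient; ranking all-wrong rollouts by their uncertainty instead restores a learning signal.

```python
import statistics

def group_advantages(rewards: list[float],
                     uncertainties: list[float],
                     eps: float = 1e-6) -> list[float]:
    """Group-normalized advantages with an entropy-based fallback for
    degenerate groups (all rollouts equally right or equally wrong)."""
    std = statistics.pstdev(rewards)
    if std > eps:
        # Informative group: standard group normalization.
        mean = statistics.fmean(rewards)
        return [(r - mean) / std for r in rewards]
    if rewards and rewards[0] > 0:
        # All rollouts correct: preserve them, no reshaping needed.
        return [0.0] * len(rewards)
    # All rollouts wrong: the verifier signal is flat, so rank rollouts
    # by uncertainty -- overconfident (low-entropy) failures receive
    # the most negative advantage.
    u_mean = statistics.fmean(uncertainties)
    u_std = statistics.pstdev(uncertainties) + eps
    return [(u - u_mean) / u_std for u in uncertainties]
```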

Merits

Strength in Theory

EGPO rests on a clearly stated problem formulation, the uncertainty-reward mismatch, and derives from it a principled mechanism for folding intrinsic uncertainty into outcome-only RLVR.

Strength in Practice

The authors report extensive experiments showing substantial and consistent improvements in reasoning performance across multiple benchmarks, achieved without modifying the verifier or the reward definition.

Demerits

Limitation in Scalability

Although the entropy proxy itself is described as zero-overhead, estimating per-sample uncertainty and aligning it with extrinsic correctness inside large-scale RLVR runs may still demand significant computational resources.

Limitation in Generalizability

The authors primarily focus on reasoning-centric tasks such as mathematics and question answering; it is unclear whether EGPO can generalize to other domains.

Expert Commentary

EGPO is a significant contribution to the field of Large Reasoning Models, addressing a critical blind spot in existing RLVR pipelines: the mismatch between the binary correctness signal and the model's intrinsic uncertainty. The emphasis on metacognitive entropy calibration offers a principled route toward uncertainty-aware policy optimization. That said, further research is needed on the framework's scalability and generalizability, and its broader applications, including implications for how such models are governed and deployed, warrant exploration.

Recommendations

  • Researchers should investigate the scalability of the EGPO framework for large-scale applications.
  • Policymakers should consider the EGPO framework's implications for the development and deployment of Large Reasoning Models.
