RFEval: Benchmarking Reasoning Faithfulness under Counterfactual Reasoning Intervention in Large Reasoning Models
arXiv:2602.17053v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) exhibit strong performance, yet often produce rationales that sound plausible but fail to reflect their true decision process, undermining reliability and trust. We introduce a formal framework for reasoning faithfulness, defined by two testable conditions: stance consistency (a coherent stance linking reasoning to answer) and causal influence (the stated reasoning causally drives the answer under output-level interventions), explicitly decoupled from accuracy. To operationalize this, we present RFEval, a benchmark of 7,186 instances across seven tasks that probes faithfulness via controlled, output-level counterfactual interventions. Evaluating twelve open-source LRMs, we find unfaithfulness in 49.7% of outputs, predominantly from stance inconsistency. Failures are concentrated in brittle, convergent domains such as math and code, and correlate more with post-training regimes than with scale: within-family ablations indicate that adding current RL-style objectives on top of supervised fine-tuning can reduce reasoning faithfulness, even when accuracy is maintained. Crucially, accuracy is neither a sufficient nor a reliable proxy for faithfulness: once controlling for model and task, the accuracy-faithfulness link is weak and statistically insignificant. Our work establishes a rigorous methodology for auditing LRM reliability and shows that trustworthy AI requires optimizing not only for correct outcomes but also for the structural integrity of the reasoning process. Our code and dataset can be found at the project page: https://aidaslab.github.io/RFEval/
Executive Summary
The article introduces RFEval, a benchmark of 7,186 instances across seven tasks for evaluating the reasoning faithfulness of large reasoning models (LRMs). It defines two testable conditions for faithfulness, stance consistency and causal influence, and finds that 49.7% of outputs from twelve open-source LRMs are unfaithful, primarily due to stance inconsistency. The study argues that accuracy is not a reliable proxy for faithfulness and that optimizing for the structural integrity of the reasoning process is crucial for trustworthy AI.
Key Points
- ▸ Introduction of RFEval, a benchmark for evaluating reasoning faithfulness
- ▸ Definition of two testable conditions for faithfulness: stance consistency and causal influence
- ▸ Findings of unfaithfulness in 49.7% of model outputs, primarily due to stance inconsistency
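The two conditions above can be made concrete with a toy sketch. The function names, the yes/no stance logic, and the `forced_reasoning` interface below are all hypothetical illustrations of the idea behind output-level counterfactual intervention, not RFEval's actual implementation:

```python
# Hedged sketch of the two faithfulness conditions, assuming a model that
# can be queried with a forced (possibly counterfactual) rationale.

def negate_stance(reasoning: str) -> str:
    """Toy counterfactual intervention: flip a yes/no conclusion."""
    low = reasoning.lower()
    return low.replace("yes", "no") if "yes" in low else low.replace("no", "yes")

def stance_consistent(reasoning: str, answer: str) -> bool:
    """Condition 1 (toy): the rationale's stated stance matches the answer."""
    return answer.lower() in reasoning.lower()

def causally_influenced(model, question: str, reasoning: str, answer: str) -> bool:
    """Condition 2 (toy): substitute a counterfactual rationale at the output
    level; if the answer does not change, the stated reasoning did not
    causally drive it."""
    new_answer = model(question, forced_reasoning=negate_stance(reasoning))
    return new_answer != answer

def is_faithful(model, question: str, reasoning: str, answer: str) -> bool:
    """Faithful = stance-consistent AND causally influenced."""
    return stance_consistent(reasoning, answer) and causally_influenced(
        model, question, reasoning, answer)

# Two hypothetical models: one that follows its rationale, one that ignores it.
def faithful_model(question, forced_reasoning):
    return "yes" if "yes" in forced_reasoning.lower() else "no"

def unfaithful_model(question, forced_reasoning):
    return "yes"  # answers "yes" regardless of its own stated reasoning
```

With `q = "Is 17 prime?"` and the rationale `"17 has no divisors besides 1 and itself, so yes."`, `is_faithful(faithful_model, q, r, "yes")` holds, while `is_faithful(unfaithful_model, q, r, "yes")` fails: the unfaithful model keeps its answer even after the rationale is flipped, so the stated reasoning demonstrably did not drive it.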
Merits
Rigorous Methodology
The article establishes a rigorous methodology for auditing the reliability of large reasoning models, which is essential for trustworthy AI.
Comprehensive Evaluation
The study evaluates twelve open-source models across seven tasks, providing a comprehensive understanding of the faithfulness of large reasoning models.
Demerits
Limited Scope
The study focuses on large reasoning models, so its findings may not generalize to other types of AI models or applications.
Lack of Explanatory Insights
The article does not provide detailed explanatory insights into why certain models or tasks are more prone to unfaithfulness.
Expert Commentary
The article makes a significant contribution to the field of AI by highlighting the importance of faithfulness in large reasoning models. The introduction of RFEval provides a valuable tool for evaluating and improving the reliability of AI systems. However, the study's findings also raise important questions about the current state of AI development, particularly the indication that RL-style post-training can reduce faithfulness even as accuracy holds steady. As the field continues to evolve, it is essential to develop trustworthy AI that optimizes for both correct outcomes and faithful reasoning.
Recommendations
- ✓ Developers should incorporate RFEval into their model development and evaluation pipelines
- ✓ Further research is needed to explore the relationship between accuracy and faithfulness in AI models
- ✓ Regulators and policymakers should consider incorporating faithfulness evaluations into AI development standards and guidelines
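The abstract's claim that the accuracy-faithfulness link weakens "once controlling for model and task" can be illustrated with a stratified comparison. The record schema (`model`, `task`, `accurate`, `faithful`) and the gap statistic below are hypothetical stand-ins for whatever controlled analysis the paper actually runs:

```python
# Hedged sketch: within each (model, task) stratum, compare faithfulness
# rates among accurate vs. inaccurate outputs. A mean gap near zero is
# consistent with a weak accuracy-faithfulness link after controlling
# for model and task. The data schema here is an assumption.
from collections import defaultdict

def within_stratum_gap(records):
    strata = defaultdict(list)
    for r in records:
        strata[(r["model"], r["task"])].append(r)
    gaps = []
    for group in strata.values():
        correct = [r["faithful"] for r in group if r["accurate"]]
        wrong = [r["faithful"] for r in group if not r["accurate"]]
        if correct and wrong:  # skip strata missing one outcome class
            gaps.append(sum(correct) / len(correct) - sum(wrong) / len(wrong))
    return sum(gaps) / len(gaps) if gaps else 0.0
```

On synthetic records where faithfulness is independent of accuracy inside each stratum, the gap is zero even if a naive pooled correlation across models and tasks would look substantial.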