Believe Your Model: Distribution-Guided Confidence Calibration
arXiv:2603.03872v1 Announce Type: new Abstract: Large Reasoning Models have demonstrated remarkable performance with the advancement of test-time scaling techniques, which enhance prediction accuracy by generating multiple candidate responses and selecting the most reliable answer. While prior work has shown that internal model signals such as confidence scores can partly indicate response correctness and exhibit a distributional correlation with accuracy, such distributional information has not been fully exploited to guide answer selection. Motivated by this, we propose DistriVoting, which incorporates distributional priors as an additional signal alongside confidence during voting. Specifically, our method (1) first decomposes the mixed confidence distribution into positive and negative components using Gaussian Mixture Models, and (2) then applies a reject filter based on positive/negative samples drawn from them to mitigate overlap between the two distributions. In addition, to further alleviate the overlap from the perspective of the distribution itself, we propose SelfStepConf, which uses step-level confidence to dynamically adjust the inference process, increasing the separation between the two distributions and improving the reliability of confidence scores in voting. Experiments across 16 models and 5 benchmarks demonstrate that our method significantly outperforms state-of-the-art approaches.
Executive Summary
The article proposes a novel approach, DistriVoting, which enhances answer selection in large reasoning models by incorporating distributional priors alongside confidence scores. By decomposing the mixed confidence distribution into positive and negative components using Gaussian Mixture Models and applying a reject filter, DistriVoting improves the reliability of the confidence scores used in voting. The method also introduces SelfStepConf, which dynamically adjusts the inference process to increase the separation between the two distributions. Experiments demonstrate that DistriVoting outperforms state-of-the-art approaches across 16 models and 5 benchmarks.
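The decompose-then-reject pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `reject_margin` parameter, the fallback to plain majority voting, and the choice of the higher-mean component as "positive" are all assumptions made here for the sake of a runnable example.

```python
# Hedged sketch of a DistriVoting-style pipeline: fit a two-component
# Gaussian mixture to per-response confidence scores, reject responses
# whose posterior does not clearly favor the higher-mean ("positive")
# component, then majority-vote among the survivors.
import numpy as np
from collections import Counter
from sklearn.mixture import GaussianMixture

def distrivoting(answers, confidences, reject_margin=0.2):
    """answers: candidate final answers; confidences: one score per answer.
    reject_margin is an illustrative free parameter, not from the paper."""
    conf = np.asarray(confidences, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(conf)
    pos = int(np.argmax(gmm.means_.ravel()))      # higher-mean component
    p_pos = gmm.predict_proba(conf)[:, pos]
    # Reject filter: drop answers in the overlap region between components.
    keep = p_pos >= 0.5 + reject_margin
    if not keep.any():                            # fallback: plain voting
        keep = np.ones(len(answers), dtype=bool)
    votes = Counter(a for a, k in zip(answers, keep) if k)
    return votes.most_common(1)[0][0]
```

With clearly bimodal confidences, the low-confidence cluster is filtered out before voting, so a confident minority answer cannot be outvoted by many low-confidence duplicates.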
Key Points
- ▸ DistriVoting incorporates distributional priors to guide answer selection
- ▸ Gaussian Mixture Models are used to decompose the mixed confidence distribution
- ▸ SelfStepConf dynamically adjusts the inference process to improve confidence reliability
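One plausible reading of the step-level adjustment in SelfStepConf is monitoring a per-step confidence signal during generation and intervening when it drops. The sketch below is speculative: the `generate_step` interface, the geometric-mean token probability as the step confidence, and the resample-on-low-confidence rule are all assumptions, since the paper's exact adjustment mechanism is not detailed in this summary.

```python
# Hedged sketch of step-level confidence monitoring in the spirit of
# SelfStepConf. Assumption: each reasoning step exposes its token
# log-probabilities, and a low-confidence step triggers one resample.
import math

def step_confidence(token_logprobs):
    """Geometric-mean token probability of a single reasoning step."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def monitored_generation(generate_step, n_steps, threshold=0.6, retries=1):
    """generate_step(i) -> (step_text, token_logprobs); hypothetical API."""
    steps, confs = [], []
    for i in range(n_steps):
        text, lps = generate_step(i)
        c = step_confidence(lps)
        for _ in range(retries):
            if c >= threshold:
                break
            # Resample the low-confidence step rather than carrying it
            # forward, which should sharpen the final confidence signal.
            text, lps = generate_step(i)
            c = step_confidence(lps)
        steps.append(text)
        confs.append(c)
    return steps, confs
```

The intended effect, per the abstract, is that intervening on low-confidence steps pushes correct and incorrect responses' confidences further apart, making the mixture components easier to separate at voting time.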
Merits
Improved Confidence Calibration
DistriVoting's use of distributional priors and SelfStepConf enhances the reliability of confidence scores, leading to more accurate answer selection
Demerits
Increased Computational Complexity
The introduction of Gaussian Mixture Models and SelfStepConf may increase the computational requirements of the model, potentially impacting scalability
Expert Commentary
The article presents a significant contribution to the field of large reasoning models, highlighting the importance of distributional priors and confidence calibration in improving answer selection. The use of Gaussian Mixture Models and SelfStepConf demonstrates a nuanced understanding of the underlying statistical properties of the model's confidence scores. However, further research is needed to fully explore the implications of these methods and to address potential limitations, such as increased computational complexity. The article's findings have important implications for the development of more accurate and reliable AI models, with potential applications in a range of fields.
Recommendations
- ✓ Further research is needed to explore the applicability of DistriVoting and SelfStepConf to other types of AI models and applications
- ✓ The development of more efficient and scalable implementations of Gaussian Mixture Models and SelfStepConf is necessary to facilitate widespread adoption