RewardUQ: A Unified Framework for Uncertainty-Aware Reward Models
arXiv:2602.24040v1 Announce Type: cross Abstract: Reward models are central to aligning large language models (LLMs) with human preferences, yet most approaches rely on pointwise reward …
Daniel Yang, Samuel Stante, Florian Redhardt, Lena Libon, Parnian Kassraie, Ido Hakimi, Barna Pásztor, Andreas Krause