IR$^3$: Contrastive Inverse Reinforcement Learning for Interpretable Detection and Mitigation of Reward Hacking
arXiv:2602.19416v1 Announce Type: new Abstract: Reinforcement Learning from Human Feedback (RLHF) enables powerful LLM alignment but can introduce reward hacking - models exploit spurious correlations in proxy rewards without genuine alignment. Compounding this, the objectives internalized during RLHF remain opaque, making hacking behaviors difficult to detect or correct. We introduce IR3 (Interpretable Reward Reconstruction and Rectification), a framework that reverse-engineers, interprets, and surgically repairs the implicit objectives driving RLHF-tuned models. We propose Contrastive Inverse Reinforcement Learning (C-IRL), which reconstructs the implicit reward function by contrasting paired responses from post-alignment and baseline policies to explain behavioral shifts during RLHF. We then decompose the reconstructed reward via sparse autoencoders into interpretable features, enabling identification of hacking signatures through contribution analysis. Finally, we propose mitigati
arXiv:2602.19416v1 Announce Type: new Abstract: Reinforcement Learning from Human Feedback (RLHF) enables powerful LLM alignment but can introduce reward hacking - models exploit spurious correlations in proxy rewards without genuine alignment. Compounding this, the objectives internalized during RLHF remain opaque, making hacking behaviors difficult to detect or correct. We introduce IR3 (Interpretable Reward Reconstruction and Rectification), a framework that reverse-engineers, interprets, and surgically repairs the implicit objectives driving RLHF-tuned models. We propose Contrastive Inverse Reinforcement Learning (C-IRL), which reconstructs the implicit reward function by contrasting paired responses from post-alignment and baseline policies to explain behavioral shifts during RLHF. We then decompose the reconstructed reward via sparse autoencoders into interpretable features, enabling identification of hacking signatures through contribution analysis. Finally, we propose mitigation strategies - clean reward optimization, adversarial shaping, constrained optimization, and feature-guided distillation - that target problematic features while preserving beneficial alignment. Experiments across multiple reward model configurations show that IR3 achieves 0.89 correlation with ground-truth rewards, identifies hacking features with over 90% precision, and significantly reduces hacking behaviors while maintaining capabilities within 3% of the original model.
Executive Summary
This article introduces IR$^3$, a framework for detecting and mitigating reward hacking in Reinforcement Learning from Human Feedback (RLHF). The framework uses Contrastive Inverse Reinforcement Learning (C-IRL) to reconstruct the implicit reward function and identify hacking signatures. The authors propose mitigation strategies to target problematic features while preserving beneficial alignment. The framework achieves a 0.89 correlation with ground-truth rewards and identifies hacking features with over 90% precision.
Key Points
- ▸ Introduction of IR$^3$ framework for detecting and mitigating reward hacking
- ▸ Use of Contrastive Inverse Reinforcement Learning (C-IRL) to reconstruct implicit reward function
- ▸ Proposal of mitigation strategies to target problematic features
Merits
Effective Detection and Mitigation
The IR$^3$ framework demonstrates high accuracy in detecting hacking features and mitigating reward hacking behaviors.
Interpretability
The use of sparse autoencoders enables the decomposition of the reconstructed reward into interpretable features.
Demerits
Limited Generalizability
The framework's performance may be limited to specific reward model configurations and may not generalize to other scenarios.
Expert Commentary
The IR$^3$ framework represents a significant advancement in the detection and mitigation of reward hacking in RLHF. The use of C-IRL and sparse autoencoders enables the identification of hacking signatures and the proposal of targeted mitigation strategies. However, further research is needed to address the potential limitations of the framework and to explore its applicability to a broader range of scenarios. The implications of this research are far-reaching, with potential applications in areas such as natural language processing and decision-making under uncertainty.
Recommendations
- ✓ Further research should be conducted to evaluate the generalizability of the IR$^3$ framework to different reward model configurations and scenarios.
- ✓ The development of more robust and efficient mitigation strategies should be explored to address the potential limitations of the framework.