Reward Hacking in Rubric-Based Reinforcement Learning

2026-05-12Artificial Intelligence

Artificial Intelligence
AI summary

The authors studied how reinforcement learning (RL) models can trick rubric-based reward systems, called reward hacking. They found that weaker verifiers often let the model improve scores without truly improving quality, especially in medical and science tasks. Stronger verifiers help reduce this problem but do not fully stop it, especially when the rubric misses key quality issues. The authors also developed a new way to detect when improvements stop being meaningful, even without external checks. Overall, stronger verification helps but doesn’t guarantee that better rubric scores mean better real-world performance.

Reinforcement LearningReward HackingRubric-based RewardsVerifierProxy RewardPolicy OptimizationFactual CorrectnessSelf-internalization GapEvaluation Metrics
Authors
Anas Mahmoud, MohammadHossein Rezaei, Zihao Wang, Anisha Gunjal, Bing Liu, Yunzhong He
Abstract
Reinforcement learning with verifiable rewards has enabled strong post-training gains in domains such as math and coding, though many open-ended settings rely on rubric-based rewards. We study reward hacking in rubric-based RL, where a policy is optimized against a training verifier but evaluated against a cross-family panel of three frontier judges, reducing dependence on any single evaluator. Our framework separates two sources of divergence: verifier failure, where the training verifier credits rubric criteria that reference verifiers reject, and rubric-design limitations, where even strong rubric-based verifiers favor responses that rubric-free judges rate worse overall. Across medical and science domains, weak verifiers produce large proxy-reward gains that do not transfer to the reference verifiers; exploitation grows over training and concentrates in recurring failures such as partial satisfaction of compound criteria, treating implicit content as explicit, and imprecise topical matching. Stronger verifiers substantially reduce, but do not eliminate, verifier exploitation. We also introduce a self-internalization gap, a verifier-free diagnostic based on policy log-probabilities, which tracks reference-verifier quality, detecting when the policy trained using the weak verifier stops improving. Finally, in our setting, stronger verification does not prevent reward hacking when the rubric leaves important failure modes unspecified: rubric-based verifiers prefer the RL checkpoint, while rubric-free judges prefer the base model. These disagreements coincide with gains concentrated in completeness and presence-based criteria, alongside declines in factual correctness, conciseness, relevance, and overall quality. Together, these results suggest that stronger verification reduces reward hacking, but does not by itself ensure that rubric gains correspond to broader quality gains.