When Errors Can Be Beneficial: A Categorization of Imperfect Rewards for Policy Gradient
2026-04-28 • Machine Learning
Machine Learning • Artificial Intelligence
AI summary
The authors study how language models learn from rewards that are not perfect copies of the true desired goals. They explain that not all mistakes in these rewards are bad; some can actually help the model learn better by avoiding getting stuck on mediocre answers. Their research leads to new ways to measure how good these reward systems are, especially when humans give feedback. They also offer advice on designing reward systems when the true rewards can be checked. Overall, they show that how well a reward works depends a lot on the starting model and the learning method used.
language models, reinforcement learning, proxy rewards, ground truth reward, policy gradient optimization, reward model evaluation, reinforcement learning from human feedback (RLHF), reward design, initial policy, learning algorithm
Authors
Shuning Shang, Hubert Strauss, Stanley Wei, Sanjeev Arora, Noam Razin
Abstract
Training language models via reinforcement learning often relies on imperfect proxy rewards, since ground truth rewards that precisely define the intended behavior are rarely available. Standard metrics for assessing the quality of proxy rewards, such as ranking accuracy, treat incorrect rewards as strictly harmful. In this work, however, we highlight that not all deviations from the ground truth are equal. By theoretically analyzing which outputs attract probability during policy gradient optimization, we categorize reward errors according to their effect on the increase in ground truth reward. The analysis establishes that reward errors, though conventionally viewed as harmful, can also be benign or even beneficial by preventing the policy from stalling around outputs with mediocre ground truth reward. We then present two practical implications of our theory. First, for reinforcement learning from human feedback (RLHF), we develop reward model evaluation metrics that account for the harmfulness of reward errors. Compared to standard ranking accuracy, these metrics typically correlate better with the performance of a language model after RLHF, yet gaps remain in robustly evaluating reward models. Second, we provide insights for reward design in settings with verifiable rewards. A key theme underlying our results is that the effectiveness of a proxy reward function depends heavily on its interaction with the initial policy and learning algorithm.
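To make the abstract's setup concrete, below is a minimal, hypothetical sketch (not the authors' code): a softmax policy over a handful of outputs is trained with policy gradient on a proxy reward containing a ranking error, while we track the ground truth reward it actually achieves and report the proxy's pairwise ranking accuracy. The specific reward values, the injected error, the initial logits, and the learning rate are all illustrative assumptions.

```python
# Illustrative sketch only: a toy policy gradient run on an imperfect proxy reward.
# Shows how a proxy with a ranking error (lower ranking accuracy) can still
# increase the ground truth reward, depending on the initial policy.
import numpy as np

# Hypothetical ground truth rewards for 4 candidate outputs (output 0 is best).
ground_truth = np.array([1.0, 0.6, 0.4, 0.0])

# Proxy reward with one ranking error: it swaps the scores of outputs 1 and 2,
# overrating the mediocre output 2, so pairwise ranking accuracy < 1.
proxy = np.array([1.0, 0.4, 0.6, 0.0])

def ranking_accuracy(r_proxy, r_true):
    """Fraction of output pairs ordered the same way by proxy and ground truth."""
    correct, total = 0, 0
    n = len(r_true)
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            if np.sign(r_proxy[i] - r_proxy[j]) == np.sign(r_true[i] - r_true[j]):
                correct += 1
    return correct / total

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Initial policy biased toward the mediocre output 2: the interaction between
# the reward error and the initial policy determines what happens next.
logits = np.array([0.0, 0.0, 2.0, 0.0])
lr = 0.5

print(f"proxy ranking accuracy: {ranking_accuracy(proxy, ground_truth):.2f}")
for step in range(201):
    probs = softmax(logits)
    # Exact policy gradient of E_pi[proxy reward] for a softmax policy:
    # d/d(logit_j) = pi_j * (proxy_j - E_pi[proxy]).
    baseline = probs @ proxy
    logits += lr * probs * (proxy - baseline)
    if step % 50 == 0:
        print(f"step {step:3d}  E[ground truth reward] = {probs @ ground_truth:.3f}")
```

In this toy run the proxy mis-ranks two outputs, yet it still pushes probability toward the ground-truth-best output and away from the mediocre one the initial policy favored, so the ground truth reward rises; this is only a sketch of the kind of benign error the abstract describes, not a reproduction of the paper's analysis.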