MARS: Margin-Aware Reward-Modeling with Self-Refinement

2026-02-19 · Machine Learning

Machine Learning · Artificial Intelligence · Information Theory
AI summary

The authors study how to improve training of reward models, which help teach AI what to do based on human preferences. Since getting enough human-labeled examples is expensive, they propose MARS, a method that focuses data augmentation on the trickiest cases where the model is unsure. This approach helps the model learn better by concentrating on examples it currently struggles with, leading to more reliable results. Their tests show MARS outperforms simpler, random augmentation strategies.

reward modeling, RLHF, data augmentation, preference learning, margin-aware sampling, policy optimization, PPO, loss function, model uncertainty, hard-sample mining
Authors
Payel Bhattacharjee, Osvaldo Simeone, Ravi Tandon
Abstract
Reward modeling is a core component of modern alignment pipelines including RLHF and RLAIF, underpinning policy optimization methods such as PPO and TRPO. However, training reliable reward models depends heavily on human-labeled preference data, which is costly and limited, motivating the use of data augmentation. Existing augmentation approaches typically operate at the representation or semantic level and remain agnostic to the reward model's estimation difficulty. In this paper, we propose MARS, an adaptive, margin-aware augmentation and sampling strategy that explicitly targets ambiguous cases and failure modes of the reward model. MARS concentrates augmentation on low-margin (ambiguous) preference pairs where the reward model is most uncertain, and iteratively refines the training distribution via hard-sample augmentation. We provide theoretical guarantees showing that this strategy increases the average curvature of the loss function, thereby enhancing information and improving conditioning, along with empirical results demonstrating consistent gains over uniform augmentation for robust reward modeling.
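As a rough illustration of the low-margin selection step described in the abstract, the sketch below ranks preference pairs by the reward margin r(chosen) − r(rejected) and returns the most ambiguous fraction as candidates for targeted augmentation. This is a minimal sketch under assumed interfaces: the function name select_low_margin_pairs, the frac parameter, and the toy scores are illustrative, not the paper's actual implementation.

```python
import numpy as np

def select_low_margin_pairs(r_chosen, r_rejected, frac=0.25):
    """Rank preference pairs by reward margin and return the indices of the
    lowest-margin (most ambiguous) fraction for targeted augmentation."""
    margins = np.asarray(r_chosen) - np.asarray(r_rejected)  # r(y+) - r(y-)
    order = np.argsort(margins)                # ascending: smallest margins first
    k = max(1, int(frac * len(margins)))       # how many hard pairs to keep
    return order[:k]

# Toy usage with hypothetical reward-model scores for chosen/rejected responses.
r_chosen = [2.1, 0.3, 1.7, 0.9]
r_rejected = [0.5, 0.2, 1.5, -0.4]
hard_idx = select_low_margin_pairs(r_chosen, r_rejected, frac=0.5)
print(hard_idx)  # indices of the pairs with the smallest margins, here [1, 2]
```

In this reading, the selected low-margin pairs would then be augmented and fed back into training, with selection repeated as the reward model's margins shift over iterations.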