Beyond Pairs: Your Language Model is Secretly Optimizing a Preference Graph
2026-05-08 • Machine Learning
Machine Learning · Artificial Intelligence
AI summary
The authors improve how language models learn from human preferences by moving beyond simple pairwise comparisons. Instead, they represent all preferences among multiple responses to a prompt as a graph, which preserves ordering consistency (transitivity) and avoids redundant or conflicting training signals. Their method, GraphDPO, exploits this richer preference structure while remaining simple to compute. They also add an optional way to anchor training on verified solutions to guide the early phase. Experiments show this graph-based approach works better on reasoning and coding tasks.
Direct Preference Optimization, Reinforcement Learning from Human Feedback, Preference Graphs, Plackett–Luce Model, Transitivity, Log-sum-exp, Language Models, Program Synthesis, Anchoring, Ranking Aggregation
Authors
Ning Liu, Chuanneng Sun, Kristina Klinkner, Shervin Malmasi
Abstract
Direct Preference Optimization (DPO) aligns language models using pairwise preference comparisons, offering a simple and effective alternative to Reinforcement Learning from Human Feedback (RLHF). However, in many practical settings, training data consists of multiple rollouts per prompt, inducing rich preference structure that pairwise DPO fails to exploit. Collapsing such data into independent pairs discards transitivity, introduces redundant or conflicting supervision, and can lead to unstable optimization. We propose Graph Direct Preference Optimization (GraphDPO), a principled generalization of DPO that operates over directed acyclic preference graphs induced by rollout rankings. GraphDPO encodes dominance relations as edges and optimizes a graph-structured Plackett–Luce-inspired objective that aggregates supervision over graph neighborhoods, enforcing transitivity while recovering standard DPO as a special case. To handle discrete or sparse signals, we introduce an equivalence-class construction where responses with identical preferences form graph layers, and intra-layer edges contribute zero loss, preventing spurious gradients. Despite leveraging full graph structure, GraphDPO maintains linear per-prompt complexity via efficient log-sum-exp aggregation. We further incorporate optional ground-truth anchoring by inserting verified solutions as dominant nodes and applying an annealed schedule that stabilizes early training while gradually relaxing oracle supervision. Experiments on reasoning and program synthesis tasks demonstrate superior performance, suggesting that graph-structured preference modeling is a scalable and robust alternative to pairwise and listwise alignment objectives.
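The abstract does not state the loss in closed form, so the following is a minimal sketch of one way such a graph-structured, Plackett–Luce-style objective could look. It assumes implicit DPO rewards r_i = β(log π_θ(y_i|x) − log π_ref(y_i|x)), integer "layers" encoding the equivalence classes, and a per-node log-sum-exp over the node together with everything it dominates; the function and argument names (`graph_dpo_loss`, `policy_logps`, `ref_logps`, `layers`, `beta`) are illustrative, not from the paper.

```python
# Illustrative sketch only: shapes and the exact aggregation are assumptions,
# not the paper's published equations.
import torch


def graph_dpo_loss(policy_logps: torch.Tensor,
                   ref_logps: torch.Tensor,
                   layers: torch.Tensor,
                   beta: float = 0.1) -> torch.Tensor:
    """Graph-structured, Plackett-Luce-style preference loss for one prompt.

    policy_logps, ref_logps: (K,) summed log-probs of the K rollouts under the
        policy and the frozen reference model.
    layers: (K,) integer preference layers (0 = best); equal values mean the
        rollouts are tied, i.e. they share an equivalence class.
    """
    # Implicit DPO rewards: r_i = beta * (log pi_theta(y_i|x) - log pi_ref(y_i|x)).
    rewards = beta * (policy_logps - ref_logps)
    k = rewards.numel()

    # dominates[i, j] is True iff rollout i is strictly preferred to rollout j.
    dominates = layers.unsqueeze(1) < layers.unsqueeze(0)                 # (K, K)
    # Each node competes against itself plus everything it dominates, so tied
    # (same-layer) pairs never enter each other's normalizer and contribute
    # exactly zero loss, as described in the abstract.
    neighborhood = dominates | torch.eye(k, dtype=torch.bool, device=rewards.device)

    scores = rewards.unsqueeze(0).expand(k, k)
    scores = scores.masked_fill(~neighborhood, float("-inf"))
    denom = torch.logsumexp(scores, dim=1)                                 # (K,)
    per_node = -(rewards - denom)

    # Bottom-layer nodes dominate nothing, so their term is identically zero;
    # average over the rest. With K = 2 and layers = [0, 1] this reduces to the
    # standard pairwise DPO loss -log(sigmoid(r_w - r_l)).
    active = dominates.any(dim=1)
    if not active.any():                     # all rollouts tied: no preference signal
        return rewards.sum() * 0.0
    return per_node[active].mean()
```

For readability this sketch builds a dense O(K²) mask; the linear per-prompt complexity claimed in the abstract would instead come from computing suffix log-sum-exps over layers ordered from worst to best. Ground-truth anchoring would then amount to inserting a verified solution as an additional top layer and annealing its weight over training, which is outside this sketch.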