Provable Last-Iterate Convergence for Multi-Objective Safe LLM Alignment via Optimistic Primal-Dual
2026-02-25 • Machine Learning
Machine Learning • Artificial Intelligence
AI summary
The authors study how to align large language models with human preferences using reinforcement learning from human feedback (RLHF) under safety constraints. They show that existing methods can be unstable or fail to converge when training these models. To fix this, the authors propose a new approach, the optimistic primal-dual (OPD) algorithm, that improves stability and ensures convergence in both idealized and practical settings. Their work helps connect theory with real-world practice in aligning AI behavior safely.
Reinforcement Learning from Human Feedback • Large Language Models • Primal-dual Optimization • Saddle-point Problem • Policy Parameterization • Optimistic Primal-Dual Algorithm • Convergence Guarantees • Safe Reinforcement Learning • Alignment Algorithms • Oscillations in Optimization
Authors
Yining Li, Peizhong Ju, Ness Shroff
Abstract
Reinforcement Learning from Human Feedback (RLHF) plays a significant role in aligning Large Language Models (LLMs) with human preferences. While RLHF with expected reward constraints can be formulated as a primal-dual optimization problem, standard primal-dual methods only guarantee convergence under a distributional policy class, where the saddle-point problem is convex-concave. Moreover, standard primal-dual methods may exhibit instability or divergence in the last iterate under policy parameterization in practical applications. In this work, we propose a universal primal-dual framework for safe RLHF that unifies a broad class of existing alignment algorithms, including safe-RLHF, one-shot, and multi-shot methods. Building on this framework, we introduce an optimistic primal-dual (OPD) algorithm that incorporates predictive updates for both primal and dual variables to stabilize saddle-point dynamics. We establish last-iterate convergence guarantees for the proposed method, covering both exact policy optimization in the distributional space and convergence to a neighborhood of the optimal solution, with a gap governed by approximation error and bias, under parameterized policies. Our analysis reveals that optimism plays a crucial role in mitigating the oscillations inherent to constrained alignment objectives, thereby closing a key theoretical gap between constrained RL and practical RLHF.
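The stabilizing effect of optimism that the abstract describes can be illustrated on a toy problem; the example below is ours, not from the paper, and uses an unconstrained bilinear saddle point min_x max_y xy in place of the full RLHF objective. Plain gradient descent-ascent spirals away from the saddle at (0, 0), while the optimistic variant, which replaces each gradient with the extrapolation 2g_t - g_{t-1}, converges in the last iterate:

```python
def gda(step, iters, optimistic):
    """Gradient descent-ascent on min_x max_y f(x, y) = x * y.

    The unique saddle point is (0, 0). With optimistic=True, each
    update uses the extrapolated gradient 2*g_t - g_{t-1}, a standard
    form of the optimistic/predictive update.
    """
    x, y = 1.0, 1.0
    gx_prev, gy_prev = 0.0, 0.0
    for _ in range(iters):
        gx, gy = y, x  # df/dx = y, df/dy = x
        if optimistic:
            dx, dy = 2 * gx - gx_prev, 2 * gy - gy_prev
        else:
            dx, dy = gx, gy
        x, y = x - step * dx, y + step * dy  # descent in x, ascent in y
        gx_prev, gy_prev = gx, gy
    return x, y
```

For this bilinear game, plain descent-ascent multiplies the distance to the saddle by sqrt(1 + step^2) every iteration, so the last iterate diverges for any step size; the optimistic iterates contract linearly for small steps, mirroring the oscillation-damping role the paper attributes to optimism.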