Learning Over-Relaxation Policies for ADMM with Convergence Guarantees

2026-04-29 · Machine Learning
AI summary

The authors study ADMM, a popular method for solving structured convex optimization problems efficiently. They improve its speed by learning how to adjust the relaxation parameter while the solver runs, targeting settings where many similar problems are solved repeatedly, as in Model Predictive Control. Adjusting the relaxation parameter avoids the expensive matrix refactorizations that penalty-parameter updates require, which makes the approach practical. They prove that the method still converges reliably and show that it solves benchmark problems faster than a standard solver.

Keywords
ADMM, convex optimization, relaxation parameter, penalty parameter, Model Predictive Control, quadratic programming, OSQP solver, convergence guarantees
Authors
Junan Lin, Paul J. Goulart, Luca Furieri
Abstract
The Alternating Direction Method of Multipliers (ADMM) is a widely used method for structured convex optimization, and its practical performance depends strongly on the choice of penalty and relaxation parameters. Motivated by settings such as Model Predictive Control (MPC), where one repeatedly solves related optimization problems with fixed structure and changing parameter values, we propose learning online updates of the relaxation parameter to improve performance on problem classes of interest. This choice is computationally attractive in OSQP-like architectures, since adapting the relaxation parameter does not trigger the matrix refactorizations associated with penalty-parameter updates. We establish convergence guarantees for ADMM with time-varying penalty and relaxation parameters under mild assumptions, and show on benchmark quadratic programs that the resulting learned policies improve both iteration count and wall-clock time over baseline OSQP.
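
To make the abstract's computational argument concrete, here is a minimal NumPy/SciPy sketch of an over-relaxed ADMM loop for a quadratic program in OSQP form, min ½xᵀPx + qᵀx subject to l ≤ Ax ≤ u. It follows the standard OSQP splitting, not the authors' actual implementation; the function name `over_relaxed_admm` and the callable `alpha_policy` are hypothetical stand-ins for a learned relaxation schedule. The point it illustrates: the penalty ρ enters the KKT matrix, which is factorized once, while the relaxation α enters only cheap vector updates, so α can change every iteration at no extra cost.

```python
import numpy as np
from scipy.sparse import bmat, csc_matrix, identity
from scipy.sparse.linalg import splu

def over_relaxed_admm(P, q, A, l, u, alpha_policy,
                      rho=0.1, sigma=1e-6, max_iter=2000, tol=1e-4):
    """Over-relaxed ADMM for  min 0.5 x'Px + q'x  s.t.  l <= Ax <= u,
    following the standard OSQP splitting; `alpha_policy(k)` supplies
    the relaxation parameter at iteration k (a stand-in for a learned
    policy, not the authors' implementation)."""
    P, A = csc_matrix(P), csc_matrix(A)
    n, m = P.shape[0], A.shape[0]

    # The KKT matrix depends on (sigma, rho) but NOT on alpha, so it is
    # factorized once; a penalty (rho) update would force a refactorization,
    # while a relaxation (alpha) update is free.
    K = bmat([[P + sigma * identity(n), A.T],
              [A, -identity(m) / rho]], format="csc")
    lu = splu(K)

    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    for k in range(max_iter):
        alpha = alpha_policy(k)                      # time-varying relaxation
        sol = lu.solve(np.concatenate([sigma * x - q, z - y / rho]))
        x_tilde, nu = sol[:n], sol[n:]
        z_tilde = z + (nu - y) / rho
        # Over-relaxation: mix the tentative and previous iterates.
        x = alpha * x_tilde + (1.0 - alpha) * x
        z_relaxed = alpha * z_tilde + (1.0 - alpha) * z
        z = np.clip(z_relaxed + y / rho, l, u)       # projection onto [l, u]
        y = y + rho * (z_relaxed - z)
        # Primal residual only, as a simple stopping criterion.
        if np.linalg.norm(A @ x - z, np.inf) < tol:
            return x, k + 1
    return x, max_iter
```

Calling this with `alpha_policy=lambda k: 1.6` recovers classical fixed over-relaxation; a learned policy in the spirit of the paper would replace that constant schedule with online updates of α.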