Improved Regret Guarantees for Online Mirror Descent using a Portfolio of Mirror Maps

2026-02-13

Data Structures and Algorithms, Machine Learning
AI summary

The authors study Online Mirror Descent (OMD), a method for making good decisions step by step when outcomes depend on past choices. They focus on improving OMD by picking a better underlying geometry (mirror map) for situations where only a few factors matter in each loss (sparse losses). They show that mirror maps based on block norms can polynomially reduce regret, i.e., cumulative mistakes, compared to the standard choices. When the number of important factors is unknown, they design a meta-algorithm that adaptively picks the best geometry, yielding performance that adapts to the sparsity of the losses. Their work shows how choosing the right geometry can improve online learning in complex environments.
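To make the two standard geometries concrete, here is a minimal sketch of the update rules for the two classical special cases of OMD: projected gradient descent (Euclidean geometry) and exponentiated gradient (entropic geometry). The probability simplex as decision set and all function names are illustrative choices for this sketch, not taken from the paper.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, y.size + 1)
    rho = np.nonzero(u - (css - 1) / idx > 0)[0][-1]
    return np.maximum(y - (css[rho] - 1) / (rho + 1), 0)

def opgd_step(x, grad, eta):
    """OPGD: OMD with the squared-L2 (Euclidean) mirror map on the simplex."""
    return project_simplex(x - eta * grad)

def oeg_step(x, grad, eta):
    """OEG: OMD with the negative-entropy mirror map (multiplicative update)."""
    w = x * np.exp(-eta * grad)
    return w / w.sum()
```

Which of the two incurs lower regret depends on the geometry of the losses; the paper's block-norm mirror maps interpolate between these two extremes.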

Online Mirror Descent (OMD), Mirror Map, Online Convex Optimization (OCO), Sparse Loss Functions, Block Norms, Regret, Multiplicative Weights, Geometry Selection, Online Learning, Adaptive Algorithms
Authors
Swati Gupta, Jai Moondra, Mohit Singh
Abstract
Online mirror descent (OMD) and its variants give a flexible framework for online convex optimization (OCO), where performance depends crucially on the choice of the mirror map. While the geometries underlying online projected gradient descent (OPGD) and online exponentiated gradient (OEG), both special cases of OMD, are well understood, how to construct an optimal mirror map for a given constrained set and a general family of loss functions, e.g., sparse losses, remains a challenging open question. Motivated by parameterizing a near-optimal set of mirror maps, we consider a simpler question: is it even possible to obtain polynomial gains in regret by using mirror maps for geometries that interpolate between $L_1$ and $L_2$, gains that cannot be achieved by restricting to OEG ($L_1$) or OPGD ($L_2$) alone? Our main result answers this question positively. We show that mirror maps based on block norms adapt better to the sparsity of loss functions than previous $L_p$ (for $p \in [1, 2]$) interpolations. In particular, we construct a family of online convex optimization instances in $\mathbb{R}^d$ where block-norm-based mirror maps achieve a provable polynomial (in $d$) improvement in regret over OEG and OPGD for sparse loss functions. We then turn to the setting in which the sparsity level of the loss functions is unknown. In this case, the choice of geometry itself becomes an online decision problem. We first show that naively switching between OEG and OPGD can incur linear regret, highlighting the intrinsic difficulty of geometry selection. To overcome this issue, we propose a meta-algorithm based on multiplicative weights that dynamically selects among a family of uniform block norms. We show that this approach effectively tunes OMD to the sparsity of the losses, yielding adaptive regret guarantees. Overall, our results demonstrate that online mirror-map selection can significantly enhance the ability of OMD to exploit sparsity in online convex optimization.
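The following is a schematic sketch of the geometry-selection idea, not the paper's construction: multiplicative weights run over OMD base learners with different mirror maps, each treated as an expert and reweighted by its own incurred loss. For brevity, the family of geometries here is just the entropy ($L_1$) and Euclidean ($L_2$) maps on the simplex rather than the paper's uniform block norms; losses are assumed linear, and all names (OMDLearner, mw_meta, the step sizes, the toy sparse losses) are illustrative assumptions.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, y.size + 1)
    rho = np.nonzero(u - (css - 1) / idx > 0)[0][-1]
    return np.maximum(y - (css[rho] - 1) / (rho + 1), 0)

class OMDLearner:
    """OMD on the simplex with either the entropy ('L1') or Euclidean ('L2') mirror map."""
    def __init__(self, d, geometry, eta):
        self.x = np.ones(d) / d
        self.geometry, self.eta = geometry, eta

    def step(self, grad):
        if self.geometry == "L1":      # exponentiated gradient (OEG)
            w = self.x * np.exp(-self.eta * grad)
            self.x = w / w.sum()
        else:                          # projected gradient descent (OPGD)
            self.x = project_simplex(self.x - self.eta * grad)

def mw_meta(learners, grads, eta_meta=0.5):
    """Multiplicative weights over base learners: play the weighted average
    of their points, then reweight each learner by its own loss this round."""
    w = np.ones(len(learners)) / len(learners)
    total_loss = 0.0
    for g in grads:                    # round-t loss is x -> <g, x> (assumed linear)
        pts = [l.x for l in learners]
        x = sum(wi * p for wi, p in zip(w, pts))  # convex combination stays feasible
        total_loss += float(g @ x)
        losses = np.array([g @ p for p in pts])
        w = w * np.exp(-eta_meta * losses)        # exponential reweighting
        w /= w.sum()
        for l in learners:
            l.step(g)                  # each base learner runs its own OMD update
    return total_loss

# Toy run: k-sparse linear losses in R^d over the simplex.
rng = np.random.default_rng(0)
d, T, k = 50, 200, 3
grads = []
for _ in range(T):
    g = np.zeros(d)
    g[rng.choice(d, size=k, replace=False)] = 1.0  # only k coordinates are active
    grads.append(g)
print(mw_meta([OMDLearner(d, "L1", 0.1), OMDLearner(d, "L2", 0.1)], grads))
```

Playing the weight-averaged point keeps the iterate feasible because the decision set is convex; the meta level never needs to know the sparsity level, it only observes each geometry's realized losses.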