AI summary
The authors study how to properly scale gradients in deep learning when parameters are grouped as matrices, focusing on a family of normalization methods called spectral normalizations. They introduce new distance measures between parameter distributions, called Spectral Wasserstein distances, which generalize existing concepts such as the quadratic Wasserstein distance and the geometry motivating the Muon algorithm. Their work includes mathematical formulations, proofs, and closed-form solutions for special cases such as Gaussian distributions. They also connect these distances to gradient flows and transport equations, providing insight into how these normalizations affect optimization dynamics in deep learning.
Keywords: Gradient normalization, Spectral normalization, Wasserstein distance, Schatten norms, Mean-field regime, Kantorovich formulation, Brenier theorem, Gaussian distributions, Gradient flow, Optimal transport
Abstract
Gradient normalization is central to deep-learning optimization because it stabilizes training and reduces sensitivity to scale. For deep architectures, parameters are naturally grouped into matrices or blocks, so spectral normalizations are often more faithful than coordinatewise Euclidean ones; Muon is the main motivating example of this paper. More broadly, we study a family of spectral normalization rules, ranging from ordinary gradient descent to Muon and intermediate Schatten-type schemes, in a mean-field regime where parameters are modeled by probability measures. We introduce a family of Spectral Wasserstein distances indexed by a norm γ on positive semidefinite matrices. The trace norm recovers the classical quadratic Wasserstein distance, the operator norm recovers the Muon geometry, and intermediate Schatten norms interpolate between them. We develop the static Kantorovich formulation, prove comparison bounds with W2, derive a max-min representation, and obtain a conditional Brenier theorem. For Gaussian marginals, the problem reduces to a constrained optimization on covariance matrices, extending the Bures formula and yielding a closed form for commuting covariances in the Schatten family. For monotone norms, including all Schatten cases, we prove the equivalence between the static and dynamic Benamou-Brenier formulations, deduce that the resulting transport cost is a genuine metric equivalent to W2 in fixed dimension, and show that the induced Gaussian covariance cost is also a metric. We then interpret the associated normalized continuity equation as a Spectral Wasserstein gradient flow, identify its exact finite-particle counterpart as a normalized matrix flow, obtain first geodesic-convexity results, and show how positively homogeneous mean-field models induce a spectral unbalanced transport on the sphere.
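To make the interpolating family concrete, the following is a minimal NumPy sketch, not the paper's implementation, of the steepest-descent direction for a gradient matrix under a Schatten-p norm constraint: p = 2 reproduces the direction of the gradient itself (ordinary gradient descent), p = infinity sets every singular value to 1, giving the orthogonal polar factor U V^T associated with the Muon geometry, and intermediate p interpolates. The function name and exponent convention are illustrative assumptions, and Muon in practice approximates the polar factor with a Newton-Schulz iteration rather than an exact SVD.

import numpy as np

def schatten_steepest_direction(G, p=np.inf):
    # Steepest-descent direction for a gradient matrix G under a unit
    # Schatten-p norm constraint. With SVD G = U diag(s) V^T, the maximizer
    # of <G, D> over ||D||_{S_p} <= 1 rescales the singular values as
    # s_i -> s_i**(1/(p-1)), which tends to all ones as p -> infinity.
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    if np.isinf(p):
        d = np.ones_like(s)            # polar factor U V^T: Muon geometry
    else:
        d = s ** (1.0 / (p - 1.0))     # p = 2 keeps d proportional to s: GD geometry
    d = d / (np.linalg.norm(d, ord=p) + 1e-12)   # renormalize to unit Schatten-p norm
    return U @ np.diag(d) @ Vt

# Schematically, a spectrally normalized update of a weight matrix W would read
#   W -= learning_rate * schatten_steepest_direction(grad_W, p)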
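For orientation on the Gaussian case, the classical Bures formula that the paper extends gives the quadratic Wasserstein distance between two Gaussians in closed form. The sketch below, with a hypothetical function name, evaluates it; for commuting covariances the trace term collapses to a sum of squared differences of square-rooted eigenvalues, which is the special case the paper's Schatten-family closed form generalizes.

import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein_distance(m1, S1, m2, S2):
    # Classical formula: W2(N(m1,S1), N(m2,S2))^2
    #   = |m1 - m2|^2 + tr(S1) + tr(S2) - 2 tr((S1^{1/2} S2 S1^{1/2})^{1/2}).
    root = sqrtm(S1)
    cross = sqrtm(root @ S2 @ root).real   # discard tiny imaginary round-off from sqrtm
    sq = np.sum((m1 - m2) ** 2) + np.trace(S1) + np.trace(S2) - 2.0 * np.trace(cross)
    return float(np.sqrt(max(sq, 0.0)))    # clamp small negative numerical error

# For commuting covariances with a common eigenbasis and eigenvalues l_i, u_i,
# the trace term reduces to sum_i (sqrt(l_i) - sqrt(u_i))**2.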