Rethinking Language Model Scaling under Transferable Hypersphere Optimization

2026-03-30

Machine Learning
AI summary

The authors present HyperP, a parameterization that stabilizes large language model training by constraining weight matrices to a fixed-norm hypersphere and optimizing with Muon. They show that a learning rate tuned on a small model transfers effectively to much larger models, improving compute efficiency and preventing training instabilities as models grow. They also introduce SqrtGate, a gating mechanism for Mixture-of-Experts models that keeps output RMS stable across granularities and improves expert load balance. Together, these components enable more stable and efficient scaling of large language models, with code released for others to use.

large language models, optimizer, hypersphere optimization, HyperP, Muon optimizer, learning rate transfer, Mixture-of-Experts (MoE), SqrtGate, weight decay, scaling laws
Authors
Liliang Ren, Yang Liu, Yelong Shen, Weizhu Chen
Abstract
Scaling laws for large language models depend critically on the optimizer and parameterization. Existing hyperparameter transfer laws are mainly developed for first-order optimizers, and they do not structurally prevent training instability at scale. Recent hypersphere optimization methods constrain weight matrices to a fixed-norm hypersphere, offering a promising alternative for more stable scaling. We introduce HyperP (Hypersphere Parameterization), the first framework for transferring optimal learning rates across model width, depth, training tokens, and Mixture-of-Experts (MoE) granularity under the Frobenius-sphere constraint with the Muon optimizer. We prove that weight decay is a first-order no-op on the Frobenius sphere, show that Depth-$μ$P remains necessary, and find that the optimal learning rate follows the same data-scaling power law with the "magic exponent" 0.32 previously observed for AdamW. A single base learning rate tuned at the smallest scale transfers across all compute budgets under HyperP, yielding $1.58\times$ compute efficiency over a strong Muon baseline at $6\times10^{21}$ FLOPs. Moreover, HyperP delivers transferable stability: all monitored instability indicators, including $Z$-values, output RMS, and activation outliers, remain bounded and non-increasing under training FLOPs scaling. We also propose SqrtGate, an MoE gating mechanism derived from the hypersphere constraint that preserves output RMS across MoE granularities for improved granularity scaling, and show that hypersphere optimization enables substantially larger auxiliary load-balancing weights, yielding both strong performance and good expert balance. We release our training codebase at https://github.com/microsoft/ArchScale.
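The abstract's claim that weight decay is a first-order no-op on the Frobenius sphere can be checked numerically. The sketch below is an illustration under assumptions, not the paper's implementation: it uses a generic unit-norm update direction as a stand-in for Muon's orthogonalized update, and the projection step and hyperparameters are made up. It takes one decoupled-weight-decay step and one plain step, re-projects both onto the sphere, and compares.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_to_sphere(W, radius=1.0):
    # Rescale W so its Frobenius norm equals `radius` (the hypersphere constraint).
    return W * (radius / np.linalg.norm(W))

# A weight matrix on the unit Frobenius sphere and a unit-norm update direction
# (a stand-in for the optimizer's update, not Muon's actual rule).
W = project_to_sphere(rng.standard_normal((64, 64)))
G = project_to_sphere(rng.standard_normal((64, 64)))
lr, wd = 0.02, 0.1  # illustrative hyperparameters, not from the paper

# Update WITHOUT weight decay, then re-project onto the sphere.
W_plain = project_to_sphere(W - lr * G)

# Update WITH decoupled weight decay (extra -lr*wd*W term), then re-project.
W_decay = project_to_sphere(W - lr * wd * W - lr * G)

# The decay term only shrinks W along itself, and the projection undoes that
# rescaling: the two iterates agree up to O(lr^2), while the step itself is O(lr).
print(np.linalg.norm(W_plain - W_decay))  # tiny compared to...
print(np.linalg.norm(W_plain - W))        # ...the size of the step
```

The gap between the two iterates is second-order in the learning rate, consistent with weight decay having no first-order effect once the norm is fixed.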