Routers Learn the Geometry of Their Experts: Geometric Coupling in Sparse Mixture-of-Experts
2026-05-12 • Machine Learning
Machine Learning • Computation and Language
AI summary
The authors study how routing decisions are made in Sparse Mixture-of-Experts (SMoE) models, where different sub-networks (experts) handle different inputs. They find that the way routers choose experts is closely linked to how those experts respond: both are aligned along the same directions in the model's internal computations. They also show that common load-balancing losses disrupt this alignment, making router directions less distinct. Finally, they propose a simple K-Means-style routing method that preserves this alignment and balances expert use effectively without any auxiliary loss.
Sparse Mixture-of-Experts, routing, gradient, load balancing, geometric coupling, K-Means clustering, cosine similarity, language models, perplexity
Authors
Sagi Ahrac, Noya Hochwald, Mor Geva
Abstract
Sparse Mixture-of-Experts (SMoE) models enable scaling language models efficiently, but training them remains challenging, as routing can collapse onto a few experts and auxiliary load-balancing losses can reduce specialization. Motivated by these hurdles, we study how routing decisions in SMoEs are formed mechanistically. First, we reveal a geometric coupling between routers and their corresponding experts. For a given token, the router weights for the selected expert and the expert weights processing it receive gradients along the same input direction, differing only in scalar coefficients. Thus, matched router–expert directions accumulate the same routed token history. This theoretical coupling also appears empirically in routing dynamics. In a $1$B SMoE trained from scratch, higher router scores predict stronger expert neuron activations, showing that routing decisions are mirrored inside the selected expert. Next, we analyze the effects of auxiliary load balancing on the router–expert geometric coupling, showing that such losses break this structure by spreading input-directed gradients across router weights, making distinct router directions nearly three times more similar to each other. Last, we demonstrate the centrality of geometric coupling for effective routing with a parameter-free online K-Means router, in which each expert maintains a running average of the hidden states routed to it and tokens are assigned based on cosine similarity. Compared with auxiliary-loss and loss-free balancing, this router achieves the lowest load imbalance with only a modest perplexity increase, indicating that geometric coupling captures a substantial part of what the router learns. Overall, our results explain how routers form assignment geometry that supports an effective division of labor.
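The online K-Means router described in the abstract can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the class name, the random-centroid initialization, and the `1/count` running-average step size are assumptions chosen to match the abstract's description (each expert keeps a running mean of the hidden states routed to it; tokens are assigned by cosine similarity to those means).

```python
import numpy as np

class KMeansRouter:
    """Illustrative parameter-free online K-Means router (sketch, not the paper's code)."""

    def __init__(self, num_experts, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Assumed initialization: random unit-norm centroids, one per expert.
        c = rng.standard_normal((num_experts, hidden_dim))
        self.centroids = c / np.linalg.norm(c, axis=1, keepdims=True)
        self.counts = np.zeros(num_experts)

    def route(self, x):
        """Assign hidden state x to the expert whose centroid has the
        highest cosine similarity with x."""
        x_unit = x / (np.linalg.norm(x) + 1e-8)
        sims = self.centroids @ x_unit  # centroids have comparable norms
        return int(np.argmax(sims))

    def update(self, x, expert):
        """Running-average update: the chosen expert's centroid becomes the
        mean of all hidden states routed to it so far (no learned parameters)."""
        self.counts[expert] += 1
        step = 1.0 / self.counts[expert]
        self.centroids[expert] = (1 - step) * self.centroids[expert] + step * x
```

In use, `route` would replace the learned linear router's top-1 choice, and `update` would run on each routed token; since the very first update sets the centroid exactly to the first routed state, the centroid is always the exact mean of its routed history.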