UniPool: A Globally Shared Expert Pool for Mixture-of-Experts

2026-05-07 · Machine Learning · Artificial Intelligence
AI summary

The authors study how Mixture-of-Experts (MoE) models usually give every layer its own set of experts, so the total number of experts grows linearly as the model gets deeper. They find that this convention wastes expert capacity: even replacing a deeper layer's learned router with random routing only slightly reduces performance. To fix this, they propose UniPool, which shares a single pool of experts across all layers while each layer keeps its own router into that pool. UniPool trains stably and improves validation loss over matched vanilla MoE baselines, and smaller shared pools match or beat the layer-wise setup with fewer total expert parameters. This means expert capacity can grow more slowly than depth, making deeper models more parameter-efficient.
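
To make the routing-probe finding concrete, here is a minimal PyTorch sketch (not the authors' code) of what replacing a layer's learned top-k router with uniform random routing could look like; the function name and signature are illustrative assumptions.

```python
import torch

def uniform_random_topk(num_tokens: int, num_experts: int, k: int, device=None):
    """Routing probe: route each token to k experts chosen uniformly at random,
    with equal gate weights, instead of using a layer's learned top-k router.
    Name and signature are illustrative, not the paper's implementation."""
    # Rank random scores to sample k distinct experts per token.
    scores = torch.rand(num_tokens, num_experts, device=device)
    expert_idx = scores.topk(k, dim=-1).indices            # (num_tokens, k)
    gates = torch.full((num_tokens, k), 1.0 / k, device=device)
    return expert_idx, gates
```

Swapping a probe like this in for a deeper layer's learned router is what, per the abstract, costs only 1.0-1.6 accuracy points in the production MoE models the authors test.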

Mixture-of-Experts (MoE) · Transformer · Expert routing · Model scaling · Parameter efficiency · UniPool · Top-k routing · Validation loss · Perplexity · Auxiliary loss
Authors
Minbin Huang, Han Shi, Chuanyang Zheng, Yimeng Wu, Guoxuan Chen, Xintong Yu, Yichun Yin, Hong Cheng
Abstract
Modern Mixture-of-Experts (MoE) architectures allocate expert capacity through a rigid per-layer rule: each transformer layer owns a separate expert set. This convention couples depth scaling with linear expert-parameter growth and assumes that every layer needs isolated expert capacity. However, recent analyses and our routing probe challenge this allocation rule: replacing a deeper layer's learned top-k router with uniform random routing drops downstream accuracy by only 1.0-1.6 points across multiple production MoE models. Motivated by this redundancy, we propose UniPool, an MoE architecture that treats expert capacity as a global architectural budget by replacing per-layer expert ownership with a single shared pool accessed by independent per-layer routers. To enable stable and balanced training under sharing, we introduce a pool-level auxiliary loss that balances expert utilization across the entire pool, and adopt NormRouter to provide sparse and scale-stable routing into the shared expert pool. Across five LLaMA-architecture model scales (182M, 469M, 650M, 830M, and 978M parameters) trained on 30B tokens from the Pile, UniPool consistently improves validation loss and perplexity over the matched vanilla MoE baselines. Across these scales, UniPool reduces validation loss by up to 0.0386 relative to vanilla MoE. Beyond raw loss improvement, our results identify pool size as an explicit depth-scaling hyperparameter: reduced-pool UniPool variants using only 41.6%-66.7% of the vanilla expert-parameter budget match or outperform layer-wise MoE at the tested scales. This shows that, under a shared-pool design, expert parameters need not grow linearly with depth; they can grow sublinearly while remaining more efficient and effective than vanilla MoE. Further analysis shows that UniPool's benefits compose with finer-grained expert decomposition.
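
For readers who want a concrete picture of the shared-pool idea, below is a minimal PyTorch sketch under stated assumptions: a plain linear softmax router stands in for NormRouter, and a Switch-style balance term is aggregated over all layers to approximate the pool-level auxiliary loss. The class and function names (ExpertPool, SharedPoolMoELayer, pool_level_aux_loss) are hypothetical, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpertPool(nn.Module):
    """A single pool of FFN experts shared by every transformer layer (sketch)."""
    def __init__(self, pool_size: int, d_model: int, d_ff: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(pool_size)
        )

    def forward(self, x_flat, expert_idx, gates):
        # x_flat: (T, d_model); expert_idx, gates: (T, k)
        out = torch.zeros_like(x_flat)
        for e, expert in enumerate(self.experts):
            token_pos, slot = (expert_idx == e).nonzero(as_tuple=True)
            if token_pos.numel():
                out[token_pos] += gates[token_pos, slot].unsqueeze(-1) * expert(x_flat[token_pos])
        return out

class SharedPoolMoELayer(nn.Module):
    """Per-layer router into the globally shared pool (sketch; NormRouter details omitted)."""
    def __init__(self, pool: ExpertPool, d_model: int, k: int = 2):
        super().__init__()
        self.pool, self.k = pool, k
        self.router = nn.Linear(d_model, len(pool.experts), bias=False)

    def forward(self, x):
        # x: (batch, seq, d_model)
        x_flat = x.reshape(-1, x.shape[-1])
        probs = self.router(x_flat).softmax(dim=-1)        # (T, pool_size)
        gates, expert_idx = probs.topk(self.k, dim=-1)
        gates = gates / gates.sum(dim=-1, keepdim=True)    # renormalize over the k picks
        y = self.pool(x_flat, expert_idx, gates).reshape_as(x)
        # Per-layer routing statistics, later pooled for the auxiliary balance loss.
        frac_tokens = F.one_hot(expert_idx, len(self.pool.experts)).float().mean(dim=(0, 1))
        mean_prob = probs.mean(dim=0)
        return y, frac_tokens, mean_prob

def pool_level_aux_loss(frac_tokens_per_layer, mean_prob_per_layer):
    """Balance expert utilization over the whole shared pool by aggregating
    routing statistics across layers before the usual load-balancing product."""
    f = torch.stack(frac_tokens_per_layer).mean(dim=0)     # fraction of routed slots per expert
    p = torch.stack(mean_prob_per_layer).mean(dim=0)       # mean router probability per expert
    return len(f) * (f * p).sum()
```

In this sketch a single ExpertPool instance would be constructed once and passed to every SharedPoolMoELayer, so expert parameters are shared across depth while each layer learns its own routing; the auxiliary term then balances utilization over the entire pool rather than within each layer, mirroring the pool-level balancing the abstract describes.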