Mamba-3: Improved Sequence Modeling using State Space Principles
2026-03-16 • Machine Learning
AI summary
The authors focus on making large language models faster and more efficient at inference time without losing quality. They improve a class of models known as linear models by introducing three new techniques inspired by state space models, which help the model remember information better and process multiple inputs and outputs simultaneously. Their new model, Mamba-3, outperforms previous models on tasks such as language understanding and state tracking, even while using less memory. Overall, the authors show that their design balances efficiency and accuracy more effectively.
large language models • inference efficiency • Transformer • linear models • state space model • recurrence • complex-valued state update • multi-input multi-output (MIMO) • perplexity • state tracking
Authors
Aakash Lahoti, Kevin Y. Li, Berlin Chen, Caitlin Wang, Aviv Bick, J. Zico Kolter, Tri Dao, Albert Gu
Abstract
Scaling inference-time compute has emerged as an important driver of LLM performance, making inference efficiency a central focus of model design alongside model quality. While current Transformer-based models deliver strong model quality, their quadratic compute and linear memory make inference expensive. This has spurred the development of sub-quadratic models with linear compute and constant memory requirements. However, many recent linear models trade off model quality and capability for algorithmic efficiency, failing on tasks such as state tracking. Moreover, their theoretically linear inference remains hardware-inefficient in practice. Guided by an inference-first perspective, we introduce three core methodological improvements inspired by the state space model (SSM) viewpoint of linear models. We combine: (1) a more expressive recurrence derived from SSM discretization, (2) a complex-valued state update rule that enables richer state tracking, and (3) a multi-input, multi-output (MIMO) formulation for better model performance without increasing decode latency. Together with architectural refinements, our Mamba-3 model achieves significant gains across retrieval, state-tracking, and downstream language modeling tasks. At the 1.5B scale, Mamba-3 improves average downstream accuracy by 0.6 percentage points compared to the next best model (Gated DeltaNet), with Mamba-3's MIMO variant further improving accuracy by another 1.2 points for a total 1.8-point gain. Across state-size experiments, Mamba-3 achieves comparable perplexity to Mamba-2 despite using half of its predecessor's state size. Our evaluations demonstrate Mamba-3's ability to advance the performance-efficiency Pareto frontier.
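To make the abstract's three ingredients concrete, here is a minimal NumPy sketch of each, written from the abstract alone rather than from the paper's code. All function names, shapes, and simplifications (a diagonal state, a scalar per-channel decay, toy dimensions) are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def trapezoidal_step(h, a, u_prev, u_curr, dt):
    """One step of the textbook trapezoidal rule for the linear ODE h' = a*h + u:
        h_t * (1 - dt*a/2) = h_{t-1} * (1 + dt*a/2) + dt/2 * (u_{t-1} + u_t)
    Unlike a Euler/zero-order-hold step, the update mixes the previous and
    current inputs, the general flavor of the "more expressive recurrence
    derived from SSM discretization" that the abstract alludes to.
    """
    return ((1 + dt * a / 2) * h + dt * (u_prev + u_curr) / 2) / (1 - dt * a / 2)

def complex_scan(x, decay, theta, B, C):
    """Diagonal SSM recurrence with a complex (rotational) state update:
        h_t = decay_t * exp(i*theta_t) * h_{t-1} + B_t * x_t
        y_t = Re(conj(C_t) . h_t)
    The rotation lets the state track phase-like quantities (e.g. parity or
    modular counting), which a purely real, positive decay cannot represent.
    Shapes (assumed): x is (T,); decay, theta, B, C are (T, N); returns (T,).
    """
    T, N = B.shape
    h = np.zeros(N, dtype=np.complex128)
    y = np.empty(T)
    for t in range(T):
        h = decay[t] * np.exp(1j * theta[t]) * h + B[t] * x[t]
        y[t] = np.real(np.conj(C[t]) @ h)
    return y

def mimo_decode_step(H, a_t, B_t, X_t, C_t):
    """One decode step of a MIMO recurrence with a rank-r state update:
        H_t = a_t * H_{t-1} + B_t @ X_t    # B_t: (N, r), X_t: (r, P)
        Y_t = C_t.T @ H_t                  # C_t: (N, r) -> Y_t: (r, P)
    Compared with the rank-1 (single-input, single-output) case r = 1, this
    does r times the arithmetic per step on the same (N, P) state, raising
    arithmetic intensity without growing the state the decoder must carry.
    """
    H = a_t * H + B_t @ X_t
    return H, C_t.T @ H
```

The MIMO point is visible in the shapes alone: the state `H` stays `(N, P)` regardless of the rank `r`, which is one plausible reading of how a MIMO formulation can improve quality without increasing decode latency or memory.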