Solve the Loop: Attractor Models for Language and Reasoning
2026-05-12 • Machine Learning
Machine Learning • Artificial Intelligence • Computation and Language • Neural and Evolutionary Computing
AI summary
The authors propose Attractor Models, which improve language understanding by repeatedly refining their guesses until reaching a stable solution. Unlike older looped methods, which use a fixed number of steps and are hard to train, their method adapts the number of refinement steps to each input, making training more efficient and stable. They show these models outperform standard Transformers on language tasks and reasoning puzzles, even with fewer resources. Moreover, Attractor Models learn to start close to the right answer, so the refinement stage can be skipped at inference without losing much accuracy.
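To make the adaptive-depth idea concrete, here is a minimal sketch of convergence-based refinement; `refine_step`, the tolerance, and the iteration cap are illustrative placeholders, not the paper's actual interface.

```python
import torch

def refine_to_fixed_point(refine_step, z0, tol=1e-4, max_iter=64):
    """Iterate z <- refine_step(z) until the relative change stabilizes,
    so the number of refinement steps adapts to each input instead of
    being fixed in advance. All names/defaults here are illustrative."""
    z = z0
    for k in range(1, max_iter + 1):
        z_next = refine_step(z)
        # Stop as soon as the iterate is (numerically) a fixed point.
        if (z_next - z).norm() <= tol * (z.norm() + 1e-8):
            return z_next, k  # converged after k adaptive steps
        z = z_next
    return z, max_iter  # hit the safety cap without converging
```

Per the summary's last point, a trained model's initial proposal already lies near the fixed point, so at inference a loop like this converges in very few steps, or can be skipped entirely.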
Transformers • latent representations • fixed point • implicit differentiation • language modeling • iterative refinement • convergence • perplexity • reasoning tasks • equilibrium internalization
Authors
Jacob Fein-Ashley, Paria Rashidinejad
Abstract
Looped Transformers offer a promising alternative to purely feed-forward computation by iteratively refining latent representations, improving language modeling and reasoning. Yet recurrent architectures remain unstable to train, costly to optimize and deploy, and constrained to small, fixed recurrence depths. We introduce Attractor Models, in which a backbone module first proposes output embeddings and an attractor module then refines them by solving for a fixed point, with gradients obtained through implicit differentiation. As a result, training memory remains constant in the effective depth, and the number of iterations is chosen adaptively by convergence. Empirically, Attractor Models outperform existing models across two regimes: large-scale language-model pretraining and reasoning with tiny models. In language modeling, Attractor Models deliver a Pareto improvement over standard Transformers and stable looped models across sizes, improving perplexity by up to 46.6% and downstream accuracy by up to 19.7% while reducing training cost. Notably, a 770M Attractor Model outperforms a 1.3B Transformer trained on twice as many tokens. On challenging reasoning tasks, our model with only 27M parameters and approximately 1000 examples achieves 91.4% accuracy on Sudoku-Extreme and 93.1% on Maze-Hard, scaling favorably where frontier models like Claude and GPT o3 fail completely and specialized recursive reasoners collapse at larger sizes. Finally, we show that Attractor Models exhibit a novel phenomenon, which we call equilibrium internalization: fixed-point training places the model's initial output embedding near equilibrium, allowing the solver to be removed at inference time with little degradation. Together, these results suggest that Attractor Models make iterative refinement scalable by turning recurrence into a computation the model can learn to internalize.
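The following sketch shows how the forward solve and the implicit backward pass described in the abstract could fit together, following the standard deep-equilibrium recipe: the solver runs under torch.no_grad (so training memory is constant in the effective depth), and the gradient at the fixed point is recovered by solving the adjoint equation u = u J_g + grad with the same iteration. The module names, shapes, and solver settings below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

def forward_iteration(f, z0, tol=1e-4, max_iter=64):
    """Plain fixed-point iteration; the step count adapts to convergence."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z)
        if (z_next - z).norm() <= tol * (z.norm() + 1e-8):
            return z_next
        z = z_next
    return z

class AttractorLayer(nn.Module):
    """Illustrative attractor update g(z, h): refines the embedding z
    toward a fixed point, conditioned on the backbone's proposal h."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, z, h):
        return self.net(torch.cat([z, h], dim=-1))

class ImplicitAttractor(nn.Module):
    """Solve z* = g(z*, h) in the forward pass; differentiate implicitly."""
    def __init__(self, g):
        super().__init__()
        self.g = g

    def forward(self, h):
        # Forward solve without building a graph: memory stays constant
        # no matter how many iterations the solver takes.
        with torch.no_grad():
            z_star = forward_iteration(lambda z: self.g(z, h),
                                       torch.zeros_like(h))

        # Re-attach one differentiable step at the equilibrium.
        z_star = self.g(z_star, h)

        if z_star.requires_grad:
            # Implicit backward: solve u = u J_g(z*) + grad with the same
            # fixed-point iteration, instead of backpropagating through
            # the solver's unrolled steps.
            z0 = z_star.detach().requires_grad_()
            g0 = self.g(z0, h)

            def implicit_backward(grad):
                return forward_iteration(
                    lambda u: torch.autograd.grad(
                        g0, z0, u, retain_graph=True)[0] + grad,
                    grad)

            z_star.register_hook(implicit_backward)
        return z_star

# Usage: refine backbone proposals h of shape (batch, dim).
layer = ImplicitAttractor(AttractorLayer(dim=256))
h = torch.randn(8, 256, requires_grad=True)
z = layer(h)
z.sum().backward()  # gradients flow via implicit differentiation
```

The hook replaces naive backpropagation through the unrolled solver with the implicit-function-theorem gradient grad (I − J_g)⁻¹, which is what keeps memory independent of solver depth. And per the equilibrium-internalization finding, an inference-time variant could return the backbone's proposal directly and skip the solve with little degradation.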