Recursive Multi-Agent Systems

2026-04-28

Artificial Intelligence · Computation and Language · Machine Learning
AI summary

The authors explore how multiple AI agents can work together more effectively using a method called RecursiveMAS, which links agents in a loop so they share latent thoughts and refine them over repeated rounds. They design a learning procedure that improves all agents jointly across these loops. Their method is more efficient and stable than traditional text-based multi-agent setups, and it performs better on tasks such as math and medicine while also being faster and using fewer tokens. The authors tested their approach across nine benchmarks and found consistent improvements.

Recursive computation · Multi-agent systems · Latent states · Gradient-based learning · Collaborative AI · Inner-outer loop optimization · Cross-agent communication · Inference speed · Token efficiency
Authors
Xiyuan Yang, Jiaru Zou, Rui Pan, Ruizhong Qiu, Pan Lu, Shizhe Diao, Jindong Jiang, Hanghang Tong, Tong Zhang, Markus J. Buehler, Jingrui He, James Zou
Abstract
Recursive or looped language models have recently emerged as a new scaling axis by iteratively refining the same model computation over latent states to deepen reasoning. We extend this scaling principle from a single model to multi-agent systems, and ask: Can agent collaboration itself be scaled through recursion? To this end, we introduce RecursiveMAS, a recursive multi-agent framework that casts the entire system as a unified latent-space recursive computation. RecursiveMAS connects heterogeneous agents into a collaboration loop through the lightweight RecursiveLink module, enabling in-distribution latent thought generation and cross-agent latent state transfer. To optimize our framework, we develop an inner-outer loop learning algorithm for iterative whole-system co-optimization through shared gradient-based credit assignment across recursion rounds. Theoretical analyses of runtime complexity and learning dynamics establish that RecursiveMAS is more efficient than standard text-based MAS and maintains stable gradients during recursive training. Empirically, we instantiate RecursiveMAS under 4 representative agent collaboration patterns and evaluate across 9 benchmarks spanning mathematics, science, medicine, search, and code generation. Compared with advanced single-agent, multi-agent, and recursive-computation baselines, RecursiveMAS consistently delivers an average accuracy improvement of 8.3%, together with a 1.2$\times$-2.4$\times$ end-to-end inference speedup and a 34.6%-75.6% reduction in token usage. Code and data are provided at https://recursivemas.github.io.
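To make the core idea concrete, below is a minimal PyTorch sketch of a recursive multi-agent latent loop. This is not the authors' implementation: the agent bodies, the link module, the dimensions, and the training objective are illustrative assumptions standing in for RecursiveMAS and RecursiveLink. It shows the two ingredients named in the abstract: agents chained through a lightweight link that transfers latent states across recursion rounds, and a single outer loss whose gradients flow through every round, giving shared gradient-based credit assignment across the whole system.

```python
# Hypothetical sketch of a recursive multi-agent latent loop.
# NOT the paper's code: all modules and names are illustrative stand-ins.
import torch
import torch.nn as nn

class Agent(nn.Module):
    """Stand-in for one agent operating directly on latent states."""
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.body(h)  # residual refinement of the latent "thought"

class RecursiveLinkSketch(nn.Module):
    """Lightweight adapter transferring latent state between agents,
    normalizing the handed-off thought toward the next agent's input range."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.norm(self.proj(h))

def recursive_rounds(agents, links, h, rounds: int) -> torch.Tensor:
    """Run the whole collaboration loop `rounds` times over the latent state.
    Gradients flow through every round, so one outer loss assigns credit
    to all agents and link modules across all recursion steps."""
    for _ in range(rounds):
        for agent, link in zip(agents, links):
            h = link(agent(h))
    return h

dim, rounds = 64, 3
agents = nn.ModuleList(Agent(dim) for _ in range(4))            # 4 collaborating agents
links = nn.ModuleList(RecursiveLinkSketch(dim) for _ in range(4))
h0 = torch.randn(8, dim)                                        # batch of initial latents
target = torch.randn(8, dim)                                    # placeholder supervision

opt = torch.optim.Adam([*agents.parameters(), *links.parameters()], lr=1e-3)
h = recursive_rounds(agents, links, h0, rounds)
loss = nn.functional.mse_loss(h, target)                        # outer-loop objective
loss.backward()                                                 # shared credit assignment
opt.step()
```

One design point the sketch illustrates: because agents exchange latent tensors rather than generated text, each hand-off costs a single projection instead of a full decode-and-re-encode pass, which is consistent with the efficiency and token-reduction claims above.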