Language Model Teams as Distributed Systems

2026-03-12

Multiagent Systems
AI summary

The authors examine teams of large language models (LLMs) working together, an increasingly common setup. They point out that it is unclear when teams help, how many LLMs to use, and whether teams perform better than single models. Rather than guessing, the authors suggest using ideas from distributed systems (the study of multiple computers working together) as a guide to understand and improve LLM teams. They find that many issues in distributed computing also appear in LLM teams, offering useful insights for both fields.

large language models, LLM teams, distributed systems, multi-agent systems, performance evaluation, system structure, scalability, collaboration, computing challenges, agent coordination
Authors
Elizabeth Mieczkowski, Katherine M. Collins, Ilia Sucholutsky, Natalia Vélez, Thomas L. Griffiths
Abstract
Large language models (LLMs) are growing increasingly capable, prompting recent interest in LLM teams. Yet, despite increased deployment of LLM teams at scale, we lack a principled framework for addressing key questions such as when a team is helpful, how many agents to use, how structure impacts performance -- and whether a team is better than a single agent. Rather than designing and testing these possibilities through trial-and-error, we propose using distributed systems as a principled foundation for creating and evaluating LLM teams. We find that many of the fundamental advantages and challenges studied in distributed computing also arise in LLM teams, highlighting the rich practical insights that can come from the cross-talk of these two fields of study.