Pythia: Toward Predictability-Driven Agent-Native LLM Serving
2026-04-28 • Multiagent Systems; Distributed, Parallel, and Cluster Computing
AI summary
The authors study how large language model (LLM) applications use multiple agents working together to handle complex tasks in a structured way. They find that current systems don't take full advantage of this structure, causing slowdowns and inefficiencies like long wait times and resource conflicts. To fix this, the authors created Pythia, a system that understands the workflow better and optimizes how agents share resources. This leads to faster processing and improved overall performance compared to existing methods.
Large Language Models • Multi-Agent Systems • Workflow Decomposition • Agent Collaboration • Resource Contention • Prefix Cache • Job Scheduling • Throughput • Latency Optimization
Authors
Shan Yu, Junyi Shu, Yuanjiang Ni, Kun Qian, Xue Li, Yang Wang, Jinyuan Zhang, Ziyi Xu, Shuo Yang, Lingjun Zhu, Ennan Zhai, Qingda Lu, Jiarong Xing, Youyou Lu, Xin Jin, Xuanzhe Liu, Harry Xu
Abstract
As LLM applications grow more complex, developers are increasingly adopting multi-agent architectures to decompose workflows into specialized, collaborative components, introducing structure that constrains agent behavior and exposes useful semantic predictability. Unlike traditional LLM serving, which operates under highly dynamic and uncertain conditions, this structured topology enables opportunities to reduce runtime uncertainty -- yet existing systems fail to exploit it, treating agentic workloads as generic traffic and incurring significant inefficiencies. Our analysis of production traces from an agent-serving platform and an internal coding assistant reveals key bottlenecks, including low prefix cache hit rates, severe resource contention from long-context requests, and substantial queuing delays due to suboptimal scaling. To address these challenges, we propose Pythia, a multi-agent serving system that captures workflow semantics through a simple interface at the serving layer, unlocking new optimization opportunities and substantially improving throughput and job completion time over state-of-the-art baselines.
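The abstract's central idea is that declaring the multi-agent workflow topology to the serving layer exposes predictability the scheduler can exploit (e.g., grouping agents that share a static prompt prefix so the prefix cache is reused rather than thrashed). The paper does not specify its interface; the sketch below is purely illustrative, with all names (`WorkflowGraph`, `AgentNode`, `shared_prefix_groups`) hypothetical, showing one minimal way a client could declare agent structure for such optimizations.

```python
from dataclasses import dataclass, field

@dataclass
class AgentNode:
    """One agent stage in a multi-agent workflow DAG (hypothetical)."""
    name: str
    prompt_prefix: str                 # static system/tool prompt reused across calls
    downstream: list = field(default_factory=list)

class WorkflowGraph:
    """Hypothetical client-side declaration of agent topology.

    Making this structure visible to the serving layer lets it anticipate
    request order, reuse prefix-cache entries, and size queues in advance,
    instead of treating agent requests as generic, unrelated traffic.
    """
    def __init__(self):
        self.nodes = {}

    def add_agent(self, name, prompt_prefix):
        self.nodes[name] = AgentNode(name, prompt_prefix)
        return self.nodes[name]

    def add_edge(self, src, dst):
        # src's output feeds dst; the scheduler can co-locate or order them.
        self.nodes[src].downstream.append(dst)

    def shared_prefix_groups(self):
        """Group agents by identical static prefixes: each group can share
        a single prefix-cache entry rather than recomputing it per agent."""
        groups = {}
        for node in self.nodes.values():
            groups.setdefault(node.prompt_prefix, []).append(node.name)
        return groups

# Example: a planner fans out to two workers that share one tool prompt,
# so both workers can hit the same cached prefix.
wf = WorkflowGraph()
wf.add_agent("planner", "You are a planning agent.")
wf.add_agent("coder", "You are a tool-using worker.")
wf.add_agent("tester", "You are a tool-using worker.")
wf.add_edge("planner", "coder")
wf.add_edge("planner", "tester")
print(wf.shared_prefix_groups())
```

In this toy declaration, the two workers fall into one prefix group, which is exactly the kind of semantic signal the abstract argues existing serving systems discard.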