Untied Ulysses: Memory-Efficient Context Parallelism via Headwise Chunking

2026-02-24 · Machine Learning · Distributed, Parallel, and Cluster Computing
AI summary

The authors propose UPipe, a method that helps Transformer models handle very long sequences more efficiently by splitting work at a finer granularity inside the attention mechanism. This reduces the memory needed to store intermediate activations during training by up to 87.5%, allowing much longer contexts to be processed without slowing down training. Compared to older methods, UPipe supports longer sequences (up to 5 million tokens when training Llama3-8B on a single 8×H100 node), making it easier to train large models on long inputs.

Transformer models, context parallelism, self-attention, activation memory, training throughput, chunking, DeepSpeed Ulysses, Llama3-8B, H100 node, distributed training
Authors
Ravi Ghadia, Maksim Abraham, Sergei Vorobyov, Max Ryabinin
Abstract
Efficiently processing long sequences with Transformer models usually requires splitting the computations across accelerators via context parallelism. The dominant approaches in this family of methods, such as Ring Attention or DeepSpeed Ulysses, enable scaling over the context dimension but do not focus on memory efficiency, which limits the sequence lengths they can support. More advanced techniques, such as Fully Pipelined Distributed Transformer or activation offloading, can further extend the possible context length at the cost of training throughput. In this paper, we present UPipe, a simple yet effective context parallelism technique that performs fine-grained chunking at the attention head level. This technique significantly reduces the activation memory usage of self-attention, breaking the activation memory barrier and unlocking much longer context lengths. Our approach reduces intermediate tensor memory usage in the attention layer by as much as 87.5$\%$ for 32B Transformers, while matching previous context parallelism techniques in terms of training speed. UPipe can support the context length of 5M tokens when training Llama3-8B on a single 8$\times$H100 node, improving upon prior methods by over 25$\%$.