KV-Fold: One-Step KV-Cache Recurrence for Long-Context Inference
2026-05-12 • Machine Learning
Machine Learning · Artificial Intelligence · Computation and Language
AI summary
The authors present KV-Fold, a method that lets existing transformer models handle very long contexts without any retraining. It works by breaking the input into chunks and repeatedly updating a key-value cache, which the model uses to remember past information as it processes new parts. This approach is simple, stable, and works across different model sizes and chunk lengths. Their tests showed KV-Fold can perfectly recall information over very long sequences while staying within normal memory limits. This means you can use big pretrained models on longer texts without changing their design or training.
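As a rough illustration of the fold shape described in the summary (a toy sketch, not the authors' code): the cache plays the role of a left-fold accumulator, with one update applied per chunk.

```python
from functools import reduce

# Toy illustration of the left-fold shape only; `step` stands in for one
# forward pass, and strings stand in for real keys/values.
def step(cache, chunk):
    new_kv = [f"kv({tok})" for tok in chunk]   # "keys/values" produced for this chunk
    return cache + new_kv                      # append them and carry the cache forward

chunks = [["a", "b"], ["c", "d"], ["e"]]
print(reduce(step, chunks, []))                # ['kv(a)', 'kv(b)', ..., 'kv(e)']
```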
transformers, key-value cache, long-context inference, sequence chunks, latent multi-agent communication, foldl, numerical precision, exact-match retrieval, Llama-3, frozen pretrained models
Authors
Alireza Nadali, Patrick Cooper, Ashutosh Trivedi, Alvaro Velasquez
Abstract
We introduce KV-Fold, a simple, training-free long-context inference protocol that treats the key-value (KV) cache as the accumulator in a left fold over sequence chunks. At each step, the model processes the next chunk conditioned on the accumulated cache, appends the newly produced keys and values, and passes the enlarged cache forward; the same one-step update is applied repeatedly, analogous to foldl in functional programming. Building on the KV cache concatenation primitive introduced for latent multi-agent communication, we repurpose it as a chunk-to-chunk recurrence for long-context inference. When processing chunk t, the model attends to the KV cache carried from earlier chunks as a prefix, reusing its internal state across segments without modifying or retraining the model. Despite its simplicity, the induced recurrence is stable: per-step drift rises briefly and then saturates into a flat plateau that persists across deep chains. This plateau is insensitive to a 10,000x change in numerical precision, robust across chunk sizes, and consistent across model families. At the task level, KV-Fold preserves exact information over long distances. On a needle-in-a-haystack benchmark, it achieves 100% exact-match retrieval across 152 trials spanning contexts from 16K to 128K tokens and chain depths up to 511 on Llama-3.1-8B, while remaining within the memory limits of a single 40GB GPU. Compared to streaming methods, which trade fidelity for bounded memory, KV-Fold maintains long-range retrieval while operating as a sequence of tractable forward passes. Overall, our results show that frozen pretrained transformers already support a stable form of KV-cache recurrence, providing a practical route to long-context inference without architectural changes or training.
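For concreteness, here is a minimal sketch of this chunk-to-chunk recurrence, assuming the Hugging Face `transformers` KV-cache interface; the model name, chunk size, and the `kv_fold` helper are illustrative placeholders rather than the authors' implementation.

```python
# Minimal sketch of a foldl-style KV-cache recurrence over sequence chunks,
# assuming the Hugging Face `transformers` interface. Model name, chunk size,
# and `kv_fold` are illustrative assumptions, not the paper's code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B"  # any causal LM that exposes past_key_values
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

@torch.no_grad()
def kv_fold(input_ids: torch.Tensor, chunk_size: int = 2048):
    """Left fold over chunks: the KV cache is the accumulator.

    Each step runs one forward pass on the next chunk, conditioned on the
    cache accumulated so far; the enlarged cache is passed to the next step.
    (Attention-mask bookkeeping is omitted for brevity.)
    """
    past = None  # empty accumulator
    for start in range(0, input_ids.size(1), chunk_size):
        chunk = input_ids[:, start:start + chunk_size]
        out = model(input_ids=chunk, past_key_values=past, use_cache=True)
        past = out.past_key_values  # newly produced keys/values are appended here
    return past  # now covers the full context seen so far

# Usage: fold a long document once, then decode conditioned on the accumulated cache.
long_text = "..."  # placeholder for a long input document
ids = tokenizer(long_text, return_tensors="pt").input_ids.to(model.device)
cache = kv_fold(ids)
```

The point of the sketch is only the shape of the update: the same one-step rule is applied at every depth, so the cache grows with the context while each forward pass stays a fixed-size chunk that attends to the carried cache as a prefix.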