Memory Caching: RNNs with Growing Memory
2026-02-27 • Machine Learning
Machine Learning • Artificial Intelligence
AI summary
The authors point out that Transformers, while powerful for handling long sequences, are slow to run because their memory use grows quickly as sequences get longer. They focus on recurrent models, which are faster but usually have limited memory size, making them weaker for tasks needing lots of recall. To fix this, the authors introduce Memory Caching, a method that saves snapshots of the model's memory over time, effectively letting the memory grow with sequence length. Their experiments show this method improves recurrent models and narrows the performance gap with Transformers, especially on tasks that require remembering long contexts.
Transformers • Recurrent Neural Networks (RNNs) • Memory capacity • Sequence modeling • Quadratic complexity • Memory caching • Language modeling • Long-context understanding • Hidden states • In-context recall
Authors
Ali Behrouz, Zeman Li, Yuan Deng, Peilin Zhong, Meisam Razaviyayn, Vahab Mirrokni
Abstract
Transformers have been established as the de-facto backbones for most recent advances in sequence modeling, mainly due to their growing memory capacity, which scales with the context length. While beneficial for retrieval tasks, this growing memory causes quadratic complexity and so has motivated recent studies to explore viable subquadratic recurrent alternatives. Despite showing promising preliminary results in diverse domains, such recurrent architectures underperform Transformers on recall-intensive tasks, a gap often attributed to their fixed-size memory. In this paper, we introduce Memory Caching (MC), a simple yet effective technique that enhances recurrent models by caching checkpoints of their memory states (a.k.a. hidden states). Memory Caching allows the effective memory capacity of RNNs to grow with sequence length, offering a flexible trade-off that interpolates between the fixed memory (i.e., $O(L)$ complexity) of RNNs and the growing memory (i.e., $O(L^2)$ complexity) of Transformers. We propose four variants of MC, including gated aggregation and sparse selective mechanisms, and discuss their implications for both linear and deep memory modules. Our experimental results on language modeling and long-context understanding tasks show that MC enhances the performance of recurrent models, supporting its effectiveness. The results on in-context recall tasks indicate that while Transformers achieve the best accuracy, our MC variants show competitive performance, close the gap with Transformers, and perform better than state-of-the-art recurrent models.
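To make the core idea concrete, here is a minimal sketch of memory caching on a toy recurrent model. This is an illustrative assumption, not the paper's implementation: the hidden state is checkpointed every `cache_every` steps, and the readout attends over the cached snapshots with a simple softmax aggregation (the paper proposes four variants, including gated and sparse selective mechanisms). All names, shapes, and the specific aggregation rule here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 8      # input and hidden sizes (hypothetical)
cache_every = 16      # checkpoint interval: cache grows as O(L / cache_every)

W_x = rng.normal(scale=0.1, size=(d_h, d_in))  # input projection
W_h = rng.normal(scale=0.1, size=(d_h, d_h))   # recurrence
W_q = rng.normal(scale=0.1, size=(d_h, d_h))   # query projection for readout


def run(xs):
    """Process a sequence, caching hidden-state checkpoints along the way."""
    h = np.zeros(d_h)
    cache = []                               # the growing memory cache
    for t, x in enumerate(xs, start=1):
        h = np.tanh(W_x @ x + W_h @ h)       # fixed-size recurrent update
        if t % cache_every == 0:
            cache.append(h.copy())           # snapshot the memory state
    # Readout: softmax attention over cached snapshots plus the final state.
    # This is one possible aggregation scheme, chosen for illustration.
    keys = np.stack(cache + [h])             # (num_snapshots + 1, d_h)
    scores = keys @ (W_q @ h)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ keys                          # aggregated memory readout


xs = rng.normal(size=(64, d_in))
out = run(xs)
print("readout shape:", out.shape)
```

Per step, the recurrent update stays constant-cost; only the readout touches the cache, whose size scales with sequence length divided by the checkpoint interval, giving the interpolation between $O(L)$ and $O(L^2)$ described in the abstract.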