DySCO: Dynamic Attention-Scaling Decoding for Long-Context LMs

2026-02-25
Computation and Language

AI summary

The authors propose DySCO, a decoding method that helps language models attend to the most relevant parts of very long inputs while generating answers. DySCO works by leveraging a subset of attention heads that locate relevant information dynamically during generation. The approach requires no retraining and can be applied directly to existing language models. The authors find that DySCO improves accuracy on difficult tasks over extremely long contexts with little additional compute.

Keywords
language models, long-context reasoning, attention heads, decoding algorithm, retrieval heads, dynamic attention, instruction tuning, context window, interpretability, benchmark
Authors
Xi Ye, Wuwei Zhang, Fangcong Yin, Howard Yen, Danqi Chen
Abstract
Understanding and reasoning over long contexts is a crucial capability for language models (LMs). Although recent models support increasingly long context windows, their accuracy often deteriorates as input length grows. In practice, models often struggle to keep attention aligned with the most relevant context throughout decoding. In this work, we propose DySCO, a novel decoding algorithm for improving long-context reasoning. DySCO leverages retrieval heads (a subset of attention heads specialized for long-context retrieval) to identify task-relevant tokens at each decoding step and explicitly up-weight them. By doing so, DySCO dynamically adjusts attention during generation to better utilize relevant context. The method is training-free and can be applied directly to any off-the-shelf LM. Across multiple instruction-tuned and reasoning models, DySCO consistently improves performance on challenging long-context reasoning benchmarks, yielding relative gains of up to 25% on MRCR and LongBenchV2 at 128K context length with modest additional compute. Further analysis highlights the importance of both dynamic attention rescaling and retrieval-head-guided selection for the effectiveness of the method, while providing interpretability insights into decoding-time attention behavior. Our code is available at https://github.com/princeton-pli/DySCO.
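To make the mechanism in the abstract concrete, the sketch below illustrates the general idea of retrieval-head-guided attention rescaling at one decoding step: attention mass from designated retrieval heads is used only to *select* which context positions look relevant, and the current head's attention logits for those positions are then boosted before normalization. This is a minimal toy sketch, not the paper's actual implementation; the function name, the `top_k` and `alpha` parameters, and the selection heuristic are all illustrative assumptions.

```python
import math

def rescale_attention(logits, retrieval_head_mass, top_k=2, alpha=2.0):
    """Toy sketch of dynamic attention up-weighting (illustrative only).

    logits: raw attention logits of one head over the context tokens
            at the current decoding step.
    retrieval_head_mass: aggregated attention mass that the designated
            retrieval heads place on each context token; used only to
            select which positions to boost.
    alpha: boost factor (>1) applied to the selected positions.
    """
    # 1) Select the top-k context positions according to the retrieval heads.
    ranked = sorted(range(len(retrieval_head_mass)),
                    key=lambda i: retrieval_head_mass[i], reverse=True)
    selected = set(ranked[:top_k])

    # 2) Up-weight the selected positions' logits (adding log(alpha)
    #    multiplies their pre-softmax weight by alpha).
    boosted = [s + math.log(alpha) if i in selected else s
               for i, s in enumerate(logits)]

    # 3) Softmax-normalize so the rescaled weights still sum to 1.
    m = max(boosted)
    exps = [math.exp(s - m) for s in boosted]
    z = sum(exps)
    return [e / z for e in exps]
```

With uniform logits and `alpha=2.0`, the two positions favored by the retrieval heads end up with twice the attention weight of the others, while the distribution remains properly normalized. In the actual method this kind of rescaling is applied dynamically at every decoding step, so the boosted positions can change as generation proceeds.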