Characterizing the Expressivity of Local Attention in Transformers

2026-05-01 · Computation and Language

AI summary

The authors studied how two ways of paying attention in transformer models—global attention (looking at all previous words) and local attention (looking at just nearby words)—affect the model's ability to recognize patterns in language. They found that adding local attention introduces a new kind of operation into the logic that describes the model, letting it recognize certain language patterns that global attention alone cannot handle, and that combining both attention types yields the most expressive model. Experiments confirmed that models using both types of attention outperform those using only global attention.

Transformer · Global Attention · Local Attention · Neural Networks · Linear Temporal Logic · Past Operator · Regular Languages · Language Modeling · Expressivity · Hybrid Attention
Authors
Jiaoda Li, Ryan Cotterell
Abstract
The transformer is the most popular neural architecture for language modeling. The cornerstone of the transformer is its global attention mechanism, which lets the model aggregate information from all preceding tokens before generating the next token. One common variant of attention is called local attention, which restricts each token to aggregating information from a bounded window of predecessors, reducing the quadratic cost of global attention to linear. Although this restriction is usually motivated by efficiency, it has also been found to improve model quality, a phenomenon that has so far lacked a satisfactory explanation. We provide a formal account of this phenomenon in terms of recognizer expressivity. It has been shown that fixed-precision transformers with global attention correspond to a fragment of linear temporal logic containing a single past operator. We additionally prove that adding local attention introduces a second temporal operator, strictly enlarging the class of recognizable regular languages. Moreover, global and local attention are expressively complementary: neither subsumes the other, and combining them yields the richest fragment. Experiments on formal language recognition and natural language modeling corroborate the theory, showing that hybrid global-local transformers outperform their global-only counterparts.
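To make the contrast in the abstract concrete, the following minimal sketch (not taken from the paper) builds the two masking patterns it describes: a causal mask in which each token may attend to every predecessor (global attention) and a sliding-window mask in which attention is limited to a bounded window of recent predecessors (local attention). The function name `attention_masks` and the `window` parameter are illustrative, not from the authors' code.

```python
import torch


def attention_masks(seq_len: int, window: int) -> tuple[torch.Tensor, torch.Tensor]:
    """Return (global_mask, local_mask); True marks key positions a query may attend to.

    Hypothetical illustration of the two attention patterns contrasted in the abstract:
      - global (causal) attention: position i attends to all positions j <= i;
      - local (sliding-window) attention: position i additionally requires i - j < window.
    """
    pos = torch.arange(seq_len)
    # Global attention: full causal history (lower-triangular mask, diagonal included).
    global_mask = pos[None, :] <= pos[:, None]
    # Local attention: restrict the same causal mask to the `window` most recent positions.
    local_mask = global_mask & (pos[:, None] - pos[None, :] < window)
    return global_mask, local_mask


if __name__ == "__main__":
    g, l = attention_masks(seq_len=6, window=3)
    print(g.int())  # lower-triangular: every token sees all predecessors
    print(l.int())  # banded: every token sees only its 3 most recent predecessors
```

A hybrid global-local model of the kind the abstract evaluates would presumably apply masks of both kinds across its layers or heads; the exact arrangement used in the experiments is not specified in the abstract, so this sketch only illustrates the masking patterns themselves.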