Convergent Evolution: How Different Language Models Learn Similar Number Representations
2026-04-22 • Computation and Language
Computation and Language · Artificial Intelligence · Machine Learning
AI summary
The authors study how language models represent numbers and find that many models encode them with repeating (periodic) patterns in their internal features. However, not all models produce patterns that cleanly separate numbers by their remainder under division (e.g., mod 2, 5, or 10). They prove that periodicity alone is not enough for this separation; the features also need the right geometric structure. They further find that the training data, architecture, optimizer, and tokenizer all affect whether models learn cleanly separable features, and that models can acquire them either from co-occurrence cues in natural language or from solving multi-token addition problems.
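To make "repeating patterns in internal features" concrete: a periodic feature shows up as a sharp spike in the Fourier domain. The toy sketch below (an illustration, not the paper's actual probing setup) builds a single period-10 signal standing in for one coordinate of a number embedding and locates its dominant Fourier frequency with a naive DFT:

```python
import math

# Toy stand-in for one coordinate of a number embedding: a signal
# with period 10 over the numbers 0..99 (an assumption made purely
# for illustration; real learned features are higher-dimensional).
N = 100
signal = [math.cos(2 * math.pi * n / 10) for n in range(N)]

def dft_magnitude(xs, k):
    """Magnitude of the k-th discrete Fourier coefficient of xs."""
    re = sum(x * math.cos(2 * math.pi * k * n / len(xs)) for n, x in enumerate(xs))
    im = sum(x * math.sin(2 * math.pi * k * n / len(xs)) for n, x in enumerate(xs))
    return math.hypot(re, im)

# Scan the nonzero frequencies and find the dominant one.
mags = [dft_magnitude(signal, k) for k in range(1, N // 2)]
peak_k = 1 + max(range(len(mags)), key=mags.__getitem__)

print(peak_k, N / peak_k)  # spike at k = 10, i.e. period 100/10 = 10
```

A period-T pattern over N samples concentrates its energy at frequency k = N/T, which is the "period-T spike" the summary refers to.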
language models, periodic features, Fourier domain, modular arithmetic, geometric separability, Transformers, RNNs, LSTMs, tokenizer, feature learning
Authors
Deqing Fu, Tianyi Zhou, Mikhail Belkin, Vatsal Sharan, Robin Jia
Abstract
Language models trained on natural text learn to represent numbers using periodic features with dominant periods at $T=2, 5, 10$. In this paper, we identify a two-tiered hierarchy of these features: while Transformers, Linear RNNs, LSTMs, and classical word embeddings trained in different ways all learn features that have period-$T$ spikes in the Fourier domain, only some learn geometrically separable features that can be used to linearly classify a number mod-$T$. To explain this incongruity, we prove that Fourier domain sparsity is necessary but not sufficient for mod-$T$ geometric separability. Empirically, we investigate when model training yields geometrically separable features, finding that the data, architecture, optimizer, and tokenizer all play key roles. In particular, we identify two different routes through which models can acquire geometrically separable features: they can learn them from complementary co-occurrence signals in general language data, including text-number co-occurrence and cross-number interaction, or from multi-token (but not single-token) addition problems. Overall, our results highlight the phenomenon of convergent evolution in feature learning: A diverse range of models learn similar features from different training signals.
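The "necessary but not sufficient" claim can be illustrated with a minimal sketch (my own construction, not the paper's proof): a cosine-only feature is Fourier-sparse at period $T$, yet it maps residues $r$ and $T-r$ to the same value, so the mod-$T$ classes cannot be separated; adding the matching sine component makes every residue a distinct point on the unit circle, restoring separability.

```python
import math

T = 5  # modulus for the toy example

def cos_only(n):
    # Fourier-sparse at period T, but conflates residues r and T - r
    # (cos is even), so classes 1/4 and 2/3 collide.
    return (math.cos(2 * math.pi * n / T),)

def circle(n):
    # Also Fourier-sparse at period T; with the sine component each
    # residue lands on a distinct point of the unit circle, so the
    # T classes can be told apart (a prerequisite for linear probing).
    a = 2 * math.pi * n / T
    return (math.cos(a), math.sin(a))

def collides(feat):
    """True if two distinct residue classes get identical features."""
    pts = {r: feat(r) for r in range(T)}
    return any(
        all(abs(x - y) < 1e-9 for x, y in zip(pts[r], pts[s]))
        for r in range(T) for s in range(r + 1, T)
    )

print(collides(cos_only))  # True  -> not separable mod T
print(collides(circle))    # False -> residues are distinguishable
```

The colliding case shows why Fourier-domain sparsity alone cannot guarantee mod-$T$ geometric separability: the phase structure across feature dimensions matters too.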