Symmetry in language statistics shapes the geometry of model representations
2026-02-16 • Machine Learning • Computation and Language
AI summary
The authors studied how words are represented inside language models and found that these representations form simple shapes, like circles for months or smooth lines for years. They explain this by showing that language statistics have a symmetry: how often two words like months appear together depends only on the time interval between them, and this symmetry mathematically produces the shapes. Even when these statistics are changed a lot, the shapes stay mostly the same, because the hidden factors controlling the word relationships vary smoothly over time. The authors tested and confirmed these ideas on word embedding models, text embedding models, and large language models.
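A minimal numerical sketch of this mechanism (our illustration, not the authors' code): if the co-occurrence of two months depends only on the circular time interval between them, the co-occurrence matrix is circulant, its leading non-constant eigenvectors are a cosine/sine pair, and a spectral embedding places the twelve months on a circle. The exponential falloff below is an assumption chosen for illustration.

```python
import numpy as np

N = 12  # calendar months

# Toy co-occurrence: a function of the circular time interval alone
# (the exponential falloff is an assumed kernel; any interval-only
# function yields a circulant matrix).
def circ_dist(i, j, n=N):
    d = abs(i - j)
    return min(d, n - d)

M = np.array([[np.exp(-0.5 * circ_dist(i, j)) for j in range(N)]
              for i in range(N)])

# Symmetric circulant matrices are diagonalized by Fourier modes, so the
# leading non-constant eigenvectors form a cosine/sine pair.
eigvals, eigvecs = np.linalg.eigh(M)
order = np.argsort(eigvals)[::-1]   # eigenvalues in descending order
xy = eigvecs[:, order[1:3]]         # skip the constant mode; take the pair

# Every month sits at the same distance from the origin: a circle.
radii = np.linalg.norm(xy, axis=1)
print(np.round(radii, 6))           # twelve (near-)identical values
```

Any interval-only kernel gives the same qualitative picture; changing the kernel only reweights the Fourier modes, not their shape.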
word embedding • language models • co-occurrence statistics • manifold • translation symmetry • latent variable • linear probe • representation learning • geometric structure • time interval
Authors
Dhruva Karkada, Daniel J. Korchinski, Andres Nava, Matthieu Wyart, Yasaman Bahri
Abstract
Although learned representations underlie neural networks' success, their fundamental properties remain poorly understood. A striking example is the emergence of simple geometric structures in LLM representations: for example, calendar months organize into a circle, years form a smooth one-dimensional manifold, and cities' latitudes and longitudes can be decoded by a linear probe. We show that the statistics of language exhibit a translation symmetry -- e.g., the co-occurrence probability of two months depends only on the time interval between them -- and we prove that this symmetry governs the aforementioned geometric structures in high-dimensional word embedding models. Moreover, we find that these structures persist even when the co-occurrence statistics are strongly perturbed (for example, by removing all sentences in which two months appear together) and at moderate embedding dimension. We show that this robustness naturally emerges if the co-occurrence statistics are collectively controlled by an underlying continuous latent variable. We empirically validate this theoretical framework in word embedding models, text embedding models, and large language models.
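The robustness claim admits a simple toy illustration in the latent-variable picture (a sketch with assumed functional forms, not the paper's experiments): if a latent phase controls how each month co-occurs with every context word, the circle is inherited from month–context statistics alone, so deleting all direct month–month co-occurrences cannot remove it, and randomly deleting a fraction of month–context counts only distorts it mildly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 12, 300                       # months, context words

# Assumed latent construction: each month has a latent phase, each context
# word a preferred phase; co-occurrence depends only on their difference.
theta = 2 * np.pi * np.arange(N) / N
phi = 2 * np.pi * np.arange(C) / C
X = np.exp(np.cos(theta[:, None] - phi[None, :]))   # months x context words

# Embed months by factorizing month-context statistics. Direct month-month
# co-occurrences never enter X, so removing every sentence in which two
# months appear together leaves this embedding untouched.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
xy = U[:, 1:3]                       # skip the constant mode; take the pair
radii = np.linalg.norm(xy, axis=1)
print("clean spread:", radii.max() - radii.min())   # ~0: a circle

# Perturb the month-context statistics themselves: randomly zero 5% of the
# counts and re-embed. The leading singular pair is protected by a spectral
# gap, so the circle should survive with only mild distortion.
Xp = X * (rng.random(X.shape) > 0.05)
Up = np.linalg.svd(Xp, full_matrices=False)[0]
radii_p = np.linalg.norm(Up[:, 1:3], axis=1)
print("perturbed spread:", radii_p.max() - radii_p.min())
print("mean radius:", radii_p.mean())
```

In this picture the geometry reflects how the latent variable couples months to the rest of the vocabulary, which is why even targeted deletion of all month–month sentences leaves it intact.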