On the Proper Treatment of Units in Surprisal Theory

2026-04-30

Computation and Language
AI summary

The authors discuss surprisal theory, which links how hard language is to process to how predictable the next word or other linguistic unit is. They point out that experiments usually treat words as the unit of analysis, while language models break text into smaller pieces called tokens that do not always line up with words. This mismatch muddies how surprisal is measured. The authors propose a framework that separates these choices, arguing that tokenization is better treated as a technical detail than as a core part of the theory.

Surprisal theory, Language processing, Tokenization, Pretrained language models, Predictability, Linguistic units, Probability mass, Experimental stimuli, Regions of interest
Authors
Samuel Kiegeland, Vésteinn Snæbjarnarson, Tim Vieira, Ryan Cotterell
Abstract
Surprisal theory links human processing effort to the predictability of an upcoming linguistic unit, but empirical work often leaves the notion of a unit underspecified. In practice, experimental stimuli are segmented into linguistically motivated units (e.g., words), while pretrained language models assign probability mass to a fixed token alphabet that typically does not align with those units. As a result, surprisal-based predictors depend implicitly on ad hoc procedures that conflate two distinct modeling choices: the definition of the unit of analysis and the choice of regions of interest over which predictions are evaluated. In this paper, we disentangle these choices and give a unified framework for reasoning about surprisal over arbitrary unit inventories. We argue that surprisal-based analyses should make these choices explicit and treat tokenization as an implementation detail rather than a scientific primitive.
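As an illustration of the mismatch the abstract describes, consider computing the surprisal of a word when the model assigns probability mass to tokens. The sketch below is not from the paper: the word, its token split, and the probabilities are hypothetical. It relies only on the fact that surprisal is additive under the chain rule, so the surprisal of a word given its context, -log2 p(word | context), equals the sum of the surprisals of its tokens, each conditioned on everything preceding it.

import math

# A minimal sketch (not the authors' framework): the unit of analysis
# is the word "unhappiness", but a hypothetical tokenizer splits it
# into the tokens "un", "happi", "ness". By the chain rule, the
# surprisal of the word given its context is the sum of the surprisals
# of its tokens, each conditioned on the context plus earlier tokens.

# Hypothetical conditional token probabilities p(token | prefix).
token_probs = [
    ("un", 0.05),     # p("un" | context)
    ("happi", 0.60),  # p("happi" | context, "un")
    ("ness", 0.90),   # p("ness" | context, "un", "happi")
]

def surprisal_bits(p):
    """Surprisal in bits: -log2 p."""
    return -math.log2(p)

word_surprisal = 0.0
for token, p in token_probs:
    s = surprisal_bits(p)
    word_surprisal += s
    print(f"token {token!r}: {s:.2f} bits")

# Equivalently, -log2 of the product of the token probabilities.
print(f"word 'unhappiness': {word_surprisal:.2f} bits")

A different tokenizer would split the same word differently and redistribute the token-level surprisals, even though the word-level quantity is what the theory targets; this is the sense in which the abstract argues for treating tokenization as an implementation detail rather than a scientific primitive.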