Semantic Chunking and the Entropy of Natural Language
2026-02-13 • Computation and Language
Computation and Language · Artificial Intelligence
AI summary
The authors introduce a model that explains why printed English is so predictable, or redundant: it carries only about one bit of new information per character. The model works by breaking text into semantically coherent chunks at successive levels, from whole ideas down to individual words. This hierarchy accounts for the multi-scale structure of language and matches both real data and large language model predictions. The model also predicts that more complex texts have higher entropy rates, carrying more new information per character.
entropy rate · redundancy · printed English · large language models · semantic hierarchy · self-similar segmentation · statistical model · information theory · corpus complexity
Authors
Weishun Zhong, Doron Sivan, Tankut Can, Mikhail Katkov, Misha Tsodyks
Abstract
The entropy rate of printed English is famously estimated to be about one bit per character, a benchmark that modern large language models (LLMs) have only recently approached. This entropy rate implies that English is nearly 80 percent redundant relative to the roughly five bits per character expected of random text. We introduce a statistical model that attempts to capture the intricate multi-scale structure of natural language, providing a first-principles account of this redundancy level. Our model describes a procedure that self-similarly segments text into semantically coherent chunks, down to the single-word level. The semantic structure of the text can then be decomposed hierarchically, allowing for analytical treatment. Numerical experiments with modern LLMs and open datasets suggest that our model quantitatively captures the structure of real texts at different levels of the semantic hierarchy, and the entropy rate it predicts agrees with the estimated entropy rate of printed English. Our theory further reveals that the entropy rate of natural language is not fixed but should increase systematically with the semantic complexity of the corpus, which is captured by the model's single free parameter.
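
As a quick check on the numbers, the redundancy figure follows from the standard definition R = 1 − H / log₂(|alphabet|). The sketch below is not from the paper; the alphabet sizes are assumptions for illustration (27 symbols, letters plus space, follows Shannon's convention and gives log₂ 27 ≈ 4.75 bits, while 32 symbols gives a round five bits per character).

```python
import math

# Redundancy of a source relative to the maximum entropy of its alphabet:
#   R = 1 - H / log2(|alphabet|)

def redundancy(entropy_rate_bits: float, alphabet_size: int) -> float:
    """Fraction of each character that is predictable from context."""
    return 1.0 - entropy_rate_bits / math.log2(alphabet_size)

# ~1 bit/char for printed English (the abstract's benchmark):
print(f"{redundancy(1.0, 27):.0%}")  # 27 symbols (a-z + space): ~79%
print(f"{redundancy(1.0, 32):.0%}")  # 32 symbols (a round 5 bits): 80%
```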
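
The segmentation procedure itself is described here only at a high level, so the following toy sketch is a purely hypothetical illustration of what "self-similar segmentation down to the single-word level" could look like; the midpoint split rule is an invented placeholder, not the paper's semantic-coherence criterion.

```python
# Toy illustration of self-similar segmentation: a word sequence is
# recursively split into nested chunks until single words remain,
# producing a hierarchy of spans. The split point used here (the
# midpoint) is a placeholder for whatever coherence-based rule the
# actual model employs.

def segment(words, depth=0):
    """Recursively split a word sequence, printing the chunk hierarchy."""
    print("  " * depth + " ".join(words))
    if len(words) <= 1:          # base case: single-word chunks
        return
    mid = len(words) // 2        # placeholder split rule
    segment(words[:mid], depth + 1)
    segment(words[mid:], depth + 1)

segment("the entropy rate of printed English is about one bit".split())
```

In the paper's model, presumably, each split would instead fall where semantic coherence between adjacent chunks is weakest, yielding the hierarchy that the abstract's analytical treatment decomposes.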