Cram Less to Fit More: Training Data Pruning Improves Memorization of Facts
2026-04-09 • Computation and Language
AI summary
The authors studied why large language models (LLMs) sometimes forget facts or make things up. They found that if the training data contains too many facts, or if some facts appear far more often than others, the model cannot remember everything well. To fix this, they created a way to pick which facts to train on that balances how often facts appear against how much the model can hold. This method helped a smaller GPT2 model memorize more facts, nearly matching a much bigger model trained on all the data. By choosing training data carefully, models can learn facts more efficiently.
large language models, fact memorization, training data distribution, information theory, power law distribution, data selection, training loss, GPT2, model capacity, knowledge-intensive tasks
Authors
Jiayuan Ye, Vitaly Feldman, Kunal Talwar
Abstract
Large language models (LLMs) can struggle to memorize factual knowledge in their parameters, often leading to hallucinations and poor performance on knowledge-intensive tasks. In this paper, we formalize fact memorization from an information-theoretic perspective and study how training data distributions affect fact accuracy. We show that fact accuracy is suboptimal (below the capacity limit) whenever the amount of information contained in the training data facts exceeds model capacity. This is further exacerbated when the fact frequency distribution is skewed (e.g. a power law). We propose data selection schemes based on the training loss alone that aim to limit the number of facts in the training data and flatten their frequency distribution. On semi-synthetic datasets containing high-entropy facts, our selection method effectively boosts fact accuracy to the capacity limit. When pretraining language models from scratch on an annotated Wikipedia corpus, our selection method enables a GPT2-Small model (110M parameters) to memorize 1.3X more entity facts compared to standard training, matching the performance of a 10X larger model (1.3B parameters) pretrained on the full dataset.
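The loss-based selection idea in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: it assumes per-example training losses are available, uses loss as a rough proxy for fact frequency (over-represented facts tend to reach low loss early in training), and downsamples low-loss examples to flatten the frequency distribution. The `threshold` knob and the linear keep-probability rule are assumptions for illustration.

```python
import random

def loss_based_selection(losses, threshold=2.0, seed=0):
    """Subsample training examples using per-example loss as a frequency
    proxy. Each example i is kept with probability min(1, loss_i / threshold),
    so low-loss (over-represented) facts are aggressively downsampled while
    high-loss (rare) facts are always kept. `threshold` is an illustrative
    knob, not a value from the paper. Returns the kept example indices.
    """
    rng = random.Random(seed)
    kept = []
    for i, loss in enumerate(losses):
        keep_prob = min(1.0, loss / threshold)
        if rng.random() < keep_prob:
            kept.append(i)
    return kept

# Example: a skewed corpus -- many low-loss (frequent) facts and a
# few high-loss (rare) ones. The rare facts survive selection; the
# frequent ones are thinned out, flattening the distribution.
losses = [0.1] * 8 + [3.5] * 2
selected = loss_based_selection(losses, threshold=2.0)
```

In this sketch the two high-loss examples (indices 8 and 9) are kept with probability 1, while each low-loss example is kept with probability 0.05, so the selected subset is far less skewed than the original corpus.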