NanoKnow: How to Know What Your Language Model Knows

2026-02-23 · Computation and Language

Computation and Language · Artificial Intelligence · Information Retrieval · Machine Learning
AI summary

The authors study how small language models learn and remember facts using nanochat, a family of models whose pre-training data is fully known. They create NanoKnow, a set of questions partitioned by whether each answer appears in the model's pre-training data, making it possible to test how the model answers questions it has (or has not) seen information about. Their experiments show that the model answers more accurately when the answer appears frequently in its training data, and that providing external evidence helps across all questions. Even with such evidence, the model remains more accurate on answers it has seen during pre-training, while irrelevant context degrades its answers. This work clarifies what knowledge language models store internally and how external information affects their outputs.

large language models, pre-training data, parametric knowledge, NanoKnow, Natural Questions, SQuAD, closed-book accuracy, external evidence, frequency dependence, non-relevant information
Authors
Lingwei Gu, Nour Jedidi, Jimmy Lin
Abstract
How do large language models (LLMs) know what they know? Answering this question has been difficult because pre-training data is often a "black box" -- unknown or inaccessible. The recent release of nanochat -- a family of small LLMs with fully open pre-training data -- addresses this as it provides a transparent view into where a model's parametric knowledge comes from. Towards the goal of understanding how knowledge is encoded by LLMs, we release NanoKnow, a benchmark dataset that partitions questions from Natural Questions and SQuAD into splits based on whether their answers are present in nanochat's pre-training corpus. Using these splits, we can now properly disentangle the sources of knowledge that LLMs rely on when producing an output. To demonstrate NanoKnow's utility, we conduct experiments using eight nanochat checkpoints. Our findings show: (1) closed-book accuracy is strongly influenced by answer frequency in the pre-training data, (2) providing external evidence can mitigate this frequency dependence, (3) even with external evidence, models are more accurate when answers were seen during pre-training, demonstrating that parametric and external knowledge are complementary, and (4) non-relevant information is harmful, with accuracy decreasing based on both the position and the number of non-relevant contexts. We release all NanoKnow artifacts at https://github.com/castorini/NanoKnow.
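The core construction the abstract describes — partitioning QA pairs by whether their answers occur in an open pre-training corpus, and recording answer frequency — can be illustrated with a small sketch. This is a hypothetical simplification, not the NanoKnow pipeline itself: the function names, the lenient substring matching, and the toy data below are all assumptions for illustration.

```python
# Hypothetical sketch of a NanoKnow-style split: partition QA pairs by
# whether the gold answer appears in the pre-training corpus, and count
# how often it appears. The actual NanoKnow matching rules may differ.
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace for lenient string matching."""
    return " ".join(text.lower().split())

def build_splits(qa_pairs, corpus_docs):
    """Return (seen, unseen) splits plus per-answer corpus frequencies.

    qa_pairs: list of (question, answer) tuples.
    corpus_docs: iterable of pre-training document strings.
    """
    docs = [normalize(d) for d in corpus_docs]
    freq = Counter()
    for _, answer in qa_pairs:
        a = normalize(answer)
        # Count raw substring occurrences of the answer across the corpus.
        freq[a] = sum(doc.count(a) for doc in docs)
    seen = [(q, a) for q, a in qa_pairs if freq[normalize(a)] > 0]
    unseen = [(q, a) for q, a in qa_pairs if freq[normalize(a)] == 0]
    return seen, unseen, freq

# Toy example (invented data, not from the paper):
qa = [("Who wrote Hamlet?", "William Shakespeare"),
      ("What is the capital of Atlantis?", "Poseidonia")]
corpus = ["Hamlet is a tragedy by William Shakespeare.",
          "The play was first performed around 1600."]
seen, unseen, freq = build_splits(qa, corpus)
print(len(seen), len(unseen))  # 1 1
```

A real pipeline over a full pre-training corpus would need streaming and indexed matching rather than an in-memory scan, but the seen/unseen partition and the frequency signal used in findings (1)–(3) follow this shape.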