Reasoning Theater: Disentangling Model Beliefs from Chain-of-Thought
2026-03-05 • Computation and Language • Artificial Intelligence • Machine Learning
AI summary
The authors study how large reasoning models sometimes become confident in their answers yet keep generating text without clearly revealing that internal belief. For easy questions, the models' final answers could be decoded from internal activations far earlier than a text-based monitor could tell, suggesting the models may be 'performing' reasoning rather than deliberating. For harder questions, shifts in the probed beliefs lined up with moments of genuine uncertainty. Using probes to detect when the model has settled on an answer allows generation to stop early, saving computation without losing accuracy.
chain-of-thought • activation probing • language models • confidence • multihop reasoning • MMLU • GPQA-Diamond • early exit • adaptive computation • belief shifting
Authors
Siddharth Boppana, Annabel Ma, Max Loeffler, Raphael Sarfati, Eric Bigelow, Atticus Geiger, Owen Lewis, Jack Merullo
Abstract
We provide evidence of performative chain-of-thought (CoT) in reasoning models, where a model becomes strongly confident in its final answer but continues generating tokens without revealing its internal belief. Our analysis compares activation probing, early forced answering, and a CoT monitor across two large models (DeepSeek-R1 671B and GPT-OSS 120B) and finds difficulty-specific differences: the model's final answer is decodable from activations far earlier in the CoT than a monitor can detect it, especially for easy recall-based MMLU questions. We contrast this with genuine reasoning on difficult multihop GPQA-Diamond questions. Despite this, inflection points (e.g., backtracking, 'aha' moments) occur almost exclusively in responses where probes show large belief shifts, suggesting these behaviors track genuine uncertainty rather than learned "reasoning theater." Finally, probe-guided early exit reduces tokens by up to 80% on MMLU and 30% on GPQA-Diamond with similar accuracy, positioning activation probing as an efficient tool for detecting performative reasoning and enabling adaptive computation.
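The probe-guided early-exit idea in the abstract can be sketched with a toy example. A linear probe (trained offline on activations labeled with final answers) scores the model's current belief over answer options at each CoT step, and generation stops once the probe's confidence stays above a threshold for several consecutive steps. Everything below is a hypothetical illustration with synthetic activations and random probe weights, not the paper's actual implementation; `probe_early_exit`, the dimensions, and the drift simulation are all made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_options = 16, 4  # hypothetical activation dim and answer-option count


def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


# Toy probe weights; in practice these would be fit on hidden activations
# labeled with the model's eventual final answer.
W = rng.normal(size=(d, n_options))


def probe_early_exit(activations, threshold=0.9, patience=3):
    """Return (exit_step, answer): the first step where probe confidence
    exceeds `threshold` for `patience` consecutive steps, else the last step."""
    streak = 0
    for t, h in enumerate(activations):
        p = softmax(h @ W)
        if p.max() >= threshold:
            streak += 1
            if streak >= patience:
                return t, int(p.argmax())
        else:
            streak = 0
    p = softmax(activations[-1] @ W)
    return len(activations) - 1, int(p.argmax())


# Simulate a CoT trace whose activations drift steadily toward answer 2,
# mimicking a model that has internally settled long before it stops talking.
target = W[:, 2] / np.linalg.norm(W[:, 2])
trace = np.array(
    [rng.normal(scale=0.1, size=d) + 8.0 * (t / 19) * target for t in range(20)]
)
step, answer = probe_early_exit(trace)
print(step, answer)  # exits well before the final step, predicting answer 2
```

The `patience` parameter guards against exiting on a transient confidence spike, which matters because the paper ties backtracking and 'aha' moments to large belief shifts mid-trace.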