Think Before You Lie: How Reasoning Improves Honesty

2026-03-10

Artificial Intelligence, Computation and Language, Machine Learning
AI summary

The authors study why large language models (LLMs) sometimes lie and how reasoning affects honesty. Unlike humans, who tend to lie more when given time to deliberate, LLMs become more honest when they reason things through. The authors trace this to the geometry of the model's internal representation space: dishonest answers occupy less stable regions and are more easily dislodged by small perturbations such as paraphrasing the input, resampling the output, or adding activation noise. Reasoning, they argue, moves the model through this biased landscape toward its more stable, honest defaults.

large language models, deceptive behavior, moral trade-offs, reasoning, representational space, metastability, input paraphrasing, output resampling, activation noise
Authors
Ann Yuan, Asma Ghandeharioun, Carter Blum, Alicia Machado, Jessica Hoffmann, Daphne Ippolito, Martin Wattenberg, Lucas Dixon, Katja Filippova
Abstract
While existing evaluations of large language models (LLMs) measure deception rates, the underlying conditions that give rise to deceptive behavior are poorly understood. We investigate this question using a novel dataset of realistic moral trade-offs where honesty incurs variable costs. Contrary to humans, who tend to become less honest given time to deliberate (Capraro, 2017; Capraro et al., 2019), we find that reasoning consistently increases honesty across scales and for several LLM families. This effect is not only a function of the reasoning content, as reasoning traces are often poor predictors of final behaviors. Rather, we show that the underlying geometry of the representational space itself contributes to the effect. Namely, we observe that deceptive regions within this space are metastable: deceptive answers are more easily destabilized by input paraphrasing, output resampling, and activation noise than honest ones. We interpret the effect of reasoning in this vein: generating deliberative tokens as part of moral reasoning entails the traversal of a biased representational space, ultimately nudging the model toward its more stable, honest defaults.
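To make the destabilization probes concrete, here is a minimal sketch (not the authors' code) of the three perturbations named in the abstract: output resampling, input paraphrasing, and activation noise. It assumes a Hugging Face chat model with a Llama-style layer layout; the model name, layer index, noise scale, and the exact-string flip criterion are all illustrative stand-ins for the paper's actual honesty judgments.

```python
# Minimal sketch of the three destabilization probes; all names and
# hyperparameters below are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # assumed model choice
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

def answer(prompt, temperature=None):
    """One short completion; temperature=None means greedy decoding."""
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    if temperature is None:
        out = model.generate(**inputs, max_new_tokens=16, do_sample=False)
    else:
        out = model.generate(
            **inputs, max_new_tokens=16, do_sample=True, temperature=temperature
        )
    new_tokens = out[0, inputs["input_ids"].shape[1]:]
    return tok.decode(new_tokens, skip_special_tokens=True).strip()

def resample_flip_rate(prompt, n=16, temperature=0.8):
    """Output resampling: how often do sampled answers differ from the greedy one?"""
    default = answer(prompt)
    return sum(answer(prompt, temperature) != default for _ in range(n)) / n

def paraphrase_flip_rate(prompt, paraphrases):
    """Input paraphrasing: how often do reworded prompts change the greedy answer?"""
    default = answer(prompt)
    return sum(answer(p) != default for p in paraphrases) / len(paraphrases)

def noise_flip_rate(prompt, n=16, layer=15, sigma=0.05):
    """Activation noise: add Gaussian noise to one decoder layer's output and
    see how often the greedy answer changes. Assumes a Llama-style
    model.model.layers list; layer index and sigma are arbitrary choices."""
    default = answer(prompt)
    block = model.model.layers[layer]

    def add_noise(module, args, output):
        hidden = output[0]  # decoder layers return a tuple; hidden states come first
        return (hidden + sigma * torch.randn_like(hidden),) + tuple(output[1:])

    handle = block.register_forward_hook(add_noise)
    try:
        flips = sum(answer(prompt) != default for _ in range(n))
    finally:
        handle.remove()  # always detach the hook, even if generation fails
    return flips / n
```

Under the paper's metastability claim, these flip rates should come out systematically higher on prompts where the model's default answer is deceptive than on prompts it answers honestly; a real evaluation would replace the exact-string comparison with a judgment of whether each answer is honest.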