AI summary
The authors studied latent reasoning models (LRMs), which solve problems through hidden reasoning steps instead of spelling them out in words. They found that these hidden steps are often not even needed for the model to reach the right answer, which may explain why LRMs do not consistently outperform models that explain their reasoning explicitly. When the hidden reasoning does matter, the authors showed they can often match the model's latent reasoning to known correct explanations, suggesting the model's thinking is fairly transparent. They also developed a way to extract understandable reasoning from the model's hidden steps without knowing the correct answer in advance, and this succeeds more often when the model's answer is right. Overall, the authors show that LRMs can be partly understood, and that the interpretability of their reasoning relates to whether their answers are correct.
Keywords
latent reasoning models, interpretability, logical reasoning datasets, reasoning tokens, explicit reasoning, reasoning trace, inference cost, model decoding, prediction correctness
Authors
Connor Dilgren, Sarah Wiegreffe
Abstract
Latent reasoning models (LRMs) have attracted significant research interest due to their low inference cost (relative to explicit reasoning models) and theoretical ability to explore multiple reasoning paths in parallel. However, these benefits come at the cost of reduced interpretability: LRMs are difficult to monitor because they do not reason in natural language. This paper presents an investigation into LRM interpretability by examining two state-of-the-art LRMs. First, we find that latent reasoning tokens are often unnecessary for LRMs' predictions; on logical reasoning datasets, LRMs can almost always produce the same final answers without using latent reasoning at all. This underutilization of reasoning tokens may partially explain why LRMs do not consistently outperform explicit reasoning methods and raises doubts about the stated role of these tokens in prior work. Second, we demonstrate that when latent reasoning tokens are necessary for performance, we can decode gold reasoning traces up to 65-93% of the time for correctly predicted instances. This suggests LRMs often implement the expected solution rather than an uninterpretable reasoning process. Finally, we present a method to decode a verified natural language reasoning trace from latent tokens without knowing a gold reasoning trace a priori, demonstrating that it is possible to find a verified trace for a majority of correct predictions but only a minority of incorrect predictions. Our findings highlight that current LRMs largely encode interpretable processes, and interpretability itself can be a signal of prediction correctness.
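The abstract describes decoding natural-language traces from latent reasoning tokens and checking them with a verifier. As an illustrative aside only, the idea can be sketched with a logit-lens-style nearest-vocabulary decode: project each latent state onto a vocabulary embedding matrix, read off the closest token, and accept the resulting trace if an external checker validates it. All names here (`decode_latent_trace`, `is_verified`, the toy vocabulary, the verifier) are hypothetical; the paper's actual decoding and verification procedures are not specified in this abstract and may differ.

```python
import numpy as np

def decode_latent_trace(latent_states, vocab_embeddings, vocab):
    """Map each latent reasoning state to its highest-scoring vocabulary token.

    A logit-lens-style sketch: score every vocabulary embedding against the
    latent state by dot product and keep the argmax token.
    """
    trace = []
    for h in latent_states:
        scores = vocab_embeddings @ h  # one score per vocabulary entry
        trace.append(vocab[int(np.argmax(scores))])
    return trace

def is_verified(trace, verifier):
    """A decoded trace counts as 'verified' if an external checker accepts it."""
    return verifier(trace)

# Toy demo: a 3-token vocabulary with one-hot embeddings, so each latent
# state decodes to the token whose embedding it most resembles.
vocab = ["a", "b", "c"]
vocab_embeddings = np.eye(3)
latents = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
decoded = decode_latent_trace(latents, vocab_embeddings, vocab)
verified = is_verified(decoded, lambda t: t == ["b", "c"])
```

In this toy setup `decoded` is `["b", "c"]` and the verifier accepts it; the abstract's finding is that such verified traces are recoverable for a majority of correct predictions but only a minority of incorrect ones.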