🤖 AI Summary
This study investigates whether latent reasoning models (LRMs), which reason in a latent space rather than in natural language, actually perform interpretable reasoning. Analyzing two state-of-the-art LRMs on logical reasoning tasks, the authors first show that latent reasoning tokens are often unnecessary: the models can usually produce the same final answers without using them at all. When the tokens do matter, decoding them recovers gold reasoning traces for 65–93% of correctly predicted samples. The authors further propose a method to decode and verify natural language reasoning traces without access to ground-truth reasoning trajectories; verified traces are found for a majority of correct predictions but only a minority of incorrect ones. Together, these findings suggest that current LRMs largely encode interpretable, structured logical inference, and that interpretability itself correlates with prediction correctness.
📝 Abstract
Latent reasoning models (LRMs) have attracted significant research interest due to their low inference cost (relative to explicit reasoning models) and theoretical ability to explore multiple reasoning paths in parallel. However, these benefits come at the cost of reduced interpretability: LRMs are difficult to monitor because they do not reason in natural language. This paper investigates LRM interpretability by examining two state-of-the-art LRMs. First, we find that latent reasoning tokens are often unnecessary for LRMs' predictions; on logical reasoning datasets, LRMs can almost always produce the same final answers without using latent reasoning at all. This underutilization of reasoning tokens may partially explain why LRMs do not consistently outperform explicit reasoning methods, and it raises doubts about the role attributed to these tokens in prior work. Second, we demonstrate that when latent reasoning tokens are necessary for performance, we can decode gold reasoning traces 65–93% of the time for correctly predicted instances. This suggests LRMs often implement the expected solution rather than an uninterpretable reasoning process. Finally, we present a method to decode a verified natural language reasoning trace from latent tokens without knowing a gold reasoning trace a priori, demonstrating that a verified trace can be found for a majority of correct predictions but only a minority of incorrect predictions. Our findings highlight that current LRMs largely encode interpretable processes, and that interpretability itself can serve as a signal of prediction correctness.
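The abstract does not detail how latent tokens are decoded into natural language. As a minimal illustration of the general idea only, not the paper's actual procedure, the toy sketch below maps each latent reasoning vector to its nearest vocabulary embedding (a logit-lens-style readout). The vocabulary, embedding matrix, and latent "trace" here are all made-up stand-ins.

```python
import numpy as np

# Illustrative sketch with toy data (not the paper's method): read out each
# latent reasoning vector as the vocabulary token whose embedding it is
# closest to, in the spirit of logit-lens-style decoding.

rng = np.random.default_rng(0)

vocab = ["socrates", "is", "mortal", "therefore", "valid"]   # toy vocabulary
embed = rng.normal(size=(len(vocab), 8))                     # (V, d) toy embeddings
embed /= np.linalg.norm(embed, axis=1, keepdims=True)        # unit-normalize rows

def decode_latents(latents, embed, vocab):
    """Return the vocab token nearest (by dot product) to each latent vector."""
    scores = latents @ embed.T                               # (T, V) similarities
    return [vocab[i] for i in scores.argmax(axis=1)]

# Toy latent "trace": latents placed exactly at known token embeddings,
# so the decoder should read them back out as those tokens.
latents = np.stack([embed[0], embed[3], embed[2]])
print(decode_latents(latents, embed, vocab))  # → ['socrates', 'therefore', 'mortal']
```

In a real LRM the latents would come from the model's hidden states and the decoded token sequence would then be checked by a verifier, as the abstract describes; this sketch only shows the nearest-embedding readout step.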