🤖 AI Summary
This study investigates whether large language models can discover and execute multi-step latent planning strategies when trained solely with supervision on final answers, without any intermediate signals. Using a controlled graph path-finding task, the authors combine small Transformers trained from scratch, fine-tuned GPT-4o and Qwen3-32B models, and few-shot-prompted GPT-5.4 to quantify the depth limits of latent planning. From-scratch models discover strategies requiring up to three latent steps, the fine-tuned models reach five, and GPT-5.4 attains seven under few-shot prompting. Moreover, although the deepest strategy models can learn during training requires five latent steps, the discovered strategy generalizes to eight steps at test time, revealing a dissociation between strategy discovery and strategy execution. If such limits hold broadly, strategies requiring many coordinated latent planning steps may need to be taught explicitly or externalized, lending support to chain-of-thought monitoring.
📝 Abstract
The viability of chain-of-thought (CoT) monitoring hinges on models being unable to reason effectively in their latent representations. Yet little is known about the limits of such latent reasoning in LLMs. We test these limits by studying whether models can discover multi-step planning strategies without supervision on intermediate steps and execute them latently, within a single forward pass. Using graph path-finding tasks that precisely control the number of required latent planning steps, we uncover a striking limitation unresolved by massive scaling: tiny transformers trained from scratch discover strategies requiring up to three latent steps, fine-tuned GPT-4o and Qwen3-32B reach five, and GPT-5.4 attains seven under few-shot prompting. Although the maximum latent planning depth models can learn during training is five, the discovered strategy generalizes up to eight latent steps at test time. This reveals a dissociation between the ability to discover a latent strategy under final-answer supervision alone and the ability to execute it once discovered. If similar limits hold more broadly, strategies requiring multiple coordinated latent planning steps may need to be explicitly taught or externalized, lending credence to CoT monitoring.
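To illustrate how a graph path-finding task can precisely control the number of required latent planning steps, here is a minimal sketch of an instance generator. The serialization format, distractor scheme, and function name are assumptions for illustration, not the paper's exact construction:

```python
import random

def make_pathfinding_instance(num_steps, num_distractors=4, seed=0):
    """Build a toy path-finding prompt whose answer requires
    `num_steps` hops of planning (hypothetical task format)."""
    rng = random.Random(seed)
    nodes = list(range(num_steps + 1 + num_distractors))
    rng.shuffle(nodes)
    path, rest = nodes[:num_steps + 1], nodes[num_steps + 1:]
    # The true path: num_steps edges chained through num_steps + 1 nodes.
    edges = [(path[i], path[i + 1]) for i in range(num_steps)]
    # Distractor edges originate only from off-path nodes, so each
    # on-path node has exactly one outgoing edge and the answer is unique.
    for d in rest:
        edges.append((d, rng.choice(nodes)))
    rng.shuffle(edges)
    prompt = " ".join(f"{a}>{b}" for a, b in edges)
    prompt += f" | from {path[0]} to {path[-1]} ?"
    # Supervision is on the final answer only: the full node sequence,
    # with no intermediate reasoning steps in the target.
    answer = " ".join(str(n) for n in path)
    return prompt, answer
```

Varying `num_steps` directly sets the latent planning depth a model must reach to produce the answer in a single forward pass, which is how a task family like this can probe depths from three up to eight.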