🤖 AI Summary
This work proposes DeepLatent Reasoning, a novel framework that addresses the tendency of large language models to rely on statistical pattern matching rather than logical deduction in complex multi-step reasoning tasks. To overcome the limitations of conventional reinforcement learning in discrete token spaces—such as low sampling efficiency, high gradient variance, and catastrophic forgetting—the method shifts the reinforcement learning process into a continuous latent space. A lightweight assistant model samples and encodes reasoning chains, which are then evaluated via a dual-reward mechanism based on correctness and format quality. High-quality latent trajectories are decoded in a single pass by a frozen main model, thereby eliminating catastrophic forgetting. Additionally, bidirectional contrastive learning is introduced to enhance exploration. The approach achieves more stable convergence under identical computational budgets, supports longer reasoning chains, and consistently improves reasoning performance.
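To make the pipeline concrete, here is a minimal sketch of the training loop as the summary describes it: a lightweight assistant samples $K$ latent reasoning-chain encodings, a dual reward scores them, and only the assistant is updated while the main model stays frozen. All names here (`AssistantEncoder`, `dual_reward`, the Gaussian policy, the REINFORCE-style update) are illustrative assumptions, not the authors' released implementation; the paper excerpt does not specify the policy parameterization or the RL estimator.

```python
# Illustrative sketch only: AssistantEncoder, dual_reward, the Gaussian
# policy, and the REINFORCE-style update are assumptions, not the paper's code.
import torch
import torch.nn as nn

LATENT_DIM, K = 64, 8  # latent width and rollouts per prompt (assumed values)

class AssistantEncoder(nn.Module):
    """Lightweight policy sampling reasoning-chain encodings in latent space."""
    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.log_std = nn.Parameter(torch.zeros(dim))

    def sample(self, prompt_emb, k):
        # Gaussian policy over the continuous latent manifold.
        dist = torch.distributions.Normal(self.mu(prompt_emb), self.log_std.exp())
        z = dist.rsample((k,))                # k candidate latent trajectories
        return z, dist.log_prob(z).sum(-1)    # per-trajectory log-probability

def dual_reward(z):
    # Placeholder: a real system would decode z with the frozen main model in
    # a single pass and score correctness and format quality of the output.
    return torch.randn(z.shape[0])

assistant = AssistantEncoder(LATENT_DIM)
opt = torch.optim.Adam(assistant.parameters(), lr=1e-4)

prompt_emb = torch.randn(LATENT_DIM)          # stand-in prompt embedding
z, logp = assistant.sample(prompt_emb, K)
r = dual_reward(z)
adv = r - r.mean()                            # baseline for variance reduction
loss = -(adv * logp).mean()                   # update the assistant only;
opt.zero_grad()                               # the frozen main model never
loss.backward()                               # receives gradients, so its
opt.step()                                    # weights cannot drift
```

Because gradients flow only through the assistant's sampling distribution, the main model's weights are untouched, which is the mechanism behind the forgetting-free claim above.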
📝 Abstract
While Large Language Models (LLMs) demonstrate exceptional performance in surface-level text generation, their handling of complex multi-step reasoning tasks often amounts to ``statistical fitting'' rather than systematic logical deduction. Traditional Reinforcement Learning (RL) attempts to mitigate this by introducing a ``think-before-speak'' paradigm. However, applying RL directly in high-dimensional, discrete token spaces faces three inherent challenges: sample-inefficient rollouts, high variance in gradient estimation, and the risk of catastrophic forgetting. To address these structural bottlenecks at their root, we propose \textbf{DeepLatent Reasoning (DLR)}, a latent-space bidirectional contrastive reinforcement learning framework that shifts the trial-and-error cost from expensive token-level full-sequence generation to a continuous latent manifold. Specifically, we introduce a lightweight assistant model that efficiently samples $K$ reasoning-chain encodings in the latent space. These encodings are filtered by a dual reward mechanism based on correctness and formatting; only high-value latent trajectories are fed into a \textbf{frozen main model} for single-pass decoding. To maximize reasoning diversity while maintaining coherence, we design a contrastive learning objective that enables directed exploration within the latent space. Since the main model's parameters remain frozen throughout optimization, the method eliminates catastrophic forgetting by construction. Experiments demonstrate that under comparable GPU computational budgets, DLR achieves more stable training convergence, supports longer-horizon reasoning chains, and enables the sustained accumulation of reasoning capabilities, offering a viable path toward reliable and scalable reinforcement learning for LLMs.
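The abstract names a bidirectional contrastive objective for directed latent exploration but does not give its loss. Below is a hedged stand-in under one plausible reading: a reward-conditioned, multi-positive InfoNCE term in which latents from high-reward trajectories attract one another and repel low-reward ones; the function name, the split by reward, and the temperature `tau` are all assumptions.

```python
# Hedged stand-in for the latent contrastive term; the exact bidirectional
# formulation is not given in the abstract.
import torch
import torch.nn.functional as F

def contrastive_loss(z_pos, z_neg, tau=0.1):
    """Multi-positive InfoNCE over latent trajectories.

    z_pos: (P, D) latents from high-reward trajectories (requires P >= 2).
    z_neg: (N, D) latents from low-reward trajectories.
    """
    z_pos = F.normalize(z_pos, dim=-1)
    z_neg = F.normalize(z_neg, dim=-1)
    pos_sim = (z_pos @ z_pos.T) / tau                   # anchor vs. positives
    neg_sim = (z_pos @ z_neg.T) / tau                   # anchor vs. negatives
    mask = torch.eye(z_pos.shape[0], dtype=torch.bool)
    pos_sim = pos_sim.masked_fill(mask, float('-inf'))  # drop self-similarity
    # Each anchor attracts the remaining positives and repels all negatives.
    numer = torch.logsumexp(pos_sim, dim=1)
    denom = torch.logsumexp(torch.cat([pos_sim, neg_sim], dim=1), dim=1)
    return (denom - numer).mean()

# Usage: split the K sampled latents by their dual-reward score.
z, r = torch.randn(8, 64), torch.randn(8)
hi, lo = z[r > r.median()], z[r <= r.median()]
print(contrastive_loss(hi, lo))
```

Gradients from this term would flow back into the assistant's sampling distribution, pushing it to spread probability mass across distinct high-reward regions of the latent manifold rather than collapsing onto a single mode.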