🤖 AI Summary
Base large language models (LLMs) lack explicit training for multi-step reasoning and thus struggle with complex inference. Existing hidden-state manipulation techniques—such as linear activation steering—impose rigid constraints that often induce distributional shift and text degeneration. To address this, we propose a gradient-driven latent-state optimization framework that formalizes reasoning-path steering as a probabilistic conditional generation problem with prior regularization, jointly preserving logical coherence and textual fluency. For the first time, we enable controllable chain-of-thought elicitation in a fully unsupervised setting by jointly optimizing a likelihood objective and a prior regularizer. Our approach integrates gradient-based optimization, latent-state fine-tuning, and conditional generative modeling. Empirical evaluation across mathematical, commonsense, and logical reasoning benchmarks demonstrates substantial improvements over state-of-the-art activation steering methods—yielding higher reasoning accuracy and superior generation quality—thereby unlocking the latent reasoning capabilities of base LLMs.
📝 Abstract
Chain-of-Thought (CoT) reasoning is a critical capability for large language models (LLMs), enabling them to tackle complex multi-step tasks. While base LLMs, pre-trained on general text corpora, often struggle with reasoning due to a lack of specialized training, recent studies reveal their latent reasoning potential tied to hidden states. However, existing hidden state manipulation methods, such as linear activation steering, suffer from limitations due to their rigid and unconstrained nature, often leading to distribution shifts and degraded text quality. In this work, we propose a novel approach for eliciting CoT reasoning from base LLMs through hidden state manipulation grounded in probabilistic conditional generation. By reformulating the challenge as an optimization problem with a balanced likelihood and prior regularization framework, our method guides hidden states toward reasoning-oriented trajectories while preserving linguistic coherence. Extensive evaluations across mathematical, commonsense, and logical reasoning benchmarks demonstrate that our approach consistently outperforms existing steering methods, offering a theoretically principled and effective solution for enhancing reasoning capabilities in base LLMs.
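The core idea — optimizing a hidden state under a likelihood objective plus a prior penalty that keeps it near its original value — can be illustrated with a deliberately minimal toy sketch. Everything here is an assumption for illustration only: the frozen linear readout `W` stands in for an LM head, `h0` for a layer's hidden state, and `lam` for the regularization weight; none of these reflect the paper's actual model or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, v = 8, 5                      # toy hidden size and vocabulary size (assumptions)
W = rng.normal(size=(v, d))      # frozen toy readout, stand-in for the LM head
h0 = rng.normal(size=d)          # original hidden state, used as the prior mean
target = 2                       # token index the steered state should favor

def softmax(z):
    z = z - z.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def objective(h, lam=0.1):
    """Negative log-likelihood of the target token plus a Gaussian prior penalty."""
    p = softmax(W @ h)
    return -np.log(p[target]) + lam * np.sum((h - h0) ** 2)

def grad(h, lam=0.1):
    p = softmax(W @ h)
    onehot = np.eye(v)[target]
    # d(-log p[target])/dh = W^T (p - onehot); the prior term adds 2*lam*(h - h0)
    return W.T @ (p - onehot) + 2 * lam * (h - h0)

# Plain gradient descent on the latent state itself, not the model weights.
h = h0.copy()
for _ in range(200):
    h -= 0.05 * grad(h)
```

After the loop, `objective(h)` is lower than `objective(h0)`: the steered state assigns more probability to the target while the prior term keeps it from drifting arbitrarily far from `h0` — the same likelihood-versus-prior balance the abstract describes, in miniature.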