Eliciting Chain-of-Thought in Base LLMs via Gradient-Based Representation Optimization

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Base large language models (LLMs) lack explicit training for multi-step reasoning and thus struggle with complex inference. Existing hidden-state manipulation techniques—such as linear activation steering—impose rigid constraints that often induce distributional shift and text degeneration. To address this, we propose a gradient-driven latent state optimization framework that formalizes reasoning-path steering as a probabilistic conditional generation problem with prior regularization, jointly preserving logical coherence and textual fluency. For the first time, we enable controllable chain-of-thought elicitation in a fully unsupervised setting by optimizing both likelihood and prior-regularized objectives. Our approach integrates gradient-based optimization, latent-state fine-tuning, and conditional generative modeling. Empirical evaluation across mathematical, commonsense, and logical reasoning benchmarks demonstrates substantial improvements over state-of-the-art activation steering methods—yielding higher reasoning accuracy and superior generation quality—thereby unlocking the latent reasoning capabilities of base LLMs.

📝 Abstract
Chain-of-Thought (CoT) reasoning is a critical capability for large language models (LLMs), enabling them to tackle complex multi-step tasks. While base LLMs, pre-trained on general text corpora, often struggle with reasoning due to a lack of specialized training, recent studies reveal their latent reasoning potential tied to hidden states. However, existing hidden state manipulation methods, such as linear activation steering, suffer from limitations due to their rigid and unconstrained nature, often leading to distribution shifts and degraded text quality. In this work, we propose a novel approach for eliciting CoT reasoning from base LLMs through hidden state manipulation grounded in probabilistic conditional generation. By reformulating the challenge as an optimization problem with a balanced likelihood and prior regularization framework, our method guides hidden states toward reasoning-oriented trajectories while preserving linguistic coherence. Extensive evaluations across mathematical, commonsense, and logical reasoning benchmarks demonstrate that our approach consistently outperforms existing steering methods, offering a theoretically principled and effective solution for enhancing reasoning capabilities in base LLMs.
Problem

Research questions and friction points this paper is trying to address.

Enhancing reasoning in base LLMs via hidden state optimization
Overcoming limitations of rigid activation steering methods
Improving CoT reasoning while maintaining text quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes hidden states via gradient-based representation
Uses probabilistic conditional generation for reasoning
Balances likelihood and prior regularization framework
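The paper's exact objective is not given on this page, but the bullets above describe gradient ascent on a hidden state under two terms: a likelihood term that pushes the state toward reasoning-oriented continuations, and a prior-regularization term that keeps it close to the model's original state to avoid distribution shift. A minimal NumPy sketch of that idea, using a toy frozen linear readout in place of a real LLM (all names and the quadratic prior are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy stand-in for a frozen LM head: maps a hidden state to token logits.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))   # "vocab" of 5 tokens, hidden size 8
h0 = rng.normal(size=8)       # original hidden state (the prior mean)
target = 2                    # token the steered state should favor
lam = 0.1                     # prior-regularization strength (hypothetical)
lr = 0.5                      # step size

h = h0.copy()
for _ in range(200):
    p = softmax(W @ h)
    # Likelihood term: gradient of log p(target | h) w.r.t. h
    # is W[target] - sum_j p_j * W[j].
    grad_ll = W[target] - p @ W
    # Prior term: gradient of -lam * ||h - h0||^2 pulls h back toward h0,
    # preserving fluency by staying near the model's own distribution.
    grad_prior = -2.0 * lam * (h - h0)
    h += lr * (grad_ll + grad_prior)

p_before = softmax(W @ h0)[target]
p_after = softmax(W @ h)[target]
print(p_before, p_after)
```

The trade-off is controlled by `lam`: with `lam = 0` the state drifts freely (the failure mode the abstract attributes to unconstrained steering), while a large `lam` pins the state to `h0` and suppresses the reasoning signal.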
Zijian Wang
School of Computer Science, The University of Sydney
Yanxiang Ma
PhD Student, University of Sydney
Deep Learning · Adversarial Robustness · Image Classification
Chang Xu
School of Computer Science, The University of Sydney