Improving Reasoning Performance in Large Language Models via Representation Engineering

📅 2025-04-28
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the challenge of precisely controlling large language models' (LLMs) reasoning capabilities without fine-tuning. We propose a representation-level intervention method that identifies task-relevant activations within the residual stream, constructs task-specific control vectors from them, and directly modifies representations during inference to enhance inductive, deductive, and mathematical reasoning. To our knowledge, this is the first systematic application of representation engineering to LLM reasoning control—relying solely on forward-pass activation extraction and residual-stream intervention, thereby revealing the intrinsic decomposability of reasoning abilities. Evaluated on Mistral-7B-Instruct and Pythia models, our approach consistently improves accuracy across diverse reasoning benchmarks and stabilizes logit distributions; its mechanistic validity is confirmed via KL-divergence and entropy analyses. All code and analytical tools are publicly released to support reproducible representation-intervention research.
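The summary above outlines the pipeline: extract residual-stream activations during a forward pass, derive a task-specific control vector from them, and add that vector back at inference. A minimal sketch of the derivation step, assuming a simple mean-difference construction over activations from correctly vs. incorrectly solved task instances (the function name `derive_control_vector` and the toy data are illustrative, not taken from the paper's released code):

```python
import numpy as np

def derive_control_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Mean-difference control vector: average residual-stream activation
    when the model solves the task correctly, minus the average when it
    does not. Unit-normalised so a scalar strength stays interpretable."""
    v = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return v / np.linalg.norm(v)

# Toy residual-stream activations: 8 samples each, hidden size 16.
rng = np.random.default_rng(0)
pos = rng.normal(loc=0.5, size=(8, 16))   # stands in for "correct" runs
neg = rng.normal(loc=-0.5, size=(8, 16))  # stands in for "incorrect" runs
v = derive_control_vector(pos, neg)
print(v.shape)  # (16,)
```

In practice the activations would be read from a chosen layer of the model's residual stream rather than sampled randomly; the mean-difference construction is one common choice in the representation-engineering literature, not necessarily the exact variant the paper uses.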

📝 Abstract
Recent advancements in large language models (LLMs) have resulted in increasingly anthropomorphic language concerning the ability of LLMs to reason. Whether reasoning in LLMs should be understood as inherently different from other forms of information processing is, however, widely debated. We propose utilizing a representation engineering approach wherein model activations are read from the residual stream of an LLM when processing a reasoning task. The activations are used to derive a control vector that is applied to the model as an inference-time intervention, modulating the representational space of the model to improve performance on the specified task. We publish the code for deriving control vectors and analyzing model representations. The method allows us to improve performance on reasoning benchmarks and assess how control vectors influence the final logit distribution of a model via metrics such as KL divergence and entropy. We apply control vectors to Mistral-7B-Instruct and a range of Pythia models on an inductive, a deductive, and a mathematical reasoning task. We show that an LLM can, to a certain degree, be controlled to improve its perceived reasoning ability by modulating activations. The intervention is dependent upon the ability to reliably extract the model's typical state when correctly solving a task. Our results suggest that reasoning performance can be modulated in the same manner as other information-processing tasks performed by LLMs, and demonstrate that we are capable of improving performance on specific tasks via a simple intervention on the residual stream with no additional training.
Problem

Research questions and friction points this paper is trying to address.

Enhancing reasoning in LLMs via activation modulation
Deriving control vectors to improve task performance
Assessing reasoning modulation via KL divergence metrics
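The last point above concerns how an intervention shifts the model's final logit distribution. A small sketch of the two metrics named in the abstract, KL divergence and entropy, computed over toy 4-token logit vectors (the logit values are illustrative only):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """KL(p || q): how far the steered distribution p drifts from q."""
    return float(np.sum(p * np.log(p / q)))

def entropy(p: np.ndarray) -> float:
    """Shannon entropy in nats; lower means a more peaked distribution."""
    return float(-np.sum(p * np.log(p)))

base = softmax(np.array([2.0, 1.0, 0.5, 0.1]))     # logits without intervention
steered = softmax(np.array([3.0, 0.8, 0.3, 0.0]))  # logits after steering
print(kl_divergence(steered, base), entropy(steered))
```

A well-behaved intervention would show a modest KL divergence from the baseline distribution together with a stable or reduced entropy; large KL spikes would suggest the control vector is disrupting rather than steering the model.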
Innovation

Methods, ideas, or system contributions that make the work stand out.

Representation engineering modulates model activations
Control vectors improve reasoning without training
Intervention on residual stream enhances performance
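The intervention named above is additive: the control vector is injected into the residual stream at a chosen layer during the forward pass (in a real model this is typically done with a forward hook, e.g. PyTorch's `register_forward_hook`). A minimal numpy sketch of the additive step, with `alpha` as a hypothetical scalar steering strength:

```python
import numpy as np

def apply_control_vector(hidden: np.ndarray, v: np.ndarray, alpha: float) -> np.ndarray:
    """Add alpha * v to the residual-stream state at every token position.

    hidden: (seq_len, hidden_size) residual-stream activations at one layer.
    v:      (hidden_size,) unit-norm control vector.
    """
    return hidden + alpha * v  # broadcasts v across the sequence dimension

hidden = np.zeros((4, 16))        # toy residual stream: 4 tokens, hidden size 16
v = np.ones(16) / np.sqrt(16)     # toy unit-norm control vector
out = apply_control_vector(hidden, v, alpha=4.0)
```

Because the edit is a single vector addition, it requires no gradient updates or additional training, which is what lets the method run as a pure inference-time intervention.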
Bertram Hojer
Department of Computer Science, IT University of Copenhagen, Denmark
Oliver Jarvis
Department of Computer Science, IT University of Copenhagen, Denmark
Stefan Heinrich
Associate Professor, IT University of Copenhagen
Machine Learning · Natural Language Processing · Cognitive Modelling