Mitigating Content Effects on Reasoning in Language Models through Fine-Grained Activation Steering

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently conflate material inference (content plausibility) with formal inference (logical validity), producing content-driven reasoning biases that undermine their reliability and generalization in tasks demanding rigorous logical consistency. To address this, we propose Knowledge-Conditioned Activation Steering (K-CAST), a kNN-based, test-time, fine-grained activation steering method that dynamically decouples formal from material reasoning. K-CAST builds on a controllable syllogism dataset, layer-wise localization analysis, and contrastive activation steering. Evaluated across multiple mainstream LLMs, it achieves up to a 15% absolute improvement in formal reasoning accuracy, exhibits strong robustness to prompt variations, incurs minimal degradation in language modeling performance, and partially generalizes to out-of-distribution (OOD) tasks. Crucially, K-CAST mitigates reasoning biases in models where static intervention methods fail.
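The contrastive activation steering that K-CAST builds on can be illustrated with a minimal sketch. This is not the paper's code: it only shows the standard difference-of-means recipe, in which a steering vector is the mean activation gap between two contrastive prompt sets (here, toy 3-dimensional activations standing in for real hidden states), added at test time to a layer's hidden state with a strength parameter alpha.

```python
def mean_vector(rows):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def contrastive_steering_vector(pos_acts, neg_acts):
    """Difference of mean activations: v = mean(pos) - mean(neg)."""
    mp, mn = mean_vector(pos_acts), mean_vector(neg_acts)
    return [p - q for p, q in zip(mp, mn)]

def steer(hidden, v, alpha):
    """Test-time intervention at one layer: h' = h + alpha * v."""
    return [h + alpha * x for h, x in zip(hidden, v)]

# Toy activations (dimension 3); the labels are illustrative only.
pos = [[1.0, 0.0, 2.0], [3.0, 0.0, 4.0]]  # e.g. formally valid prompts
neg = [[0.0, 1.0, 0.0], [0.0, 3.0, 2.0]]  # e.g. invalid-but-plausible prompts
v = contrastive_steering_vector(pos, neg)  # [2.0, -2.0, 2.0]
h = steer([1.0, 1.0, 1.0], v, alpha=0.5)   # [2.0, 0.0, 2.0]
```

In practice the vectors are layer activations of an LLM at the layers the paper's localization analysis identifies, and alpha is the steering parameter being tuned or conditioned.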

📝 Abstract
Large language models (LLMs) frequently demonstrate reasoning limitations, often conflating content plausibility (i.e., material inference) with logical validity (i.e., formal inference). This can result in biased inferences, where plausible arguments are incorrectly deemed logically valid or vice versa. Mitigating this limitation is critical, as it undermines the trustworthiness and generalizability of LLMs in applications that demand rigorous logical consistency. This paper investigates the problem of mitigating content biases on formal reasoning through activation steering. Specifically, we curate a controlled syllogistic reasoning dataset to disentangle formal validity from content plausibility. After localising the layers responsible for formal and material inference, we investigate contrastive activation steering methods for test-time interventions. An extensive empirical analysis on different LLMs reveals that contrastive steering consistently supports linear control over content biases. However, we observe that a static approach is insufficient for improving all the tested models. We then exploit this controllability by dynamically determining the value of the steering parameters via fine-grained conditional methods. We find that conditional steering is effective on otherwise unresponsive models, achieving up to 15% absolute improvement in formal reasoning accuracy with a newly introduced kNN-based method (K-CAST). Finally, additional experiments reveal that steering for content effects is robust to prompt variations, incurs minimal side effects on language modeling capabilities, and can partially generalize to out-of-distribution reasoning tasks. Practically, this paper demonstrates that activation-level interventions can offer a scalable strategy for enhancing the robustness of LLMs, contributing towards more systematic and unbiased formal reasoning.
Problem

Research questions and friction points this paper is trying to address.

Mitigating content biases in LLM formal reasoning
Disentangling logical validity from content plausibility
Enhancing robustness via activation steering interventions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained activation steering mitigates content biases
Dynamic conditional methods improve formal reasoning accuracy
kNN-based K-CAST enhances robustness in LLMs
Marco Valentino
University of Sheffield
Natural Language Processing · Neurosymbolic AI · Explanation
Geonhee Kim
University of Manchester
Natural Language Processing · Mechanistic Interpretability
Dhairya Dalal
University of Galway, Ireland
Zhixue Zhao
University of Sheffield, UK
André Freitas
Idiap Research Institute, Switzerland; University of Manchester, UK; National Biomarker Centre, CRUK-MI, UK