Steering Risk Preferences in Large Language Models by Aligning Behavioral and Neural Representations

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Controlling the risk preferences of large language models (LLMs) without fine-tuning or retraining remains challenging, owing to the lack of interpretable, systematic mechanisms for preference modulation. Method: We propose a dual-path behavioral–neural representational alignment framework: (1) LLM-driven MCMC sampling constructs interpretable behavioral representations of risk decisions; (2) concurrent analysis of residual-stream activations in Transformer layers extracts neural activation patterns; and (3) aligning the behavioral and neural representations identifies and constructs "steering vectors" across risk dimensions (e.g., conservative vs. risk-seeking tendencies). Contribution/Results: The method achieves an average 32.7% accuracy improvement across diverse risk-related tasks, is robust to input perturbations, and generalizes across distinct LLM architectures. It establishes the first systematic, representation-driven paradigm for controllable and interpretable preference editing, moving beyond opaque, ad hoc interventions toward principled, neuro-behaviorally grounded control.
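To make the steering-vector idea concrete, here is a minimal sketch of the standard difference-of-means construction and its application to a residual-stream activation. All data here are toy random arrays standing in for real layer activations; the function name `apply_steering` and the dimensions are illustrative assumptions, not the paper's implementation (the paper derives its vectors from behavioral–neural alignment rather than a plain activation contrast).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy residual-stream width (real models use thousands of dimensions)

# Hypothetical activations collected at one layer for two contrastive
# prompt sets (random data standing in for real model activations).
risk_seeking_acts = rng.normal(0.5, 1.0, size=(16, d))
risk_averse_acts = rng.normal(-0.5, 1.0, size=(16, d))

# Difference-of-means steering vector: points from the risk-averse
# activation cluster toward the risk-seeking one.
steer = risk_seeking_acts.mean(axis=0) - risk_averse_acts.mean(axis=0)
steer /= np.linalg.norm(steer)

# Steering: add the scaled vector to a fresh activation at inference time.
def apply_steering(activation, vector, alpha):
    return activation + alpha * vector

x = rng.normal(size=d)
x_steered = apply_steering(x, steer, alpha=4.0)

# The steered activation projects more strongly onto the steering direction.
print(x @ steer < x_steered @ steer)  # True
```

The scaling coefficient `alpha` controls modulation strength; negative values would push outputs toward the opposite (conservative) tendency.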

📝 Abstract
Changing the behavior of large language models (LLMs) can be as straightforward as editing the Transformer's residual streams using appropriately constructed "steering vectors." These modifications to internal neural activations, a form of representation engineering, offer an effective and targeted means of influencing model behavior without retraining or fine-tuning the model. But how can such steering vectors be systematically identified? We propose a principled approach for uncovering steering vectors by aligning latent representations elicited through behavioral methods (specifically, Markov chain Monte Carlo with LLMs) with their neural counterparts. To evaluate this approach, we focus on extracting latent risk preferences from LLMs and steering their risk-related outputs using the aligned representations as steering vectors. We show that the resulting steering vectors successfully and reliably modulate LLM outputs in line with the targeted behavior.
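The alignment step, mapping behaviorally elicited representations onto their neural counterparts, can be sketched as fitting a linear map between the two spaces. The sketch below uses ordinary least squares on toy data; the variable names (`B`, `N`, `W`, `risk_axis`) and the choice of a plain linear map are assumptions for illustration, not the paper's exact alignment procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_b, d_n = 64, 4, 8  # items, behavioral dim, neural dim (toy sizes)

# Hypothetical behavioral representations (e.g., coordinates recovered from
# MCMC-with-LLMs choice data) and neural activations for the same n items.
B = rng.normal(size=(n, d_b))
true_map = rng.normal(size=(d_b, d_n))
N = B @ true_map + 0.01 * rng.normal(size=(n, d_n))  # noisy neural counterparts

# Align the two spaces with a least-squares linear map W so that B @ W ~ N.
W, *_ = np.linalg.lstsq(B, N, rcond=None)

# A behavioral axis (e.g., a "risk-seeking" direction in behavior space)
# maps through W to a candidate steering direction in activation space.
risk_axis = np.zeros(d_b)
risk_axis[0] = 1.0
steering_vector = risk_axis @ W

print(steering_vector.shape)  # (8,)
```

Once such a map is fit, any interpretable direction in the behavioral space yields a corresponding direction in activation space that can be injected into the residual stream.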
Problem

Research questions and friction points this paper is trying to address.

Systematically identify steering vectors for LLMs
Align behavioral and neural representations for risk preferences
Modulate LLM outputs using aligned steering vectors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Editing Transformer residual streams with steering vectors
Aligning behavioral and neural latent representations
Modulating LLM outputs using aligned steering vectors