From Passive to Persuasive: Steering Emotional Nuance in Human-AI Negotiation

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited emotional nuance and anthropomorphism of large language models (LLMs) in human–AI negotiation, where existing fine-tuning–based approaches suffer from poor interpretability and parameter inefficiency. We propose a parameter-free activation engineering framework that localizes task-critical neurons via attribution-based patching and constructs fine-grained, interpretable emotion-directional activation signals using contrastive text pairs to encode affective differences. Evaluated on LLaMA-3.1-8B, our method significantly enhances expression of positive emotions (e.g., joy, trust) and increases first-person pronoun usage—thereby improving the AI’s perceived engagement, warmth, and persuasiveness in negotiation. Our core contribution is the first integration of attribution-driven neuron localization with contrastive emotion vector decomposition, achieving simultaneous gains in controllability, transparency, and real-time applicability without model modification.

📝 Abstract
Large Language Models (LLMs) demonstrate increasing conversational fluency, yet instilling them with nuanced, human-like emotional expression remains a significant challenge. Current alignment techniques often address only surface-level output or require extensive fine-tuning. This paper demonstrates that targeted activation engineering can steer LLaMA 3.1-8B to exhibit more human-like emotional nuance. We first employ attribution patching to identify causally influential components, locating a key intervention locus by observing activation patterns during diagnostic conversational tasks. We then derive emotional expression vectors from the difference in activations generated by contrastive text pairs (positive vs. negative examples of target emotions). Applying these vectors to new conversational prompts significantly enhances emotional characteristics: steered responses show increased positive sentiment (e.g., joy, trust) and more frequent first-person pronoun usage, indicative of greater personal engagement. Our findings offer a precise, interpretable framework and new directions for the study of conversational AI.
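The core mechanism the abstract describes, deriving an emotion-direction vector from contrastive activation differences and adding it to a hidden state at the chosen layer, can be sketched in a few lines. This is a toy NumPy illustration under assumed shapes, not the authors' code: the random arrays stand in for hidden states that would actually come from LLaMA 3.1-8B, and `alpha` is a hypothetical steering strength.

```python
import numpy as np

def steering_vector(pos_acts, neg_acts):
    # Emotion-direction vector: mean activation difference between
    # contrastive positive and negative examples of the target emotion.
    return np.mean(pos_acts, axis=0) - np.mean(neg_acts, axis=0)

def steer(hidden, vec, alpha=1.0):
    # Add the scaled direction to a hidden state at the intervention layer.
    return hidden + alpha * vec

# Toy stand-in activations (real ones would be LLM hidden states
# captured on contrastive text pairs, e.g. joyful vs. neutral phrasings).
rng = np.random.default_rng(0)
pos = rng.normal(0.5, 1.0, size=(8, 16))   # activations on positive examples
neg = rng.normal(-0.5, 1.0, size=(8, 16))  # activations on negative examples

v = steering_vector(pos, neg)
h = rng.normal(size=16)                    # a hidden state for a new prompt
h_steered = steer(h, v, alpha=2.0)
```

In a real deployment the addition would happen inside a forward hook at the localized layer, which is what makes the approach parameter-free: the model's weights are never modified.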
Problem

Research questions and friction points this paper is trying to address.

Steering emotional nuance in human-AI negotiation using targeted activation engineering
Enhancing emotional characteristics in LLM responses through contrastive text pairs
Developing interpretable framework for human-like emotional expression in conversational AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Targeted activation engineering steers emotional nuance
Attribution patching identifies causally influential model components
Emotional expression vectors derived from contrastive activations
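The attribution-patching step named above can be sketched as its standard first-order approximation: each component's causal effect is estimated as the activation difference between a clean and a corrupted run, weighted by the gradient of the task metric with respect to that activation. The NumPy toy below uses random stand-in values (the real activations and gradients would come from the model's diagnostic runs), so only the computation pattern, not the numbers, reflects the paper.

```python
import numpy as np

def attribution_scores(clean_acts, corrupt_acts, grads):
    # First-order attribution-patching estimate of causal effect per
    # component: (a_clean - a_corrupt) . dMetric/da, summed over hidden dims.
    return np.sum((clean_acts - corrupt_acts) * grads, axis=-1)

# Toy example: 4 candidate components with 16-dim activations each.
rng = np.random.default_rng(1)
clean = rng.normal(size=(4, 16))    # activations on the clean prompt
corrupt = rng.normal(size=(4, 16))  # activations on the corrupted prompt
grads = rng.normal(size=(4, 16))    # metric gradients w.r.t. activations

scores = attribution_scores(clean, corrupt, grads)
locus = int(np.argmax(np.abs(scores)))  # candidate intervention locus
```

Ranking components by `|score|` avoids running a separate patched forward pass per component, which is what makes attribution patching cheap enough for neuron-level localization.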