LLM Assertiveness can be Mechanistically Decomposed into Emotional and Logical Components

📅 2025-08-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the intrinsic mechanisms underlying overconfidence in large language models (LLMs) in high-stakes settings. Addressing the critical issue of poor uncertainty calibration in LLMs' assertive outputs, the study applies mechanistic interpretability techniques to fine-tuned Llama 3.2 models to identify key residual layers. Using human-annotated data and activation similarity analysis, it disentangles assertive representations into two orthogonal sub-components, *emotional* and *logical*, paralleling a dual-route psychological mechanism. The authors then construct targeted intervention vectors from these sub-components: emotional vectors broadly influence prediction accuracy, while logical vectors exert more localized effects. Causal steering experiments confirm their distinct, interpretable roles. This decomposition makes overconfident behavior more transparent and suggests a controllable pathway toward uncertainty-aware model calibration.

📝 Abstract
Large Language Models (LLMs) often display overconfidence, presenting information with unwarranted certainty in high-stakes contexts. We investigate the internal basis of this behavior via mechanistic interpretability. Using open-sourced Llama 3.2 models fine-tuned on human-annotated assertiveness datasets, we extract residual activations across all layers and compute similarity metrics to localize assertive representations. Our analysis identifies the layers most sensitive to assertiveness contrasts and reveals that high-assertive representations decompose into two orthogonal sub-components, emotional and logical clusters, paralleling the dual-route Elaboration Likelihood Model in psychology. Steering vectors derived from these sub-components show distinct causal effects: emotional vectors broadly influence prediction accuracy, while logical vectors exert more localized effects. These findings provide mechanistic evidence for the multi-component structure of LLM assertiveness and highlight avenues for mitigating overconfident behavior.
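The extraction-and-localization step the abstract describes can be illustrated with a minimal sketch. The snippet below assumes a standard difference-of-means approach to deriving a steering direction and a cosine-based separation score for ranking layers; the paper's exact similarity metric and extraction details are not given here, so the function names and choices are illustrative, not the authors' implementation.

```python
import numpy as np

def layer_separation(acts_high, acts_low):
    """Score each layer by how far apart the mean high- and
    low-assertive residual activations are (1 - cosine similarity).
    acts_high / acts_low: lists of (n_samples, d_model) arrays,
    one per layer. Larger score = more assertiveness-sensitive."""
    scores = []
    for h, l in zip(acts_high, acts_low):
        mu_h, mu_l = h.mean(axis=0), l.mean(axis=0)
        cos = mu_h @ mu_l / (np.linalg.norm(mu_h) * np.linalg.norm(mu_l))
        scores.append(1.0 - cos)
    return np.array(scores)

def steering_vector(acts_high, acts_low, layer):
    """Unit-norm difference-of-means steering vector at one layer."""
    v = acts_high[layer].mean(axis=0) - acts_low[layer].mean(axis=0)
    return v / np.linalg.norm(v)
```

In practice the activation lists would come from forward hooks on each transformer block's residual stream, run over contrastive assertive vs. hedged prompts; the most separated layer is then the natural site for intervention.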
Problem

Research questions and friction points this paper is trying to address.

Mechanistic decomposition of LLM assertiveness into emotional and logical components
Investigating internal basis of LLM overconfidence via interpretability methods
Identifying causal effects of emotional versus logical steering vectors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mechanistic interpretability analysis of residual activations across all layers
Decomposition of assertive representations into orthogonal emotional and logical sub-components
Steering vectors with distinct causal effects on model behavior
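The two interventions mentioned above, additive steering and causal ablation of a sub-component, can be sketched as operations on residual-stream activations. This is a generic sketch assuming unit-norm directions; `alpha` and the function names are illustrative, not taken from the paper.

```python
import numpy as np

def steer(hidden, direction, alpha=4.0):
    """Additive steering: push activations along a unit-norm
    direction with strength alpha. hidden: (seq_len, d_model)."""
    return hidden + alpha * direction

def ablate(hidden, direction):
    """Causal ablation: project the direction out of every
    position's activation, removing that component entirely."""
    v = direction / np.linalg.norm(direction)
    return hidden - np.outer(hidden @ v, v)
```

Applying `steer` with the emotional vector would be expected to modulate assertiveness broadly, while ablating the logical vector tests its localized causal role, matching the contrast the abstract reports.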
Hikaru Tsujimura
Department of Psychology, Cardiff University, Cardiff, UK
Arush Tagade
PhD Student, George Washington University
AI Safety