🤖 AI Summary
This work investigates the internal mechanisms underlying overconfidence in large language models (LLMs) in high-stakes settings. Addressing the problem of poorly calibrated certainty in LLMs' assertive outputs, the study applies mechanistic interpretability techniques to fine-tuned Llama 3.2 models to identify the key residual layers. Using human-annotated data and activation-similarity analysis, it first disentangles assertive representations into two orthogonal subcomponents, *affective* (emotional) and *logical*, revealing a dual-path psychological mechanism. The authors then construct two targeted intervention vectors: affective vectors significantly modulate assertiveness strength, while logical vectors govern reasoning rigor. Causal ablation experiments confirm that the two vectors have distinct, interpretable effects. This decomposition makes overconfident behavior more transparent and establishes a controllable pathway for uncertainty-aware model calibration.
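The intervention-vector idea lends itself to a compact sketch. The snippet below is illustrative rather than the paper's code: the checkpoint ID, layer index, steering scale, and the two direction vectors (random stand-ins for the mean-difference directions one would estimate from the labeled activations) are all assumptions. It orthogonalizes a "logical" direction against an "affective" one (a Gram-Schmidt step) and adds the affective vector to the residual stream at a single layer during generation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.2-1B"  # assumed checkpoint, not confirmed by the paper
LAYER, SCALE = 12, 4.0                # illustrative layer index and steering strength

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

d_model = model.config.hidden_size
# Random stand-ins for the mean-difference directions one would extract
# from human-annotated assertive vs. non-assertive activations.
affective = torch.randn(d_model)
logical = torch.randn(d_model)
# Gram-Schmidt step: remove the affective component from the logical direction
# so the two intervention vectors are orthogonal.
logical = logical - (logical @ affective) / (affective @ affective) * affective
steer = SCALE * affective / affective.norm()

def add_steering(module, inputs, output):
    # Llama decoder layers return the hidden states (possibly inside a tuple);
    # adding the vector here edits the residual stream at this layer.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + steer.to(hidden.dtype)
    if isinstance(output, tuple):
        return (hidden,) + output[1:]
    return hidden

handle = model.model.layers[LAYER].register_forward_hook(add_steering)
ids = tok("Will it rain in Paris tomorrow?", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=40)[0], skip_special_tokens=True))
handle.remove()  # detach the hook to restore the unsteered model
```

A forward hook leaves the base weights untouched, so the intervention is easy to toggle; under the same assumptions, ablation amounts to subtracting a component's projection from the hidden states instead of adding a scaled vector.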
📝 Abstract
Large Language Models (LLMs) often display overconfidence, presenting information with unwarranted certainty in high-stakes contexts. We investigate the internal basis of this behavior via mechanistic interpretability. Using open-source Llama 3.2 models fine-tuned on human-annotated assertiveness datasets, we extract residual activations across all layers and compute similarity metrics to localize assertive representations. Our analysis identifies the layers most sensitive to assertiveness contrasts and reveals that high-assertiveness representations decompose into two orthogonal sub-components, an emotional cluster and a logical cluster, paralleling the dual-route Elaboration Likelihood Model in psychology. Steering vectors derived from these sub-components show distinct causal effects: emotional vectors broadly influence prediction accuracy, while logical vectors exert more localized effects. These findings provide mechanistic evidence for the multi-component structure of LLM assertiveness and highlight avenues for mitigating overconfident behavior.
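To make the localization step concrete, here is a minimal, hedged sketch of the kind of analysis the abstract describes, not the authors' pipeline: the model ID and the contrastive prompt pair are placeholders, and a real analysis would average over a labeled dataset rather than a single pair. It extracts residual-stream activations at every layer and ranks layers by how strongly an assertive phrasing diverges from a hedged one.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.2-1B"  # assumed checkpoint; any causal LM works here

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

def last_token_states(text: str) -> list[torch.Tensor]:
    """Residual-stream activation of the final token at every layer."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # out.hidden_states is a tuple of (num_layers + 1) tensors of shape [1, seq, d_model]
    return [h[0, -1] for h in out.hidden_states]

assertive = last_token_states("The answer is definitely 42, without question.")
hedged = last_token_states("The answer might be 42, but I am not certain.")

# A lower cosine similarity marks a layer as more sensitive to the
# assertiveness contrast, i.e. a candidate site for localization.
for layer, (a, h) in enumerate(zip(assertive, hedged)):
    print(f"layer {layer:2d}  cos_sim = {F.cosine_similarity(a, h, dim=0).item():.4f}")
```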