🤖 AI Summary
This paper revisits knowledge conflicts in large language models, which arise when parametric memory contradicts contextual knowledge. Going beyond the common assumption that attention heads act exclusively as "memory heads" or "context heads", the authors uncover a *superposition of contextual information and parametric memory*: highly influential attention heads can contribute to both memory and context simultaneously. Building on this insight, they propose Just Run Twice (JUICE), a training-free, test-time attention intervention that steers a model toward either its parametric beliefs or the given context. JUICE identifies a set of reliable attention heads and uses a dual-run approach to mitigate the superposition effect, requiring neither fine-tuning nor architectural modification. Evaluated across 11 diverse datasets and 6 model architectures, JUICE sets new state-of-the-art performance with significant and consistent improvements across domains and conflict types, and a theoretical analysis of knowledge conflict and superposition further explains its effectiveness.
📝 Abstract
Language Models (LMs) often encounter knowledge conflicts when parametric memory contradicts contextual knowledge. Previous works attribute this conflict to the interplay between "memory heads" and "context heads", attention heads assumed to promote either memory or context exclusively. In this study, we go beyond this fundamental assumption by uncovering a critical phenomenon we term the "superposition of contextual information and parametric memory", where highly influential attention heads can simultaneously contribute to both memory and context. Building upon this insight, we propose Just Run Twice (JUICE), a test-time attention intervention method that steers LMs toward either parametric beliefs or contextual knowledge without requiring fine-tuning. JUICE identifies a set of reliable attention heads and leverages a dual-run approach to mitigate the superposition effects. Extensive experiments across 11 datasets and 6 model architectures demonstrate that JUICE achieves new state-of-the-art performance and robust generalization, with significant and consistent improvements across different domains under various conflict types. Finally, we theoretically analyze knowledge conflict and the superposition of contextual information and parametric memory in attention heads, which further elucidates the effectiveness of JUICE in these settings.
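To make the dual-run idea concrete, here is a minimal, hypothetical sketch of a head-wise test-time intervention. Everything below (the importance proxy, the `alpha` scaling rule, the function names) is an assumption for illustration, not the paper's actual JUICE algorithm: a probe run scores each attention head by how strongly its output aligns with a steering direction (e.g., a context-knowledge direction), and a second run amplifies the top-scoring heads' contributions.

```python
import numpy as np

def head_importance(head_outputs, direction):
    # Hypothetical proxy: score each head by the projection of its
    # output vector onto the steering direction.
    return np.array([float(h @ direction) for h in head_outputs])

def dual_run_intervene(run_heads, probe_heads, direction, top_k=2, alpha=2.0):
    """Run 1 (probe_heads): estimate per-head importance on a probe input.
    Run 2 (run_heads): rescale the top-k important heads by alpha and
    sum all head outputs into a combined residual-stream update."""
    scores = head_importance(probe_heads, direction)
    top = np.argsort(scores)[-top_k:]  # indices of most aligned heads
    steered = [h * (alpha if i in top else 1.0)
               for i, h in enumerate(run_heads)]
    return np.sum(steered, axis=0)
```

Scaling selected heads (rather than zeroing a fixed "context head" set) reflects the abstract's point that heads can carry both signals at once; the real method's selection and intervention rules may differ.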