On the Generalizability of "Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals"

📅 2025-06-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the generality of the “mechanism competition” phenomenon—where factual knowledge retrieval competes with in-context counterfactual repetition—in language models. We conduct systematic cross-model experiments across architectures and scales (GPT-2, Pythia 6.9B, Llama 3.1 8B), complemented by attention head localization, ablation studies, prompt structure variations, and multi-domain task evaluation. Our results confirm the phenomenon’s broad applicability but reveal critical boundary conditions: (1) attention heads exhibit markedly reduced specialization in larger models (e.g., Llama 3.1 8B); (2) prompt structure and domain-specific biases significantly modulate competition dynamics, causing the original method to fail in certain domains. Thus, while we replicate and extend prior findings, we also precisely delineate the limits of mechanism competition—highlighting its dependence on model scale, architecture, prompting strategy, and data domain. These findings provide key empirical grounding for understanding how large language models balance parametric memory retrieval against contextual adaptation.

📝 Abstract
We present a reproduction study of "Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals" (Ortu et al., 2024), which investigates competition of mechanisms in language models between factual recall and counterfactual in-context repetition. Our study successfully reproduces their primary findings regarding the localization of factual and counterfactual information, the dominance of attention blocks in mechanism competition, and the specialization of attention heads in handling competing information. We reproduce their results on both GPT-2 (Radford et al., 2019) and Pythia 6.9B (Biderman et al., 2023). We extend their work in three significant directions. First, we explore the generalizability of these findings to even larger models by replicating the experiments on Llama 3.1 8B (Grattafiori et al., 2024), discovering greatly reduced attention head specialization. Second, we investigate the impact of prompt structure by introducing variations where we avoid repeating the counterfactual statement verbatim or we change the premise word, observing a marked decrease in the logit for the counterfactual token. Finally, we test the validity of the authors' claims for prompts of specific domains, discovering that certain categories of prompts skew the results by providing the factual prediction token as part of the subject of the sentence. Overall, we find that the attention head ablation proposed in Ortu et al. (2024) is ineffective for domains that are underrepresented in their dataset, and that the effectiveness varies based on model architecture, prompt structure, domain and task.
Problem

Research questions and friction points this paper is trying to address.

Investigates how language models handle factual recall and counterfactual repetition.
Explores generalizability of findings to larger models like Llama 3.1 8B.
Examines impact of prompt structure and domain on model performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reproduce findings on GPT-2 and Pythia 6.9B models
Extend study to Llama 3.1 8B with new insights
Investigate prompt structure impact on model behavior
Asen Dotsinski
University of Amsterdam
Udit Thakur
University of Amsterdam
Marko Ivanov
University of Amsterdam
Mohammad Hafeez Khan
University of Amsterdam
Maria Heuss
University of Amsterdam
Artificial Intelligence · Explainability · Fairness · Information Retrieval