Fact Recall, Heuristics or Pure Guesswork? Precise Interpretations of Language Models for Fact Completion

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses a persistent ambiguity in interpretations of language models' (LMs') fact-completion behavior by distinguishing four prediction scenarios, ranging from exact fact recall through heuristics-based answers (e.g., inferring a birthplace from a Swedish-sounding name) to pure guesswork. The authors propose PrISM, a model-specific recipe for constructing diagnostic datasets in which each example is matched to exactly one scenario via a set of diagnostic criteria. Applying causal tracing (CT) to the four scenarios, they find that (i) each scenario produces a distinct internal attribution pattern; (ii) CT results aggregated over mixed examples may reflect only the scenario with the strongest measured signal, yielding distorted explanations; and (iii) evaluations over mixed pools can therefore overstate how uniformly a model recalls facts. The contributions are a reproducible framework for disentangling these prediction mechanisms, empirical evidence of the heterogeneity of LMs' factual processing, and tools for more granular, trustworthy attribution studies.
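The scenario definitions suggest a simple decision procedure. Below is a minimal, hypothetical sketch of how PrISM-style diagnostic criteria could assign an example to one scenario; the specific probes (the look-alike-name swap) and the chance_level threshold are illustrative assumptions, not the paper's actual recipe.

```python
# Hypothetical PrISM-style scenario labeling. The diagnostics and threshold
# below are illustrative assumptions; the paper defines its own model-specific
# criteria for each scenario.

def classify_prediction(correct: bool,
                        p_answer: float,
                        same_answer_for_lookalike_name: bool,
                        chance_level: float = 0.01) -> str:
    """Assign a fact-completion example to one prediction scenario.

    correct ........................ top-1 completion matches the gold object
    p_answer ....................... model probability of the predicted object
    same_answer_for_lookalike_name . prediction is unchanged when the subject
                                     is swapped for a fictitious but similar-
                                     sounding name (signals a name heuristic)
    """
    if not correct:
        return "other (incorrect completion)"
    if same_answer_for_lookalike_name:
        return "heuristics"        # e.g., "Swedish-sounding name -> Sweden"
    if p_answer < chance_level:
        return "guesswork"         # correct, but with near-chance confidence
    return "exact fact recall"

# Example: a confident, name-independent correct answer counts as fact recall.
print(classify_prediction(correct=True, p_answer=0.62,
                          same_answer_for_lookalike_name=False))
```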

📝 Abstract
Previous interpretations of language models (LMs) miss important distinctions in how these models process factual information. For example, given the query "Astrid Lindgren was born in" with the corresponding completion "Sweden", no difference is made between whether the prediction was based on having the exact knowledge of the birthplace of the Swedish author or assuming that a person with a Swedish-sounding name was born in Sweden. In this paper, we investigate four different prediction scenarios for which the LM can be expected to show distinct behaviors. These scenarios correspond to different levels of model reliability and types of information being processed - some being less desirable for factual predictions. To facilitate precise interpretations of LMs for fact completion, we propose a model-specific recipe called PrISM for constructing datasets with examples of each scenario based on a set of diagnostic criteria. We apply a popular interpretability method, causal tracing (CT), to the four prediction scenarios and find that while CT produces different results for each scenario, aggregations over a set of mixed examples may only represent the results from the scenario with the strongest measured signal. In summary, we contribute tools for a more granular study of fact completion in language models and analyses that provide a more nuanced understanding of how LMs process fact-related queries.
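The aggregation caveat in the abstract is easy to see with synthetic numbers. The toy example below (made-up peak locations and magnitudes, not the paper's results) averages two per-layer causal-tracing effect profiles and shows that the mixed-pool mean peaks exactly where the stronger scenario peaks, hiding the weaker one.

```python
# Toy illustration (synthetic numbers, not the paper's results) of how
# averaging causal-tracing effects over mixed examples can mask the weaker
# scenario: the mixed profile peaks where the dominant scenario peaks.
import numpy as np

layers = np.arange(24)

def effect_profile(peak: int, height: float, width: float = 2.0) -> np.ndarray:
    """Gaussian-shaped per-layer indirect effect (an assumed shape)."""
    return height * np.exp(-((layers - peak) ** 2) / (2 * width ** 2))

exact_recall = effect_profile(peak=6, height=0.50)  # strong, earlier peak
heuristics = effect_profile(peak=15, height=0.10)   # weaker, later peak
mixed = 0.5 * exact_recall + 0.5 * heuristics       # 50/50 mixed sample pool

print("peak layer, exact recall:", int(layers[exact_recall.argmax()]))  # 6
print("peak layer, heuristics:  ", int(layers[heuristics.argmax()]))    # 15
print("peak layer, mixed pool:  ", int(layers[mixed.argmax()]))         # 6
```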
Problem

Research questions and friction points this paper is trying to address.

Distinguish fact recall from guesswork in LMs
Analyze prediction scenarios using interpretability methods
Understand LM processing of fact-related queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

PrISM dataset construction for prediction scenarios
Causal tracing (CT) and information flow analysis (a minimal CT sketch follows this list)
Focus on mid-range and late MLP sublayers
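For concreteness, here is a minimal sketch of causal tracing in the style of Meng et al. (2022), the kind of per-layer analysis the paper runs separately for each PrISM scenario: corrupt the subject-token embeddings with noise, restore one clean hidden state at a chosen layer and position, and measure how much of the original answer probability returns. The model choice (gpt2), noise scale, and prompt are illustrative assumptions, not the paper's exact setup, and noise is resampled per call (real CT averages over many corrupted runs).

```python
# Minimal causal-tracing sketch on GPT-2 (Hugging Face transformers). Model
# choice, noise scale, and prompt are illustrative assumptions; the paper's
# point is that this analysis should be run per PrISM scenario, not on a
# mixed pool of examples.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "Astrid Lindgren was born in"
ids = tok(prompt, return_tensors="pt").input_ids
n_subject = len(tok("Astrid Lindgren").input_ids)  # subject spans tokens [0, n)

# Clean run: cache hidden states and the model's top answer.
with torch.no_grad():
    clean = model(ids, output_hidden_states=True)
clean_hidden = clean.hidden_states  # (n_layer + 1) tensors of (1, seq, dim)
answer_id = int(clean.logits[0, -1].argmax())

def patched_answer_prob(layer: int, pos: int, noise: float = 0.5) -> float:
    """Corrupt subject embeddings, restore one clean state, return p(answer)."""
    def corrupt(module, inputs, output):
        output = output.clone()
        output[0, :n_subject] += noise * torch.randn_like(output[0, :n_subject])
        return output

    def restore(module, inputs, output):
        hidden = output[0].clone()
        hidden[0, pos] = clean_hidden[layer + 1][0, pos]  # block L -> state L+1
        return (hidden,) + output[1:]

    h1 = model.transformer.wte.register_forward_hook(corrupt)
    h2 = model.transformer.h[layer].register_forward_hook(restore)
    try:
        with torch.no_grad():
            logits = model(ids).logits
    finally:
        h1.remove()
        h2.remove()
    return float(torch.softmax(logits[0, -1], dim=-1)[answer_id])

# Indirect effect of restoring the last subject token, layer by layer.
for layer in range(model.config.n_layer):
    print(layer, round(patched_answer_prob(layer, pos=n_subject - 1), 4))
```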