🤖 AI Summary
This work investigates the foundational assumption of uniqueness in mechanistic interpretability (MI). Grounded in statistical identifiability theory, it systematically evaluates two dominant MI paradigms, "localize-then-interpret" and "hypothesize-then-align," on Boolean functions and small MLPs. We formally define the MI identifiability problem and uncover four sources of non-uniqueness: functionally equivalent circuits, semantically equivalent explanations, causally equivalent algorithms, and ambiguous subspace mappings. Empirical results demonstrate that a single network behavior admits multiple valid interpretations, challenging the strong uniqueness hypothesis. In response, we propose predictive fidelity and controllability, empirically grounded and pragmatic alternatives to uniqueness, as core evaluation criteria for interpretability. To operationalize this shift, we introduce a comprehensive analytical framework integrating causal alignment, circuit enumeration, subspace probing, and multi-criteria validation.
📝 Abstract
As AI systems are deployed in high-stakes applications, ensuring their interpretability is crucial. Mechanistic Interpretability (MI) aims to reverse-engineer neural networks by extracting human-understandable algorithms that explain their behavior. This work examines a key question: for a given behavior, and under MI's criteria, does a unique explanation exist? Drawing on identifiability in statistics, where parameters are uniquely inferred under specific assumptions, we explore the identifiability of MI explanations. We identify two main MI strategies: (1) "where-then-what," which isolates a circuit replicating model behavior before interpreting it, and (2) "what-then-where," which starts with candidate algorithms and uses causal alignment to search for neural activation subspaces implementing them. We test both strategies on Boolean functions and small multi-layer perceptrons, fully enumerating candidate explanations. Our experiments reveal systematic non-identifiability: multiple circuits can replicate the same behavior, one circuit can have multiple interpretations, several algorithms can align with the same network, and one algorithm can align with different subspaces. Is uniqueness necessary? A pragmatic view may require only that explanations meet predictive and manipulability standards; if uniqueness is essential for understanding, stricter criteria may be needed. We also discuss the inner interpretability framework, which validates explanations against multiple criteria. This work contributes to defining explanation standards in AI.
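To make the first source of non-identifiability concrete, here is a minimal, hypothetical illustration (not code from the paper): two structurally different Boolean "circuits" that realize exactly the same behavior (XOR), so behavior alone cannot single out one circuit as *the* explanation.

```python
from itertools import product

def circuit_a(x: bool, y: bool) -> bool:
    # XOR built from AND / OR / NOT gates
    return (x or y) and not (x and y)

def circuit_b(x: bool, y: bool) -> bool:
    # XOR built from NAND gates only (a different gate-level structure)
    def nand(a: bool, b: bool) -> bool:
        return not (a and b)
    t = nand(x, y)
    return nand(nand(x, t), nand(y, t))

# Exhaustive check over all inputs: the two circuits are behaviorally
# indistinguishable despite having different internal structure.
equivalent = all(
    circuit_a(x, y) == circuit_b(x, y)
    for x, y in product([False, True], repeat=2)
)
print(equivalent)  # → True
```

On small Boolean domains this kind of exhaustive enumeration is exactly what makes the identifiability question testable: every candidate circuit can be checked against every input.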