Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models

📅 2024-03-28
🏛️ arXiv.org
📈 Citations: 98
Influential: 11
🤖 AI Summary
Large language models (LLMs) are hard to interpret because existing circuit analyses rely on coarse, polysemantic black-box units such as neurons or attention heads. Method: The paper proposes a framework for discovering and editing sparse feature circuits, using human-interpretable, fine-grained sparse-autoencoder features as the atomic units of causal subnetworks. It combines gradient-based causal attribution, sparse feature encoding, human-guided intervention (SHIFT), unsupervised behavioral clustering, and circuit mapping to achieve scalable, verifiable mechanistic analysis. Results: The framework identifies thousands of interpretable sparse circuits, and ablating features that a human judges task-irrelevant significantly improves classifier generalization. Its core contribution is establishing the "interpretable feature–sparse causal circuit" paradigm, bridging model interpretability with rigorous mechanistic understanding.
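The discovery step scores every sparse-autoencoder (SAE) feature with a first-order approximation of its indirect effect on a metric m, roughly IE ≈ (∂m/∂a) · (a_patch − a_clean), so a single backward pass ranks all features at once. Below is a minimal, self-contained sketch of that approximation; the toy SAE weights, sum-based metric, and random inputs are illustrative stand-ins, not the paper's trained components.

```python
# Sketch of linear attribution for SAE features (toy stand-ins, not the paper's code):
#   IE(feature) ≈ (∂m/∂a) * (a_patch - a_clean)
import torch
import torch.nn as nn

d_model, n_features = 64, 512
sae_enc = nn.Linear(d_model, n_features)  # toy SAE encoder (assumption)
sae_dec = nn.Linear(n_features, d_model)  # toy SAE decoder (assumption)

def metric(resid: torch.Tensor) -> torch.Tensor:
    # Stand-in for a downstream metric such as a logit difference.
    return resid.sum()

def feature_attributions(resid_clean: torch.Tensor,
                         resid_patch: torch.Tensor) -> torch.Tensor:
    """Approximate each feature's indirect effect with one backward pass."""
    a_clean = torch.relu(sae_enc(resid_clean)).detach().requires_grad_(True)
    with torch.no_grad():
        a_patch = torch.relu(sae_enc(resid_patch))
    # Recompute the metric through the decoder so gradients reach the features.
    m = metric(sae_dec(a_clean))
    m.backward()
    # First-order estimate of the effect of patching each feature individually.
    return (a_clean.grad * (a_patch - a_clean)).sum(dim=0)

ie = feature_attributions(torch.randn(8, d_model), torch.randn(8, d_model))
top_features = ie.abs().topk(10).indices  # candidate circuit nodes
```

In the paper, the highest-scoring features (together with SAE error terms) become the nodes of the sparse circuit.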

📝 Abstract
We introduce methods for discovering and applying sparse feature circuits. These are causally implicated subnetworks of human-interpretable features for explaining language model behaviors. Circuits identified in prior work consist of polysemantic and difficult-to-interpret units like attention heads or neurons, rendering them unsuitable for many downstream applications. In contrast, sparse feature circuits enable detailed understanding of unanticipated mechanisms. Because they are based on fine-grained units, sparse feature circuits are useful for downstream tasks: We introduce SHIFT, where we improve the generalization of a classifier by ablating features that a human judges to be task-irrelevant. Finally, we demonstrate an entirely unsupervised and scalable interpretability pipeline by discovering thousands of sparse feature circuits for automatically discovered model behaviors.
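Because circuit nodes are individual features, editing reduces to a direct intervention on those features. The sketch below shows a SHIFT-style edit under stated assumptions: zero-ablate the features a human flags as task-irrelevant while preserving the SAE's reconstruction error, so everything except the flagged signal passes through unchanged. The function signature and module types are illustrative, not the paper's API.

```python
# Minimal sketch of a SHIFT-style ablation (illustrative names and signature).
import torch
import torch.nn as nn

def shift_ablate(resid: torch.Tensor, sae_enc: nn.Module, sae_dec: nn.Module,
                 irrelevant: list) -> torch.Tensor:
    """Zero out human-flagged features, keeping the SAE reconstruction error."""
    a = torch.relu(sae_enc(resid))   # feature activations
    err = resid - sae_dec(a)         # reconstruction error term, left untouched
    a_edit = a.clone()
    a_edit[..., irrelevant] = 0.0    # ablate the task-irrelevant features
    return sae_dec(a_edit) + err     # edited residual stream
```

In the paper, SHIFT removes gender-related features from a profession classifier trained on biased data, improving its generalization to data where the spurious correlation is broken.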
Problem

Research questions and friction points this paper is trying to address.

Discovering interpretable causal graphs in language models
Improving classifier generalization via task-irrelevant feature ablation
Scaling interpretability with unsupervised sparse feature circuits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discovering sparse feature circuits for interpretability
Using SHIFT to improve classifier generalization
Unsupervised scalable pipeline for circuit discovery (see the sketch below)
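To scale beyond hand-picked behaviors, the pipeline first groups contexts into behavior clusters and then runs the same attribution-based discovery on each cluster. A rough sketch follows, assuming contexts are already summarized as vectors (e.g., activation or gradient statistics) and using k-means as a simple stand-in for the paper's clustering step.

```python
# Illustrative sketch of the unsupervised pipeline: cluster contexts, then run
# circuit discovery per cluster. The vector representation and k-means choice
# are simplifying assumptions, not the paper's exact procedure.
import numpy as np
from sklearn.cluster import KMeans

def cluster_behaviors(context_vectors: np.ndarray, n_clusters: int = 50) -> dict:
    """Group contexts so each cluster can seed its own circuit discovery run."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(context_vectors)
    clusters = {c: np.where(labels == c)[0] for c in range(n_clusters)}
    # Each cluster's contexts then go through the attribution-based discovery
    # step (see the first sketch), yielding one sparse feature circuit per
    # automatically discovered behavior.
    return clusters
```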