Weight-sparse transformers have interpretable circuits

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of mechanistic interpretability in language models by training weight-sparse transformers whose circuits are structurally transparent and human-interpretable. Methodologically, it combines sparsity-constrained training with task-driven pruning to isolate circuits, supported by fine-grained analysis of residual-stream channels and neuron activations to extract the subcircuit relevant to each task. Key contributions include: (i) the systematic construction of sparse, interpretable circuits in transformers whose neurons and residual channels correspond to natural-language concepts, connected by a small number of straightforwardly interpretable weights; (ii) empirical evidence that increasing weight sparsity trades capability for interpretability, and that scaling model size improves this capability-interpretability frontier; and (iii) preliminary results suggesting the method can be adapted to explain existing dense models, pointing toward a new paradigm for mechanistic interpretability.
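
To make the sparsity constraint concrete, here is a minimal, illustrative sketch (not the paper's actual training code): after every optimizer step, each neuron keeps only its k largest-magnitude incoming weights and the rest are set to exact zeros. The helper apply_topk_mask_, the toy MLP, the dummy objective, and the choice k=8 are all assumptions made for the example.

```python
import torch
import torch.nn as nn

def apply_topk_mask_(linear: nn.Linear, k: int) -> None:
    """Keep only the k largest-magnitude incoming weights of each output neuron."""
    with torch.no_grad():
        w = linear.weight                      # shape: [out_features, in_features]
        k = min(k, w.shape[1])
        topk = w.abs().topk(k, dim=1).indices  # indices of the kept connections
        mask = torch.zeros_like(w)
        mask.scatter_(1, topk, 1.0)
        w.mul_(mask)                           # all other connections become exact zeros

# Toy training loop: re-apply the mask after every optimizer step.
mlp = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, 256))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for _ in range(100):
    x = torch.randn(32, 256)
    loss = (mlp(x) - x).pow(2).mean()          # dummy objective, stands in for language modelling
    opt.zero_grad()
    loss.backward()
    opt.step()
    for layer in mlp:
        if isinstance(layer, nn.Linear):
            apply_topk_mask_(layer, k=8)       # each neuron keeps at most 8 incoming weights
```

Re-applying the mask after each step keeps the effective connectivity sparse throughout training, rather than pruning once at the end.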

📝 Abstract
Finding human-understandable circuits in language models is a central goal of the field of mechanistic interpretability. We train models to have more understandable circuits by constraining most of their weights to be zeros, so that each neuron only has a few connections. To recover fine-grained circuits underlying each of several hand-crafted tasks, we prune the models to isolate the part responsible for the task. These circuits often contain neurons and residual channels that correspond to natural concepts, with a small number of straightforwardly interpretable connections between them. We study how these models scale and find that making weights sparser trades off capability for interpretability, and scaling model size improves the capability-interpretability frontier. However, scaling sparse models beyond tens of millions of nonzero parameters while preserving interpretability remains a challenge. In addition to training weight-sparse models de novo, we show preliminary results suggesting our method can also be adapted to explain existing dense models. Our work produces circuits that achieve an unprecedented level of human understandability and validates them with considerable rigor.
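
The task-specific pruning described in the abstract can likewise be pictured with a small sketch, assuming a simple first-order importance score; the paper's actual pruning procedure is not reproduced here, and the function prune_to_task_circuit, the score |w · dL/dw|, and the keep_frac parameter are illustrative assumptions.

```python
import torch
import torch.nn as nn

def prune_to_task_circuit(model: nn.Module, task_batch, loss_fn, keep_frac: float = 0.02) -> nn.Module:
    """Zero out all weights except the keep_frac fraction with the highest
    first-order importance |w * dL/dw| on a batch from the target task."""
    x, y = task_batch
    model.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()

    # Importance score for every weight in the model.
    scores = torch.cat([
        (p * p.grad).abs().flatten()
        for p in model.parameters() if p.grad is not None
    ])
    k = max(1, int((1 - keep_frac) * scores.numel()))
    threshold = scores.kthvalue(k).values

    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            mask = (p * p.grad).abs() > threshold
            p.mul_(mask.to(p.dtype))   # weights outside the isolated circuit become exact zeros
    model.zero_grad()
    return model
```

In practice one would re-evaluate the task loss after pruning and adjust keep_frac to find the smallest subnetwork that still performs the task.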
Problem

Research questions and friction points this paper is trying to address.

How to find human-understandable circuits in language models via weight sparsity
Weight sparsity trades off model capability against human understandability
Scaling sparse models beyond tens of millions of nonzero parameters while preserving interpretability remains challenging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training weight-sparse transformers by constraining most weights to zero, so each neuron has only a few connections
Pruning trained models to isolate task-specific, interpretable circuits
Scaling model size to improve the capability-interpretability frontier
Authors
Leo Gao (EleutherAI)
Achyuta Rajaram (OpenAI, San Francisco, California, United States)
Jacob Coxon (OpenAI, San Francisco, California, United States)
Soham V. Govande (OpenAI, San Francisco, California, United States)
Bowen Baker (OpenAI)
Dan Mossing (OpenAI, San Francisco, California, United States)