Beyond Components: Singular Vector-Based Interpretability of Transformer Circuits

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing interpretability methods treat attention heads and MLPs as monolithic units, overlooking their internal, fine-grained functional substructure. Method: We propose a fine-grained interpretability framework based on singular value decomposition (SVD) that decomposes model computation units into orthogonal low-rank directions, thereby isolating coexisting, independent subfunctions (e.g., multiple distinct computational pathways within a single "name mover" head or MLP layer). Contribution/Results: On canonical tasks, including Indirect Object Identification (IOI), Gender Pronoun (GP), and Greater-Than (GT), we demonstrate that critical circuit nodes activate selectively along specific singular directions, confirming that model computations are highly distributed, structured, and composable. This work moves beyond conventional module-level analysis by introducing singular directions as fundamental interpretability primitives, establishing a scalable paradigm for mechanistic modeling of Transformer internals.

📝 Abstract
Transformer-based language models exhibit complex and distributed behavior, yet their internal computations remain poorly understood. Existing mechanistic interpretability methods typically treat attention heads and multilayer perceptron (MLP) layers, the building blocks of the transformer architecture, as indivisible units, overlooking the possibility of functional substructure learned within them. In this work, we introduce a more fine-grained perspective that decomposes these components into orthogonal singular directions, revealing superposed and independent computations within a single head or MLP. We validate our perspective on widely used standard tasks such as Indirect Object Identification (IOI), Gender Pronoun (GP), and Greater-Than (GT), showing that previously identified canonical functional heads, such as the name mover, encode multiple overlapping subfunctions aligned with distinct singular directions. Nodes in the computational graph that were previously identified as circuit elements show strong activation along specific low-rank directions, suggesting that meaningful computations reside in compact subspaces. While some directions remain challenging to interpret fully, our results highlight that transformer computations are more distributed, structured, and compositional than previously assumed. This perspective opens new avenues for fine-grained mechanistic interpretability and a deeper understanding of model internals.
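The core idea of decomposing a component into orthogonal singular directions can be sketched with plain numpy. This is a minimal illustration, not the paper's implementation: the matrix `W_OV` stands in for an attention head's combined output-value weight matrix, and all shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64
# Hypothetical head weight matrix standing in for a real OV matrix.
W_OV = rng.normal(size=(d_model, d_model))

# SVD: W_OV = U @ diag(S) @ Vt. The rows of Vt are orthogonal
# input-space singular directions, ordered by singular value.
U, S, Vt = np.linalg.svd(W_OV)

# The head's linear action on an input x splits into rank-1 terms:
#   W_OV @ x = sum_i S[i] * (Vt[i] @ x) * U[:, i]
x = rng.normal(size=d_model)
coeffs = S * (Vt @ x)          # contribution strength per singular direction
full = W_OV @ x                # exact output
rank_k = 8
approx = U[:, :rank_k] @ coeffs[:rank_k]  # keep only the top-k directions

# Large |coeffs[i]| indicates the computation for this input lives in a
# compact subspace spanned by a few singular directions.
top_dirs = np.argsort(-np.abs(coeffs))[:5]
```

Summing all rank-1 terms recovers the full output exactly; inspecting which per-direction coefficients dominate on task inputs is one way to probe the "compact subspaces" the abstract refers to.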
Problem

Research questions and friction points this paper is trying to address.

Decomposing transformer components into orthogonal singular directions for interpretability
Identifying superposed independent computations within single attention heads and MLPs
Revealing distributed structured computations in compact subspaces of transformers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposing transformer components into singular directions
Revealing superposed computations within single components
Identifying compact subspaces for meaningful computations
Areeb Ahmad
Indian Institute of Technology Kanpur (IIT Kanpur)
Abhinav Joshi
Indian Institute of Technology Kanpur (IIT Kanpur)
Ashutosh Modi
Indian Institute of Technology Kanpur
Natural Language Processing · Machine and Deep Learning · Artificial Intelligence · Affective Computing · Legal AI