Interpreting Language Models Through Concept Descriptions: A Survey

📅 2025-10-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the interpretability of large language model (LLM) components, including neurons and attention heads, as well as abstract features extracted by sparse autoencoders (SAEs). Method: the paper provides the first systematic survey of the field, organizing it around natural-language concept descriptions and unifying three strands: methods that use generator models to produce open-vocabulary descriptions, automated and human metrics for evaluating them, and the curation of supporting datasets. Contribution/Results: the synthesis exposes deficiencies in existing methods, particularly around causal attribution and evaluation rigor, and calls for a more causally grounded evaluation paradigm. The survey delivers a structured roadmap for LLM mechanistic transparency, moving explainable AI from phenomenological description toward principled, mechanism-based verification.

📝 Abstract
Understanding the decision-making processes of neural networks is a central goal of mechanistic interpretability. In the context of Large Language Models (LLMs), this involves uncovering the underlying mechanisms and identifying the roles of individual model components such as neurons and attention heads, as well as model abstractions such as the learned sparse features extracted by Sparse Autoencoders (SAEs). A rapidly growing line of work tackles this challenge by using powerful generator models to produce open-vocabulary, natural language concept descriptions for these components. In this paper, we provide the first survey of the emerging field of concept descriptions for model components and abstractions. We chart the key methods for generating these descriptions, the evolving landscape of automated and human metrics for evaluating them, and the datasets that underpin this research. Our synthesis reveals a growing demand for more rigorous, causal evaluation. By outlining the state of the art and identifying key challenges, this survey provides a roadmap for future research toward making models more transparent.
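
As a concrete illustration of the automated metrics the survey charts, here is a minimal sketch of simulation-based scoring: a judge model predicts per-token activations from the concept description alone, and the score is the correlation with the component's observed activations. The `simulate_activation` stub, the toy lexicon, and the data are illustrative assumptions, not the implementation of any surveyed system.

```python
import numpy as np

# Toy lexicon standing in for an LLM judge's world knowledge;
# purely an assumption for this sketch.
FRANCE_WORDS = {"Paris", "baguette", "Louvre", "Eiffel"}

def simulate_activation(description: str, token: str) -> float:
    """Hypothetical judge: rate (0-10) how strongly `token` matches
    `description`. A real pipeline would prompt an LLM here."""
    return 10.0 if token in FRANCE_WORDS else 0.0

def simulation_score(description, tokens, true_activations) -> float:
    simulated = np.array([simulate_activation(description, t) for t in tokens])
    observed = np.asarray(true_activations, dtype=float)
    # Score = Pearson correlation between simulated and observed activations.
    return float(np.corrcoef(simulated, observed)[0, 1])

tokens = ["Paris", "baguette", "carburetor", "Louvre"]
observed_acts = [8.5, 7.9, 0.3, 9.1]
print(simulation_score("words related to France", tokens, observed_acts))
```

A high correlation suggests the description lets the judge anticipate the component's behavior; a near-zero score flags a description that merely sounds plausible.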
Problem

Research questions and friction points this paper is trying to address.

Surveying methods to interpret neural network decision-making processes
Evaluating natural language concept descriptions for model components
Identifying challenges for rigorous causal evaluation of model transparency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using generator models for concept descriptions (a minimal sketch follows this list)
Evaluating descriptions with automated and human metrics
Providing a roadmap for research on model transparency
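
To make the first item concrete, here is a minimal sketch of the generation step: the inputs that most strongly activate a component are collected and assembled into a prompt for a generator model. Function names and data are illustrative assumptions, not the exact interface of any surveyed system.

```python
def build_description_prompt(top_examples):
    """`top_examples`: (text, activation) pairs for one neuron,
    attention head, or SAE feature, sorted by activation."""
    header = ("Below are text snippets with the activation each elicits "
              "in one model component. Describe, in one short phrase, "
              "the concept the component responds to.\n\n")
    lines = "\n".join(f"{act:5.2f}  {text}" for text, act in top_examples)
    return header + lines + "\n\nConcept description:"

top_examples = [
    ("the Eiffel Tower in Paris", 9.1),
    ("visiting the Louvre museum", 8.8),
    ("a fresh baguette from the boulangerie", 7.9),
]
# The assembled prompt would be sent to a generator LLM, whose reply
# is the open-vocabulary concept description.
print(build_description_prompt(top_examples))
```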
Nils Feldhus
TU Berlin, BIFOLD, DFKI (Guest)
Natural Language Processing · Interpretability · Explainable AI
Laura Kopf
BIFOLD – Berlin Institute for the Foundations of Learning and Data, Technische Universität Berlin