Evaluating SAE interpretability without explanations

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current interpretability evaluation of sparse autoencoders (SAEs) relies heavily on generating natural language explanations, conflating a model's text-generation capability with genuine interpretability and lacking unified, objective standards. Method: We propose a language-free evaluation paradigm that directly measures the predictive generalization of SAE latents in novel contexts, thereby decoupling explanation generation from interpretability assessment. Our framework integrates sparse coding analysis, LLM-based contextual prediction testing, and cross-setting human evaluation as a controlled baseline. Contribution/Results: Experiments show that our metric agrees closely with human judgments (Spearman's ρ > 0.85) and significantly outperforms existing baselines. To our knowledge, this is the first standardized, language-free, and reproducible evaluation protocol for SAE interpretability, enabling rigorous, quantitative comparison across models and settings.

📝 Abstract
Sparse autoencoders (SAEs) and transcoders have become important tools for machine learning interpretability. However, measuring how interpretable they are remains challenging, with weak consensus about which benchmarks to use. Most evaluation procedures start by producing a single-sentence explanation for each latent. These explanations are then evaluated based on how well they enable an LLM to predict the activation of a latent in new contexts. This method makes it difficult to disentangle the explanation generation and evaluation process from the actual interpretability of the latents discovered. In this work, we adapt existing methods to assess the interpretability of sparse coders, with the advantage that they do not require generating natural language explanations as an intermediate step. This enables a more direct and potentially standardized assessment of interpretability. Furthermore, we compare the scores produced by our interpretability metrics with human evaluations across similar tasks and varying setups, offering suggestions for the community on improving the evaluation of these techniques.
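The abstract contrasts explanation-based scoring (an LLM predicts a latent's activations from a one-sentence explanation) with language-free scoring of the latents themselves. As a rough illustration of the language-free idea, the sketch below scores a latent by how well its raw activation values separate its top-activating contexts from random contexts, using AUROC as the separation measure. This is a hypothetical proxy metric written for this summary, not the paper's actual method; the function name and toy data are invented.

```python
import numpy as np

def activation_auroc(pos_acts, neg_acts):
    """Language-free interpretability proxy (illustrative only):
    AUROC of a latent's activation value as a separator between
    its top-activating contexts (pos) and random contexts (neg).
    Computed as the normalized Mann-Whitney U statistic."""
    pos = np.asarray(pos_acts, dtype=float)
    neg = np.asarray(neg_acts, dtype=float)
    # Probability that a random positive activation outranks
    # a random negative one, with ties counted as half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy data: a latent that fires cleanly on its concept scores 1.0,
# while an uninformative latent scores near 0.5.
clean = activation_auroc([0.9, 0.8, 0.95, 0.7], [0.0, 0.1, 0.05, 0.2])
print(clean)  # 1.0: perfect separation, no explanation needed
```

A score near 1.0 suggests the latent's activations generalize predictably across contexts; a score near 0.5 suggests the latent is no better than chance, all without generating any natural language explanation as an intermediate step.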
Problem

Research questions and friction points this paper is trying to address.

Measure interpretability of sparse autoencoders without explanations
Disentangle explanation generation from latent interpretability evaluation
Compare interpretability metrics with human evaluations for standardization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Assess interpretability without natural language explanations
Direct and standardized evaluation of sparse coders
Compare metrics with human evaluations for validation