AI Summary
This study systematically evaluates the reliability of sparse autoencoders (SAEs) for mechanistic interpretability in large language models, with a focus on the generalization and robustness of their feature extraction and targeted intervention capabilities. We conduct the first full-stack stress test of open-source SAEs on Llama 3.1, examining multiple layers, diverse contexts, and varying intervention strengths, complemented by neural activation analysis and cross-layer behavioral assessment. While we successfully reproduce baseline effects, our findings reveal that SAE performance is highly sensitive to layer position, context, and intervention intensity. Critical limitations include difficulty disentangling semantically similar features and fragile intervention outcomes, indicating that current SAEs lack systematic reliability and are insufficiently robust for safety-critical applications.
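To make the evaluated pipeline concrete, the following is a minimal sketch of SAE feature extraction from a residual-stream activation. The dimensions and weights here are illustrative placeholders (randomly initialized); in the actual study the encoder and decoder weights come from pretrained open-source SAEs for Llama 3.1, and the SAE width depends on the specific release.

```python
# Minimal sketch of SAE feature extraction from a residual-stream activation.
# All weights and dimensions are placeholders, not the study's actual SAEs.
import torch

d_model, d_sae = 4096, 32768                 # Llama 3.1 8B hidden size; SAE width assumed

W_enc = torch.randn(d_model, d_sae) * 0.01   # encoder weights (placeholder)
b_enc = torch.zeros(d_sae)
W_dec = torch.randn(d_sae, d_model) * 0.01   # decoder weights (placeholder)
b_dec = torch.zeros(d_model)

def encode(resid: torch.Tensor) -> torch.Tensor:
    """Map an activation vector to sparse, non-negative feature activations."""
    return torch.relu((resid - b_dec) @ W_enc + b_enc)

def decode(features: torch.Tensor) -> torch.Tensor:
    """Reconstruct the original activation from the feature activations."""
    return features @ W_dec + b_dec

resid = torch.randn(d_model)                 # one token's activation at the hooked layer
feats = encode(resid)
recon = decode(feats)
print("active features:", int((feats > 0).sum()),
      "| reconstruction error:", float((recon - resid).norm()))
```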
Abstract
Recent work by Anthropic on mechanistic interpretability claims that large language models can be understood and controlled by extracting human-interpretable features from their neural activation patterns using sparse autoencoders (SAEs). If successful, this approach offers one of the most promising routes for human oversight in AI safety. We conduct an initial stress test of these claims by replicating Anthropic's main results with open-source SAEs for Llama 3.1. While we successfully reproduce basic feature extraction and steering capabilities, our investigation suggests that considerable caution is warranted regarding the generalizability of these claims. We find that feature steering exhibits substantial fragility, with sensitivity to layer selection, steering magnitude, and context. We observe non-standard activation behavior and demonstrate the difficulty of distinguishing thematically similar features from one another. While SAE-based interpretability produces compelling demonstrations in selected cases, current methods often fall short of the systematic reliability required for safety-critical applications. This suggests a necessary shift in focus from prioritizing interpretability of internal representations toward reliable prediction and control of model output. Our work contributes to a more nuanced understanding of what mechanistic interpretability has achieved and highlights fundamental challenges for AI safety that remain unresolved.
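For readers unfamiliar with the intervention being stress-tested, below is a hedged sketch of feature steering: a scaled SAE feature direction is added to the residual stream at a single layer during generation via a forward hook. The checkpoint name, layer index, steering coefficient, and (random) feature direction are assumptions for illustration; in a real run the direction would be the decoder row of a chosen SAE feature. The fragility results above concern exactly these choices of layer, magnitude, and context.

```python
# Hedged sketch of SAE feature steering on Llama 3.1 with Hugging Face transformers.
# Checkpoint, layer index, coefficient, and the random feature direction are
# placeholders; in practice the direction would be a row of the SAE decoder, W_dec[i].
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B"                 # assumed checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

layer_idx, coeff = 16, 8.0                             # steering layer and magnitude (assumptions)
feature_dir = torch.randn(model.config.hidden_size)
feature_dir = feature_dir / feature_dir.norm()         # unit-norm feature direction

def steer(module, inputs, output):
    # Decoder layers return a tuple whose first element is the residual-stream hidden states;
    # add the scaled feature direction to every position.
    hidden = output[0] + coeff * feature_dir.to(output[0].dtype).to(output[0].device)
    return (hidden,) + output[1:]

handle = model.model.layers[layer_idx].register_forward_hook(steer)
ids = tok("Tell me about bridges.", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=30)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```

Varying `layer_idx`, `coeff`, and the prompt in a sketch like this is the kind of sweep the stress test performs; the observed sensitivity to all three is what motivates the caution expressed above.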