Group Equivariance Meets Mechanistic Interpretability: Equivariant Sparse Autoencoders

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sparse autoencoders (SAEs) work well for interpreting language models, but applying them directly to scientific data with group symmetries (e.g., rotational symmetry) ignores the underlying symmetry structure, degrading feature disentanglement and downstream utility. Method: The paper proposes the Adaptive Equivariant Sparse Autoencoder (AESAE), which integrates group-equivariance theory into the SAE architecture. AESAE learns how neural activations transform under group actions and enforces this via matrix-representation consistency constraints on encoder outputs, with learnable parameters that adapt the strength of the equivariance constraint; training is performed on synthetic images. Contribution/Results: AESAE significantly outperforms standard SAEs on probe tasks, and its learned features show superior disentanglement and greater physical interpretability, supporting both the effectiveness and the necessity of embedding symmetry priors in interpretability tools for structured scientific data.
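The summary's core training idea can be sketched as an SAE whose loss adds an equivariance consistency term: the codes of a transformed input should match a learned matrix applied to the codes of the original, gated by a learnable equivariance strength. This is a minimal illustration under assumed names (`AdaptiveEquivariantSAE`, `aesae_loss`, the `rho` and `alpha` parameters are hypothetical), not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AdaptiveEquivariantSAE(nn.Module):
    """Sketch of an SAE with an equivariance consistency term (names assumed)."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.enc = nn.Linear(d_in, d_hidden)
        self.dec = nn.Linear(d_hidden, d_in)
        # Learned matrix approximating how codes transform under the group action
        self.rho = nn.Parameter(torch.eye(d_hidden))
        # Learnable equivariance strength, squashed to [0, 1] via sigmoid
        self.alpha = nn.Parameter(torch.zeros(()))

    def encode(self, x):
        return torch.relu(self.enc(x))

    def forward(self, x):
        z = self.encode(x)
        return self.dec(z), z

def aesae_loss(model, x, x_rot, l1=1e-3, equiv_weight=1.0):
    """x_rot is x after a group action (e.g., a rotated copy of each image)."""
    x_hat, z = model(x)
    recon = (x_hat - x).pow(2).mean()      # standard SAE reconstruction term
    sparsity = z.abs().mean()              # L1 sparsity on codes
    # Consistency term: encode(g . x) should match rho @ encode(x),
    # weighted by the learned equivariance strength.
    z_rot = model.encode(x_rot)
    strength = torch.sigmoid(model.alpha)
    equiv = strength * (z_rot - z @ model.rho.T).pow(2).mean()
    return recon + l1 * sparsity + equiv_weight * equiv
```

Because the strength parameter is learned, the penalty can relax when the base model's activations are only approximately equivariant, which matches the "adaptive" framing in the summary.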

📝 Abstract
Sparse autoencoders (SAEs) have proven useful in disentangling the opaque activations of neural networks, primarily large language models, into sets of interpretable features. However, adapting them to domains beyond language, such as scientific data with group symmetries, introduces challenges that can hinder their effectiveness. We show that incorporating such group symmetries into the SAEs yields features more useful in downstream tasks. More specifically, we train autoencoders on synthetic images and find that a single matrix can explain how their activations transform as the images are rotated. Building on this, we develop adaptively equivariant SAEs that can adapt to the base model's level of equivariance. These adaptive SAEs discover features that lead to superior probing performance compared to regular SAEs, demonstrating the value of incorporating symmetries in mechanistic interpretability tools.
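The abstract's observation that "a single matrix can explain how their activations transform as the images are rotated" amounts to fitting one linear map between paired activations by least squares. A minimal sketch on synthetic stand-in data (the paper fits this on real autoencoder activations; all variable names here are illustrative):

```python
import numpy as np

# A holds activations for a batch of images, A_rot the activations for the
# same images after a fixed rotation. We test whether one matrix M satisfies
# A_rot ≈ A @ M.T, i.e., a single linear map explains the transformation.
rng = np.random.default_rng(0)
d = 16
M_true = np.linalg.qr(rng.standard_normal((d, d)))[0]  # ground-truth transform
A = rng.standard_normal((200, d))
A_rot = A @ M_true.T

# Least-squares fit: solve A @ X = A_rot for X, then M_hat = X.T
X, *_ = np.linalg.lstsq(A, A_rot, rcond=None)
M_hat = X.T

# Relative residual near zero means one matrix accounts for the transform
residual = np.linalg.norm(A @ M_hat.T - A_rot) / np.linalg.norm(A_rot)
```

On real activations the residual would be small but nonzero; its size quantifies how equivariant the base model's representation actually is, which is what the adaptive SAE then accounts for.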
Problem

Research questions and friction points this paper is trying to address.

Adapting sparse autoencoders to scientific data with group symmetries
Explaining how activations transform as images undergo rotation
Developing adaptively equivariant SAEs for mechanistic interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporating group symmetries into sparse autoencoders
Explaining activation transformations with a single matrix
Developing adaptively equivariant SAEs for superior probing