🤖 AI Summary
Sparse autoencoders (SAEs) excel at interpreting language models, but when applied directly to scientific data with group symmetries, such as rotational symmetry, they lose feature disentanglement and downstream utility because they ignore the underlying symmetry structure.
Method: We propose the Adaptive Equivariant Sparse Autoencoder (AESAE), which integrates group-equivariance theory into the SAE architecture. AESAE learns how neural activations transform under group actions and enforces this structure through matrix-representation consistency constraints on encoder outputs during training on synthetic images. Learnable equivariance-strength parameters let the model adapt how strongly the constraint is applied.
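The consistency constraint described above can be illustrated with a minimal sketch: encoding a transformed input should match applying a learned matrix to the encoding of the original input, with a learnable scalar weighting the penalty. Everything below is hypothetical (a linear stand-in encoder `W_enc`, a cyclic shift as the group action, names `D` and `alpha`), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # latent dimension (hypothetical)

# Stand-in linear encoder for the SAE encoder.
W_enc = rng.normal(size=(d, d))

def encode(x):
    return x @ W_enc.T

# Learned matrix D(g): how latents should transform under the group action g.
D = rng.normal(size=(d, d))

# Learnable equivariance strength, kept in (0, 1) via a sigmoid of a raw parameter.
raw_alpha = 0.0
alpha = 1.0 / (1.0 + np.exp(-raw_alpha))

def rotate(x):
    # Stand-in group action on inputs (a cyclic shift, not an image rotation).
    return np.roll(x, 1, axis=-1)

def consistency_loss(x):
    # Compare "encode then transform" against "transform then encode".
    z_of_gx = encode(rotate(x))      # encoder applied to transformed input
    g_of_zx = encode(x) @ D.T        # learned matrix applied to encoding
    # alpha scales how strongly equivariance is enforced.
    return alpha * np.mean((z_of_gx - g_of_zx) ** 2)

x = rng.normal(size=(4, d))
loss = consistency_loss(x)
```

In a real training loop, `D` and `raw_alpha` would be optimized jointly with the SAE weights, so the model itself can dial the equivariance penalty up or down to match the base model.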
Contribution/Results: Experiments demonstrate that AESAE significantly outperforms standard SAEs on probe tasks. The learned features exhibit superior disentanglement and greater physical interpretability, empirically validating both the effectiveness and necessity of embedding symmetry priors into interpretability tools for structured scientific data.
📝 Abstract
Sparse autoencoders (SAEs) have proven useful in disentangling the opaque activations of neural networks, primarily large language models, into sets of interpretable features. However, adapting them to domains beyond language, such as scientific data with group symmetries, introduces challenges that can hinder their effectiveness. We show that incorporating such group symmetries into SAEs yields features that are more useful in downstream tasks. More specifically, we train autoencoders on synthetic images and find that a single matrix can explain how their activations transform as the images are rotated. Building on this, we develop adaptively equivariant SAEs that can adapt to the base model's level of equivariance. These adaptive SAEs discover features that lead to superior probing performance compared to regular SAEs, demonstrating the value of incorporating symmetries in mechanistic interpretability tools.
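The abstract's observation that a single matrix can explain how activations transform under rotation amounts to a linear fit: given paired activations before and after the group action, solve for one matrix in the least-squares sense and check the residual. The sketch below uses synthetic data with a planted matrix (all names and the noise level are hypothetical), not the paper's actual activations.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 6, 200  # activation dimension and number of samples (hypothetical)

# Synthetic "activations" on original images, and a planted transformation.
Z = rng.normal(size=(n, d))
M_true = rng.normal(size=(d, d))
# "Activations" on rotated images: linear transform plus a little noise.
Z_rot = Z @ M_true.T + 0.01 * rng.normal(size=(n, d))

# Least-squares fit of a single matrix M_hat with Z_rot ≈ Z @ M_hat.T
X, *_ = np.linalg.lstsq(Z, Z_rot, rcond=None)
M_hat = X.T

# Relative residual: how well one matrix explains the transformation.
resid = np.linalg.norm(Z @ M_hat.T - Z_rot) / np.linalg.norm(Z_rot)
```

When the residual is small, as here by construction, a single matrix is a good model of the group action on activations, which is the property the adaptive SAEs exploit.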