AI Summary
Sparse autoencoders (SAEs) require extreme width to achieve neural interpretability, incurring prohibitive training costs. This paper proposes Switch SAE, a novel architecture that introduces sparse Mixture-of-Experts (MoE) routing into SAEs for the first time. It dynamically routes activations among multiple lightweight expert SAEs, enabling efficient scaling of feature capacity under a fixed compute budget. The method also supports analysis of cross-expert feature disentanglement and sharing, combined with geometric modeling of features and interpretability evaluation. Experiments show that, under identical training budgets, Switch SAE reduces reconstruction error by up to 37% versus standard SAEs and other variants, scales the feature count by over 10×, and preserves human interpretability. Its core innovation is an MoE-driven modular sparse coding paradigm that substantially relaxes the traditional trade-off between reconstruction fidelity and sparsity.
Abstract
Sparse autoencoders (SAEs) are a recent technique for decomposing neural network activations into human-interpretable features. However, in order for SAEs to identify all features represented in frontier models, it will be necessary to scale them up to very high width, posing a computational challenge. In this work, we introduce Switch Sparse Autoencoders, a novel SAE architecture aimed at reducing the compute cost of training SAEs. Inspired by sparse mixture-of-experts models, Switch SAEs route activation vectors between smaller "expert" SAEs, enabling SAEs to efficiently scale to many more features. We present experiments comparing Switch SAEs with other SAE architectures, and find that Switch SAEs deliver a substantial Pareto improvement in the reconstruction vs. sparsity frontier for a given fixed training compute budget. We also study the geometry of features across experts, analyze features duplicated across experts, and verify that Switch SAE features are as interpretable as features found by other SAE architectures.
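To make the routing idea concrete, here is a minimal numpy sketch of one forward pass of a Switch-style SAE: a router picks a single expert SAE for each activation vector (top-1 routing, as in Switch Transformers), that expert applies a TopK sparse encoder, and its decoder reconstructs the input. All parameter names, the tied decoder, and the router-probability scaling are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: model width, number of experts,
# features per expert, and active features per input.
d_model, n_experts, d_expert, k = 16, 4, 64, 8

# Illustrative parameters (random init; real training would learn these).
W_router = rng.standard_normal((n_experts, d_model)) / np.sqrt(d_model)
W_enc = rng.standard_normal((n_experts, d_expert, d_model)) / np.sqrt(d_model)
W_dec = np.transpose(W_enc, (0, 2, 1)).copy()  # tied decoder, for simplicity

def switch_sae_forward(x):
    """Route x to one expert SAE, encode with TopK sparsity, reconstruct."""
    logits = W_router @ x
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                        # softmax over experts
    e = int(np.argmax(probs))                   # top-1 routing: one expert per input
    pre = W_enc[e] @ x                          # expert encoder pre-activations
    topk_idx = np.argsort(pre)[-k:]             # keep the k largest pre-activations
    f = np.zeros(d_expert)
    f[topk_idx] = np.maximum(pre[topk_idx], 0)  # TopK + ReLU sparse feature vector
    x_hat = probs[e] * (W_dec[e] @ f)           # scale by router prob so routing gets gradient
    return e, f, x_hat

x = rng.standard_normal(d_model)
expert, features, x_hat = switch_sae_forward(x)
```

Because only one expert's encoder and decoder run per input, the compute per activation stays roughly that of a single small SAE while total feature capacity grows with the number of experts.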