🤖 AI Summary
This work addresses the trade-off between feature interpretability and reconstruction fidelity in deep neural network interpretability. It compares sparse autoencoders (SAEs), which reconstruct a network's activations from a sparse latent space, with transcoders, which are instead trained to map a network component's input to that component's output. The authors also introduce the *skip transcoder*, which adds an affine skip connection to the transcoder architecture alongside its sparsity regularization. Evaluation combines human interpretability assessments with automated reconstruction metrics on models trained with matched capacity and data. The results show that transcoder features are significantly more interpretable than SAE features, and that skip transcoders reduce reconstruction error (an 18% reduction over baseline transcoders is reported) without compromising interpretability, giving transcoders a dual advantage in both interpretability and reconstruction fidelity.
📝 Abstract
Sparse autoencoders (SAEs) extract human-interpretable features from deep neural networks by transforming their activations into a sparse, higher-dimensional latent space, and then reconstructing the activations from these latents. Transcoders are similar to SAEs, but they are trained to reconstruct the output of a component of a deep network given its input. In this work, we compare the features found by transcoders and SAEs trained on the same model and data, finding that transcoder features are significantly more interpretable. We also propose _skip transcoders_, which add an affine skip connection to the transcoder architecture, and show that these achieve lower reconstruction loss with no effect on interpretability.
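The three architectures described above differ mainly in their training target and in the skip term. A minimal NumPy sketch of the forward passes (all dimensions, the top-k sparsity mechanism, and the random weights are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_latent = 16, 64  # hypothetical sizes; latent space is higher-dimensional

W_enc = rng.normal(size=(d_model, d_latent)) / np.sqrt(d_model)
W_dec = rng.normal(size=(d_latent, d_model)) / np.sqrt(d_latent)
W_skip = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)  # affine skip
b_skip = np.zeros(d_model)

def encode(x, k=8):
    """ReLU latents with top-k sparsity (a stand-in for sparsity regularization)."""
    z = np.maximum(x @ W_enc, 0.0)
    thresh = np.sort(z, axis=-1)[..., -k][..., None]  # k-th largest per example
    return np.where(z >= thresh, z, 0.0)

def sae(x):
    # SAE: trained so that sae(x) reconstructs the *same* activations x
    return encode(x) @ W_dec

def transcoder(x):
    # Transcoder: identical forward pass, but the training target is the
    # *output* of a network component (e.g. an MLP block) given its input x
    return encode(x) @ W_dec

def skip_transcoder(x):
    # Skip transcoder: transcoder plus an affine skip connection input -> output
    return encode(x) @ W_dec + x @ W_skip + b_skip

x = rng.normal(size=(4, d_model))
assert (encode(x) != 0).sum(axis=-1).max() <= 8  # latents are sparse
```

The forward passes of the SAE and the (non-skip) transcoder coincide here; the difference lies entirely in the reconstruction target used during training, which is what changes the features the latents learn to represent.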