Sparse Autoencoder Neural Operators: Model Recovery in Function Spaces

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fundamental challenge of opaque representation mechanisms and poor interpretability in neural operators (NOs), which act on infinite-dimensional function spaces. We propose lifted-SAE, the first sparse autoencoder framework explicitly designed for infinite-dimensional function spaces. Methodologically, we extend conventional sparse autoencoders to the lifted space, integrating operator-structured priors and resolution-invariant inductive biases to enable stable, multiscale concept recovery. Theoretically, we leverage sparse model recovery theory to establish a principled framework for representation learning and analysis in function spaces. Empirically, lifted-SAE significantly accelerates the extraction of smooth concepts, improves the stability of training dynamics and the efficiency of inference, and remains robust across resolutions. To our knowledge, this is the first approach to provide a mechanistic, interpretable, and generalizable understanding of large-scale neural operators.
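
The paper's code is not shown here, so the following is a minimal, hypothetical PyTorch sketch of what a lifted SAE over 1-D function samples could look like: a pointwise lifting layer, Fourier-domain (FNO-style) spectral blocks standing in for the operator-structured prior, and an L1 penalty standing in for the sparse-recovery objective. Every name, layer choice, and hyperparameter below is an illustrative assumption, not the paper's architecture.

```python
# Hypothetical sketch of a "lifted" sparse autoencoder over 1-D function samples.
# All module names and hyperparameters are illustrative; the paper's actual
# architecture may differ.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Fourier-domain linear operator (FNO-style); acts on modes, not grid points."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weight = nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

    def forward(self, x):                       # x: (batch, channels, n_points)
        x_ft = torch.fft.rfft(x)                # to Fourier coefficients
        out_ft = torch.zeros_like(x_ft)
        m = min(self.modes, x_ft.shape[-1])
        out_ft[..., :m] = torch.einsum(         # mix channels on low modes only
            "bim,oim->bom", x_ft[..., :m], self.weight[..., :m])
        return torch.fft.irfft(out_ft, n=x.shape[-1])   # back to the input grid

class LiftedSAE(nn.Module):
    """Pointwise lift -> operator encoder -> sparse code -> operator decoder."""
    def __init__(self, width=32, modes=12, n_concepts=64):
        super().__init__()
        self.lift = nn.Conv1d(1, width, kernel_size=1)    # pointwise lifting
        self.enc_op = SpectralConv1d(width, modes)
        self.to_code = nn.Conv1d(width, n_concepts, kernel_size=1)
        self.from_code = nn.Conv1d(n_concepts, width, kernel_size=1)
        self.dec_op = SpectralConv1d(width, modes)
        self.proj = nn.Conv1d(width, 1, kernel_size=1)    # back to a scalar field

    def forward(self, f):                       # f: (batch, 1, n_points)
        h = self.enc_op(torch.relu(self.lift(f)))
        z = torch.relu(self.to_code(h))         # nonnegative sparse code
        g = self.dec_op(torch.relu(self.from_code(z)))
        return self.proj(g), z

# Sparse-recovery-style objective: reconstruction error plus L1 sparsity penalty.
model = LiftedSAE()
f = torch.randn(8, 1, 128)                      # 8 functions sampled on 128 points
recon, z = model(f)
loss = ((recon - f) ** 2).mean() + 1e-3 * z.abs().mean()
loss.backward()
```

Because the spectral blocks mix Fourier modes rather than grid points, the same weights apply to functions sampled at any resolution, which is the kind of resolution-invariant inductive bias the summary describes.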

📝 Abstract
We frame the problem of unifying representations in neural models as one of sparse model recovery and introduce a framework that extends sparse autoencoders (SAEs) to lifted spaces and infinite-dimensional function spaces, enabling mechanistic interpretability of large neural operators (NOs). While the Platonic Representation Hypothesis suggests that neural networks converge to similar representations across architectures, the representational properties of neural operators remain underexplored despite their growing importance in scientific computing. We compare the inference and training dynamics of SAEs, lifted-SAE, and SAE neural operators. We highlight how lifting and operator modules introduce beneficial inductive biases, enabling faster recovery, better reconstruction of smooth concepts, and robust inference across varying resolutions, a property unique to neural operators.
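
The abstract casts interpretability as sparse model recovery. One plausible way to write such an objective over a function space, in notation of our own rather than the paper's, treats the decoder atoms as dictionary functions:

```latex
% Hypothetical sparse-recovery objective; notation is illustrative, not the paper's.
\min_{z \in \mathbb{R}^{k}}
  \Big\| f - \sum_{j=1}^{k} z_{j}\,\phi_{j} \Big\|_{\mathcal{F}}^{2}
  + \lambda \lVert z \rVert_{1},
\qquad f,\ \phi_{j} \in \mathcal{F}.
```

Here f is the function to be explained, the phi_j are learned concept functions (decoder atoms), z is the sparse code, and the norm is that of the function space; an SAE encoder amortizes this minimization.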
Problem

Research questions and friction points this paper is trying to address.

Recovering sparse models in function spaces
Extending sparse autoencoders to infinite dimensions
Enabling mechanistic interpretability of neural operators
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends sparse autoencoders to function spaces
Introduces lifting and operator modules as beneficial inductive biases
Enables robust inference across varying resolutions (see the sketch after this list)
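
As an illustration of the resolution-robustness claim in the last item, a Fourier-domain operator layer accepts inputs sampled on any grid without retraining. The self-contained sketch below is hypothetical, not the paper's code; it applies one set of weights to three resolutions.

```python
# Illustrative check of resolution robustness: the same spectral weights
# process inputs sampled at any resolution. Hypothetical sketch only.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weight = nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

    def forward(self, x):                       # x: (batch, channels, n_points)
        x_ft = torch.fft.rfft(x)
        out_ft = torch.zeros_like(x_ft)
        m = min(self.modes, x_ft.shape[-1])
        out_ft[..., :m] = torch.einsum(
            "bim,oim->bom", x_ft[..., :m], self.weight[..., :m])
        return torch.fft.irfft(out_ft, n=x.shape[-1])

layer = SpectralConv1d(channels=4, modes=8)
for n in (64, 128, 512):                        # same weights, three grids
    x = torch.randn(2, 4, n)
    print(n, layer(x).shape)                    # output matches the input grid
```

The output grid always matches the input grid because the layer parameterizes a fixed number of Fourier modes rather than a fixed number of grid points.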