🤖 AI Summary
To address the trade-off between accuracy and interpretability in self-attributing neural networks (SANNs) on high-dimensional tasks, this paper proposes the Sum-of-Parts (SOP) framework: an unsupervised, end-to-end method that transforms any differentiable model into a *group-wise* self-attributing network, automatically learning semantically coherent feature groups. Theoretically, the authors establish for the first time that group-wise attribution can achieve zero attribution error, circumventing the error lower bound that fundamentally limits single-feature attribution. The paper introduces a differentiable grouping module and a group-level attribution propagation mechanism, along with a multi-granularity interpretability evaluation suite that combines quantitative metrics with semantic coherence analysis. SOP achieves state-of-the-art self-attribution performance on vision and language benchmarks; its learned groupings show strong semantic consistency across multiple validation metrics, and its explanations support model debugging and the discovery of novel physics signals in cosmological data analysis.
📝 Abstract
Self-attributing neural networks (SANNs) present a potential path towards interpretable models for high-dimensional problems, but often face significant trade-offs in performance. In this work, we formally prove a lower bound on the error of per-feature SANNs, and show that group-based SANNs can achieve zero error and thus high performance. Motivated by these insights, we propose Sum-of-Parts (SOP), a framework that transforms any differentiable model into a group-based SANN, where feature groups are learned end-to-end without group supervision. SOP achieves state-of-the-art performance for SANNs on vision and language tasks, and we validate that the groups are interpretable on a range of quantitative and semantic metrics. We further validate the utility of SOP explanations in model debugging and cosmological scientific discovery. Code is available at https://github.com/BrachioLab/sop.
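To make the "sum of parts" idea concrete, here is a minimal numpy sketch of a group-based self-attributing predictor. It is an illustration of the decomposition property only, not the paper's actual architecture: the masks here are input-independent and the per-group scorers are linear, whereas SOP learns soft feature groups end-to-end on top of an arbitrary differentiable backbone. All weights below are hypothetical toy parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 8, 3  # number of input features, number of feature groups

# Hypothetical toy parameters (illustrative only, not SOP's learned weights)
W_group = rng.normal(size=(K, d))   # logits for soft feature-group masks
w_score = rng.normal(size=(K, d))   # one linear scorer per group

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sop_forward(x):
    """Predict as an explicit sum of per-group contributions.

    Because the output is literally the sum of its parts, each
    contribution is an exact attribution for its feature group:
    the explanation has zero attribution error by construction.
    """
    masks = sigmoid(W_group)  # soft membership of each feature in each group
    contributions = np.array(
        [w_score[k] @ (masks[k] * x) for k in range(K)]
    )
    return contributions.sum(), contributions

x = rng.normal(size=d)
y, parts = sop_forward(x)
# The sum-of-parts identity holds exactly: y == sum of group contributions.
assert np.isclose(y, parts.sum())
```

The key point of the decomposition is that no post-hoc approximation is involved: the per-group contributions *are* the model's computation, which is why a group-based SANN can attain zero attribution error while a per-feature one provably cannot.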