Interpretable and Sparse Linear Attention with Decoupled Membership-Subspace Modeling via MCR² Objective

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the issue of redundant encoding in existing MCR²-driven white-box Transformers, where the membership matrix and subspace matrix U are tightly coupled, leading to suboptimal token projections. For the first time, this study explicitly decouples their functional relationship and derives an interpretable sparse linear attention operator—DMSA—from the gradient expansion of the MCR² objective. By integrating structured representation learning with sparse subspace modeling, the proposed method replaces the ToST attention module on ImageNet-1K and achieves a Top-1 accuracy improvement of 1.08%–1.45%, while simultaneously enhancing computational efficiency and model interpretability.
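The operator family DMSA belongs to, kernelized linear attention, replaces the quadratic softmax attention with feature-mapped products so cost scales linearly in sequence length. The sketch below is a generic illustration of that family under assumed names (`linear_attention`, a simple ReLU feature map `phi`), not the paper's DMSA operator or its MCR²-derived sparsity structure:

```python
import numpy as np

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized linear attention: softmax(QK^T)V is replaced by
    phi(Q) (phi(K)^T V), dropping cost from O(n^2 d) to O(n d^2)."""
    Qp, Kp = phi(Q), phi(K)      # (n, d) nonnegative feature maps
    KV = Kp.T @ V                # (d, d) summary statistics over tokens
    Z = Qp @ Kp.sum(axis=0)      # (n,) per-query normalization terms
    return (Qp @ KV) / Z[:, None]

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

Because the `(d, d)` statistic `KV` is shared across all queries, the per-token work is independent of sequence length, which is the efficiency property the summary attributes to DMSA.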

📝 Abstract
The Maximal Coding Rate Reduction (MCR²)-driven white-box Transformer, grounded in structured representation learning, unifies interpretability and efficiency, providing a reliable white-box solution for visual modeling. However, in existing designs, tight coupling between the "membership matrix" and the "subspace matrix U" in MCR² causes redundant coding under incorrect token projection. To this end, we decouple the functional relationship between the "membership matrix" and the "subspaces U" in the MCR² objective and derive an interpretable sparse linear attention operator from unrolled gradient descent of the optimized objective. Specifically, we propose to directly learn the membership matrix from the inputs and subsequently derive sparse subspaces from the full space S. Consequently, gradient unrolling of the optimized MCR² objective yields an interpretable sparse linear attention operator: Decoupled Membership-Subspace Attention (DMSA). Experimental results on visual tasks show that simply replacing the attention module in the Token Statistics Transformer (ToST) with DMSA (which we refer to as DMST) not only achieves a faster coding rate reduction but also outperforms ToST by 1.08%-1.45% in top-1 accuracy on the ImageNet-1K dataset. Compared with vanilla Transformer architectures, DMST exhibits significantly higher computational efficiency and interpretability.
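For context, the MCR² objective the abstract unrolls can be written in its standard form (from Yu et al.'s coding-rate framework): expand the coding rate of all token representations Z while compressing each membership group. Here the Π_j are diagonal membership matrices, d the feature dimension, n the token count, and ε the allowed distortion; this is the generic objective, not the paper's decoupled variant:

```latex
\Delta R(\mathbf{Z},\boldsymbol{\Pi})
= \underbrace{\tfrac{1}{2}\log\det\!\Big(\mathbf{I} + \tfrac{d}{n\epsilon^{2}}\,\mathbf{Z}\mathbf{Z}^{\top}\Big)}_{R(\mathbf{Z}):\ \text{expand all tokens}}
\;-\; \sum_{j=1}^{k}
\underbrace{\tfrac{\operatorname{tr}(\boldsymbol{\Pi}_{j})}{2n}\,\log\det\!\Big(\mathbf{I} + \tfrac{d}{\operatorname{tr}(\boldsymbol{\Pi}_{j})\,\epsilon^{2}}\,\mathbf{Z}\boldsymbol{\Pi}_{j}\mathbf{Z}^{\top}\Big)}_{R_{c}(\mathbf{Z},\boldsymbol{\Pi}):\ \text{compress each group}}
```

The coupling the paper targets is visible here: each compression term depends on the memberships Π_j, and in prior white-box designs the subspaces that realize them are tied to the same parameters, which is what the decoupled formulation separates.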
Problem

Research questions and friction points this paper is trying to address.

MCR²
membership matrix
subspace matrix
redundant coding
interpretable attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled Membership-Subspace Attention
Maximal Coding Rate Reduction
Sparse Linear Attention
Interpretable Transformer
Structured Representation Learning