🤖 AI Summary
Existing linear-attention Graph Transformers suffer from poor node representation separability and limited classification performance due to low-rank projections and overly uniform attention distributions. To address this, we propose a Rank-Entropy enhancement framework: (1) a gated local graph network branch dynamically increases the effective rank of the attention matrix; and (2) a learnable logarithmic-power function explicitly reduces the entropy of the attention distribution, sharpening its focus. The framework preserves linear time complexity while ensuring theoretical interpretability and optimization controllability. Extensive experiments demonstrate that our model achieves state-of-the-art or highly competitive performance on both homophilic and heterophilic graph benchmarks, significantly outperforming existing linear-attention graph models.
📝 Abstract
Linear attention mechanisms have emerged as efficient alternatives to full self-attention in Graph Transformers, offering linear time complexity. However, existing linear attention models often suffer from a significant drop in expressiveness due to low-rank projection structures and overly uniform attention distributions. We theoretically prove that these properties reduce the class separability of node representations, limiting the model's classification ability. To address this, we propose a novel hybrid framework that enhances both the rank and the focus of attention. Specifically, we enhance linear attention by attaching a gated local graph network branch to the value matrix, thereby increasing the rank of the resulting attention map. Furthermore, to alleviate the excessive smoothing inherent in linear attention, we introduce a learnable log-power function into the attention scores, and theoretically show that it decreases the entropy of the attention distribution, sharpening focus and enhancing the separability of learned embeddings. Extensive experiments on both homophilic and heterophilic graph benchmarks demonstrate that our method achieves competitive performance while preserving the scalability of linear attention.
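To make the setting concrete, here is a minimal NumPy sketch of kernelized linear attention with a sharpening exponent applied to the feature maps. The `elu`-style feature map `phi` and the simple power `p` are illustrative assumptions standing in for the paper's feature map and learnable log-power function, whose exact forms are not given in the abstract; the point is only that attention weights can be computed in O(N·d²) without materializing the N×N matrix, and that sharpening the (positive) features makes the implicit attention distribution more peaked.

```python
import numpy as np

def phi(x):
    # ELU(x) + 1: a common positive feature map for linear attention
    # (an assumption here, not necessarily the paper's choice)
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, p=1.0):
    """Kernelized linear attention in O(N * d^2).

    p > 1 sharpens the positive kernel features elementwise; this is a
    hypothetical stand-in for the paper's learnable log-power function
    that reduces the entropy of the attention distribution.
    """
    Qf, Kf = phi(Q) ** p, phi(K) ** p   # sharpened kernel features
    KV = Kf.T @ V                       # (d, d_v): aggregate keys/values once
    Z = Qf @ Kf.sum(axis=0)             # per-query normalizer (N,)
    return (Qf @ KV) / Z[:, None]       # equals row-normalized (Qf Kf^T) V

rng = np.random.default_rng(0)
N, d = 6, 4
Q, K, V = rng.normal(size=(3, N, d))
out = linear_attention(Q, K, V, p=2.0)
print(out.shape)  # (6, 4)
```

Because `phi` is positive, the implicit attention matrix `A = (Qf Kf^T) / rowsum` is row-stochastic, so each output row is a convex combination of the value rows; the linear-time form above computes `A @ V` without ever forming `A`.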