🤖 AI Summary
This work investigates whether the performance of Graph Neural Networks (GNNs) is fundamentally constrained by the topology of their input graphs: specifically, how interactions between local topological features and the message-passing mechanism lead either to over-smoothing or to expressive node representations.
Method: We introduce *k-hop similarity*, a novel topological metric quantifying structural consistency across k-hop neighborhoods of nodes, and establish it as a critical topological prior governing GNN convergence behavior, over-smoothing thresholds, and representational capacity. Our analysis combines theoretical derivation within the message-passing framework with empirical validation across multiple benchmark datasets.
Contribution/Results: We demonstrate that local topological consistency, as measured by k-hop similarity, quantitatively predicts GNN training dynamics: high similarity promotes over-smoothing, whereas low similarity facilitates discriminative representation learning. This work offers a new perspective on the expressivity limits of GNNs and yields an interpretable, topology-based criterion for characterizing their fundamental representational boundaries.
📝 Abstract
Graph Neural Networks (GNNs) have demonstrated remarkable success in learning from graph-structured data. However, the influence of the input graph's topology on GNN behavior remains poorly understood. In this work, we explore whether GNNs are inherently limited by the structure of their input graphs, focusing on how local topological features interact with the message-passing scheme to produce global phenomena such as over-smoothing or expressive representations. We introduce the concept of $k$-hop similarity and investigate whether locally similar neighborhoods lead to consistent node representations. This interaction can result in either effective learning or inevitable over-smoothing, depending on the inherent properties of the graph. Our empirical experiments validate these insights, highlighting the practical implications of graph topology for GNN performance.