Is Rewiring Actually Helpful in Graph Neural Networks?

📅 2023-05-31
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Deepening graph neural networks (GNNs) is hindered by oversmoothing and over-squashing—where the latter arises from inherent topological constraints on long-range information propagation and is often confounded with training dynamics (e.g., gradient vanishing), complicating rigorous evaluation of graph rewiring methods. This paper introduces the first training-free message-passing evaluation framework, explicitly decoupling over-squashing effects from optimization artifacts to isolate the intrinsic impact of topological rewiring. We systematically assess diverse rewiring strategies—including *k*-hop expansion and random edge addition—on node- and graph-level classification tasks. Leveraging parameter-free message passing and multi-scale graph structural analysis across multiple real-world datasets, our experiments reveal that most rewiring techniques yield no statistically significant performance gain. These findings challenge the widely held assumption of universal efficacy for rewiring and expose fundamental limitations in current rewiring paradigms.
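The core idea of the training-free evaluation framework is that message passing can be run with no learned parameters at all: node features are simply averaged with their neighbors' features for a fixed number of hops, so any change in downstream performance is attributable to the graph topology rather than to optimization artifacts. A minimal sketch of such parameter-free propagation (the function name and dict-based graph representation are illustrative assumptions, not the paper's implementation):

```python
def propagate(adj, feats, hops):
    """Parameter-free message passing: average each node's features with
    its neighbors' (plus a self-loop) for `hops` rounds.

    adj:   dict mapping node -> list of neighbor nodes
    feats: dict mapping node -> list[float] feature vector
    """
    x = {v: list(f) for v, f in feats.items()}
    for _ in range(hops):
        new_x = {}
        for v in adj:
            nbrs = adj[v] + [v]  # include the node itself (self-loop)
            dim = len(x[v])
            # mean aggregation over the neighborhood, no trainable weights
            new_x[v] = [sum(x[u][d] for u in nbrs) / len(nbrs)
                        for d in range(dim)]
        x = new_x
    return x
```

Because there are no weights to fit, the resulting representations can be fed directly to a simple classifier, and rewiring methods can be compared purely on how they reshape `adj`.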
📝 Abstract
Graph neural networks compute node representations by performing multiple message-passing steps that consist of local aggregations of node features. Having deep models that can leverage longer-range interactions between nodes is hindered by the issues of over-smoothing and over-squashing. In particular, the latter is attributed to the graph topology which guides the message-passing, causing a node representation to become insensitive to information contained at distant nodes. Many graph rewiring methods have been proposed to remedy or mitigate this problem. However, properly evaluating the benefits of these methods is made difficult by the coupling of over-squashing with other issues strictly related to model training, such as vanishing gradients. Therefore, we propose an evaluation setting based on message-passing models that do not require training to compute node and graph representations. We perform a systematic experimental comparison on real-world node and graph classification tasks, showing that rewiring the underlying graph rarely confers a practical benefit for message-passing.
Problem

Research questions and friction points this paper is trying to address.

Addressing over-smoothing and over-squashing in graph neural networks
Evaluating graph rewiring methods without training biases
Assessing practical benefits of rewiring in message-passing models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph rewiring to mitigate over-squashing
Training-free message-passing evaluation
Empirical comparison on real-world tasks
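One of the baselines assessed in the paper is random edge addition, which shortens average path lengths by inserting edges between uniformly sampled node pairs. A toy sketch under assumed conventions (undirected graph as a dict of neighbor sets; the function name and seeding are illustrative, not from the paper):

```python
import random

def random_edge_addition(adj, num_new_edges, seed=0):
    """Rewire an undirected graph by adding `num_new_edges` edges between
    uniformly sampled non-adjacent node pairs.

    adj: dict mapping node -> set of neighbor nodes (modified in place).
    Assumes the graph is not already complete.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    added = 0
    while added < num_new_edges:
        u, v = rng.sample(nodes, 2)
        if v not in adj[u]:          # skip existing edges and retry
            adj[u].add(v)
            adj[v].add(u)            # keep the graph undirected
            added += 1
    return adj
```

The paper's finding is that even such topology changes, evaluated with the training-free setup, rarely yield statistically significant gains over the original graph.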