🤖 AI Summary
Poor scalability and high computational cost hinder existing graph neural networks (GNNs) on large-scale graph-level learning tasks. This paper proposes SHAKE-GNN, a scalable framework featuring: (i) a randomized multi-resolution graph decomposition mechanism grounded in Kirchhoff forests, enabling hierarchical, data-adaptive graph coarsening; and (ii) a lightweight hierarchical GNN architecture coupled with a data-driven parameter selection strategy, facilitating flexible trade-offs between model capacity and computational efficiency. Theoretical analysis establishes its linear time complexity. On multiple large-scale graph classification benchmarks, SHAKE-GNN achieves performance on par with or superior to state-of-the-art baselines, while reducing memory consumption by 41% and accelerating inference by 2.3× on average. The framework thus delivers both strong scalability and high accuracy at significantly lower resource cost.
📝 Abstract
Graph Neural Networks (GNNs) have achieved remarkable success across a range of learning tasks. However, scaling GNNs to large graphs remains a significant challenge, especially for graph-level tasks. In this work, we introduce SHAKE-GNN, a novel scalable graph-level GNN framework based on a hierarchy of Kirchhoff forests, a class of random spanning forests used to construct stochastic multi-resolution decompositions of graphs. SHAKE-GNN produces multi-scale representations, enabling flexible trade-offs between efficiency and performance. We introduce an improved, data-driven strategy for selecting the trade-off parameter and analyse the time complexity of SHAKE-GNN. Experimental results on multiple large-scale graph classification benchmarks demonstrate that SHAKE-GNN achieves competitive performance while offering improved scalability.
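To make the Kirchhoff-forest idea concrete, the sketch below samples a random spanning forest with Wilson-style loop-erased random walks and a killing rate `q` (the classical construction of Kirchhoff forests), then coarsens the graph by contracting each tree to a super-node. This is an illustrative sketch of the general technique only, not the paper's implementation; all function names and the choice of `q` here are assumptions. Larger `q` yields more roots and hence a finer partition, which is the kind of resolution knob the trade-off parameter controls.

```python
import random
from collections import defaultdict

def sample_kirchhoff_forest(adj, q, rng=None):
    """Sample a random spanning forest of an undirected graph.

    adj: dict mapping node -> list of neighbours.
    q:   killing rate; at each step the walk becomes a root with
         probability q / (q + deg), so larger q -> finer partition.
    Returns a dict mapping each node to the root of its tree.
    """
    rng = rng or random.Random(0)
    root = {}  # node -> root of the tree containing it
    for start in adj:
        if start in root:
            continue
        path, u = [start], start  # loop-erased walk from `start`
        while True:
            if u in root:                      # absorbed by an existing tree
                r = root[u]
                break
            deg = len(adj[u])
            if rng.random() < q / (q + deg):   # killed: u becomes a root
                r = u
                break
            v = rng.choice(adj[u])
            if v in path:                      # loop erasure
                path = path[: path.index(v) + 1]
            else:
                path.append(v)
            u = v
        for w in path:                         # attach the walk to root r
            root[w] = r
    return root

def coarsen(adj, root):
    """Contract each tree to its root; keep only inter-tree edges."""
    coarse = defaultdict(set)
    for u, nbrs in adj.items():
        for v in nbrs:
            if root[u] != root[v]:
                coarse[root[u]].add(root[v])
    return {r: sorted(n) for r, n in coarse.items()}
```

Applying `coarsen` repeatedly with different values of `q` yields the kind of stochastic multi-resolution hierarchy the abstract describes, with each level a smaller graph over the previous level's tree roots.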