Efficient GNN Training Through Structure-Aware Randomized Mini-Batching

📅 2025-04-25
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing mini-batch construction methods for Graph Neural Networks (GNNs) face a fundamental trade-off: random sampling disrupts graph structural locality, resulting in low GPU cache hit rates and irregular memory access patterns, while structured sampling improves efficiency but sacrifices randomness, harming convergence and model accuracy. To address this, the paper proposes COMM-RAND, a community-structure-aware, tunable random mini-batch construction method that combines fast community detection, constraint-guided random sampling, and cache-friendly subgraph batching to strike an adjustable balance between randomness and structural locality. COMM-RAND requires no modifications to GNN architectures and is compatible with mainstream frameworks such as PyTorch Geometric (PyG) and Deep Graph Library (DGL). Evaluated on four benchmark datasets, it achieves an average 1.8× training speedup (up to 2.76×) with only a 0.42% average accuracy degradation, while improving GPU cache utilization and end-to-end throughput.
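The paper's actual algorithm is not reproduced here, but the core idea of a tunable knob between the two extremes can be sketched in a few lines of Python. In this illustrative sketch (the function name, the `rand_frac` parameter, and the precommputed `community_of` mapping are all assumptions for illustration, not the paper's API), `rand_frac = 0.0` yields batches that follow community order (maximum locality), `rand_frac = 1.0` yields a fully shuffled order (maximum randomness), and intermediate values relocate only a fraction of nodes out of their community-sorted positions:

```python
import random

def make_minibatches(nodes, community_of, batch_size, rand_frac, seed=0):
    """Illustrative sketch of community-aware randomized batching.

    rand_frac = 0.0 -> nodes grouped by community (structure-aware extreme);
    rand_frac = 1.0 -> fully random node order (randomness extreme);
    values in between shuffle only a fraction of positions, trading
    cache locality for stochasticity.
    """
    rng = random.Random(seed)
    # Start from a community-sorted order (the deterministic extreme);
    # communities are assumed to be precomputed, e.g. by a fast
    # community-detection pass over the graph.
    order = sorted(nodes, key=lambda n: community_of[n])
    # Pick a fraction of positions and shuffle the nodes at those
    # positions globally, leaving the rest in community order.
    k = int(rand_frac * len(order))
    idx = rng.sample(range(len(order)), k)
    picked = [order[i] for i in idx]
    rng.shuffle(picked)
    for i, n in zip(idx, picked):
        order[i] = n
    # Slice the final order into fixed-size mini-batches.
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
```

With `rand_frac = 0.0` and two communities of five nodes each, the batches coincide with the communities; raising `rand_frac` progressively mixes nodes across batches while keeping the same overall node set.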

πŸ“ Abstract
Graph Neural Networks (GNNs) enable learning on real-world graphs, and mini-batch training has emerged as the de facto standard for training GNNs because it can scale to very large graphs and improve convergence. Current mini-batch construction policies largely ignore efficiency considerations of GNN training. Specifically, existing mini-batching techniques employ randomization schemes to improve accuracy and convergence. However, these randomization schemes are often agnostic to the structural properties of the graph (e.g., community structure), resulting in highly irregular memory access patterns during GNN training that make suboptimal use of on-chip GPU caches. On the other hand, while deterministic mini-batching based solely on graph structure delivers fast runtime performance, the lack of randomness compromises both the final model accuracy and training convergence speed. In this paper, we present Community-structure-aware Randomized Mini-batching (COMM-RAND), a novel methodology that bridges the gap between the above extremes. COMM-RAND allows practitioners to explore the space between pure randomness and pure graph structural awareness during mini-batch construction, leading to significantly more efficient GNN training with similar accuracy. We evaluated COMM-RAND across four popular graph learning benchmarks. COMM-RAND cuts down GNN training time by up to 2.76× (1.8× on average) while achieving an accuracy that is within 1.79 percentage points (0.42% on average) of popular random mini-batching approaches.
Problem

Research questions and friction points this paper is trying to address.

Balancing randomness and graph structure in GNN mini-batching
Improving memory access patterns for efficient GNN training
Maintaining accuracy while reducing GNN training time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure-aware randomized mini-batching for GNNs
Balances randomness and graph structural awareness
Improves training efficiency with minimal accuracy loss