🤖 AI Summary
Dynamic graph representation learning faces challenges including structural information loss, high memory overhead, and degradation of spiking propagation dynamics—particularly acute when replacing RNNs with spiking neural networks (SNNs). To address these, we propose the Dynamic Spiking Graph Neural Network (DPGNN), the first framework to introduce implicit differentiation at an equilibrium state into dynamic graph learning. DPGNN features a cross-layer spike information bypass mechanism to preserve fine-grained structural propagation details and integrates binary spatiotemporal encoding to enhance energy efficiency. By avoiding explicit temporal unrolling, it significantly reduces both memory consumption and energy expenditure. Evaluated on three large-scale dynamic graph benchmarks, DPGNN achieves substantial improvements in node classification accuracy while reducing training memory usage by 37% and inference energy consumption by 52%, thereby jointly advancing performance, efficiency, and structural fidelity.
📝 Abstract
The integration of Spiking Neural Networks (SNNs) and Graph Neural Networks (GNNs) is attracting growing attention due to their low power consumption and high efficiency in processing the non-Euclidean data represented by graphs. However, dynamic graph representation learning faces challenges such as high computational complexity and large memory overhead. Existing work often replaces Recurrent Neural Networks (RNNs) with SNNs, substituting binary features for continuous ones to enable efficient training; this, however, overlooks graph structure information and loses detail during propagation. Additionally, optimizing dynamic spiking models typically requires propagating information across time steps, which increases memory requirements. To address these challenges, we present a framework named Dynamic Spiking Graph Neural Networks (DPGNN). To mitigate the information loss problem, DPGNN propagates early-layer information directly to the last layer as compensation. To reduce the memory requirements, we apply implicit differentiation to the equilibrium state, which does not rely on the exact reverse of the forward computation. While traditional implicit differentiation methods are typically restricted to static settings, DPGNN extends them to dynamic graphs. Extensive experiments on three large-scale real-world dynamic graph datasets validate the effectiveness of DPGNN on dynamic node classification tasks at lower computational cost.
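The cross-layer compensation idea can be illustrated with a generic leaky integrate-and-fire (LIF) sketch. Everything here — layer sizes, the additive bypass, the threshold and decay values — is an illustrative assumption, not the paper's exact architecture: early-layer spike trains are re-injected into the last layer's input so that binary propagation does not discard all fine-grained detail.

```python
import numpy as np

rng = np.random.default_rng(1)

def lif_step(v, inp, thresh=1.0, decay=0.5):
    """One leaky integrate-and-fire update: leak, integrate, fire, hard reset."""
    v = decay * v + inp
    spikes = (v >= thresh).astype(inp.dtype)
    v = v * (1.0 - spikes)          # reset membrane potential where a spike fired
    return v, spikes

# Toy 3-layer spiking forward pass over T time steps, with the layer-1
# spike train added back into the last layer's input (the "bypass").
T, n = 8, 16
W1, W2, W3 = (0.8 * rng.standard_normal((n, n)) / np.sqrt(n) for _ in range(3))
x = rng.random((T, n))              # continuous dynamic-graph features per step

v1 = v2 = v3 = np.zeros(n)
out = []
for t in range(T):
    v1, s1 = lif_step(v1, x[t] @ W1.T)
    v2, s2 = lif_step(v2, s1 @ W2.T)
    v3, s3 = lif_step(v3, (s2 + s1) @ W3.T)   # bypass: early spikes re-injected
    out.append(s3)
out = np.stack(out)                 # binary spike outputs, shape (T, n)
```

Without the `s1` term in the last layer, any detail not surviving the intermediate binarization is lost; the bypass gives the final layer direct access to the early spike pattern.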
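The memory saving comes from differentiating through the equilibrium itself rather than through the unrolled forward iterations. A minimal sketch of that idea follows — a generic deep-equilibrium-style fixed point, not DPGNN's actual update rule: solve z* = f(z*, x) by iteration, then recover dL/dx from the implicit function theorem using only quantities at z*, and check against finite differences through the full solver.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 4, 3
W = 0.3 * rng.standard_normal((d, d)) / np.sqrt(d)   # small weights -> contraction
U = rng.standard_normal((d, p))
x = rng.standard_normal(p)

def f(z, xv):
    """One implicit 'layer'; the equilibrium z* satisfies z* = f(z*, x)."""
    return np.tanh(W @ z + U @ xv)

def equilibrium(xv, iters=200):
    z = np.zeros(d)
    for _ in range(iters):
        z = f(z, xv)             # forward pass: plain fixed-point iteration
    return z

z_star = equilibrium(x)

# Backward pass via the implicit function theorem: for L(z*) = sum(z*),
# dL/dx = (df/dx)^T (I - df/dz)^{-T} (dL/dz*), evaluated only at z*,
# so no forward iterate needs to be stored.
h = W @ z_star + U @ x
D = np.diag(1.0 - np.tanh(h) ** 2)   # tanh derivative at the equilibrium
Jz, Jx = D @ W, D @ U                # df/dz and df/dx at z*
g = np.ones(d)                        # dL/dz* for L = sum(z*)
grad_implicit = Jx.T @ np.linalg.solve((np.eye(d) - Jz).T, g)

# Sanity check: central finite differences through the whole solver.
eps = 1e-6
grad_fd = np.array([
    (equilibrium(x + eps * np.eye(p)[i]).sum()
     - equilibrium(x - eps * np.eye(p)[i]).sum()) / (2 * eps)
    for i in range(p)
])
print(np.allclose(grad_implicit, grad_fd, atol=1e-5))
```

Because the gradient is computed from the equilibrium alone, memory cost is independent of the number of forward iterations (or, in the dynamic setting, of how far information propagates across time steps) — the property the abstract exploits.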