🤖 AI Summary
Graph Neural Networks (GNNs) suffer from large parameter counts and high energy consumption, hindering their deployment on resource-constrained devices. Method: This paper proposes Quaternion Message Passing Neural Networks (QMPNNs), a GNN framework that leverages quaternion algebra to compute node representations, cutting the parameter count by 75% while preserving accuracy comparable to full-sized real-valued GNNs. Building on the Graph Lottery Ticket Hypothesis (Graph LTH), the authors further search for sparse, trainable subnetworks within both GNNs and QMPNNs, enabling aggressive compression of the trainable parameters. The approach combines quaternion representations, message passing, graph structure learning, and sparse training. Contribution/Results: Evaluated on node classification, link prediction, and graph classification tasks, the QMPNN+LTH framework substantially reduces computational overhead and energy consumption, establishing a lightweight paradigm for graph representation learning.
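To make the 75% figure concrete: a quaternion weight is four real numbers shared across the four components of its input via the Hamilton product, so a quaternion linear layer stores one quarter of the parameters of the real-valued layer it replaces. The sketch below is a generic quaternion layer in PyTorch, not the paper's code; the initialization scale and the `[r | i | j | k]` feature layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

class QuaternionLinear(nn.Module):
    """Generic quaternion-valued linear layer (a sketch, not the paper's code).

    Maps in_features quaternions to out_features quaternions. Each weight is a
    quaternion (4 real numbers) applied to all 4 input components through the
    Hamilton product, so the layer stores 4 * in * out real parameters instead
    of the (4*in) * (4*out) = 16 * in * out of an equivalent real layer.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # One real matrix per quaternion component: r, i, j, k.
        # (Plain Gaussian init here; quaternion-specific inits also exist.)
        self.w_r = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.w_i = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.w_j = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.w_k = nn.Parameter(torch.randn(out_features, in_features) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., 4 * in_features), laid out as [r | i | j | k] blocks.
        r, i, j, k = torch.chunk(x, 4, dim=-1)
        # Hamilton product w ⊗ x, expanded component by component.
        out_r = r @ self.w_r.T - i @ self.w_i.T - j @ self.w_j.T - k @ self.w_k.T
        out_i = i @ self.w_r.T + r @ self.w_i.T + k @ self.w_j.T - j @ self.w_k.T
        out_j = j @ self.w_r.T - k @ self.w_i.T + r @ self.w_j.T + i @ self.w_k.T
        out_k = k @ self.w_r.T + j @ self.w_i.T - i @ self.w_j.T + r @ self.w_k.T
        return torch.cat([out_r, out_i, out_j, out_k], dim=-1)

# 4 * 16 * 8 = 512 parameters, vs 16 * 16 * 8 = 2048 weights
# for the equivalent real layer nn.Linear(64, 32, bias=False).
layer = QuaternionLinear(16, 8)
print(sum(p.numel() for p in layer.parameters()))  # 512
```

In a message-passing layer, the node-update weight matrices are simply replaced by layers of this form, which is where the one-fourth parameter count quoted above comes from.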
📝 Abstract
Graph Neural Networks (GNNs) have emerged as powerful tools for learning representations of graph-structured data. Alongside real-valued GNNs, quaternion GNNs also perform well on graph-based tasks. With the aim of reducing the energy footprint, we reduce the model size while maintaining accuracy comparable to that of the original-sized GNNs. This paper introduces Quaternion Message Passing Neural Networks (QMPNNs), a framework that leverages quaternion space to compute node representations. Our approach offers a generalizable method for incorporating quaternion representations into GNN architectures at one-fourth of the original parameter count. Furthermore, we present a novel perspective on Graph Lottery Tickets, redefining their applicability within the context of GNNs and QMPNNs. Specifically, we aim to find a winning initialization: a subnetwork of the original GNN that, when trained, achieves performance comparable to the full model, thereby reducing the number of trainable parameters even further. To validate the effectiveness of the proposed QMPNN framework and the Lottery Ticket Hypothesis (LTH) for both GNNs and QMPNNs, we evaluate their performance on real-world datasets across three fundamental graph-based tasks: node classification, link prediction, and graph classification.
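For readers unfamiliar with how such winning initializations are found, below is a minimal sketch of iterative magnitude pruning, the standard lottery-ticket procedure: train, prune the smallest weights, rewind the survivors to their initial values, and repeat. The `train_fn` callback, pruning fraction, and schedule are assumptions for illustration, not the authors' exact algorithm (which also covers QMPNNs and, in Graph LTH, the graph structure itself).

```python
import copy
import torch
import torch.nn as nn

def find_lottery_ticket(model: nn.Module, train_fn, prune_frac=0.2, rounds=5):
    """Sketch of iterative magnitude pruning for the Lottery Ticket Hypothesis.

    `train_fn(model)` is an assumed callback that trains the model in place.
    Returns a binary mask per parameter plus the original initialization, so
    the sparse subnetwork can be retrained "from the start" as a ticket.
    """
    init_state = copy.deepcopy(model.state_dict())  # remember theta_0
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}

    for _ in range(rounds):
        train_fn(model)  # train the current (masked) network
        for name, p in model.named_parameters():
            alive = p.detach().abs()[masks[name].bool()]
            if alive.numel() == 0:
                continue
            # Prune roughly the smallest prune_frac of surviving weights.
            thr = torch.quantile(alive, prune_frac)
            masks[name] = masks[name] * (p.detach().abs() > thr).float()
        # Rewind surviving weights to their initial values (the "ticket")
        # and zero out the pruned ones.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for name, p in model.named_parameters():
                p.mul_(masks[name])
    return masks, init_state
```

Note that this sketch only zeroes pruned weights between rounds; a full implementation would re-apply the masks after every optimizer step (or enforce them with `torch.nn.utils.prune`) so that gradient updates cannot revive pruned connections.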