InfGraND: An Influence-Guided GNN-to-MLP Knowledge Distillation

📅 2026-01-12
🏛️ Trans. Mach. Learn. Res.
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limited expressiveness of traditional multilayer perceptrons (MLPs) in low-latency or resource-constrained graph data scenarios, where existing GNN-to-MLP knowledge distillation approaches often overlook the importance of node structural roles. To bridge this gap, the authors propose a novel distillation framework that introduces graph structural influence as distillation weights, prioritizing knowledge transfer from structurally critical nodes. Furthermore, by precomputing multi-hop neighborhood features in a single pass and embedding them into the MLP input, the method enhances the student model's structural awareness without incurring additional inference overhead. Evaluated across seven homophilic graph benchmarks under both transductive and inductive settings, the approach consistently outperforms current GNN-to-MLP distillation methods, demonstrating its suitability for real-world low-latency applications.
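The one-time pre-computation idea can be sketched as follows: propagate node features over a normalized adjacency matrix for a fixed number of hops, concatenate each hop's aggregate to the raw features, and feed the result to a plain MLP. The function name and the row-normalization choice below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def precompute_multihop_features(adj, feats, num_hops=2):
    """One-time pre-computation of multi-hop neighborhood features.

    Row-normalizes the adjacency matrix, repeatedly propagates the node
    features one hop further, and concatenates every hop's aggregate to
    the raw input. The enriched matrix is consumed by a plain MLP, so
    no graph operations are needed at inference time.
    """
    deg = adj.sum(axis=1, keepdims=True)
    a_norm = adj / np.maximum(deg, 1)   # row-normalized adjacency
    hops, h = [feats], feats
    for _ in range(num_hops):
        h = a_norm @ h                  # aggregate one hop further
        hops.append(h)
    return np.concatenate(hops, axis=1)  # shape [N, (num_hops + 1) * d]

# Tiny 3-node path graph (0 - 1 - 2) with 2-dim features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.eye(3, 2)
enriched = precompute_multihop_features(adj, feats, num_hops=2)
print(enriched.shape)  # (3, 6)
```

Because the propagation happens once before training, the student's forward pass stays a pure feature-to-label MLP, which is what makes the approach latency-friendly.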

๐Ÿ“ Abstract
Graph Neural Networks (GNNs) are the go-to model for graph data analysis. However, GNNs rely on two key operations, aggregation and update, which can pose challenges for low-latency inference tasks or resource-constrained scenarios. Simple Multi-Layer Perceptrons (MLPs) offer a computationally efficient alternative. Yet, training an MLP in a supervised setting often leads to suboptimal performance. Knowledge Distillation (KD) from a GNN teacher to an MLP student has emerged to bridge this gap. However, most KD methods either transfer knowledge uniformly across all nodes or rely on graph-agnostic indicators such as prediction uncertainty. We argue this overlooks a more fundamental, graph-centric inquiry: "How important is a node to the structure of the graph?" We introduce a framework, InfGraND, an Influence-guided Graph KNowledge Distillation from GNN to MLP that addresses this by identifying and prioritizing structurally influential nodes to guide the distillation process, ensuring that the MLP learns from the most critical parts of the graph. Additionally, InfGraND embeds structural awareness in MLPs through one-time multi-hop neighborhood feature pre-computation, which enriches the student MLP's input and thus avoids inference-time overhead. Our rigorous evaluation in transductive and inductive settings across seven homophilic graph benchmark datasets shows InfGraND consistently outperforms prior GNN-to-MLP KD methods, demonstrating its practicality for numerous latency-critical applications in real-world settings.
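An influence-weighted distillation objective can be sketched as a per-node KL divergence between teacher and student predictions, scaled by a structural-influence score. The abstract does not specify the exact influence measure, so this toy uses power-iteration PageRank as a hypothetical stand-in; all function names and constants below are assumptions.

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank as a stand-in structural-influence score."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    p = adj / np.maximum(deg, 1)               # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (p.T @ r)
    return r / r.sum()

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)       # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def influence_weighted_kd_loss(teacher_logits, student_logits, weights):
    """Per-node KL(teacher || student), weighted by structural influence."""
    t, s = softmax(teacher_logits), softmax(student_logits)
    kl = (t * (np.log(t + 1e-12) - np.log(s + 1e-12))).sum(axis=1)
    return float((weights * kl).sum() / weights.sum())

# 3-node star graph: node 0 is the structurally influential hub.
adj = np.array([[0., 1., 1.],
                [1., 0., 0.],
                [1., 0., 0.]])
w = pagerank(adj)
t_logits = np.array([[2.0, 0.5], [0.1, 1.2], [0.0, 0.0]])
loss_same = influence_weighted_kd_loss(t_logits, t_logits, w)
print(loss_same)  # 0.0 when student matches teacher exactly
```

The design intent is that mismatches at high-influence nodes (here, the hub) dominate the gradient, steering the student toward the structurally critical parts of the graph first.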
Problem

Research questions and friction points this paper is trying to address.

Knowledge Distillation
Graph Neural Networks
Multi-Layer Perceptrons
Node Influence
Graph Structure
Innovation

Methods, ideas, or system contributions that make the work stand out.

Influence-guided
Graph Neural Networks
Knowledge Distillation
Multi-Layer Perceptron
Structural Awareness