Learning to Execute Graph Algorithms Exactly with Graph Neural Networks

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether graph neural networks (GNNs) can exactly learn and execute classical graph algorithms under bounded-degree graph constraints and finite-precision arithmetic. To this end, the authors propose a two-stage approach: first training an ensemble of multilayer perceptrons (MLPs) to emulate the local computation rules of individual nodes, then embedding these MLPs into a GNN as its update functions to enable error-free inference. The study establishes the first theoretical framework for the learnability of graph algorithms within the LOCAL model of distributed computing and, leveraging neural tangent kernel (NTK) analysis, proves that algorithms such as message flooding, BFS, DFS, and Bellman-Ford can be learned from a small number of samples and executed by GNNs with exact correctness, with high probability, thereby demonstrating that GNNs can rigorously implement distributed graph algorithms.

📝 Abstract
Understanding what graph neural networks can learn, especially their ability to learn to execute algorithms, remains a central theoretical challenge. In this work, we prove exact learnability results for graph algorithms under bounded-degree and finite-precision constraints. Our approach follows a two-step process. First, we train an ensemble of multi-layer perceptrons (MLPs) to execute the local instructions of a single node. Second, during inference, we use the trained MLP ensemble as the update function within a graph neural network (GNN). Leveraging Neural Tangent Kernel (NTK) theory, we show that local instructions can be learned from a small training set, enabling the complete graph algorithm to be executed during inference without error and with high probability. To illustrate the learning power of our setting, we establish a rigorous learnability result for the LOCAL model of distributed computation. We further demonstrate positive learnability results for widely studied algorithms such as message flooding, breadth-first and depth-first search, and Bellman-Ford.
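The two-step recipe from the abstract can be illustrated with a deliberately simplified sketch: here the local BFS rule of a single node is learned by a linear softmax classifier over one-hot encodings (a stand-in for the paper's NTK-regime MLP ensemble, not the authors' actual construction), then reused unchanged as the update function in synchronous message passing. Distances are capped at a bound `D`, mimicking the finite-precision/bounded-value setting, and the `argmax` readout makes inference exact once the classifier fits every local configuration. The graph, bound, and training hyperparameters are illustrative assumptions.

```python
import numpy as np

D = 8  # distances live in {0..D}; D doubles as "unreached" (finite precision)

def local_rule(own, nbr_min):
    # One BFS round at a single node: relax against the closest neighbor.
    return min(own, nbr_min + 1, D)

def onehot(own, nbr_min):
    # Encode the node's local configuration as a one-hot vector.
    x = np.zeros((D + 1) * (D + 1))
    x[own * (D + 1) + nbr_min] = 1.0
    return x

# --- Stage 1: fit the local instruction on every local configuration ---
# (the paper trains MLPs; a linear softmax suffices for this toy encoding)
X = np.array([onehot(a, b) for a in range(D + 1) for b in range(D + 1)])
y = np.array([local_rule(a, b) for a in range(D + 1) for b in range(D + 1)])
W = np.zeros((X.shape[1], D + 1))
for _ in range(500):
    logits = X @ W
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0          # cross-entropy gradient
    W -= 0.5 * X.T @ p / len(y)

def learned_update(own, nbr_min):
    # Rounding via argmax restores exactness after approximate training.
    return int(np.argmax(onehot(own, nbr_min) @ W))

# --- Stage 2: plug the learned update into message passing (BFS from 0) ---
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
dist = {v: (0 if v == 0 else D) for v in adj}
for _ in range(len(adj)):                   # synchronous rounds
    dist = {v: learned_update(dist[v], min(dist[u] for u in adj[v]))
            for v in adj}
print(dist)  # → {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

Because the learned function is only ever queried on the finitely many bounded-degree local configurations it was trained on, the whole rollout is error-free, which is the intuition behind the paper's exact-execution guarantee.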
Problem

Research questions and friction points this paper is trying to address.

graph neural networks
algorithm execution
exact learnability
distributed computation
graph algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Neural Networks
Exact Algorithm Execution
Neural Tangent Kernel
Distributed Computation
Learnability