KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning

📅 2024-06-26
🏛️ arXiv.org
📈 Citations: 32
Influential: 1
📄 PDF
🤖 AI Summary
This work addresses the limited expressive power and parameter efficiency of the multilayer perceptrons (MLPs) used in the message-passing modules of graph neural networks (GNNs). It introduces Kolmogorov-Arnold Networks (KANs) into GNNs, proposing three differentiable and scalable KAN-based GNN layers, inspired by the GCN, GAT, and GIN architectures and implemented with two basis families: B-splines and radial basis functions (RBFs). By replacing MLPs with KANs for nonlinear feature transformation while preserving the standard message-passing mechanism, the approach increases representational flexibility at low cost. Extensive experiments show that KAN-based GNNs match or surpass MLP-based baselines on node classification, link prediction, graph classification, and graph regression tasks. Notably, the RBF-based variant achieves this with only marginal increases in parameter count and training time over MLPs, making it a practical drop-in alternative.

📝 Abstract
In recent years, Graph Neural Networks (GNNs) have become the de facto tool for learning node and graph representations. Most GNNs typically consist of a sequence of neighborhood aggregation (a.k.a., message-passing) layers, within which the representation of each node is updated based on those of its neighbors. The most expressive message-passing GNNs can be obtained through the use of the sum aggregator and of MLPs for feature transformation, thanks to their universal approximation capabilities. However, the limitations of MLPs recently motivated the introduction of another family of universal approximators, called Kolmogorov-Arnold Networks (KANs) which rely on a different representation theorem. In this work, we compare the performance of KANs against that of MLPs on graph learning tasks. We implement three new KAN-based GNN layers, inspired respectively by the GCN, GAT and GIN layers. We evaluate two different implementations of KANs using two distinct base families of functions, namely B-splines and radial basis functions. We perform extensive experiments on node classification, link prediction, graph classification and graph regression datasets. Our results indicate that KANs are on-par with or better than MLPs on all tasks studied in this paper. We also show that the size and training speed of RBF-based KANs is only marginally higher than for MLPs, making them viable alternatives. Code available at https://github.com/RomanBresson/KAGNN.
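To make the idea in the abstract concrete, the sketch below combines a GIN-style sum aggregation with an RBF-based KAN transform in place of the usual MLP. This is a simplified numpy mock-up under assumed design choices (a shared grid of RBF centers, a single scalar width, one linear mixing matrix), not the paper's implementation; see the linked repository for the real code.

```python
import numpy as np

def rbf_kan_layer(X, centers, width, W):
    """RBF-based KAN transform (illustrative).

    Each scalar input feature is expanded onto a shared grid of
    Gaussian RBFs, and the expanded features are mixed linearly.
    X: (n, d_in), centers: (K,), width: scalar, W: (d_in * K, d_out).
    """
    # phi has shape (n, d_in, K): one basis expansion per input feature
    phi = np.exp(-(((X[..., None] - centers) / width) ** 2))
    return phi.reshape(X.shape[0], -1) @ W

def kagin_update(A, H, eps, centers, width, W):
    """GIN-style update with a KAN in place of the MLP (illustrative).

    A: (n, n) adjacency matrix, H: (n, d_in) node features.
    Aggregation is (1 + eps) * h_v + sum over neighbors, as in GIN.
    """
    agg = (1.0 + eps) * H + A @ H
    return rbf_kan_layer(agg, centers, width, W)

# Tiny path graph 0 - 1 - 2 with random features
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = rng.standard_normal((3, 2))
centers = np.linspace(-2.0, 2.0, 4)     # K = 4 shared RBF centers
W = rng.standard_normal((2 * 4, 5))     # d_in * K -> d_out = 5
out = kagin_update(A, H, eps=0.1, centers=centers, width=1.0, W=W)
print(out.shape)  # (3, 5)
```

The only change relative to a standard GIN layer is the feature transformation: the sum aggregator is untouched, which is what lets these KAN layers slot into existing message-passing architectures.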
Problem

Research questions and friction points this paper is trying to address.

Compare the performance of KANs against MLPs on graph learning tasks.
Design KAN-based GNN layers that preserve standard message passing.
Evaluate KANs on node classification, link prediction, graph classification, and graph regression.
Innovation

Methods, ideas, or system contributions that make the work stand out.

KANs replace MLPs for feature transformation in GCN-, GAT-, and GIN-style layers
Two KAN implementations, using B-spline and radial basis function (RBF) bases
RBF-based KANs add only marginal size and training-time overhead over MLPs
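The B-spline bullet refers to evaluating KANs' learnable univariate functions on a spline basis. A minimal Cox-de Boor evaluation of that basis is sketched below; this is a generic textbook recursion for illustration, not the repository's code, and the knot grid and degree are arbitrary assumptions.

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Evaluate all B-spline basis functions at the points x.

    Cox-de Boor recursion: start from degree-0 indicator functions on
    the knot intervals and blend neighboring functions at each step.
    x: (n,), knots: (m,) non-decreasing; returns (n, m - degree - 1).
    """
    x = np.asarray(x, dtype=float)[:, None]
    t = np.asarray(knots, dtype=float)
    # Degree 0: indicator of each half-open knot interval [t_i, t_{i+1})
    B = ((t[:-1] <= x) & (x < t[1:])).astype(float)
    for k in range(1, degree + 1):
        left_den = t[k:-1] - t[:-k - 1]
        right_den = t[k + 1:] - t[1:-k]
        # Guard zero denominators (repeated knots): those terms are 0
        left = np.where(left_den > 0,
                        (x - t[:-k - 1]) / np.where(left_den > 0, left_den, 1.0),
                        0.0) * B[:, :-1]
        right = np.where(right_den > 0,
                         (t[k + 1:] - x) / np.where(right_den > 0, right_den, 1.0),
                         0.0) * B[:, 1:]
        B = left + right
    return B

# Quadratic basis on a uniform knot grid; on the interior of the grid
# the basis functions form a partition of unity (they sum to 1)
knots = np.linspace(0.0, 1.0, 8)
vals = bspline_basis(np.array([0.3, 0.5, 0.6]), knots, degree=2)
print(vals.sum(axis=1))  # each sum is 1.0
```

In a B-spline KAN, each learnable univariate function is a weighted combination of such basis functions, with the weights trained by gradient descent; the RBF variant swaps this basis for Gaussian bumps, which the paper finds cheaper to train at comparable accuracy.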