🤖 AI Summary
This work addresses the logical characterization of the expressive power of graph neural networks (GNNs), establishing precise equivalences between bounded-depth GNN architectures and key fragments of first-order logic (FO). Methodologically, it applies tools from finite model theory and modal logic to analyze the correspondence between GNNs' local aggregation mechanisms and the quantifier structure of logical formulas. The paper proves bidirectional expressive equivalences linking several fragments, including modal logic (ML), graded modal logic (GML), modal logic with the universal modality (ML(A)), the two-variable fragment (FO₂), and its extension with counting quantifiers (C₂), to corresponding bounded GNN variants. These results uniformly characterize the expressivity boundaries of diverse GNN architectures, yielding a tight formal logical foundation for GNN interpretability. By bridging deep learning theory and classical logic, the work advances the theoretical understanding of the capabilities and limitations of GNNs.
📝 Abstract
Graph Neural Networks (GNNs) address two key challenges in applying deep learning to graph-structured data: they handle input graphs of varying size and ensure invariance under graph isomorphism. While GNNs have demonstrated broad applicability, understanding their expressive power remains an important question. In this paper, we show that bounded GNN architectures correspond to specific fragments of first-order logic (FO), including modal logic (ML), graded modal logic (GML), modal logic with the universal modality (ML(A)), the two-variable fragment (FO2), and its extension with counting quantifiers (C2). To establish these results, we apply methods and tools from the finite model theory of first-order and modal logics to the domain of graph representation learning. This provides a unifying framework for understanding the logical expressiveness of GNNs within FO.
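To make the "local aggregation" at the heart of these correspondences concrete, here is a minimal, dependency-free sketch (our illustration, not the paper's construction) of one message-passing round with sum aggregation. Sum aggregation lets a node distinguish *how many* neighbors carry a feature, which intuitively mirrors the counting quantifiers of GML and C2; a mean or max aggregator would discard that count. The function names and the toy graph are our own assumptions.

```python
# Illustrative sketch only: one synchronous message-passing (GNN) round over an
# adjacency list, with sum aggregation. Not taken from the paper.

def gnn_layer(adj, features, combine):
    """One message-passing round.

    adj:      dict mapping each node to its list of neighbors
    features: dict mapping each node to its current feature value
    combine:  function (own_feature, neighbor_sum) -> new feature
    """
    return {
        v: combine(features[v], sum(features[u] for u in adj[v]))
        for v in adj
    }

# Toy graph: a path 0-1-2; feature 1 marks a "colored" node.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: 1, 1: 0, 2: 1}

# After one round with sum aggregation, node 1 has counted its two colored
# neighbors, the kind of information a graded modality like ◇≥2 can express.
out = gnn_layer(adj, feats, lambda own, agg: agg)
print(out)  # {0: 0, 1: 2, 2: 0}
```

Stacking such layers bounds the "depth" of local information a node can see, which is the parameter the logical characterizations above are stated in terms of.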