🤖 AI Summary
Graph Neural Networks (GNNs) suffer from limited analyzability and verifiability due to the lack of precise formal characterizations of their expressive power and decidability boundaries.
Method: We establish, for the first time, an exact expressive equivalence between GNNs and a decidable logic fragment—namely, first-order logic extended with Presburger quantifiers—by integrating model theory, formal semantics of GNNs, and decidability theory. Based on this equivalence, we develop a Presburger arithmetic–based logical characterization framework.
Results: We design the first sound and complete decision procedures for key GNN verification tasks, including output range checking and functional equivalence verification. In parallel, we rigorously prove the undecidability of several static analysis problems, such as generalization guarantee verification. Our work provides both theoretical foundations and practical tools for ensuring GNN reliability.
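To make the output range checking task concrete, here is a minimal sketch (ours, not the paper's procedure): given interval bounds on every node's input features, propagate bounds through one ReLU message-passing layer using interval arithmetic. This yields a *sound but incomplete* over-approximation of the reachable outputs; the paper's contribution is exact (sound *and* complete) decision procedures, which interval arithmetic alone cannot provide. All names and the layer form `h'_v = ReLU(W(h_v + Σ_{u∈N(v)} h_u) + b)` are illustrative assumptions.

```python
import numpy as np

def interval_matmul(W, lo, hi):
    """Sound bounds for W @ x when x lies coordinate-wise in [lo, hi]."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi, Wp @ hi + Wn @ lo

def layer_output_range(W, b, adj, lo, hi):
    """Over-approximate the output range of one sum-aggregation ReLU layer.

    adj: (n, n) 0/1 adjacency matrix; lo, hi: (n, d) per-node feature bounds.
    Returns per-node lower/upper bounds on the layer outputs.
    """
    # Aggregate self features plus neighbor sums (monotone, so bounds add).
    agg_lo, agg_hi = lo + adj @ lo, hi + adj @ hi
    out_lo, out_hi = zip(*(interval_matmul(W, l, h)
                           for l, h in zip(agg_lo, agg_hi)))
    # ReLU is monotone, so applying it to both bounds stays sound.
    return (np.maximum(np.stack(out_lo) + b, 0),
            np.maximum(np.stack(out_hi) + b, 0))
```

A range-checking query such as "can output coordinate 0 ever exceed 10?" is then answered conservatively by inspecting the returned upper bounds; a "no" from interval arithmetic is definitive, but a "yes" may be a false alarm, which is exactly the gap a complete procedure closes.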
📝 Abstract
We present results concerning the expressiveness and decidability of a popular graph learning formalism, graph neural networks (GNNs), exploiting connections with logic. We use a family of recently discovered decidable logics involving "Presburger quantifiers". We show how to use these logics to measure the expressiveness of classes of GNNs, in some cases obtaining exact correspondences between the expressiveness of logics and GNNs. We also employ the logics, and the techniques used to analyze them, to obtain decision procedures for verification problems over GNNs. We complement this with undecidability results for static analysis problems involving the logics, as well as for GNN verification problems.
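To give a flavor of the connection (the notation here is illustrative, not the paper's exact syntax): the simplest form of such a quantifier is a counting quantifier, as in

```latex
\exists^{\ge k} y \,\bigl( E(x,y) \wedge \varphi(y) \bigr)
```

read "node $x$ has at least $k$ neighbors satisfying $\varphi$". Presburger quantifiers generalize this by allowing linear-arithmetic constraints over several such neighbor counts at once, which mirrors how a GNN layer sums neighbor features and compares the result against thresholds.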