🤖 AI Summary
This work investigates the computational power of recurrent graph neural networks (GNNs) over the real numbers, proposing a theoretical analogue model: recurrent arithmetic circuits with memory gates. By establishing a bidirectional simulation between recurrent GNNs and this circuit model, the study demonstrates that the two formalisms compute exactly the same functions: any recurrent GNN can be simulated by such a circuit, and vice versa. This result goes beyond prior expressivity analyses that are tied to specific GNN architectures, such as aggregate-combine networks, and offers a complete characterization of the representational capacity of recurrent GNNs in real-number computation. The findings provide a rigorous, complexity-theoretic foundation for understanding the theoretical limits of these models.
📝 Abstract
We characterise the computational power of recurrent graph neural networks (GNNs) in terms of arithmetic circuits over the real numbers. Our networks are not restricted to aggregate-combine GNNs or other particular types. Generalising similar notions from the literature, we introduce the model of recurrent arithmetic circuits, which can be seen as arithmetic analogues of sequential or logical circuits. These circuits utilise so-called memory gates, which store data between iterations of the recurrent circuit. While (recurrent) GNNs work on labelled graphs, we construct arithmetic circuits that receive encodings of labelled graphs as real-valued tuples and then compute the same function. For the other direction, we construct recurrent GNNs that simulate the computations of recurrent circuits. These GNNs are given the circuit input as initial feature vectors and, after the GNN computation, carry the circuit output among the feature vectors of their nodes. In this way we establish an exact correspondence between the expressivity of recurrent GNNs and recurrent arithmetic circuits operating over real numbers.
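To make the circuit model concrete, the following is a minimal sketch of how a recurrent arithmetic circuit with memory gates might be iterated. All names (`run_recurrent_circuit`, `step`) and the specific encoding are illustrative assumptions, not the paper's formal definitions: the combinational part of the circuit uses only arithmetic gates (here one multiplication and one addition), and a single memory gate carries a real value from one iteration to the next.

```python
# Illustrative sketch only: a recurrent arithmetic circuit with one memory
# gate, iterated over a sequence of real inputs. The paper's formal model
# differs in detail; names here are hypothetical.

def run_recurrent_circuit(step, memory, inputs):
    """Iterate the circuit: each round, `step` reads the current input and
    the memory-gate contents, and returns (output, updated memory)."""
    outputs = []
    for x in inputs:
        out, memory = step(x, memory)
        outputs.append(out)
    return outputs

def step(x, mem):
    (m,) = mem            # read the single memory gate
    y = m + x * x         # arithmetic gates: one * and one +
    return y, (y,)        # output gate; the new value is stored for next round

print(run_recurrent_circuit(step, (0.0,), [1.0, 2.0, 3.0]))
# running sums of squares: [1.0, 5.0, 14.0]
```

In the correspondence the abstract describes, such an iterated circuit would consume an encoding of a labelled graph as a real-valued tuple; conversely, a recurrent GNN can hold the circuit's memory contents inside its nodes' feature vectors across message-passing rounds.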