🤖 AI Summary
Dynamic graph neural networks (GNNs) lack principled and statistically rigorous uncertainty quantification mechanisms.
Method: This paper introduces conformal prediction, generalized for the first time to dynamic graph learning, via an "unfolding" mechanism that transforms static GNNs into dynamic modeling frameworks satisfying the exchangeability assumption, thereby ensuring the statistical validity of conformal prediction in both transductive and semi-inductive settings. The approach integrates dynamic graph embedding, conformal inference, and exchangeability theory through rigorous theoretical derivation.
Results: Extensive experiments on synthetic and real-world dynamic graph datasets demonstrate exact calibration of prediction set coverage to user-specified confidence levels, alongside significant improvements in prediction robustness and downstream task performance.
📝 Abstract
Graph neural networks (GNNs) are powerful black-box models that have shown impressive empirical performance. However, without any form of uncertainty quantification, it can be difficult to trust such models in high-risk scenarios. Conformal prediction aims to address this problem; however, its validity requires an exchangeability assumption, which has limited its applicability to static graphs and transductive regimes. We propose to use unfolding, which allows any existing static GNN to output a dynamic graph embedding with exchangeability properties. Using this, we extend the validity of conformal prediction to dynamic GNNs in both transductive and semi-inductive regimes. We provide a theoretical guarantee of valid conformal prediction in these cases and demonstrate the empirical validity, as well as the performance gains, of unfolded GNNs against standard GNN architectures on both simulated and real datasets.
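As background for the abstract's claims, the generic split-conformal recipe can be sketched as follows. This is a minimal illustration of standard conformal prediction on classifier scores, not the paper's dynamic-GNN method; all function and variable names are illustrative, and the toy data stands in for what would be GNN node embeddings and labels. The key property it relies on is the same exchangeability assumption the paper extends to dynamic graphs: if calibration and test points are exchangeable, the returned sets cover the true label with probability at least 1 − α.

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_prediction_sets(cal_scores, cal_labels, test_scores, alpha=0.1):
    """Split conformal prediction: return label sets with ~(1 - alpha)
    marginal coverage, assuming calibration/test exchangeability."""
    n = len(cal_labels)
    # Nonconformity score: 1 minus the model's probability for the true class.
    nonconf = 1.0 - cal_scores[np.arange(n), cal_labels]
    # Conformal quantile with the finite-sample (n + 1) correction.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(nonconf, level, method="higher")
    # A class enters the prediction set if its nonconformity clears the threshold.
    return [np.where(1.0 - s <= q)[0] for s in test_scores]

# Toy stand-in data: 3-class "softmax" outputs for calibration and test points.
cal = rng.dirichlet(np.ones(3), size=200)
labels = cal.argmax(axis=1)  # pretend the top-scoring class is the true label
test = rng.dirichlet(np.ones(3), size=5)
sets = conformal_prediction_sets(cal, labels, test, alpha=0.1)
```

In the semi-inductive setting the paper studies, the subtlety is that calibration and test points arrive at different times, so exchangeability is not automatic; the unfolding construction is what restores it for the resulting embeddings.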