🤖 AI Summary
To address the performance degradation of graph neural networks (GNNs) under train-test distribution shift, this paper pioneers a dynamic control-theoretic perspective on GNN inference, modeling test-time node feature reconstruction as a stabilization problem governed by Lyapunov stability theory. We propose a parameter-free neural controller that dynamically adjusts node features during inference—without updating model parameters—while guaranteeing asymptotic convergence of predictions to ground-truth labels under rigorously established Lyapunov stability conditions. This work overcomes a key limitation of existing parameter-free test-time adaptation methods: the absence of formal convergence analysis. It provides the first theoretical framework ensuring stability-driven feature reconstruction with provable asymptotic correctness. Extensive experiments on multiple benchmark datasets demonstrate substantial improvements in GNN robustness and generalization under distribution shift, empirically validating both the efficacy and guaranteed convergence of our stability-constrained reconstruction approach.
📝 Abstract
The performance of graph neural networks (GNNs) is susceptible to discrepancies between training and testing sample distributions. Prior studies have attempted to mitigating the impact of distribution shift by reconstructing node features during the testing phase without modifying the model parameters. However, these approaches lack theoretical analysis of the proximity between predictions and ground truth at test time. In this paper, we propose a novel node feature reconstruction method grounded in Lyapunov stability theory. Specifically, we model the GNN as a control system during the testing phase, considering node features as control variables. A neural controller that adheres to the Lyapunov stability criterion is then employed to reconstruct these node features, ensuring that the predictions progressively approach the ground truth at test time. We validate the effectiveness of our approach through extensive experiments across multiple datasets, demonstrating significant performance improvements.