NodeNAS: Node-Specific Graph Neural Architecture Search for Out-of-Distribution Generalization

📅 2025-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Neural Architecture Search (GraphNAS) suffers from poor out-of-distribution (OOD) generalization—especially under low-data regimes (e.g., few-shot or single-graph training)—due to its inability to capture node-level heterogeneity. To address this, we propose MNNAS, a node-level customized neural architecture search framework. Methodologically, MNNAS introduces: (1) a novel node-specific architecture customization mechanism that dynamically assigns optimal aggregation operations per node; (2) a multi-dimensional, extensible search space incorporating structure-invariant power-law degree distribution modeling to enhance topological robustness; and (3) an adaptive aggregation attention mechanism enabling differentiable, fine-grained node-level architecture search. Experiments demonstrate that MNNAS consistently outperforms state-of-the-art methods across both supervised and unsupervised OOD tasks, achieving significant gains in generalization under data sparsity.

📝 Abstract
Graph neural architecture search (GraphNAS) has demonstrated advantages in mitigating the performance degradation of graph neural networks (GNNs) caused by distribution shifts. Recent approaches introduce weight sharing across tailored architectures, generating a unique GNN architecture for each graph end-to-end. However, existing GraphNAS methods do not account for distribution patterns across different graphs and rely heavily on extensive training data. With sparse or single training graphs, these methods struggle to discover optimal mappings between graphs and architectures and fail to generalize to out-of-distribution (OOD) data. In this paper, we propose node-specific graph neural architecture search (NodeNAS), which aims to tailor distinct aggregation methods to different nodes by disentangling node topology from graph distribution with limited datasets. We further propose MNNAS, an adaptive-aggregation-attention-based multi-dimensional NodeNAS method, which learns a node-specific architecture customizer with good generalizability. Specifically, we extend the vertical depth of the search space, supporting simultaneous node-specific architecture customization across multiple dimensions. Moreover, we model the power-law distribution of node degrees under varying assortativity, encoding structure-invariant information to guide architecture customization in each dimension. Extensive experiments across supervised and unsupervised tasks demonstrate that MNNAS surpasses state-of-the-art algorithms and achieves excellent OOD generalization.
Problem

Research questions and friction points this paper is trying to address.

Addresses performance degradation in GNNs due to distribution shifts.
Tailors unique GNN architectures for each node with limited data.
Improves out-of-distribution generalization through node-specific architecture customization.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Node-specific architecture customization for OOD generalization
Adaptive aggregation attention in multi-dimensional NodeNAS
Modeling power-law distribution for structure-invariant guidance
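The core idea behind the adaptive aggregation attention, as described in the abstract, is a differentiable, per-node choice among candidate aggregation operators. A minimal numpy sketch of that pattern is below; the operator set, the linear attention form, and all names here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical candidate aggregation operators over a node's neighbor features
def agg_mean(h): return h.mean(axis=0)
def agg_max(h):  return h.max(axis=0)
def agg_sum(h):  return h.sum(axis=0)

CANDIDATES = [agg_mean, agg_max, agg_sum]

def node_specific_aggregate(node_feat, neigh_feats, W_att):
    """Soft node-level operator selection: attention logits over candidate
    aggregators, mixed differentiably into one per-node aggregation."""
    scores = W_att @ node_feat                               # (num_ops,) logits
    alpha = softmax(scores)                                  # weights over operators
    outs = np.stack([op(neigh_feats) for op in CANDIDATES])  # (num_ops, d)
    return alpha @ outs                                      # convex mix = soft architecture

rng = np.random.default_rng(0)
d = 4
W_att = rng.normal(size=(len(CANDIDATES), d))  # learned in practice
h_v = rng.normal(size=d)                       # features of node v
h_neigh = rng.normal(size=(5, d))              # features of v's 5 neighbors
out = node_specific_aggregate(h_v, h_neigh, W_att)
```

Because the attention weights depend on each node's own features, different nodes can softly select different aggregators within one forward pass, which is what makes node-level architecture search end-to-end differentiable.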
Qiyi Wang
Tongji University
Video Understanding · GNN
Yinning Shao
Tongji University, Shanghai 201800, China
Yunlong Ma
Tongji University, Shanghai 201800, China
Min Liu
Tongji University, Shanghai 201800, China