Dual-level Mixup for Graph Few-shot Learning with Fewer Tasks

📅 2025-02-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the scarcity of meta-tasks in graph few-shot learning, where existing meta-learning approaches rely heavily on large, heterogeneous task collections and are therefore of limited practicality, this paper proposes SMILE, a simple yet effective framework. SMILE introduces a dual-level mixup mechanism, combining within-task node-level mixup with across-task meta-level mixup, to substantially enrich the diversity of meta-training data. It also explicitly incorporates node-degree priors into graph representation learning, improving both the discriminability and the robustness of node representations. Theoretical analysis establishes a tighter generalization error bound for SMILE, and extensive experiments under both in-domain and cross-domain few-shot settings show that it consistently outperforms state-of-the-art methods while significantly reducing dependence on large-scale meta-task sets. As a result, SMILE offers a scalable, highly generalizable solution for low-resource graph learning.

📝 Abstract
Graph neural networks have been demonstrated as a powerful paradigm for effectively learning graph-structured data on the web and mining content from it. Current leading graph models require a large number of labeled samples for training, which unavoidably leads to overfitting in few-shot scenarios. Recent research has sought to alleviate this issue by simultaneously leveraging graph learning and meta-learning paradigms. However, these graph meta-learning models assume the availability of numerous meta-training tasks to learn transferable meta-knowledge. Such an assumption may not be feasible in the real world due to the difficulty of constructing tasks and the substantial costs involved. Therefore, we propose a SiMple yet effectIve approach for graph few-shot Learning with fEwer tasks, named SMILE. We introduce a dual-level mixup strategy, encompassing both within-task and across-task mixup, to simultaneously enrich the available nodes and tasks in meta-learning. Moreover, we explicitly leverage the prior information provided by the node degrees in the graph to encode expressive node representations. Theoretically, we demonstrate that SMILE can enhance the model generalization ability. Empirically, SMILE consistently outperforms other competitive models by a large margin across all evaluated datasets with in-domain and cross-domain settings. Our anonymous code can be found here.
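The listing does not include the paper's implementation, but the general mixup operation that the dual-level strategy builds on (convex interpolation of pairs of examples and their labels, with the mixing weight drawn from a Beta distribution) can be sketched as follows. All names here are illustrative assumptions, not the authors' code; within-task mixup would apply this to node embeddings inside one meta-task, while across-task mixup would interpolate entire tasks.

```python
import random

def mixup(x_a, x_b, y_a, y_b, alpha=0.5):
    """Convexly combine two feature vectors and their one-hot labels.

    A single weight lam ~ Beta(alpha, alpha) is shared between the
    feature and label interpolation, as in standard mixup.
    """
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1.0 - lam) * b for a, b in zip(x_a, x_b)]
    y = [lam * a + (1.0 - lam) * b for a, b in zip(y_a, y_b)]
    return x, y, lam

# Illustrative call: mix two toy "node embeddings" with opposite labels.
x, y, lam = mixup([1.0, 1.0], [0.0, 0.0], [1.0, 0.0], [0.0, 1.0])
```

Because the same `lam` weights both features and labels, each mixed label stays a valid probability distribution, which is what lets mixup act as data augmentation when tasks or labeled nodes are scarce.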
Problem

Research questions and friction points this paper is trying to address.

Graph few-shot learning
Fewer meta-training tasks
Dual-level mixup strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-level mixup strategy
Utilizes node degree information
Enhances model generalization ability
👥 Authors
Yonghao Liu, Jilin University (Graph Neural Network, Natural Language Processing)
Mengyu Li, College of Computer Science and Technology, Jilin University, Changchun, China
Fausto Giunchiglia, Professor of Computer Science, Università di Trento (Computational theories of the mind)
Lan Huang, College of Computer Science and Technology, Jilin University, Changchun, China
Ximing Li, Jilin University, China; RIKEN AIP, Japan (Weakly-supervised learning, Misinformation analysis)
Xiaoyue Feng, College of Computer Science and Technology, Jilin University, Changchun, China
Renchu Guan, College of Computer Science and Technology, Jilin University, Changchun, China