Adapting Unsigned Graph Neural Networks for Signed Graphs: A Few-Shot Prompt Tuning Approach

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of label scarcity in signed graph learning and the difficulty of transferring pre-trained knowledge from unsigned graphs to downstream signed graph tasks, this paper proposes the Signed Graph Prompt Tuning (SGPT) framework. SGPT decouples graph templates from semantic prompts to explicitly model the structural and semantic distinctions between positive and negative edges, and introduces task-specific templates and feature prompts to align unsigned graph pre-training with signed graph downstream adaptation, both in structural representation and in task objective. To our knowledge, this is the first work to introduce the prompt tuning paradigm into signed graph learning. Evaluated on multiple benchmark signed graph datasets, SGPT achieves state-of-the-art performance with only a few labeled examples, demonstrating substantial improvements in few-shot generalization and transfer robustness.

📝 Abstract
Signed Graph Neural Networks (SGNNs) are powerful tools for signed graph representation learning but struggle with limited generalization and heavy dependence on labeled data. While recent advancements in "graph pre-training and prompt tuning" have reduced label dependence in Graph Neural Networks (GNNs) and improved their generalization abilities by leveraging pre-training knowledge, these efforts have focused exclusively on unsigned graphs. The scarcity of publicly available signed graph datasets makes it essential to transfer knowledge from unsigned graphs to signed graph tasks. However, this transfer introduces significant challenges due to the graph-level and task-level divergences between the pre-training and downstream phases. To address these challenges, we propose Signed Graph Prompt Tuning (SGPT) in this paper. Specifically, SGPT employs a graph template and a semantic prompt to segregate mixed link semantics in the signed graph and then adaptively integrate the distinctive semantic information according to the needs of downstream tasks, thereby unifying the pre-training and downstream graphs. Additionally, SGPT utilizes a task template and a feature prompt to reformulate the downstream signed graph tasks, aligning them with pre-training tasks to ensure a unified optimization objective and consistent feature space across tasks. Finally, extensive experiments are conducted on popular signed graph datasets, demonstrating the superiority of SGPT over state-of-the-art methods.
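The abstract's "graph template" idea, separating mixed link semantics so a pre-trained unsigned GNN can consume each edge type, can be sketched as splitting a signed adjacency matrix into a positive and a negative subgraph. This is an illustrative reconstruction, not the paper's code; the function name and representation are assumptions.

```python
import numpy as np

def split_signed_adjacency(A_signed):
    """Hypothetical graph-template step: split a signed adjacency matrix
    into two non-negative adjacency matrices, one per edge semantic."""
    A_pos = np.where(A_signed > 0, A_signed, 0.0)   # keep positive edges
    A_neg = np.where(A_signed < 0, -A_signed, 0.0)  # keep |negative| edges
    return A_pos, A_neg

# Toy signed graph on 3 nodes: one friendly (+1) and one hostile (-1) edge.
A = np.array([[ 0.0,  1.0, -1.0],
              [ 1.0,  0.0,  0.0],
              [-1.0,  0.0,  0.0]])
A_pos, A_neg = split_signed_adjacency(A)
```

Each resulting subgraph is unsigned, so it matches the input format the pre-trained unsigned GNN was trained on.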
Problem

Research questions and friction points this paper is trying to address.

Transferring knowledge from unsigned to signed graphs with structural discrepancies
Reducing supervision requirements for signed graph tasks with few-shot learning
Aligning pre-training and downstream objectives for signed graph representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph template disentangles mixed node relationships
Task template unifies link prediction objectives
Feature and semantic prompts align pre-training and downstream spaces
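The prompt components listed above can be sketched as follows: a feature prompt is a small learnable vector added to node features before a frozen pre-trained encoder, and a semantic prompt is a set of learnable weights that adaptively mixes the positive- and negative-subgraph embeddings. All names, shapes, and the stand-in encoder are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 8

X = rng.standard_normal((n_nodes, dim))  # raw node features

# Feature prompt: a learnable shift (zero-initialized here) that nudges
# downstream features toward the pre-training feature space.
feature_prompt = np.zeros(dim)
X_prompted = X + feature_prompt

def frozen_encoder(H, A):
    """Stand-in for one frozen pre-trained unsigned GNN layer:
    simple mean neighborhood aggregation."""
    deg = A.sum(axis=1, keepdims=True) + 1e-9  # avoid division by zero
    return (A @ H) / deg

# Hypothetical positive/negative subgraphs from the graph-template step.
A_pos = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]], float)
A_neg = np.array([[0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 1], [1, 0, 1, 0]], float)

H_pos = frozen_encoder(X_prompted, A_pos)
H_neg = frozen_encoder(X_prompted, A_neg)

# Semantic prompt: learnable logits, softmaxed into mixing weights so the
# downstream task can weight positive vs. negative semantics adaptively.
semantic_logits = np.zeros(2)
w = np.exp(semantic_logits) / np.exp(semantic_logits).sum()
H = w[0] * H_pos + w[1] * H_neg  # final node embeddings
```

In a real pipeline, only `feature_prompt` and `semantic_logits` would be updated during few-shot tuning while the encoder stays frozen, which is what keeps the number of trainable parameters small.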
Zian Zhai, University of New South Wales
Sima Qing, University of New South Wales
Xiaoyang Wang, University of New South Wales
Wenjie Zhang, University of New South Wales