🤖 AI Summary
To address the dual challenges of label scarcity in signed graph learning and the difficulty of transferring pre-trained knowledge from unsigned graphs to downstream signed graph tasks, this paper proposes the Signed Graph Prompt Tuning (SGPT) framework. SGPT decouples graph templates from semantic prompts to explicitly model the structural and semantic distinctions between positive and negative edges, and introduces task-specific templates and feature prompts to align unsigned graph pre-training with signed graph downstream adaptation in both structural representation and task objective. To our knowledge, this is the first work to introduce the prompt tuning paradigm into signed graph learning. Evaluated on multiple benchmark signed graph datasets, SGPT achieves state-of-the-art performance with only a few labeled examples, demonstrating substantial improvements in few-shot generalization and transfer robustness.
📝 Abstract
Signed Graph Neural Networks (SGNNs) are powerful tools for signed graph representation learning but struggle with limited generalization and heavy dependence on labeled data. While recent advancements in "graph pre-training and prompt tuning" have reduced label dependence in Graph Neural Networks (GNNs) and improved their generalization abilities by leveraging pre-training knowledge, these efforts have focused exclusively on unsigned graphs. The scarcity of publicly available signed graph datasets makes it essential to transfer knowledge from unsigned graphs to signed graph tasks. However, this transfer introduces significant challenges due to the graph-level and task-level divergences between the pre-training and downstream phases. To address these challenges, we propose Signed Graph Prompt Tuning (SGPT) in this paper. Specifically, SGPT employs a graph template and a semantic prompt to segregate mixed link semantics in the signed graph and then adaptively integrate the distinctive semantic information according to the needs of downstream tasks, thereby unifying the pre-training and downstream graphs. Additionally, SGPT utilizes a task template and a feature prompt to reformulate the downstream signed graph tasks, aligning them with pre-training tasks to ensure a unified optimization objective and consistent feature space across tasks. Finally, extensive experiments are conducted on popular signed graph datasets, demonstrating the superiority of SGPT over state-of-the-art methods.
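To make the abstract's two mechanisms concrete, here is a minimal, hedged PyTorch sketch of the general idea: a "graph template" that splits the signed graph into positive and negative subgraphs so a frozen unsigned-graph encoder can process each, a learnable "semantic prompt" that adaptively mixes the two views, and a learnable "feature prompt" added to node features to align the downstream feature space with pre-training. All class and parameter names (`FrozenGNN`, `SGPTSketch`, `feature_prompt`, `semantic_prompt`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class FrozenGNN(nn.Module):
    """Stand-in for a pre-trained, frozen unsigned GNN encoder
    (assumption: a single mean-aggregation layer for illustration)."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        for p in self.parameters():  # pre-trained weights stay fixed
            p.requires_grad = False

    def forward(self, x, adj):
        # Mean-aggregate neighbor features, then apply the frozen linear map.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin(adj @ x / deg))

class SGPTSketch(nn.Module):
    """Hypothetical sketch of SGPT-style prompting: only the two small
    prompt parameters are tuned; the encoder is reused as-is."""
    def __init__(self, dim):
        super().__init__()
        self.encoder = FrozenGNN(dim)
        self.feature_prompt = nn.Parameter(torch.zeros(1, dim))
        self.semantic_prompt = nn.Parameter(torch.tensor([0.5, 0.5]))

    def forward(self, x, signed_adj):
        x = x + self.feature_prompt          # feature prompt: shift inputs
        pos_adj = (signed_adj > 0).float()   # graph template: split by sign
        neg_adj = (signed_adj < 0).float()
        h_pos = self.encoder(x, pos_adj)     # encode each unsigned view
        h_neg = self.encoder(x, neg_adj)
        w = torch.softmax(self.semantic_prompt, dim=0)
        return w[0] * h_pos + w[1] * h_neg   # semantic prompt: adaptive mix

# Tiny usage example: a 4-node signed graph with +1/-1 edges.
torch.manual_seed(0)
x = torch.randn(4, 8)
signed_adj = torch.tensor([[0, 1, -1, 0],
                           [1, 0, 0, -1],
                           [-1, 0, 0, 1],
                           [0, -1, 1, 0]], dtype=torch.float)
model = SGPTSketch(dim=8)
z = model(x, signed_adj)  # node embeddings combining both signed views
```

The design point this sketch illustrates is the division of labor the abstract describes: the frozen encoder carries the pre-training knowledge unchanged, while the few trainable prompt parameters absorb both the graph-level gap (mixed positive/negative semantics) and the feature-space gap.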