MAGPrompt: Message-Adaptive Graph Prompt Tuning for Graph Neural Networks

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance limitations of pre-trained graph neural networks (GNNs) when transferred to downstream tasks, which often arise from a mismatch between pre-training objectives and task-specific requirements. Existing graph prompting methods struggle to effectively modulate neighbor interactions during message passing. To overcome this, the authors propose Message-Adaptive Graph Prompt Tuning (MAGPrompt), a novel approach that, for the first time, directly embeds learnable prompts into the GNN message-passing mechanism. With the backbone parameters frozen, MAGPrompt employs task-specific prompt vectors to reweight incoming neighbor messages, enabling fine-grained, task-adaptive information aggregation. The method is compatible with mainstream GNN architectures and pre-training strategies, significantly outperforms existing graph prompting techniques in few-shot settings, and matches the performance of full-parameter fine-tuning under full-data conditions.

📝 Abstract
Pre-trained graph neural networks (GNNs) transfer well, but adapting them to downstream tasks remains challenging due to mismatches between pre-training objectives and task requirements. Graph prompt tuning offers a parameter-efficient alternative to fine-tuning, yet most methods only modify inputs or representations and leave message passing unchanged, limiting their ability to adapt neighborhood interactions. We propose message-adaptive graph prompt tuning, which injects learnable prompts into the message passing step to reweight incoming neighbor messages and add task-specific prompt vectors during message aggregation, while keeping the backbone GNN frozen. The approach is compatible with common GNN backbones and pre-training strategies, and applicable across downstream settings. Experiments on diverse node- and graph-level datasets show consistent gains over prior graph prompting methods in few-shot settings, while achieving performance competitive with fine-tuning in full-shot regimes.
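The mechanism described in the abstract can be sketched in a few lines. The code below is a minimal, illustrative reconstruction, not the paper's implementation: it assumes a single mean-aggregation GNN layer with frozen weights `W`, a learnable gating prompt `p_gate` that reweights each neighbor message via a sigmoid score, and a learnable additive prompt `p_add` injected during aggregation. All names and the exact gating form are assumptions.

```python
import numpy as np

def message_adaptive_prompt_layer(H, A, W, p_gate, p_add):
    """One frozen GNN layer with hypothetical message-adaptive prompts.

    H: node features (n, d); A: adjacency matrix (n, n); W: frozen layer
    weights (d, d). p_gate and p_add (both shape (d,)) are the only
    trainable parameters; their form is illustrative, not from the paper.
    """
    M = H @ W  # frozen transform producing neighbor messages
    # Per-message scalar gate from the prompt: reweights incoming messages
    gate = 1.0 / (1.0 + np.exp(-(M * p_gate).sum(axis=-1, keepdims=True)))
    M_prompted = gate * M + p_add  # reweight, then add task-specific prompt
    agg = A @ M_prompted  # sum messages over neighbors
    deg = A.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero
    return np.maximum(agg / deg, 0.0)  # mean aggregation + ReLU
```

In a training loop, only `p_gate` and `p_add` would receive gradients while `W` stays frozen, which is what makes the adaptation parameter-efficient.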
Problem

Research questions and friction points this paper is trying to address.

graph neural networks
prompt tuning
message passing
downstream task adaptation
parameter-efficient learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

message-adaptive prompting
graph prompt tuning
parameter-efficient adaptation
message passing
pre-trained GNNs