TAVP: Task-Adaptive Visual Prompt for Cross-domain Few-shot Segmentation

📅 2024-09-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited generalization of the Segment Anything Model (SAM) in cross-domain few-shot segmentation, this paper proposes a Task-Adaptive Visual Prompting (TAVP) framework. TAVP introduces, for the first time, a class- and domain-agnostic, self-generating visual prompting mechanism: it disentangles class and domain representations via prototype-driven feature decomposition and integrates a learnable prompting branch while preserving SAM's pretrained priors. Building on Multi-level Feature Fusion (MFF), a Class Domain Task-Adaptive Auto-Prompt (CDTAP) module jointly optimizes prompt generation and matching. Evaluated on four cross-domain benchmarks, TAVP achieves substantial improvements over state-of-the-art methods (+1.3% average mIoU in the 1-shot setting and +11.76% in the 5-shot setting), demonstrating superior cross-domain transferability and robustness.
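To make the prototype-driven decomposition concrete, here is a minimal PyTorch sketch of masked-average-pooling prototypes. It treats the foreground prototype as the class signal and the background prototype as a rough domain signal; the paper's exact decomposition is not given here, so this split is an illustrative assumption, not the authors' method.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(features, mask):
    """Average features inside a support mask to obtain one prototype.

    features: (B, C, H, W) support feature map
    mask:     (B, 1, h, w) binary foreground mask (resized to H x W below)
    """
    mask = F.interpolate(mask, size=features.shape[-2:],
                         mode="bilinear", align_corners=False)
    # Weighted spatial average -> one C-dim prototype per support image.
    return (features * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

def disentangle(features, fg_mask):
    """Toy class/domain split: the foreground prototype carries class
    information, the background prototype approximates domain style."""
    class_proto = masked_average_pooling(features, fg_mask)
    domain_proto = masked_average_pooling(features, 1.0 - fg_mask)
    return class_proto, domain_proto

# Usage: match query pixels against the class prototype via cosine similarity.
feats = torch.randn(1, 256, 32, 32)              # stand-in for encoder features
mask = (torch.rand(1, 1, 128, 128) > 0.5).float()
cls_p, dom_p = disentangle(feats, mask)
sim = F.cosine_similarity(feats, cls_p[..., None, None], dim=1)  # (1, 32, 32)
```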

📝 Abstract
Large visual models (LVMs) have demonstrated significant potential in image understanding, and thanks to large-scale pre-training, the Segment Anything Model (SAM) has achieved great success in image segmentation, supporting flexible interactive prompts and strong learning capabilities. However, SAM's performance often falls short in cross-domain and few-shot applications, and previous work has struggled to transfer the prior knowledge of foundation models to new applications. To tackle this issue, we propose a task-adaptive auto-visual prompt framework, a new paradigm for Cross-domain Few-shot Segmentation (CD-FSS). First, a Multi-level Feature Fusion (MFF) module integrates extracted features as prior knowledge. In addition, a Class Domain Task-Adaptive Auto-Prompt (CDTAP) module enables class- and domain-agnostic feature extraction and generates high-quality, learnable visual prompts. This advancement combines a generative approach to prompts with a comprehensive model structure and specialized prototype computation. While ensuring that SAM's prior knowledge is not discarded, the new branch disentangles category and domain information through prototypes, guiding the model's adaptation to CD-FSS. Comprehensive experiments on four cross-domain datasets demonstrate that our model outperforms the state-of-the-art CD-FSS approach, achieving an average accuracy improvement of 1.3% in the 1-shot setting and 11.76% in the 5-shot setting.
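As an illustration of the multi-level feature fusion idea, the following sketch projects features from several encoder stages to a shared width and fuses them at one resolution. The channel sizes, number of stages, and fusion operator (concatenation plus 1x1 conv) are assumptions for the example, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelFeatureFusion(nn.Module):
    """Minimal MFF sketch: project each stage's features to a shared width,
    upsample to a common resolution, then fuse with a 1x1 conv."""

    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        self.proj = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        self.fuse = nn.Conv2d(out_channels * len(in_channels), out_channels, 1)

    def forward(self, feats):
        target = feats[0].shape[-2:]  # fuse at the highest resolution
        aligned = [
            F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, feats)
        ]
        return self.fuse(torch.cat(aligned, dim=1))

# Usage with dummy pyramid features from three encoder stages.
mff = MultiLevelFeatureFusion()
pyramid = [torch.randn(1, 256, 64, 64),
           torch.randn(1, 512, 32, 32),
           torch.randn(1, 1024, 16, 16)]
fused = mff(pyramid)  # (1, 256, 64, 64)
```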
Problem

Research questions and friction points this paper is trying to address.

Image Recognition
Few-shot Learning
Transfer Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-Adaptive Visual Prompting
MFF Information Integration
CDTAP-Enhanced SAM Model (sketched below)
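To illustrate what a CDTAP-style auto-prompt branch might look like, the sketch below conditions a few learnable tokens on a class prototype and global query context, producing embeddings shaped like SAM's sparse prompt tokens. The module name `AutoPromptBranch`, the token count, and the widths are hypothetical; the paper's actual branch design may differ.

```python
import torch
import torch.nn as nn

class AutoPromptBranch(nn.Module):
    """Illustrative stand-in for a learnable prompt branch: turns a class
    prototype and pooled query features into a few prompt tokens."""

    def __init__(self, feat_dim=256, num_tokens=4):
        super().__init__()
        # Learnable base tokens, later conditioned on the episode.
        self.tokens = nn.Parameter(torch.randn(num_tokens, feat_dim) * 0.02)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim * 2, feat_dim),
            nn.GELU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, class_proto, query_feats):
        # class_proto: (B, C); query_feats: (B, C, H, W)
        ctx = query_feats.mean(dim=(2, 3))                  # global query context
        cond = self.mlp(torch.cat([class_proto, ctx], -1))  # (B, C) conditioning
        # Shift the shared tokens by the class/query conditioning vector.
        return self.tokens.unsqueeze(0) + cond.unsqueeze(1)  # (B, T, C)

# The resulting (B, T, 256) tokens could stand in for sparse prompt
# embeddings fed to a frozen SAM mask decoder, keeping its priors intact.
branch = AutoPromptBranch()
prompts = branch(torch.randn(2, 256), torch.randn(2, 256, 32, 32))
```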