EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model

📅 2024-06-28
🏛️ arXiv.org
📈 Citations: 10
Influential: 2
🤖 AI Summary
To address SAM’s limited ability to model textual prompts for referring expression segmentation, this paper proposes EVF-SAM, an early vision-language fusion architecture. It jointly encodes images and text prompts with a pre-trained multimodal encoder (BEIT-3) to produce semantically enriched referring prompts, which are then injected into SAM’s decoder in a decoupled manner. The paper empirically demonstrates that early cross-modal fusion substantially outperforms late fusion and aligns better with SAM’s prompt-driven paradigm. Evaluated on RefCOCO, RefCOCO+, and RefCOCOg, EVF-SAM achieves state-of-the-art performance with only 1.32B parameters, roughly 82% fewer than prior SAM methods built on large multimodal models, striking a strong balance among accuracy, efficiency, and generalization. Key innovations: (i) BEIT-3-guided early fusion, (ii) a SAM-compatible prompt injection design, and (iii) a text-guided interactive segmentation framework.
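
The summary above describes a simple data flow: joint image-text encoding, projection into SAM's prompt space, and injection into SAM's decoder. The following is a minimal sketch of that flow under stated assumptions: the module names (`beit3`, `sam`, `proj`), the pooled-token readout, and the decoder call are illustrative placeholders, not the authors' actual implementation or API.

```python
import torch
import torch.nn as nn

class EVFSAMSketch(nn.Module):
    """Illustrative forward pass only; names, dims, and interfaces are assumptions."""

    def __init__(self, beit3, sam, beit3_dim=1024, prompt_dim=256):
        super().__init__()
        self.beit3 = beit3  # pre-trained early-fusion VLM (e.g., BEIT-3)
        self.sam = sam      # SAM with its usual image encoder and mask decoder
        # projects the fused multimodal token into SAM's prompt embedding space
        self.proj = nn.Linear(beit3_dim, prompt_dim)

    def forward(self, image, text_tokens):
        # Early fusion: image patches and text tokens attend to each other
        # inside the multimodal encoder itself, from the first layer on.
        fused = self.beit3(image, text_tokens)      # assumed shape (B, L, beit3_dim)
        referring_prompt = self.proj(fused[:, 0])   # pooled token -> (B, prompt_dim)

        # Inject the referring prompt where SAM would normally receive a
        # point/box prompt embedding; SAM's image encoder is left unchanged.
        image_embed = self.sam.image_encoder(image)
        return self.sam.mask_decoder(image_embed, referring_prompt.unsqueeze(1))
```

The key design choice the sketch highlights is that the text never meets the image for the first time inside SAM: the referring prompt handed to the decoder is already grounded in this particular image by the fusion encoder.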

📝 Abstract
Segment Anything Model (SAM) has attracted widespread attention for its strong interactive segmentation capabilities with visual prompts, while its use of text prompts remains underexplored. In this paper, we empirically investigate which text prompt encoders (e.g., CLIP or LLMs) are suitable for adapting SAM to referring expression segmentation and introduce the Early Vision-language Fusion-based SAM (EVF-SAM). EVF-SAM is a simple yet effective referring segmentation method that exploits multimodal prompts (i.e., image and text) and comprises a pre-trained vision-language model to generate referring prompts and a SAM model for segmentation. Surprisingly, we observe that (1) multimodal prompts and (2) vision-language models with early fusion (e.g., BEIT-3) are beneficial for prompting SAM toward accurate referring segmentation. Our experiments show that the proposed EVF-SAM based on BEIT-3 obtains state-of-the-art performance on RefCOCO/+/g for referring expression segmentation and demonstrates the superiority of prompting SAM with early vision-language fusion. In addition, the proposed EVF-SAM with 1.32B parameters achieves remarkably higher performance while using nearly 82% fewer parameters than previous SAM methods based on large multimodal models.
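
As a quick back-of-the-envelope check on the numbers in the abstract (an inference from the stated figures, not a number reported by the paper): 1.32B parameters at a "nearly 82%" reduction implies a baseline of roughly 7.3B, consistent with the ~7B-parameter multimodal LLMs such pipelines typically build on.

```python
# Sanity check of the parameter claim: EVF-SAM has 1.32B parameters and
# "reduces nearly 82%" relative to prior large-multimodal-model SAM variants.
evf_params = 1.32e9
reduction = 0.82
baseline = evf_params / (1 - reduction)  # implied baseline model size
print(f"implied baseline ≈ {baseline / 1e9:.1f}B parameters")  # ≈ 7.3B
```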
Problem

Research questions and friction points this paper is trying to address.

Can SAM, which natively accepts only visual prompts (points, boxes, masks), be adapted to text-prompted referring expression segmentation?
Which text prompt encoders (e.g., CLIP text encoders or LLMs) are best suited to prompting SAM?
Can referring segmentation reach state-of-the-art accuracy without the parameter cost of large multimodal models?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Early vision-language fusion via BEIT-3 to generate SAM-compatible referring prompts (contrasted with late fusion in the sketch below)
Multimodal (image + text) prompts that improve referring segmentation accuracy
State-of-the-art results at 1.32B parameters, nearly 82% fewer than large-multimodal-model baselines
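
To make the first bullet concrete, here is a schematic contrast between late and early fusion. The encoder arguments are placeholder callables used purely for illustration; the point is where the cross-modal interaction happens, not any specific API.

```python
def late_fusion(image, text, image_enc, text_enc, fuse):
    # Late fusion: each modality is encoded in isolation (e.g., SAM's image
    # encoder plus a CLIP text encoder), so cross-modal interaction only
    # happens in the final fuse step.
    v = image_enc(image)  # vision features, computed without seeing the text
    t = text_enc(text)    # text features, computed without seeing the image
    return fuse(v, t)

def early_fusion(image, text, multimodal_enc):
    # Early fusion: one encoder (e.g., BEIT-3) attends jointly over image
    # patches and text tokens, so the resulting referring prompt is already
    # grounded in this particular image.
    return multimodal_enc(image, text)
```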
👥 Authors
Yuxuan Zhang
School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
Tianheng Cheng
ByteDance Seed
Rui Hu
School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
Lei Liu
vivo AI Lab
Heng Liu
Guangxi Minzu University
Longjin Ran
vivo AI Lab
Xiaoxin Chen
Coriell Institute for Medical Research
Wenyu Liu
School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
Xinggang Wang
Professor, Huazhong University of Science and Technology