SAM-REF: Introducing Image-Prompt Synergy during Interaction for Detail Enhancement in the Segment Anything Model

📅 2024-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Interactive segmentation models face a trade-off: early fusion incurs high latency, while late fusion degrades fine-grained detail perception within prompt regions. This paper proposes a two-stage lightweight fine-tuning framework that preserves SAM’s efficient image encoding capability while dynamically fusing image and prompt features to enhance target-region detail representation. The core innovation is a plug-and-play Refiner module, which performs prompt-guided feature refinement via cross-modal attention at the feature level and local adaptive reweighting. This enables precise, prompt-aware enhancement without architectural overhaul. The method achieves superior accuracy and efficiency: it significantly outperforms state-of-the-art methods on multiple benchmarks—particularly in mIoU and Boundary F-score—while increasing inference latency by less than 3%, thereby maintaining real-time interactive performance.
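The summary describes the Refiner as prompt-guided cross-modal attention followed by local adaptive reweighting, but gives no internals. A minimal NumPy sketch of that general pattern, assuming simple dot-product attention and a sigmoid gate; the function name `refine`, the gating rule, and all shapes are illustrative stand-ins, not the paper's actual design:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def refine(image_feats, prompt_feats, gate_scale=1.0):
    """Hypothetical prompt-guided refinement sketch.

    image_feats:  (N, d) flattened image tokens
    prompt_feats: (P, d) prompt embeddings
    """
    d = image_feats.shape[-1]
    # Cross-modal attention: image tokens query the prompt tokens.
    attn = softmax(image_feats @ prompt_feats.T / np.sqrt(d))   # (N, P)
    context = attn @ prompt_feats                               # (N, d)
    # Local adaptive reweighting: tokens similar to the prompt context
    # receive a stronger residual update (sigmoid gate in [0, 1]).
    sim = (image_feats * context).sum(axis=-1, keepdims=True)   # (N, 1)
    gate = 1.0 / (1.0 + np.exp(-gate_scale * sim))
    return image_feats + gate * context
```

Because the update is a gated residual, tokens far from the prompted region are changed little, which is one plausible way to enhance detail only where the user has interacted.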

📝 Abstract
Interactive segmentation aims to segment the mask of a target object according to the user's interactive prompts. There are two mainstream strategies: early fusion and late fusion. Current specialist models adopt early fusion, encoding the combination of image and prompts to target the prompted objects, yet the repeated heavy computation on the image results in high latency. Late fusion models extract image embeddings once and merge them with the prompts in later interactions. This strategy avoids redundant image feature extraction and improves efficiency significantly; a recent milestone is the Segment Anything Model (SAM). However, it limits the models' ability to extract detailed information from the prompted target zone. To address this issue, we propose SAM-REF, a two-stage refinement framework that fully integrates images and prompts by introducing a lightweight refiner into the late-fusion interaction, combining the accuracy of early fusion with the efficiency of late fusion. Through extensive experiments, we show that SAM-REF outperforms current state-of-the-art methods on most segmentation-quality metrics without compromising efficiency.
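The early/late fusion trade-off in the abstract can be sketched as two interaction loops. `encode_joint`, `encode_image`, `fuse`, and `decode` below are hypothetical stand-ins for the heavy image encoder, the light prompt fusion, and the mask decoder, not SAM-REF's actual components:

```python
def early_fusion_session(image, clicks, encode_joint, decode):
    """Early fusion: the heavy encoder re-processes image + prompts
    on every interaction, so latency grows with each click."""
    return [decode(encode_joint(image, clicks[:i + 1]))
            for i in range(len(clicks))]

def late_fusion_session(image, clicks, encode_image, fuse, decode):
    """Late fusion (SAM-style): the image is encoded once up front;
    only the light prompt fusion and decoding run per click."""
    feats = encode_image(image)  # heavy step, executed a single time
    return [decode(fuse(feats, clicks[:i + 1]))
            for i in range(len(clicks))]
```

With k clicks, early fusion invokes the heavy encoder k times while late fusion invokes it once, which is the efficiency gap the abstract describes; SAM-REF keeps the late-fusion loop and adds its lightweight refiner inside the per-click step.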
Problem

Research questions and friction points this paper is trying to address.

Enhancing detail in Segment Anything Model via image-prompt synergy
Reducing latency in interactive segmentation without losing accuracy
Improving late fusion models' ability to extract detailed target information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage refinement framework for detail enhancement
Lightweight refiner in late fusion interaction
Combines early fusion accuracy with late fusion efficiency
Authors
Chongkai Yu (MT Lab, Meitu Inc)
Ting Liu (MT Lab, Meitu Inc)
Anqi Li (Beijing Institute of Technology)
Xiaochao Qu (MT Lab, Meitu Inc)
Chengjing Wu
Luoqi Liu (Director of MT Lab, Meitu)
Xiaolin Hu