🤖 AI Summary
Large Vision-Language Models (VLMs) perform substantially worse at semantic segmentation than task-specific models, particularly under out-of-distribution (OOD) conditions. To address this, the paper introduces a few-shot prompted segmentation paradigm and systematically evaluates the efficacy of textual and visual prompts on the cross-distribution MESS benchmark. We observe for the first time that textual and visual prompts are strongly complementary: an oracle that could anticipate the more effective prompt modality for each example would improve IoU by up to 11%. Leveraging this insight, we propose PromptMatcher, a training-free method that fuses dual-modal prompts without architectural modification or parameter optimization. PromptMatcher overcomes the inherent limitations of unimodal prompting, improving mean IoU by 2.5% over the best text-prompted VLM and by 3.5% over the best vision-prompted VLM in the few-shot setting. Our work establishes a new paradigm for open-vocabulary, training-free, multimodal prompt-based segmentation.
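The core idea of training-free dual-modal fusion can be sketched as choosing, per example, the prediction from whichever prompt modality looks more reliable. The sketch below is hypothetical: the paper's actual PromptMatcher selection rule is not spelled out here, so a simple per-pixel confidence proxy (mean max class probability) stands in for it, and all function names are invented for illustration.

```python
import numpy as np

def modality_confidence(prob_map: np.ndarray) -> float:
    # Mean of the per-pixel max class probability (shape H x W x C):
    # a simple proxy for how certain a segmentation prediction is.
    return float(prob_map.max(axis=-1).mean())

def fuse_prompt_masks(text_probs: np.ndarray, visual_probs: np.ndarray) -> np.ndarray:
    # Training-free fusion sketch: keep the segmentation from whichever
    # prompt modality is more confident on this particular example.
    if modality_confidence(text_probs) >= modality_confidence(visual_probs):
        return text_probs.argmax(axis=-1)
    return visual_probs.argmax(axis=-1)
```

Because the choice is made independently for each image, such a scheme can recover examples that one modality fails on but the other solves, which is exactly the complementarity the paper reports.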
📝 Abstract
Large Vision-Language Models (VLMs) are increasingly regarded as foundation models that can be instructed to solve diverse tasks by prompting, without task-specific training. We examine the seemingly obvious question of how to effectively prompt VLMs for semantic segmentation. To that end, we systematically evaluate the segmentation performance of several recent models guided by either text or visual prompts on the out-of-distribution MESS dataset collection. We introduce a scalable prompting scheme, few-shot prompted semantic segmentation, inspired by open-vocabulary segmentation and few-shot learning. It turns out that VLMs lag far behind specialist models trained for a specific segmentation task, by about 30% on average on the Intersection-over-Union metric. Moreover, we find that text prompts and visual prompts are complementary: each of the two modes fails on many examples that the other can solve. Our analysis suggests that being able to anticipate the most effective prompt modality can lead to an 11% improvement in performance. Motivated by our findings, we propose PromptMatcher, a remarkably simple training-free baseline that combines both text and visual prompts, achieving state-of-the-art results on few-shot prompted semantic segmentation and outperforming the best text-prompted VLM by 2.5% and the top visual-prompted VLM by 3.5%.