🤖 AI Summary
Conventional few-shot segmentation (FSS) suffers from weak meta-knowledge generalization due to intra-class visual variability, heavily relying on limited and biased support image features. Method: This paper proposes a language-driven, bias-free semantic generalization paradigm that replaces visual support features with fine-grained, attribute-rich textual descriptions—generated by large language models—as semantic priors. We design a multi-attribute enhancement module and a cross-modal alignment mechanism to jointly optimize and deeply fuse textual semantics with visual features. Contribution/Results: The framework substantially alleviates dependence on scarce, visually biased support samples. It achieves state-of-the-art performance on standard benchmarks including PASCAL-5i and COCO-20i, and—critically—provides the first systematic empirical validation of the effectiveness and scalability of generic semantic priors in few-shot segmentation.
📝 Abstract
Few-shot segmentation (FSS) aims to segment novel classes under the guidance of limited support samples via a meta-learning paradigm. Existing methods mainly mine references from support images as meta guidance. However, due to intra-class variations among visual representations, the meta information extracted from support images cannot produce accurate guidance for segmenting untrained classes. In this paper, we argue that references from support images may not be essential; the key to the support role is to provide unbiased meta guidance for both trained and untrained classes. We then introduce a Language-Driven Attribute Generalization (LDAG) architecture that exploits language descriptions of inherent target properties to build a robust support strategy. Specifically, to obtain an unbiased support representation, we design a Multi-attribute Enhancement (MaE) module, which produces multiple detailed attribute descriptions of the target class through Large Language Models (LLMs), and then builds refined visual-text prior guidance via multi-modal matching. Meanwhile, because of the text-vision modality gap, attribute text alone struggles to enhance visual feature representation; we therefore design a Multi-modal Attribute Alignment (MaA) module to achieve cross-modal interaction between attribute texts and visual features. Experiments show that our proposed method outperforms existing approaches by a clear margin and achieves new state-of-the-art performance. The code will be released.
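The multi-modal matching step described above can be illustrated with a minimal sketch. This is not the authors' released code: it assumes attribute texts have already been encoded into embeddings (e.g. by a CLIP-style text encoder) and shows one plausible way to turn them into a per-pixel prior map by cosine-matching each attribute embedding against the visual feature map; the function name and normalization are hypothetical.

```python
import numpy as np

def attribute_prior(visual_feats, attr_embeds):
    """Hypothetical MaE-style prior: cosine-match K attribute text
    embeddings against per-pixel visual features.

    visual_feats: (H, W, D) image feature map
    attr_embeds:  (K, D) attribute text embeddings (assumed precomputed)
    returns:      (H, W) prior map scaled to [0, 1]
    """
    # L2-normalize both modalities so the dot product is cosine similarity
    v = visual_feats / (np.linalg.norm(visual_feats, axis=-1, keepdims=True) + 1e-8)
    t = attr_embeds / (np.linalg.norm(attr_embeds, axis=-1, keepdims=True) + 1e-8)
    sim = v @ t.T                 # (H, W, K): similarity to each attribute
    prior = sim.max(axis=-1)      # keep the best-matching attribute per pixel
    # min-max rescale to [0, 1] for use as a soft guidance mask
    return (prior - prior.min()) / (prior.max() - prior.min() + 1e-8)

# Toy usage with random stand-ins for encoder outputs
feats = np.random.rand(8, 8, 16)   # fake backbone features
embeds = np.random.rand(3, 16)     # fake embeddings of 3 attribute texts
mask = attribute_prior(feats, embeds)
```

Taking the maximum over attributes lets any one matching description (shape, color, context) activate a pixel, which is one reasonable way multiple fine-grained attributes could be aggregated into a single prior.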