TSAL: Few-shot Text Segmentation Based on Attribute Learning

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of high-quality annotations and the high cost of pixel-level labeling in scene text segmentation, this paper proposes TSAL, a few-shot learning framework. TSAL leverages CLIP's vision-language priors via a dual-branch architecture: a visual-guided branch extracts spatial text and background features, while an adaptive prompt-guided branch models textual semantics. The Adaptive Feature Alignment (AFA) module aligns learnable attribute tokens with both visual features and prompt prototypes, enabling the adaptive prompts to capture general as well as distinctive attribute information. Evaluated under few-shot settings across multiple benchmarks, TSAL achieves state-of-the-art performance using only a small number of support samples, improving segmentation accuracy and cross-instance generalization for text regions.

📝 Abstract
Supervised learning has recently advanced rapidly in scene text segmentation. However, the lack of high-quality datasets and the high cost of pixel-level annotation greatly limit its development. Given the strong performance of few-shot learning methods on downstream tasks, we investigate applying few-shot learning to scene text segmentation. We propose TSAL, which leverages CLIP's prior knowledge to learn text attributes for segmentation. To fully utilize the semantic and texture information in the image, a visual-guided branch is proposed to separately extract text and background features. To reduce data dependency and improve text detection accuracy, an adaptive prompt-guided branch employs effective adaptive prompt templates to capture various text attributes. To enable the adaptive prompts to capture distinctive text features and complex background distributions, we propose the Adaptive Feature Alignment (AFA) module. By aligning learnable tokens of different attributes with visual features and prompt prototypes, AFA enables the adaptive prompts to capture both general and distinctive attribute information. TSAL can capture the unique attributes of text and achieve precise segmentation using only a few images. Experiments demonstrate that our method achieves SOTA performance on multiple text segmentation datasets under few-shot settings and shows great potential in text-related domains.
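The abstract's alignment idea (learnable attribute tokens matched against both visual features and prompt prototypes) can be sketched minimally. All names, shapes, and the cosine-similarity formulation below are assumptions for illustration; the paper's actual AFA module operates on CLIP features with learned prompt embeddings trained end to end.

```python
import numpy as np

def l2norm(x, axis=-1):
    # Normalize vectors so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def afa_align(attr_tokens, visual_feats, prompt_protos):
    """Hypothetical AFA-style alignment (shapes are assumptions).

    attr_tokens:   (K, D) learnable attribute tokens
    visual_feats:  (N, D) per-location visual features (N = H*W)
    prompt_protos: (K, D) text prototypes from adaptive prompts
    Returns an (N,) per-location text score.
    """
    t = l2norm(attr_tokens)
    p = l2norm(prompt_protos)
    v = l2norm(visual_feats)
    # Pull each attribute token toward its prompt prototype; shown here
    # as a simple convex blend rather than the paper's training objective.
    aligned = l2norm(0.5 * t + 0.5 * p)
    # Score every spatial location against the aligned attribute tokens
    # and keep the best-matching attribute per location.
    sim = v @ aligned.T          # (N, K) cosine similarities
    return sim.max(axis=1)       # (N,)

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))    # 16 locations, 8-dim features
tokens = rng.normal(size=(4, 8))    # 4 attribute tokens
protos = rng.normal(size=(4, 8))    # 4 prompt prototypes
scores = afa_align(tokens, feats, protos)
assert scores.shape == (16,)
```

Thresholding `scores` would give a crude text/background mask; the point is only to show how a single alignment step couples attribute tokens to both modalities.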
Problem

Research questions and friction points this paper is trying to address.

Addresses lack of high-quality datasets for text segmentation
Reduces dependency on pixel annotation costs
Improves few-shot text segmentation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages CLIP's prior knowledge for text segmentation
Uses visual-guided branch for text and background features
Employs adaptive prompt templates to capture text attributes
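The bullet on adaptive prompt templates can be illustrated with a toy example. The template wording and attribute list below are invented for illustration; the paper's prompts use learnable token embeddings, not fixed strings.

```python
# Hypothetical CLIP-style prompt templates filled with text attributes.
# Both lists are assumptions, not the paper's actual prompts.
templates = ["a photo of {} text", "an image containing {} text"]
attributes = ["curved", "blurred", "stylized", "occluded"]

# Cross every template with every attribute to get candidate prompts.
prompts = [t.format(a) for t in templates for a in attributes]
assert len(prompts) == len(templates) * len(attributes)
```

In a CLIP-based pipeline such strings would be encoded by the text encoder into prototypes; TSAL replaces the fixed wording with learnable tokens so the prompts adapt to the few support images.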