ViTA-PAR: Visual and Textual Attribute Alignment with Attribute Prompting for Pedestrian Attribute Recognition

📅 2025-06-02
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
In pedestrian attribute recognition (PAR), fine-grained local attributes—such as accessories—are challenging to localize accurately due to their highly variable spatial positions, limiting the performance of existing methods relying on fixed-region partitioning. To address this, we propose a cross-modal prompt alignment framework that jointly learns visual attribute prompts and person-attribute contextual text templates, enabling adaptive local feature extraction under global semantic guidance. By integrating multimodal prompt learning with vision-language feature alignment, our approach eliminates reliance on predefined regions and supports cross-regional, multi-scale modeling. Evaluated on four mainstream PAR benchmarks, our method achieves state-of-the-art performance while maintaining efficient inference. The code and pretrained models are publicly available.

📝 Abstract
The Pedestrian Attribute Recognition (PAR) task aims to identify various detailed attributes of an individual, such as clothing, accessories, and gender. To enhance PAR performance, a model must capture features ranging from coarse-grained global attributes (e.g., for identifying gender) to fine-grained local details (e.g., for recognizing accessories) that may appear in diverse regions. Recent research suggests that body part representation can enhance the model's robustness and accuracy, but these methods are often restricted to attribute classes within fixed horizontal regions, leading to degraded performance when attributes appear in varying or unexpected body locations. In this paper, we propose Visual and Textual Attribute Alignment with Attribute Prompting for Pedestrian Attribute Recognition, dubbed ViTA-PAR, to enhance attribute recognition through specialized multimodal prompting and vision-language alignment. We introduce visual attribute prompts that capture global-to-local semantics, enabling diverse attribute representations. To enrich textual embeddings, we design a learnable prompt template, termed person and attribute context prompting, that learns both person-level and attribute-level context. Finally, we align visual and textual attribute features for effective fusion. ViTA-PAR is validated on four PAR benchmarks, achieving competitive performance with efficient inference. We release our code and model at https://github.com/mlnjeongpark/ViTA-PAR.
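To make the visual-prompting idea concrete, here is a minimal PyTorch sketch of one plausible reading of the vision side: learnable per-attribute prompt tokens are appended to an image encoder's patch tokens, refined jointly with them, and each refined prompt token is scored against its attribute's text embedding by cosine similarity. The class name `VisualAttributePrompting`, the 26-attribute/512-dim sizes, and the small transformer standing in for a frozen pretrained backbone are all illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualAttributePrompting(nn.Module):
    """Sketch (assumed design): one learnable visual prompt token per
    attribute, refined together with the image patch tokens, then aligned
    with per-attribute text embeddings via cosine similarity."""

    def __init__(self, num_attrs=26, dim=512):
        super().__init__()
        # One learnable prompt token per attribute (global-to-local semantics).
        self.attr_prompts = nn.Parameter(torch.randn(num_attrs, dim) * 0.02)
        # Stand-in for a pretrained vision backbone (e.g., a CLIP ViT).
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.num_attrs = num_attrs

    def forward(self, patch_tokens, text_embeds):
        # patch_tokens: (B, num_patches, dim) image patch features
        # text_embeds:  (num_attrs, dim) per-attribute text features
        B = patch_tokens.size(0)
        prompts = self.attr_prompts.unsqueeze(0).expand(B, -1, -1)
        tokens = self.encoder(torch.cat([patch_tokens, prompts], dim=1))
        # Attribute-specific visual features = the refined prompt tokens.
        vis_attr = F.normalize(tokens[:, -self.num_attrs:, :], dim=-1)
        txt = F.normalize(text_embeds, dim=-1)
        # Per-attribute logit = cosine similarity between the matching
        # visual prompt token and text embedding.
        return (vis_attr * txt.unsqueeze(0)).sum(-1)  # (B, num_attrs)


# Toy usage: 2 images, 196 patch tokens each, 26 attributes.
model = VisualAttributePrompting()
logits = model(torch.randn(2, 196, 512), torch.randn(26, 512))
print(logits.shape)  # torch.Size([2, 26])
```

Because each attribute owns its own prompt token rather than a fixed horizontal stripe, the attended region can fall anywhere in the image, which is the property the abstract highlights.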
Problem

Research questions and friction points this paper is trying to address.

Enhancing pedestrian attribute recognition accuracy
Handling spatial variability in attribute localization
Aligning visual and textual attribute features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual attribute prompts capture global-to-local semantics
Learnable prompt template enriches textual embeddings (see the prompt-template sketch after this list)
Aligns visual and textual features for effective fusion
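As a companion to the visual side, below is a hedged sketch of what a learnable "person and attribute context" text template could look like, in the spirit of CoOp-style prompt tuning: shared learnable person-context vectors and per-attribute learnable context vectors surround each attribute's word embedding. The token counts and the mean-pooling stand-in for a frozen text encoder are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class PersonAttributeContextPrompt(nn.Module):
    """Sketch (assumed design): prompt per attribute is
    [shared person context][per-attribute context][attribute word]."""

    def __init__(self, num_attrs=26, dim=512, n_person_ctx=4, n_attr_ctx=4):
        super().__init__()
        # Shared context describing the person (plays the role of a phrase
        # like "a photo of a person", but learned end to end).
        self.person_ctx = nn.Parameter(torch.randn(n_person_ctx, dim) * 0.02)
        # Per-attribute context, letting each attribute adapt its phrasing.
        self.attr_ctx = nn.Parameter(torch.randn(num_attrs, n_attr_ctx, dim) * 0.02)

    def forward(self, attr_word_embeds):
        # attr_word_embeds: (num_attrs, dim) embeddings of attribute names,
        # e.g., looked up from a frozen CLIP token embedding table.
        A = attr_word_embeds.size(0)
        person = self.person_ctx.unsqueeze(0).expand(A, -1, -1)  # (A, Pc, dim)
        word = attr_word_embeds.unsqueeze(1)                     # (A, 1, dim)
        seq = torch.cat([person, self.attr_ctx, word], dim=1)
        # In the full model this sequence would pass through the (frozen)
        # text encoder; mean-pooling is a stand-in to keep the sketch short.
        return seq.mean(dim=1)                                   # (A, dim)


# Toy usage: 26 attribute-name embeddings in, 26 prompt embeddings out.
prompts = PersonAttributeContextPrompt()
print(prompts(torch.randn(26, 512)).shape)  # torch.Size([26, 512])
```

The resulting per-attribute text embeddings are exactly the `text_embeds` consumed by the visual sketch above, which is where the visual-textual alignment and fusion happen.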
👥 Authors
Minjeong Park
Korea University
Artificial Intelligence, Computer Vision
Hongbeen Park
Korea University
Jinkyu Kim
Department of Computer Science and Engineering, Korea University, Seoul 02841, Korea