CLIP4VI-ReID: Learning Modality-shared Representations via CLIP Semantic Bridge for Visible-Infrared Person Re-identification

πŸ“… 2025-11-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In visible-infrared person re-identification (VI-ReID), learning modality-invariant representations is challenging due to fundamental physical disparities between modalities. To address this, we propose a CLIP-based semantic bridging framework that leverages textual semantics as a cross-modal intermediary, establishing an alignment pathway: visible image β†’ text description β†’ infrared feature. Our method introduces a text generation module and a high-level semantic alignment mechanism to enable precise identity-relevant semantic transfer and disentangle modality-agnostic features. A shared encoder is jointly optimized to enhance infrared modality adaptation. Extensive experiments demonstrate that our approach achieves significant improvements over state-of-the-art methods on benchmark datasets including SYSU-MM01 and RegDB, with marked gains in cross-modal matching accuracy. Notably, this work represents the first systematic integration of vision-language pretrained models into heterogeneous person re-identification, introducing a novel paradigm for cross-modal alignment.

πŸ“ Abstract
This paper proposes a novel CLIP-driven modality-shared representation learning network named CLIP4VI-ReID for the VI-ReID task, which consists of Text Semantic Generation (TSG), Infrared Feature Embedding (IFE), and High-level Semantic Alignment (HSA). Specifically, considering the huge gap in physical characteristics between natural images and infrared images, the TSG is designed to generate text semantics only for visible images, thereby enabling preliminary visible-text modality alignment. Then, the IFE is proposed to rectify the feature embeddings of infrared images using the generated text semantics. This process injects id-related semantics into the shared image encoder, enhancing its adaptability to the infrared modality. Besides, with text serving as a bridge, it enables indirect visible-infrared modality alignment. Finally, the HSA is established to refine the high-level semantic alignment. This process ensures that the fine-tuned text semantics contain only id-related information, thereby achieving more accurate cross-modal alignment and enhancing the discriminability of the learned modality-shared representations. Extensive experimental results demonstrate that the proposed CLIP4VI-ReID outperforms other state-of-the-art methods on widely used VI-ReID datasets.
Problem

Research questions and friction points this paper is trying to address.

Bridging modality gap between visible and infrared images
Enhancing cross-modal alignment using text semantics
Improving discriminability of shared representations for ReID
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates text semantics for visible images
Rectifies infrared features using text semantics
Aligns high-level semantics for cross-modal matching
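The text-as-bridge idea behind these three steps can be illustrated with a minimal numpy sketch (hypothetical shapes, names, and loss form; not the authors' code): visible features are first aligned to generated text semantics, and infrared features are then pulled toward the same text anchor, so visible and infrared embeddings become indirectly aligned through the shared text space.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of two feature matrices.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def bridge_alignment_loss(vis_feat, text_feat, ir_feat, temperature=0.07):
    """Toy version of the visible -> text -> infrared alignment chain.

    vis_feat, text_feat, ir_feat: (N, D) arrays, one row per identity;
    row i of each matrix is assumed to belong to the same person.
    """
    def info_nce(x, y):
        # Contrastive (InfoNCE-style) loss: matching rows are positives.
        logits = cosine_sim(x, y) / temperature
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))

    # Stage 1 (TSG-like): align visible features with generated text semantics.
    loss_vis_text = info_nce(vis_feat, text_feat)
    # Stage 2 (IFE-like): rectify infrared features toward the same text anchor.
    loss_ir_text = info_nce(ir_feat, text_feat)
    return loss_vis_text + loss_ir_text

# Synthetic check: features near a shared text anchor yield a small loss.
rng = np.random.default_rng(0)
t = rng.normal(size=(4, 8))                # text semantics, one row per identity
v = t + 0.1 * rng.normal(size=(4, 8))      # visible features near the text anchor
r = t + 0.1 * rng.normal(size=(4, 8))      # infrared features near the text anchor
loss = bridge_alignment_loss(v, t, r)
```

Because both image modalities are optimized against the same text targets, lowering the two losses simultaneously shrinks the visible-infrared gap without ever comparing the two image modalities directly.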
πŸ”Ž Similar Papers
No similar papers found.
Xiaomei Yang
Shandong Key Laboratory of Ubiquitous Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China

Xizhan Gao
Shandong Key Laboratory of Ubiquitous Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China

Sijie Niu
University of Jinan
Medical Image Computing, Pattern Recognition

Fa Zhu
Nanjing Forestry University
Pattern Recognition, Machine Learning

Guang Feng
University of Jinan
Deep Learning, Referring Image Segmentation, Saliency Detection

Xiaofeng Qu
Shandong Key Laboratory of Ubiquitous Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China

David Camacho
Universidad Politécnica de Madrid
Machine Learning, Social Network Analysis, Evolutionary Computation, Disinformation