AI Summary
In visible-infrared person re-identification (VI-ReID), learning modality-invariant representations is challenging due to fundamental physical disparities between modalities. To address this, we propose a CLIP-based semantic bridging framework that leverages textual semantics as a cross-modal intermediary, establishing an alignment pathway: visible image → text description → infrared feature. Our method introduces a text generation module and a high-level semantic alignment mechanism to enable precise identity-relevant semantic transfer and disentangle modality-agnostic features. A shared encoder is jointly optimized to enhance infrared modality adaptation. Extensive experiments demonstrate that our approach achieves significant improvements over state-of-the-art methods on benchmark datasets including SYSU-MM01 and RegDB, with marked gains in cross-modal matching accuracy. Notably, this work represents the first systematic integration of vision-language pretrained models into heterogeneous person re-identification, introducing a novel paradigm for cross-modal alignment.
Abstract
This paper proposes a novel CLIP-driven modality-shared representation learning network named CLIP4VI-ReID for the VI-ReID task, which consists of Text Semantic Generation (TSG), Infrared Feature Embedding (IFE), and High-level Semantic Alignment (HSA). Specifically, considering the huge gap in physical characteristics between natural images and infrared images, the TSG is designed to generate text semantics only for visible images, thereby enabling preliminary visible-text modality alignment. Then, the IFE is proposed to rectify the feature embeddings of infrared images using the generated text semantics. This process injects id-related semantics into the shared image encoder, enhancing its adaptability to the infrared modality. Moreover, with text serving as a bridge, it enables indirect visible-infrared modality alignment. Finally, the HSA is established to refine the high-level semantic alignment. This process ensures that the fine-tuned text semantics contain only id-related information, thereby achieving more accurate cross-modal alignment and enhancing the discriminability of the learned modality-shared representations. Extensive experimental results demonstrate that the proposed CLIP4VI-ReID outperforms other state-of-the-art methods on widely used VI-ReID datasets.
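The core idea of using text as a bridge can be illustrated with a minimal sketch: visible features and infrared features are each pulled toward the same text embedding, so the two image modalities are aligned only indirectly through the shared textual semantics. The function below is an illustrative toy (cosine-distance losses over L2-normalized features), not the paper's actual objective; the function and variable names are assumptions for illustration.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize feature vectors to unit length (as CLIP does before matching)."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def bridge_alignment_loss(vis_feat, txt_feat, ir_feat):
    """Toy text-as-bridge alignment loss.

    Pulls visible features toward the text embedding and infrared features
    toward the same text embedding; visible and infrared are thus aligned
    indirectly, with text as the intermediary.
    """
    vis, txt, ir = map(l2_normalize, (vis_feat, txt_feat, ir_feat))
    vis_to_text = 1.0 - np.sum(vis * txt, axis=-1)  # cosine distance in [0, 2]
    ir_to_text = 1.0 - np.sum(ir * txt, axis=-1)
    return float(np.mean(vis_to_text + ir_to_text))
```

In a full implementation this would be one term among several (identity classification, contrastive image-text losses, etc.), but it captures why improving infrared-to-text alignment also tightens visible-infrared alignment.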