CPCL: Cross-Modal Prototypical Contrastive Learning for Weakly Supervised Text-based Person Re-Identification

📅 2024-01-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weakly supervised text-based person re-identification (TPRe-ID) aims to achieve cross-modal image–text matching without identity labels, yet suffers from large intra-class variation and deep semantic gaps across modalities. This paper introduces CLIP into weakly supervised TPRe-ID for the first time, establishing a unified latent embedding space. It proposes a Prototypical Multi-modal Memory (PMM) module to model prototype-level associations between image–text pairs, together with a Hybrid Cross-modal Matching (HCM) module for many-to-many cross-modal matching and an Outlier Pseudo Label Mining (OPLM) module for robust clustering and pseudo-label refinement. The approach moves beyond instance-level learning and significantly improves cross-modal alignment. Extensive experiments demonstrate state-of-the-art performance: Rank@1 improvements of 11.58%, 8.77%, and 5.25% on CUHK-PEDES, ICFG-PEDES, and RSTPReid, respectively.

📝 Abstract
Weakly supervised text-based person re-identification (TPRe-ID) seeks to retrieve images of a target person using textual descriptions without relying on identity annotations, which makes it both more challenging and more practical. The primary challenge lies in intra-class differences, encompassing intra-modal feature variations and cross-modal semantic gaps. Prior works have focused on instance-level samples and ignored the prototypical features of each person, which are intrinsic and invariant. Toward this, we propose a Cross-Modal Prototypical Contrastive Learning (CPCL) method. CPCL introduces the CLIP model to weakly supervised TPRe-ID for the first time, mapping visual and textual instances into a shared latent space. Subsequently, the proposed Prototypical Multi-modal Memory (PMM) module captures associations between heterogeneous modalities of image-text pairs belonging to the same person through the Hybrid Cross-modal Matching (HCM) module in a many-to-many mapping fashion. Moreover, the Outlier Pseudo Label Mining (OPLM) module further distinguishes valuable outlier samples in each modality, enabling more reliable clusters by mining implicit relationships between image-text pairs. Experimental results demonstrate that CPCL attains state-of-the-art performance on all three public datasets, with significant improvements of 11.58%, 8.77% and 5.25% in Rank@1 accuracy on the CUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively. The code is available at https://github.com/codeGallery24/CPCL.
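The core idea of prototypical contrastive learning, contrasting each instance against per-identity cluster prototypes rather than against other instances, can be sketched as follows. This is a minimal NumPy illustration under assumed shapes, pseudo-labels, and temperature; it is not the paper's implementation, and the toy features stand in for CLIP image/text embeddings:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # project features onto the unit hypersphere, as CLIP-style encoders do
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def prototypical_contrastive_loss(features, pseudo_labels, prototypes, temperature=0.07):
    """InfoNCE-style loss pulling each instance toward the prototype of its
    pseudo-labeled cluster and pushing it away from all other prototypes.
    Shapes: features (N, D), prototypes (K, D), pseudo_labels (N,) in [0, K)."""
    features = l2_normalize(features)
    prototypes = l2_normalize(prototypes)
    logits = features @ prototypes.T / temperature        # (N, K) similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(features)), pseudo_labels].mean()

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 8))       # 3 pseudo-identity prototypes
labels = np.array([0, 1, 2, 0])
# instances drawn near their assigned prototypes
feats = prototypes[labels] + 0.05 * rng.normal(size=(4, 8))
loss = prototypical_contrastive_loss(feats, labels, prototypes)
```

When the pseudo-labels agree with the cluster structure, this loss is small; shuffling the labels increases it, which is what drives the embedding space toward per-identity prototypes.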
Problem

Research questions and friction points this paper is trying to address.

Address intra-class differences in text-based person retrieval
Leverage prototypical features for cross-modal matching
Enhance clustering by mining outlier pseudo labels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses CLIP model for shared latent space mapping
Introduces Prototypical Multi-modal Memory module
Develops Outlier Pseudo Label Mining module
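A prototype memory of the kind PMM describes is commonly maintained with a momentum update over clustered features. The sketch below is a generic version of that idea; the update rule, re-normalization, and rate m=0.9 are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def momentum_update(prototypes, features, pseudo_labels, m=0.9):
    """Blend each stored prototype toward the mean of the features newly
    assigned to its cluster, then re-normalize to unit length.
    The momentum rule and m=0.9 are illustrative assumptions."""
    prototypes = prototypes.copy()
    for k in np.unique(pseudo_labels):
        batch_mean = features[pseudo_labels == k].mean(axis=0)
        prototypes[k] = m * prototypes[k] + (1 - m) * batch_mean
        prototypes[k] /= np.linalg.norm(prototypes[k])
    return prototypes

rng = np.random.default_rng(1)
protos = rng.normal(size=(3, 8))
protos /= np.linalg.norm(protos, axis=1, keepdims=True)
feats = rng.normal(size=(5, 8))
updated = momentum_update(protos, feats, np.array([0, 1, 2, 0, 1]))
```

A high momentum keeps prototypes stable across noisy pseudo-label reassignments, which matters when outlier mining (OPLM) reshuffles cluster membership between epochs.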
👥 Authors
Yanwei Zheng (Shandong University)
Xinpeng Zhao (Shandong University)
Chuanlin Lan (City University of Hong Kong)
Xiaowei Zhang (Qingdao University)
Bowen Huang (Electrical Engineer, Optimization and Control, PNNL)
Jibin Yang (Shandong University)
Dongxiao Yu (Professor of Computer Science, Shandong University)