Colors See Colors Ignore: Clothes Changing ReID with Color Disentanglement

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the appearance discrepancy caused by garment variation in clothes-changing person re-identification (CC-ReID). The authors propose a lightweight, annotation-free disentanglement framework that requires no auxiliary models or labels. The key idea is to use foreground and background color statistics as unsupervised proxy signals: through color-space mapping and feature disentanglement learning, color-aware features are explicitly separated from identity-specific representations. An S2A self-attention mechanism further suppresses information leakage between the color and identity features. The end-to-end, RGB-only network improves the baseline across four CC-ReID benchmarks: Top-1 gains of 2.9% on LTCC and 5.0% on PRCC for image-based ReID, and 1.0% on CCVID and 2.5% on MeVID for video-based ReID.
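The proxy-signal idea, deriving free color labels from foreground and background statistics rather than clothing annotations, can be sketched as below. This is an illustrative stand-in, not the paper's method: `color_proxy` is a hypothetical helper, it assumes a binary person mask is available, and the paper's actual color-space mapping is not specified here.

```python
import numpy as np

def color_proxy(image, mask, bins=4):
    """Quantize RGB into bins**3 cells and return normalized color
    histograms for the foreground (person) and background regions.
    Such histograms could serve as annotation-free proxy labels for
    clothing/scene color (sketch only; hypothetical helper)."""
    # Map each pixel's (R, G, B) to a single bin index in [0, bins**3)
    q = (image.astype(np.int64) * bins) // 256          # per-channel bin
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    # Histogram the foreground and background pixels separately
    fg = np.bincount(idx[mask].ravel(), minlength=bins**3).astype(float)
    bg = np.bincount(idx[~mask].ravel(), minlength=bins**3).astype(float)
    # Normalize to probability distributions (guard against empty regions)
    return fg / max(fg.sum(), 1.0), bg / max(bg.sum(), 1.0)
```

The resulting histogram pair is cheap to compute per frame, which is consistent with the paper's emphasis on avoiding heavy auxiliary models.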

📝 Abstract
Clothes-Changing Re-Identification (CC-ReID) aims to recognize individuals across different locations and times, irrespective of clothing. Existing methods often rely on additional models or annotations to learn robust, clothing-invariant features, making them resource-intensive. In contrast, we explore the use of color - specifically foreground and background colors - as a lightweight, annotation-free proxy for mitigating appearance bias in ReID models. We propose Colors See, Colors Ignore (CSCI), an RGB-only method that leverages color information directly from raw images or video frames. CSCI efficiently captures color-related appearance bias ('Color See') while disentangling it from identity-relevant ReID features ('Color Ignore'). To achieve this, we introduce S2A self-attention, a novel self-attention to prevent information leak between color and identity cues within the feature space. Our analysis shows a strong correspondence between learned color embeddings and clothing attributes, validating color as an effective proxy when explicit clothing labels are unavailable. We demonstrate the effectiveness of CSCI on both image and video ReID with extensive experiments on four CC-ReID datasets. We improve the baseline by Top-1 2.9% on LTCC and 5.0% on PRCC for image-based ReID, and 1.0% on CCVID and 2.5% on MeVID for video-based ReID without relying on additional supervision. Our results highlight the potential of color as a cost-effective solution for addressing appearance bias in CC-ReID. Github: https://github.com/ppriyank/ICCV-CSCI-Person-ReID.
Problem

Research questions and friction points this paper is trying to address.

Mitigate appearance bias in ReID models using color
Develop lightweight method without extra annotations
Disentangle color-related bias from identity features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Operates on RGB images or video frames only, with no auxiliary models or annotations
Introduces S2A self-attention to prevent information leak between color and identity features
Uses color as an effective proxy when explicit clothing labels are unavailable
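The "prevent information leak" goal can be illustrated with a simple decorrelation penalty between the two feature branches. Note this is a simplified stand-in, not the paper's S2A self-attention, whose internals are not described here; `leakage_penalty` is a hypothetical name.

```python
import numpy as np

def leakage_penalty(id_feats, color_feats, eps=1e-8):
    """Mean squared cosine similarity between paired identity and color
    embeddings (rows). Driving this toward zero encourages the two
    branches to carry disjoint information -- a toy analogue of the
    leakage suppression CSCI attributes to S2A self-attention."""
    a = id_feats / (np.linalg.norm(id_feats, axis=1, keepdims=True) + eps)
    b = color_feats / (np.linalg.norm(color_feats, axis=1, keepdims=True) + eps)
    cos = np.sum(a * b, axis=1)          # per-pair cosine similarity
    return float(np.mean(cos ** 2))
```

A penalty of 0 means the paired embeddings are orthogonal (no shared direction); a value near 1 means the color branch duplicates the identity branch.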
Priyank Pathak
PhD UCF, Past: Stony Brook, NYU, Amobee, Clarifai, IIT Kanpur
Computer Vision
Yogesh S. Rawat
Center for Research in Computer Vision, University of Central Florida