🤖 AI Summary
This work addresses feature ambiguity and matching failure in clothes-changing person re-identification (CC-ReID) caused by low-quality images (e.g., pixelation, out-of-focus blur, motion blur). To this end, the authors propose RLQ, a framework for Robustness against Low-Quality images built on a novel alternating training mechanism. RLQ integrates Coarse Attributes Prediction (CAP), which enriches the model with external fine-grained attributes via coarse predictions, and Task Agnostic Distillation (TAD), which aligns the model's internal feature representations of high- and low-quality images by distilling from an external dataset with task-agnostic self-supervision. Evaluated on real-world CC-ReID benchmarks, RLQ improves Top-1 accuracy by 1.6%-2.9% on LaST and DeepChange and by 5.3%-6.0% on PRCC, while remaining competitive on LTCC. Overall, RLQ improves robustness to real-world low-quality imagery while preserving discriminative capability under clothing changes.
📝 Abstract
This work focuses on Clothes Changing Re-IDentification (CC-ReID) in the real world. Existing methods perform well on high-quality (HQ) images but struggle with low-quality (LQ) images, which contain artifacts such as pixelation, out-of-focus blur, and motion blur. These artifacts not only introduce noise into external biometric attributes (e.g., pose and body shape) but also corrupt the model's internal feature representation. Models tend to cluster LQ image features together, making them difficult to distinguish and leading to incorrect matches. We propose a novel framework, Robustness against Low-Quality (RLQ), to improve CC-ReID models on real-world data. RLQ relies on Coarse Attributes Prediction (CAP) and Task Agnostic Distillation (TAD) operating in alternate steps within a novel training mechanism. CAP enriches the model with external fine-grained attributes via coarse predictions, thereby reducing the effect of noisy inputs. TAD, in turn, enhances the model's internal feature representation by bridging the gap between HQ and LQ features, using an external dataset through task-agnostic self-supervision and distillation. RLQ outperforms existing approaches by 1.6%-2.9% Top-1 on real-world datasets such as LaST and DeepChange, while showing a consistent improvement of 5.3%-6.0% Top-1 on PRCC and competitive performance on LTCC. *The code will be made public soon.*
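The alternating CAP/TAD mechanism described above could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the strict 1:1 alternation, the function names (`training_schedule`, `tad_alignment_loss`), and the plain MSE stand-in for the distillation objective are all assumptions made for clarity.

```python
def training_schedule(num_iters):
    """Decide which objective runs at each iteration.

    CAP (coarse attribute prediction on ReID data) and TAD
    (HQ->LQ feature distillation on an external dataset) alternate;
    a strict 1:1 alternation is assumed here for illustration.
    """
    return ["CAP" if i % 2 == 0 else "TAD" for i in range(num_iters)]


def tad_alignment_loss(student_lq_feat, teacher_hq_feat):
    """Toy stand-in for the TAD objective: pull the student's feature
    for a low-quality image toward the teacher's feature for the
    corresponding high-quality image (mean squared error here)."""
    assert len(student_lq_feat) == len(teacher_hq_feat)
    diffs = (
        (s - t) ** 2 for s, t in zip(student_lq_feat, teacher_hq_feat)
    )
    return sum(diffs) / len(student_lq_feat)
```

When the LQ feature matches the HQ feature exactly, the alignment loss is zero; any gap between the two representations produces a positive loss that the TAD step would minimize, which is the "bridging" behavior the abstract describes.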