🤖 AI Summary
To address the saturation of training data with low-quality synthetic samples and the scarcity of high-value instances in text-driven person retrieval, this paper proposes the Filtering-WoRA collaborative paradigm. First, high-quality image-text pairs are selected via cross-modal correlation modeling; subsequently, parameter-efficient fine-tuning is performed with Weighted Low-Rank Adaptation (WoRA). Key contributions include: (i) a correlation-driven data filtering mechanism for text-image alignment; (ii) a low-rank adaptation module with learnable weights (WoRA) enabling adaptive rank utilization; and (iii) an end-to-end framework for joint text-image embedding optimization. Evaluated on CUHK-PEDES, the method achieves a state-of-the-art 67.02% mAP while reducing training time by 19.82%. Notably, it remains both accurate and efficient while training on substantially fewer data instances.
📝 Abstract
In text-based person search, data generation has emerged as a prevailing practice, addressing concerns over privacy preservation and the arduous task of manual annotation. Although the amount of synthesized data can, in theory, be unlimited, the question remains of how much generated data optimally fuels subsequent model training. We observe that only a subset of the data in these constructed datasets plays a decisive role. Therefore, we introduce a new Filtering-WoRA paradigm, which comprises a filtering algorithm to identify this crucial data subset and a WoRA (Weighted Low-Rank Adaptation) learning strategy for light fine-tuning. The filtering algorithm uses cross-modality relevance to remove the many coarsely matched synthetic pairs. As the amount of data decreases, fine-tuning the entire model is no longer necessary; we therefore propose the WoRA learning strategy to efficiently update a minimal portion of the model parameters. WoRA streamlines the learning process, enabling heightened efficiency in extracting knowledge from fewer, yet potent, data instances. Extensive experimentation validates the efficacy of pretraining, where our model achieves advanced and efficient retrieval performance on challenging real-world benchmarks. Notably, on the CUHK-PEDES dataset, we achieve a competitive mAP of 67.02% while reducing model training time by 19.82%.
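The abstract does not give implementation details, but the two components can be sketched in PyTorch. The filter below keeps the top-scoring image-text pairs by cosine similarity, and the `WoRALinear` layer adds a low-rank update with a learnable scalar weight to a frozen linear layer. The similarity criterion, the learnable scalar `w`, and all hyperparameters here are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def filter_pairs(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                 keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the most relevant image-text pairs by cosine similarity.

    Hypothetical criterion: the paper filters on cross-modality
    relevance, but its exact scoring rule is not given in the abstract.
    """
    sim = F.cosine_similarity(img_emb, txt_emb, dim=-1)  # one score per pair
    k = int(sim.numel() * keep_ratio)
    return sim.topk(k).indices  # indices of the retained pairs


class WoRALinear(nn.Module):
    """Sketch of a weighted low-rank adapter on a frozen linear layer.

    Output: base(x) + w * (x @ A @ B), where A and B form a rank-r
    update and w is a learnable scalar weight (assumption; standard
    LoRA uses a fixed alpha/r scale instead).
    """

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapter is trained
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))  # zero init
        self.w = nn.Parameter(torch.tensor(1.0))  # learnable update weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.w * (x @ self.A @ self.B)
```

Because `B` is zero-initialized, the adapted layer reproduces the frozen base model exactly at the start of fine-tuning, and only `A`, `B`, and `w` receive gradients, which is what makes the update parameter-efficient.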