🤖 AI Summary
This paper addresses unsupervised cross-domain image retrieval (UCIR), where category-level correspondences must be discovered without label supervision. The authors propose ProtoOT, a paradigm that unifies intra-domain representation learning and cross-domain alignment within a single optimal transport (OT) framework, enabling prototype-driven end-to-end co-optimization. Methodologically, ProtoOT integrates: (i) semantic prototypes initialized via K-means clustering; (ii) cross-domain alignment through OT with modified marginal constraints; (iii) approximation of class-level marginal distributions from cluster statistics, which accounts for the distribution imbalance inherent in UCIR; and (iv) contrastive learning to enhance prototype discriminability and cross-domain semantic consistency. On benchmark datasets, ProtoOT surpasses state-of-the-art methods by a significant margin: an average P@200 gain of 24.44% on DomainNet and a P@15 gain of 12.12% on Office-Home.
📝 Abstract
Unsupervised cross-domain image retrieval (UCIR) aims to retrieve images sharing the same category across diverse domains without relying on labeled data. Prior approaches have typically decomposed the UCIR problem into two distinct tasks: intra-domain representation learning and cross-domain feature alignment. However, these segregated strategies overlook the potential synergies between these tasks. This paper introduces ProtoOT, a novel Optimal Transport formulation explicitly tailored for UCIR, which integrates intra-domain feature representation learning and cross-domain alignment into a unified framework. ProtoOT leverages the strengths of the K-means clustering method to effectively manage distribution imbalances inherent in UCIR. By utilizing K-means for generating initial prototypes and approximating class marginal distributions, we modify the constraints in Optimal Transport accordingly, significantly enhancing its performance in UCIR scenarios. Furthermore, we incorporate contrastive learning into the ProtoOT framework to further improve representation learning. This encourages local semantic consistency among features with similar semantics, while also explicitly enforcing separation between features and unmatched prototypes, thereby enhancing global discriminativeness. ProtoOT surpasses existing state-of-the-art methods by a notable margin across benchmark datasets. Notably, on DomainNet, ProtoOT achieves an average P@200 enhancement of 24.44%, and on Office-Home, it demonstrates a P@15 improvement of 12.12%. Code is available at https://github.com/HCVLAB/ProtoOT.
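To make the core idea concrete, here is a minimal NumPy sketch (not the authors' code) of prototype assignment via entropic OT, where the target marginal over prototypes is estimated from K-means cluster sizes instead of being fixed to uniform. The helper names (`kmeans_marginal`, `sinkhorn`) and all hyperparameters are illustrative assumptions; ProtoOT's actual constraint modification and training loop differ in detail.

```python
import numpy as np

def kmeans_marginal(labels, k):
    """Approximate the class marginal from cluster sizes (hypothetical helper)."""
    counts = np.bincount(labels, minlength=k).astype(float)
    return counts / counts.sum()

def sinkhorn(cost, r, c, eps=0.05, n_iter=500):
    """Entropic OT between n features (rows) and k prototypes (cols)."""
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(r)
    for _ in range(n_iter):          # alternate marginal scalings
        v = c / (K.T @ u)
        u = r / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan, shape (n, k)

# Toy usage: nearest-prototype pseudo-labels stand in for K-means output.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
protos = rng.normal(size=(5, 16))
protos /= np.linalg.norm(protos, axis=1, keepdims=True)
labels = np.argmax(feats @ protos.T, axis=1)   # pseudo cluster assignments
c = kmeans_marginal(labels, 5)                 # imbalanced prototype marginal
r = np.full(100, 1.0 / 100)                    # uniform marginal over samples
plan = sinkhorn(1.0 - feats @ protos.T, r, c)  # cost = 1 - cosine similarity
```

The resulting `plan` softly assigns each feature to prototypes while matching the estimated, possibly imbalanced class marginal, which is the role the paper attributes to its K-means-informed OT constraints.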