🤖 AI Summary
To address the cross-domain generalization challenge in unsupervised multi-source domain adaptation (MSDA) for person re-identification, this paper proposes the Source-free Adaptive Gated Experts (SAGE-reID) framework. SAGE-reID trains a lightweight LoRA adapter per source domain via source-free UDA and employs a learnable gating network to dynamically weight and fuse the low-rank experts, thereby avoiding redundant backbone replication. Its core innovation lies in decoupling domain-specific knowledge from shared representations, enabling efficient cross-domain knowledge transfer with minimal parameter overhead (each adapter is at most 2% of the backbone). Extensive experiments on Market-1501, DukeMTMC-reID, and MSMT17 demonstrate significant improvements over state-of-the-art methods, achieving both superior cross-domain generalization and computational efficiency.
📝 Abstract
Adapting person re-identification (reID) models to new target environments remains a challenging problem that is typically addressed using unsupervised domain adaptation (UDA) methods. Recent works show that when labeled data originates from several distinct sources (e.g., datasets and cameras), considering each source separately and applying multi-source domain adaptation (MSDA) typically yields higher accuracy and robustness than blending the sources and performing conventional UDA. However, state-of-the-art MSDA methods learn domain-specific backbone models or require access to source domain data during adaptation, resulting in significant growth in training parameters and computational cost. In this paper, a Source-free Adaptive Gated Experts (SAGE-reID) method is introduced for person reID. Our SAGE-reID is a cost-effective, source-free MSDA method that first trains individual source-specific low-rank adapters (LoRA) through source-free UDA. Next, a lightweight gating network is introduced and trained to dynamically assign optimal merging weights for the fusion of LoRA experts, enabling effective cross-domain knowledge transfer. While the number of backbone parameters remains constant across source domains, LoRA experts scale linearly but remain negligible in size (≤ 2% of the backbone), reducing both memory consumption and the risk of overfitting. Extensive experiments conducted on three challenging benchmarks (Market-1501, DukeMTMC-reID, and MSMT17) indicate that SAGE-reID outperforms state-of-the-art methods while being computationally efficient.
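The gated fusion of LoRA experts described in the abstract can be sketched numerically. Everything below (the shapes, the single-linear-layer gate, the weighted-sum fusion rule) is an illustrative assumption, not the paper's actual implementation: each source domain contributes a low-rank update B @ A to a frozen backbone weight, and a gating network maps the input feature to merging weights over the experts.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, num_experts = 768, 4, 3        # feature dim, LoRA rank, source domains (illustrative)

W = rng.standard_normal((d, d))      # frozen shared backbone weight

# One low-rank adapter (B @ A, rank r) per source domain.
experts = [(rng.standard_normal((d, r)), rng.standard_normal((r, d)))
           for _ in range(num_experts)]

# Gating network: a single linear layer followed by softmax (a simplification).
G = rng.standard_normal((num_experts, d))

def gate(x):
    """Map a feature vector to merging weights over the LoRA experts."""
    logits = G @ x
    e = np.exp(logits - logits.max())  # stable softmax
    return e / e.sum()

x = rng.standard_normal(d)
g = gate(x)                          # dynamic per-input weights, sums to 1

# Fuse: effective weight = frozen backbone + gated sum of low-rank deltas.
delta = sum(w * (B @ A) for w, (B, A) in zip(g, experts))
y = (W + delta) @ x

# Each expert adds 2*d*r parameters vs d*d for the backbone layer,
# consistent with the "<= 2% of the backbone" figure in the abstract.
per_expert = 2 * d * r / (d * d)     # about 1% here
```

With these toy dimensions, the per-expert parameter overhead is roughly 1% of the backbone layer, so scaling linearly in the number of source domains stays cheap while the backbone itself is never replicated.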