🤖 AI Summary
Cross-domain recommendation (CDR) faces two key challenges: privacy leakage from sharing user interaction data across domains, and ineffective knowledge transfer in sparse scenarios due to heavy reliance on overlapping users. To address these, we propose FedPCL-CDR, a federated prototype-based contrastive learning framework that enables privacy-preserving cross-domain knowledge transfer without relying on a large number of overlapping users. FedPCL-CDR decouples representation learning from privacy protection via localized prototype clustering and global prototype alignment, and introduces a differential prototype mechanism that performs federated contrastive learning under local differential privacy (LDP) guarantees. Extensive experiments on four real-world cross-domain tasks from the Amazon and Douban datasets demonstrate that FedPCL-CDR significantly outperforms state-of-the-art methods. The implementation is publicly available.
📝 Abstract
Cross-domain recommendation (CDR) aims to improve recommendation accuracy in sparse domains by transferring knowledge from data-rich domains. However, existing CDR approaches often assume that user-item interaction data across domains is publicly available, neglecting user privacy concerns. Additionally, they suffer performance degradation when overlapping users are sparse, because they rely on a large number of fully shared users for knowledge transfer. To address these challenges, we propose a Federated Prototype-based Contrastive Learning (CL) framework for Privacy-Preserving CDR, called FedPCL-CDR. This approach exploits non-overlapping user information and differential prototypes to improve model performance within a federated learning framework. FedPCL-CDR comprises two key modules: local domain (client) learning and global server aggregation. In each local domain, FedPCL-CDR first clusters all user data and applies local differential privacy (LDP) to learn differential prototypes, thereby exploiting non-overlapping user information while protecting user privacy. It then transfers knowledge by employing both local prototypes and the global prototypes returned from the server in a CL manner. Meanwhile, the global server aggregates the differential prototypes sent from the local domains to learn both local and global prototypes. Extensive experiments on four CDR tasks across the Amazon and Douban datasets demonstrate that FedPCL-CDR surpasses state-of-the-art baselines. We release our code at https://github.com/Lili1013/FedPCL_CDR
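The client-side differential prototype step (cluster local user data, then perturb the prototypes before they leave the device) can be sketched as follows. This is a minimal illustration: the plain k-means clustering and the Laplace noise scale are assumptions for exposition, not the paper's exact design choices.

```python
import numpy as np

def differential_prototypes(user_emb, k=4, epsilon=1.0, sensitivity=1.0, seed=0):
    """Cluster local user embeddings into k prototypes with a simple
    k-means loop, then add Laplace noise to each centroid so that only
    perturbed prototypes are uploaded to the server (an LDP-style
    mechanism; illustrative, not FedPCL-CDR's exact procedure)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from random data points, then iterate
    # assignment / mean-update steps.
    centroids = user_emb[rng.choice(len(user_emb), k, replace=False)]
    for _ in range(20):
        dists = np.linalg.norm(user_emb[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = user_emb[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    noise = rng.laplace(0.0, sensitivity / epsilon, centroids.shape)
    return centroids + noise
```

A smaller `epsilon` (stronger privacy) injects larger noise into the uploaded prototypes, trading accuracy for privacy.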
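The prototype-based contrastive transfer can be illustrated with a generic InfoNCE-style objective in which a user's assigned prototype (local or global) serves as the positive and the remaining prototypes as negatives. The cosine similarity and temperature here are common defaults, not necessarily the loss used in FedPCL-CDR.

```python
import numpy as np

def prototype_infonce(user_emb, prototypes, assignments, tau=0.5):
    """InfoNCE-style prototype contrastive loss: pull each user embedding
    toward its assigned prototype, push it away from the others
    (a generic sketch of prototype-level CL)."""
    # Cosine similarity between users and prototypes.
    u = user_emb / np.linalg.norm(user_emb, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = (u @ p.T) / tau                      # (n_users, n_protos)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(u)), assignments].mean()
```

In the framework described above, the same form would be applied twice per client: once against the client's local prototypes and once against the global prototypes returned by the server.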
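The server-side step can be sketched as pooling the differential prototypes uploaded by all clients and clustering them into a shared global prototype set. The pooling-then-clustering rule below is an assumption for illustration; the paper's exact aggregation may differ.

```python
import numpy as np

def aggregate_global_prototypes(client_protos, k_global=4, iters=20, seed=0):
    """Server-side aggregation sketch: stack the (already privatized)
    prototypes from every client and cluster the pooled set into
    k_global global prototypes with a simple k-means loop."""
    pooled = np.vstack(client_protos)
    rng = np.random.default_rng(seed)
    centers = pooled[rng.choice(len(pooled), k_global, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(pooled[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k_global):
            members = pooled[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers
```

Because only noise-perturbed prototypes reach the server, the aggregation never sees raw user interaction data from any domain.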