FedPCL-CDR: A Federated Prototype-based Contrastive Learning Framework for Privacy-Preserving Cross-domain Recommendation

📅 2024-09-05
📈 Citations: 1
Influential: 0
🤖 AI Summary
Cross-domain recommendation (CDR) faces two key challenges: privacy leakage from sharing user interaction data across domains, and ineffective knowledge transfer in sparse scenarios due to heavy reliance on overlapping users. To address these, the authors propose FedPCL-CDR, a Federated Prototype-based Contrastive Learning framework that enables privacy-preserving cross-domain knowledge transfer without requiring overlapping users. FedPCL-CDR decouples representation learning from privacy protection via localized prototype clustering and global prototype alignment, and introduces a differential prototype mechanism that performs federated contrastive learning under local differential privacy (LDP) guarantees. Extensive experiments on four real-world cross-domain tasks from the Amazon and Douban datasets demonstrate that FedPCL-CDR significantly outperforms state-of-the-art methods. The implementation is publicly available.

📝 Abstract
Cross-domain recommendation (CDR) aims to improve recommendation accuracy in sparse domains by transferring knowledge from data-rich domains. However, existing CDR approaches often assume that user-item interaction data across domains is publicly available, neglecting user privacy concerns. Additionally, they experience performance degradation with sparse overlapping users due to their reliance on a large number of fully shared users for knowledge transfer. To address these challenges, we propose a Federated Prototype-based Contrastive Learning (CL) framework for Privacy-Preserving CDR, called FedPCL-CDR. This approach utilizes non-overlapping user information and differential prototypes to improve model performance within a federated learning framework. FedPCL-CDR comprises two key modules: local domain (client) learning and global server aggregation. In the local domain, FedPCL-CDR first clusters all user data and utilizes local differential privacy (LDP) to learn differential prototypes, effectively utilizing non-overlapping user information while protecting user privacy. It then conducts knowledge transfer by employing both local and global prototypes returned from the server in a CL manner. Meanwhile, the global server aggregates differential prototypes sent from local domains to learn both local and global prototypes. Extensive experiments on four CDR tasks across the Amazon and Douban datasets demonstrate that FedPCL-CDR surpasses state-of-the-art baselines. We release our code at https://github.com/Lili1013/FedPCL_CDR
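The abstract's differential-prototype step (cluster all local user data, then apply LDP before sending prototypes to the server) can be sketched as below. This is a minimal illustration, assuming k-means clustering and the Laplace mechanism as the LDP noise source; the paper's actual clustering method, noise calibration, and hyperparameters (`k`, `epsilon`, `sensitivity`) may differ.

```python
import numpy as np

def differential_prototypes(user_embs, k=4, epsilon=1.0, sensitivity=1.0, seed=0):
    """Sketch: cluster local user embeddings into k prototypes, then perturb
    them with Laplace noise (a standard LDP mechanism) before sharing with
    the federated server. Hyperparameter values here are illustrative only."""
    rng = np.random.default_rng(seed)
    # Initialize prototypes from k randomly chosen users, then run a few
    # k-means iterations over ALL local users (overlapping or not).
    centers = user_embs[rng.choice(len(user_embs), size=k, replace=False)]
    for _ in range(10):
        # Assign each user embedding to its nearest prototype.
        dists = np.linalg.norm(user_embs[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = user_embs[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    # Laplace mechanism: noise scale = sensitivity / epsilon. Only the noisy
    # prototypes leave the client, never raw user interactions.
    return centers + rng.laplace(0.0, sensitivity / epsilon, size=centers.shape)
```

The server would then aggregate the noisy prototypes from each domain to form the local and global prototypes that are sent back for contrastive learning.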
Problem

Research questions and friction points this paper is trying to address.

Recommendation accuracy degrades in sparse domains with limited interaction data
Sharing user-item interaction data across domains leaks user privacy
Reliance on large numbers of overlapping users limits knowledge transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated learning for privacy-preserving CDR
Prototype-based contrastive learning enhancement
Differential prototypes learned under local differential privacy
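The prototype-based contrastive learning listed above can be sketched as an InfoNCE-style objective: each user embedding is pulled toward its assigned (local or global) prototype and pushed away from the rest. This is an illustrative sketch; the function name, temperature `tau`, and assignment scheme are assumptions, not the paper's exact formulation.

```python
import numpy as np

def prototype_contrastive_loss(z, prototypes, pos_idx, tau=0.5):
    """Sketch of an InfoNCE-style contrastive loss: pull each user
    embedding z[i] toward its assigned prototype prototypes[pos_idx[i]]
    and push it away from the other prototypes. Names are illustrative."""
    # L2-normalize so dot products are cosine similarities.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = z @ p.T / tau                        # (n_users, n_prototypes)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # Log-softmax over prototypes; the positive is the assigned prototype.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(z)), pos_idx].mean()
```

In the full framework this loss would be applied twice per client, once against the local prototypes and once against the global prototypes returned by the server, which is how knowledge transfers without exchanging raw user data.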
Li Wang
School of Electrical and Data Engineering, University of Technology Sydney, Sydney 2000, Australia
Qiang Wu
School of Electrical and Data Engineering, University of Technology Sydney, Sydney 2000, Australia
Min Xu
School of Electrical and Data Engineering, University of Technology Sydney, Sydney 2000, Australia