🤖 AI Summary
This study investigates the tripartite trade-off among privacy, utility, and fairness in privacy-preserving recommender systems. We conduct a cross-model empirical analysis using two differential privacy (DP) mechanisms—DP stochastic gradient descent (DPSGD) and local DP (LDP)—integrated with four representative models: Neural Collaborative Filtering (NCF), Bayesian Personalized Ranking (BPR), Singular Value Decomposition (SVD), and Variational Autoencoder (VAE), evaluated on the MovieLens-1M and Yelp datasets. Key findings: (1) Tighter privacy budgets generally degrade recommendation accuracy, but the extent of degradation is model-dependent—VAE exhibits the highest sensitivity, while NCF demonstrates the greatest robustness; (2) DPSGD achieves the best privacy–utility trade-off on NCF and significantly mitigates popularity bias between head and tail items, improving group fairness; (3) We provide the first systematic evidence that DP's impact on recommendation fairness is both model-dependent and non-uniform, yielding empirical foundations and model-selection guidelines for fairness-aware privacy-preserving recommendation design.
📝 Abstract
Recommender systems (RSs) output ranked lists of items, such as movies or restaurants, that users may find interesting, based on a user's past ratings and those of other users. RSs increasingly incorporate differential privacy (DP) to protect user data, raising questions about how privacy mechanisms affect both recommendation accuracy and fairness. We conduct a comprehensive, cross-model evaluation of two DP mechanisms, differentially private stochastic gradient descent (DPSGD) and local differential privacy (LDP), applied to four recommender models (Neural Collaborative Filtering (NCF), Bayesian Personalized Ranking (BPR), Singular Value Decomposition (SVD), and Variational Autoencoder (VAE)) on the MovieLens-1M and Yelp datasets. We find that stronger privacy consistently reduces utility, but not uniformly. NCF under DPSGD shows the smallest accuracy loss (under 10 percent at ε ≈ 1), whereas SVD and BPR experience larger drops, especially for users with niche preferences. VAE is the most sensitive to privacy, with sharp declines for sparsely represented groups. The impact on bias metrics is similarly heterogeneous: DPSGD generally reduces the exposure gap between popular and less popular items, whereas LDP preserves existing patterns more closely. These results highlight that no single DP mechanism is uniformly superior; instead, each offers different trade-offs under different privacy regimes and data conditions.
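To make the two mechanisms concrete, here is a minimal sketch of the core update each one performs. This is an illustrative toy, not the paper's implementation: the clipping norm, noise multiplier, learning rate, and the binary-rating encoding for randomized response are all assumptions chosen for clarity.

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.01, clip_norm=1.0,
               noise_mult=1.0, rng=None):
    """One DPSGD update (Abadi et al. style): clip each example's
    gradient to clip_norm, average, then add Gaussian noise scaled
    by noise_mult * clip_norm before the descent step."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)

def ldp_randomized_response(bit, epsilon, rng=None):
    """LDP via randomized response on a binary interaction signal
    (e.g. rated / not rated): report truthfully with probability
    e^eps / (e^eps + 1), otherwise report the flipped bit."""
    rng = rng or np.random.default_rng(0)
    p_true = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_true else 1 - bit
```

The key contrast the abstract draws is visible here: DPSGD perturbs gradients centrally during training, while LDP perturbs each user's data before it ever leaves the device, which is why the two mechanisms distort popularity patterns so differently.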