FedAU2: Attribute Unlearning for User-Level Federated Recommender Systems with Adaptive and Robust Adversarial Training

📅 2025-11-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In federated recommendation systems, user embeddings risk leaking sensitive attributes, rendering them vulnerable to attribute inference attacks. To address this privacy threat, we propose FedAU2, a novel framework integrating adaptive adversarial training with a dual-randomized variational autoencoder (DR-VAE). Adaptive adversarial training dynamically accommodates user-level data heterogeneity to enhance robustness against inference attacks, while the DR-VAE suppresses sensitive attribute leakage at the gradient level via dual randomization of latent variables and gradient perturbation. Unlike existing approaches, FedAU2 achieves superior attribute forgetting without compromising recommendation accuracy. Extensive experiments on three real-world datasets demonstrate that FedAU2 significantly outperforms state-of-the-art baselines: it reduces attribute inference accuracy by a larger margin while maintaining higher retention rates of key recommendation metrics, namely NDCG and Recall.

๐Ÿ“ Abstract
Federated Recommender Systems (FedRecs) leverage federated learning to protect user privacy by retaining data locally. However, user embeddings in FedRecs often encode sensitive attribute information, rendering them vulnerable to attribute inference attacks. Attribute unlearning has emerged as a promising approach to mitigate this issue. In this paper, we focus on user-level FedRecs, which is a more practical yet challenging setting compared to group-level FedRecs. Adversarial training emerges as the most feasible approach within this context. We identify two key challenges in implementing adversarial training-based attribute unlearning for user-level FedRecs: i) mitigating training instability caused by user data heterogeneity (CH1), and ii) preventing attribute information leakage through gradients (CH2). To address these challenges, we propose FedAU2, an attribute unlearning method for user-level FedRecs. For CH1, we propose an adaptive adversarial training strategy, where the training dynamics are adjusted in response to local optimization behavior. For CH2, we propose a dual-stochastic variational autoencoder to perturb the adversarial model, effectively preventing gradient-based information leakage. Extensive experiments on three real-world datasets demonstrate that our proposed FedAU2 achieves superior performance in unlearning effectiveness and recommendation performance compared to existing baselines.
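The adversarial objective behind such attribute unlearning can be illustrated on a toy problem. The sketch below is not the paper's implementation: it assumes a scalar user embedding, a squared-error recommendation loss, and a fixed logistic attacker, and trains the embedding on the gradient-reversal-style objective L_rec − λ·L_attr (descend on recommendation loss, ascend on the attacker's loss).

```python
import math

def rec_loss(e, target):
    # recommendation loss: squared error between embedding and a preference target
    return (e - target) ** 2

def attr_loss(e, w, label):
    # attacker: logistic regression predicting a binary attribute from the embedding
    p = 1.0 / (1.0 + math.exp(-w * e))
    return -(label * math.log(p + 1e-12) + (1 - label) * math.log(1 - p + 1e-12))

def unlearn_step(e, w, target, label, lam=0.5, lr=0.1):
    # central-difference gradients of the combined objective L_rec - lam * L_attr
    h = 1e-6
    g_rec = (rec_loss(e + h, target) - rec_loss(e - h, target)) / (2 * h)
    g_att = (attr_loss(e + h, w, label) - attr_loss(e - h, w, label)) / (2 * h)
    # the embedding descends on rec loss while ASCENDING on attacker loss
    return e - lr * (g_rec - lam * g_att)

e, w = 0.0, 2.0  # initial embedding, fixed attacker weight (illustrative values)
for _ in range(200):
    e = unlearn_step(e, w, target=1.0, label=1)
print(round(e, 3))  # settles just below the rec-only optimum of 1.0
```

The adversarial term alone would push the embedding to make the attribute unpredictable; the recommendation loss anchors it, so training settles at a compromise near, but not at, the recommendation-optimal point.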
Problem

Research questions and friction points this paper is trying to address.

Mitigates training instability from user data heterogeneity
Prevents attribute information leakage via gradients
Enhances unlearning effectiveness and recommendation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive adversarial training for user heterogeneity
Dual-stochastic VAE to prevent gradient leakage
User-level federated attribute unlearning method
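The "dual" randomization idea can be sketched as follows; names and shapes here are illustrative assumptions, not the paper's DR-VAE. Randomness enters twice: once through the VAE's reparameterized latent sample, and once as Gaussian noise added to the adversary's gradient before it leaves the client.

```python
import math
import random

random.seed(0)

def reparameterize(mu, log_var):
    # first source of randomness: z = mu + sigma * eps (standard VAE reparameterization)
    return [m + math.exp(0.5 * lv) * random.gauss(0, 1) for m, lv in zip(mu, log_var)]

def perturb_gradient(grad, sigma=0.1):
    # second source of randomness: Gaussian noise on the shared gradient,
    # masking attribute information that raw gradients might otherwise leak
    return [g + random.gauss(0, sigma) for g in grad]

mu, log_var = [0.2, -0.1], [-1.0, -2.0]   # toy latent parameters
z = reparameterize(mu, log_var)            # stochastic latent code
grad = [0.5, -0.3]                         # toy adversarial-model gradient
noisy = perturb_gradient(grad)             # perturbed gradient to transmit
```

Because both the latent code and the transmitted gradient are randomized, a server observing gradients sees a noisy view from which the sensitive attribute is harder to infer.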
Authors
Yuyuan Li (Hangzhou Dianzi University)
Junjie Fang (Hangzhou Dianzi University)
Fengyuan Yu (Zhejiang University)
Xichun Sheng (Macao Polytechnic University)
Tianyu Du (Zhejiang University) · AI Security, Adversarial Machine Learning
Xuyang Teng (Hangzhou Dianzi University)
Shaowei Jiang (Hangzhou Dianzi University)
Linbo Jiang (Ant Group)
Jianan Lin (Ant Group)
Chaochao Chen (Zhejiang University) · Trustworthy AI, Privacy-Preserving ML, Federated Learning, Recommender Systems