Multi-Hop Privacy Propagation for Differentially Private Federated Learning in Social Networks

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning over social networks, multi-hop social connections induce indirect privacy leakage—clients’ privacy loss is affected by others’ privacy decisions. Method: We propose a socially aware differential privacy mechanism. We model indirect privacy propagation via a multi-hop privacy diffusion framework and design a Stackelberg game with a mean-field estimator, enabling the server (leader) to incentivize clients (followers) and dynamically allocate privacy budgets. We theoretically establish the convergence of the mean-field estimator and derive closed-form expressions for the equilibrium solution. Contribution/Results: This work is the first to integrate multi-hop privacy externalities with mean-field game theory under information asymmetry, achieving near-optimal social welfare. Experiments demonstrate that our approach significantly improves client utility and reduces server cost compared to baselines without social awareness or with only single-hop externality, while preserving model accuracy.

📝 Abstract
Federated learning (FL) enables collaborative model training across decentralized clients without sharing local data, thereby enhancing privacy and facilitating collaboration among clients connected via social networks. However, these social connections introduce privacy externalities: a client's privacy loss depends not only on its own privacy protection strategy but also on the privacy decisions of others, propagated through the network via multi-hop interactions. In this work, we propose a socially-aware privacy-preserving FL mechanism that systematically quantifies indirect privacy leakage through a multi-hop propagation model. We formulate the server-client interaction as a two-stage Stackelberg game, where the server, as the leader, optimizes incentive policies, and clients, as followers, strategically select their privacy budgets, which determine their privacy-preserving levels by controlling the magnitude of added noise. To mitigate information asymmetry in networked privacy estimation, we introduce a mean-field estimator to approximate the average external privacy risk. We theoretically prove the existence and convergence of the fixed point of the mean-field estimator and derive closed-form expressions for the Stackelberg Nash Equilibrium. Despite being designed from a client-centric incentive perspective, our mechanism achieves approximately optimal social welfare, as revealed by Price of Anarchy (PoA) analysis. Experiments on diverse datasets demonstrate that our approach significantly improves client utilities and reduces server costs while maintaining model performance, outperforming both Social-Agnostic (SA) baselines and methods that account only for single-hop social externalities.
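To make the multi-hop propagation idea concrete, here is a minimal sketch of how indirect privacy leakage might diffuse over a social graph. The per-hop `decay` factor, the additive accumulation, and all names are illustrative assumptions, not the paper's exact formulation:

```python
def effective_privacy_loss(adj, eps, decay=0.5, max_hops=3):
    """Hypothetical multi-hop privacy diffusion sketch (not the paper's model).

    adj:  n x n 0/1 adjacency matrix of the social network
    eps:  each client's own privacy budget (direct leakage)
    Each client's effective loss is its own leakage plus contributions
    from clients up to `max_hops` away, attenuated by decay**k at hop k.
    """
    n = len(eps)
    total = list(eps)      # hop 0: each client's own leakage
    spread = list(eps)     # leakage mass arriving after k hops (already decayed)
    for _ in range(max_hops):
        nxt = [0.0] * n
        for i in range(n):
            for j in range(n):
                if adj[i][j]:
                    nxt[i] += decay * spread[j]  # one more hop, one more decay
        spread = nxt
        for i in range(n):
            total[i] += spread[i]
    return total


# Usage: a 3-client line graph 0 - 1 - 2; the better-connected middle
# client accumulates the largest effective privacy loss.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
loss = effective_privacy_loss(adj, [1.0, 1.0, 1.0], decay=0.5, max_hops=2)
```

Under this toy model, a client's exposure grows with its centrality even when all clients choose identical budgets, which is exactly the externality the abstract describes.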
Problem

Research questions and friction points this paper is trying to address.

Quantify indirect privacy leakage in federated learning via multi-hop propagation
Formulate server-client interaction as a Stackelberg game for optimal incentives
Mitigate information asymmetry in privacy estimation using mean-field approximation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-hop propagation model for privacy leakage
Stackelberg game for server-client incentive optimization
Mean-field estimator for networked privacy risk
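The mean-field estimator in the last bullet can be pictured as a fixed-point iteration: the server posits an average external privacy risk, clients best-respond to it, and the average of their responses becomes the next estimate. The closed-form best response below (a reward/cost ratio shrunk by a coupling to the mean) is a hypothetical stand-in for the paper's derived equilibrium expressions:

```python
def mean_field_fixed_point(n_clients=10, reward=2.0, cost=1.0,
                           coupling=0.3, tol=1e-8, max_iter=1000):
    """Illustrative mean-field iteration m_{t+1} = avg(best responses | m_t).

    With |coupling| < 1 the update m -> reward/cost - coupling*m is a
    contraction, so the iteration converges to a unique fixed point
    (mirroring the convergence result claimed for the estimator).
    """
    m = 0.0  # initial estimate of the average external privacy risk
    for _ in range(max_iter):
        # each client's best-response privacy budget given estimate m
        # (hypothetical closed form; homogeneous clients for simplicity)
        eps = [max(0.0, reward / cost - coupling * m) for _ in range(n_clients)]
        m_new = sum(eps) / n_clients
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m


m_star = mean_field_fixed_point()  # fixed point of m = 2.0 - 0.3 * m
```

With these toy parameters the fixed point solves m = 2.0 - 0.3m, i.e. m* = 2.0/1.3; heterogeneous clients or the paper's actual utility functions would change the best-response line but not the iterate-to-fixed-point structure.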