Scrutinizing the Vulnerability of Decentralized Learning to Membership Inference Attacks

📅 2024-12-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work systematically investigates the privacy vulnerability of decentralized learning architectures under membership inference attacks (MIAs). Methodologically, it combines multi-dataset empirical evaluation with theoretical analysis of graph mixing properties to identify two key determinants of MIA success: (i) the local model mixing strategy, governing inter-node information leakage intensity, and (ii) the global mixing degree of the communication graph, regulating knowledge diffusion efficiency across the network. The study comprehensively examines both static and dynamic graph topologies, as well as diverse aggregation mechanisms, establishing—for the first time—a principled, interpretable linkage between graph structural properties and MIA robustness. Empirical results on CIFAR-10, CIFAR-100, FEMNIST, and Tiny-ImageNet demonstrate that optimizing local mixing and enhancing graph mixing degree collectively reduce MIA success rates by 28.6%–43.2% on average. Based on these findings, the paper proposes embeddable defense design principles, offering both theoretical foundations and practical guidelines for privacy-enhancing decentralized learning.

Technology Category

Application Category

📝 Abstract
The primary promise of decentralized learning is to allow users to engage in the training of machine learning models in a collaborative manner while keeping their data on their premises and without relying on any central entity. However, this paradigm necessitates the exchange of model parameters or gradients between peers. Such exchanges can be exploited to infer sensitive information about training data, which is achieved through privacy attacks (e.g Membership Inference Attacks -- MIA). In order to devise effective defense mechanisms, it is important to understand the factors that increase/reduce the vulnerability of a given decentralized learning architecture to MIA. In this study, we extensively explore the vulnerability to MIA of various decentralized learning architectures by varying the graph structure (e.g number of neighbors), the graph dynamics, and the aggregation strategy, across diverse datasets and data distributions. Our key finding, which to the best of our knowledge we are the first to report, is that the vulnerability to MIA is heavily correlated to (i) the local model mixing strategy performed by each node upon reception of models from neighboring nodes and (ii) the global mixing properties of the communication graph. We illustrate these results experimentally using four datasets and by theoretically analyzing the mixing properties of various decentralized architectures. Our paper draws a set of lessons learned for devising decentralized learning systems that reduce by design the vulnerability to MIA.
Problem

Research questions and friction points this paper is trying to address.

Decentralized learning vulnerability to privacy attacks
Impact of graph structure on Membership Inference Attacks
Mixing strategies influence on MIA vulnerability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized learning architecture analysis
Membership Inference Attacks vulnerability study
Model mixing strategy impact examination
🔎 Similar Papers
No similar papers found.