Data Reconstruction Attacks and Defenses: A Systematic Evaluation

📅 2024-02-13
🏛️ arXiv.org
📈 Citations: 4
✨ Influential: 1
📄 PDF
🤖 AI Summary
Existing evaluations of data reconstruction attacks and defenses in machine learning lack theoretical foundations, making it difficult to distinguish genuine defense efficacy from limitations imposed by attacker computational resources. Method: We propose a systematic evaluation framework grounded in inverse problem modeling. For two-layer neural networks, we derive the first algorithmic upper bound and information-theoretic lower bound on reconstruction error. We further design a unified utility–privacy metric to rectify misjudgments of defense strength prevalent in prior work. Contribution/Results: Through gradient inversion experiments under strong adversarial conditions, we empirically validate the true privacy-preserving capability of multiple defenses. Our analysis uncovers fundamental performance bottlenecks of mainstream defenses, establishes a reproducible and comparable benchmark suite, and introduces a novel paradigm bridging theoretical analysis and empirical assessment of privacy-preserving mechanisms.

📝 Abstract
Reconstruction attacks and defenses are essential to understanding the data leakage problem in machine learning. However, prior work has centered on empirical observations of gradient inversion attacks, lacks theoretical grounding, and cannot disentangle the usefulness of defense methods from the computational limitations of attack methods. In this work, we propose to view the problem as an inverse problem, enabling us to theoretically and systematically evaluate data reconstruction attacks. For various defense methods, we derive the algorithmic upper bound and the matching (in feature dimension and architecture dimension) information-theoretic lower bound on the reconstruction error for two-layer neural networks. To complement the theoretical results and investigate the utility-privacy trade-off, we define a natural evaluation metric for defense methods with similar utility loss among the strongest attacks. We further propose a strong reconstruction attack that revises some previous understanding of the strength of defense methods under our proposed evaluation metric.
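To make the gradient inversion setting concrete, here is a minimal sketch of the basic attack the paper studies: an attacker observes the gradient of a two-layer ReLU network's loss at a private training point and recovers the point by matching gradients from a dummy input. All sizes, names, and the optimizer are illustrative assumptions, not the paper's method.

```python
# Toy gradient-inversion sketch (illustrative only; not the paper's attack).
import numpy as np

rng = np.random.default_rng(0)
d, h = 4, 8                          # input / hidden width (hypothetical toy sizes)
W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(1, h))
x_true = rng.normal(size=d)          # the "private" training point
t = 1.0                              # its label

def grads(x):
    """Analytic gradients of 0.5 * (W2 @ relu(W1 @ x) - t)^2 w.r.t. W1 and W2."""
    z = W1 @ x
    a = np.maximum(z, 0.0)           # ReLU activations
    err = (W2 @ a - t).item()
    gW2 = err * a[None, :]
    gW1 = err * (W2.ravel() * (z > 0))[:, None] * x[None, :]
    return np.concatenate([gW1.ravel(), gW2.ravel()])

g_obs = grads(x_true)                # the gradient the attacker observes

def match_loss(x):
    diff = grads(x) - g_obs
    return 0.5 * float(diff @ diff)

# Recover x by descending the gradient-matching loss, using central finite
# differences and backtracking line search so the loss decreases monotonically.
x_hat = rng.normal(size=d)
x0 = x_hat.copy()
eps = 1e-5
for _ in range(300):
    g = np.array([(match_loss(x_hat + eps * e) - match_loss(x_hat - eps * e)) / (2 * eps)
                  for e in np.eye(d)])
    step = 0.1
    while step > 1e-10:
        cand = x_hat - step * g
        if match_loss(cand) < match_loss(x_hat):
            x_hat = cand
            break
        step *= 0.5

print("matching loss: %.2e -> %.2e" % (match_loss(x0), match_loss(x_hat)))
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```

The defenses the paper evaluates (e.g. gradient noising or pruning) would perturb `g_obs` before the attacker sees it; the paper's bounds characterize how small the reconstruction error can be made despite such perturbations.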
Problem

Research questions and friction points this paper is trying to address.

Evaluate data reconstruction attacks and defenses systematically
Establish theoretical bounds for reconstruction error in neural networks
Propose strong attack to reassess defense method effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model data reconstruction as an inverse problem for theoretical analysis
Derive matching upper and lower reconstruction-error bounds for two-layer neural networks
Propose a strong attack evaluated under the new utility-privacy metric
Sheng Liu, Stanford University
Zihan Wang, New York University
Qi Lei, New York University