Decoupling Generalizability and Membership Privacy Risks in Neural Networks

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inherent trade-off between privacy preservation and generalization performance in deep neural networks, where enhancing one often compromises the other. The study reveals, for the first time, a region-wise decoupling between model generalization and vulnerability to membership inference attacks within network architectures: generalization primarily relies on deep-layer features, whereas privacy risks are concentrated in shallow-layer representations. Building on this insight, the authors propose the Privacy-Preserving Training Principle (PPTP), a tailored training strategy that selectively fortifies high-risk components without impairing generalization capability. Extensive experiments demonstrate that PPTP consistently outperforms existing methods across multiple benchmarks, achieving a synergistic optimization of both privacy protection and model utility.

📝 Abstract
A deep learning model usually has to sacrifice some utility when it acquires other abilities or characteristics, and privacy preservation exhibits such a trade-off with utility. The disparity in utility loss across different defense approaches implies the potential to decouple generalizability from privacy risks and thereby maximize privacy gain. In this paper, we identify that a model's generalization ability and its privacy risks reside in different regions of deep neural network architectures. Based on these observations, we propose the Privacy-Preserving Training Principle (PPTP) to protect vulnerable model components from privacy risks while minimizing the loss in generalizability. Extensive evaluations show that our approach maintains model generalizability significantly better than existing methods while enhancing privacy preservation.
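The paper's actual PPTP procedure is not detailed in this summary, but the core idea it describes, protecting high-risk shallow-layer components while leaving deep layers free to generalize, can be illustrated with a minimal sketch. The function below is a hypothetical example (names `selective_noisy_grads`, the two-layer split, and the noise scale `sigma` are all assumptions, not the authors' method): it perturbs only the shallow-layer gradient, on the premise that membership risk concentrates there, and passes the deep-layer gradient through unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer model: W1 is the "shallow" layer, W2 the "deep" layer.
W1 = rng.normal(size=(4, 4))
W2 = rng.normal(size=(4, 1))

def selective_noisy_grads(g1, g2, sigma=0.5):
    """Hypothetical region-wise defense: add Gaussian noise only to the
    shallow-layer gradient g1 (assumed to carry most membership risk),
    leaving the deep-layer gradient g2 (assumed to drive generalization)
    untouched. This is an illustration of the decoupling idea, not the
    paper's PPTP algorithm."""
    noisy_g1 = g1 + rng.normal(scale=sigma, size=g1.shape)
    return noisy_g1, g2

g1 = np.ones_like(W1)
g2 = np.ones_like(W2)
n1, n2 = selective_noisy_grads(g1, g2)

# Deep-layer gradient is preserved exactly; shallow-layer gradient is perturbed.
assert np.allclose(n2, g2)
assert not np.allclose(n1, g1)
```

A region-wise scheme like this contrasts with uniform defenses (e.g., DP-SGD noising every parameter), which is where the summary's claimed privacy-utility decoupling would come from.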
Problem

Research questions and friction points this paper is trying to address.

generalizability
membership privacy
privacy-utility trade-off
neural networks
privacy preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

decoupling
generalizability
membership privacy
privacy-preserving training
neural networks