🤖 AI Summary
Current evaluations of synthetic data privacy lack quantifiable, comparable metrics: privacy definitions remain ambiguous, and existing measures fail to reflect real-world disclosure risks.
Method: We propose the first benchmark framework based on deliberate risk insertion—integrating legal theory with a no-box threat model—to enable reproducible, cross-method assessment of privacy-utility trade-offs. The approach systematically controls perturbations, models diverse no-box attacks, maps outputs to regulatory compliance criteria, and validates findings on public datasets.
Contribution/Results: Empirical evaluation reveals substantial discrepancies between mainstream privacy metrics (e.g., k-anonymity, differential privacy estimates) and actual re-identification risks under realistic attack scenarios. This work establishes the first evaluation paradigm for privacy-enhancing technologies (PETs) that is simultaneously interpretable, empirically grounded, and aligned with regulatory requirements—thereby bridging theoretical guarantees, practical security, and legal accountability.
📝 Abstract
Synthetic data generation is gaining traction as a privacy-enhancing technology (PET). When properly generated, synthetic data preserve the analytic utility of real data while avoiding the retention of information that would allow the identification of specific individuals. However, the concept of data privacy remains elusive, making it challenging for practitioners to evaluate and benchmark the degree of privacy protection offered by synthetic data. In this paper, we propose a framework to empirically assess the efficacy of tabular synthetic data privacy quantification methods through controlled, deliberate risk insertion. To demonstrate this framework, we survey existing approaches to synthetic data privacy quantification and the related legal theory. We then apply the framework to the main privacy quantification methods with no-box threat models on publicly available datasets.
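To make the idea of "deliberate risk insertion" concrete, here is a minimal, hypothetical sketch (not the authors' actual benchmark) of how one might plant known-risk records in a synthetic table and check whether a simple distance-based privacy metric flags them. The `dcr` function, the toy Gaussian data, and the choice of distance-to-closest-record as the metric are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def dcr(synthetic, real):
    """Distance to closest record (DCR): for each synthetic row,
    the Euclidean distance to its nearest real row. Rows at or
    near zero distance indicate potential memorization."""
    # Pairwise distances between every synthetic and real row
    diffs = synthetic[:, None, :] - real[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=2))
    return dists.min(axis=1)

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 5))        # stand-in for real records
synthetic = rng.normal(size=(200, 5))   # stand-in for generated records

# Deliberate risk insertion: plant exact copies of real records in
# the synthetic set, simulating worst-case memorization by the
# generator. A sound privacy metric should flag exactly these rows.
planted = real[:5].copy()
risky = np.vstack([synthetic, planted])

scores = dcr(risky, real)
print(scores[-5:])          # planted rows: distance 0 to a real record
print(scores[:-5].min())    # genuinely synthetic rows: strictly positive
```

Because the inserted risk is known in advance (which rows were planted, and how severe the disclosure is), the metric's response can be compared against ground truth, which is what allows different quantification methods to be benchmarked against one another.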