🤖 AI Summary
Medical synthetic data, intended to enable privacy-preserving data sharing, faces a "sharing paradox": despite its purpose, it often remains restricted in practice because re-identification risk assessment criteria are ambiguous and common privacy metrics (e.g., k-anonymity, robustness to membership inference) are poorly aligned with regulatory requirements for data sharing in healthcare (e.g., GDPR, HIPAA). This article defines and discusses this paradox, clarifies where prevailing privacy metrics fall short in healthcare contexts, and outlines how the field should move forward so that evaluations of synthetic data become auditable, regulator-ready, and actually enable compliant sharing of medical synthetic data.
📝 Abstract
Synthetic data offers a promising solution to privacy concerns in healthcare by generating useful datasets in a privacy-aware manner. However, although synthetic data is typically developed with the intention of sharing it, ambiguous re-identification risk assessments often prevent synthetic datasets from seeing the light of day. One of the main causes is that privacy metrics for synthetic data, which quantify re-identification risk, are not well aligned with practical requirements and regulations governing data sharing in healthcare. This article discusses the paradoxical situation where synthetic data is designed for data sharing but is often still restricted. We also discuss how the field should move forward to mitigate this issue.
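To make the misalignment concrete, consider k-anonymity, one of the metrics mentioned above. The sketch below (illustrative only; the toy records, column names, and the choice of quasi-identifiers are assumptions, not taken from the article) computes the k of a small released table. It also hints at the ambiguity the abstract describes: the number depends entirely on which attributes one declares to be quasi-identifiers, a choice regulations do not fix.

```python
from collections import Counter


def k_anonymity(records, quasi_identifiers):
    """Return the smallest equivalence-class size over the quasi-identifiers.

    A dataset is k-anonymous if every record shares its quasi-identifier
    values with at least k-1 other records.
    """
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return min(groups.values())


# Toy "released" table; age bracket and ZIP prefix act as quasi-identifiers.
data = [
    {"age": "30-39", "zip": "021*", "diagnosis": "A"},
    {"age": "30-39", "zip": "021*", "diagnosis": "B"},
    {"age": "40-49", "zip": "021*", "diagnosis": "A"},
    {"age": "40-49", "zip": "021*", "diagnosis": "C"},
]

print(k_anonymity(data, ["age", "zip"]))          # → 2
print(k_anonymity(data, ["age", "diagnosis"]))    # → 1
```

The same table is 2-anonymous under one attribute choice and 1-anonymous (i.e., some record is unique) under another, which is one reason such metrics map poorly onto yes/no regulatory sharing decisions.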