🤖 AI Summary
Assessing synthetic tabular data quality faces five key challenges: lack of methodological consensus, misuse of evaluation metrics, insufficient domain-expert involvement, incomplete reporting of data characteristics, and poor reproducibility of results — challenges that are particularly acute in healthcare. This study systematically reviews 101 peer-reviewed publications following the PRISMA framework, augmented by interdisciplinary expert consensus and reproducibility audits. We propose, for the first time, a comprehensive, end-to-end guideline for synthetic data generation and evaluation tailored to healthcare applications. The guideline emphasizes cross-disciplinary collaboration, interpretable and context-aware evaluation frameworks, and standardized reporting protocols, and it synthesizes actionable, domain-specific recommendations grounded in empirical evidence and stakeholder input. Our work advances the trustworthy deployment of synthetic data in privacy-preserving analytics and AI development, supporting safety, reliability, and verifiability in real-world clinical and research settings.
📝 Abstract
Generating synthetic tabular data can be challenging; however, evaluating its quality is just as challenging, if not more so. This systematic review sheds light on the critical importance of rigorous evaluation of synthetic health data to ensure its reliability, relevance, and appropriate use. Based on a screening of 1766 papers and a detailed review of 101, we identified key challenges, including a lack of consensus on evaluation methods, improper use of evaluation metrics, limited input from domain experts, inadequate reporting of dataset characteristics, and limited reproducibility of results. In response, we provide guidelines on the generation and evaluation of synthetic data, enabling the community to fully harness the transformative potential of synthetic data and accelerate innovation.