🤖 AI Summary
Unconditional latent diffusion models (LDMs) widely deployed in medical image synthesis exhibit unintended memorization of original patient data—a critical privacy leakage risk. Method: We propose the first self-supervised copy-detection framework to systematically assess and quantify memorization across CT, MR, and X-ray modalities. Contribution/Results: Our analysis reveals that although LDMs surpass VAEs and GANs in image fidelity, they incur significantly higher memorization risk. We identify key training factors governing memorization: data augmentation, model size reduction, and increased training dataset scale effectively mitigate memorization, whereas overfitting exacerbates leakage. This work provides the first empirical characterization of memorization in medical LDMs, establishing foundational insights for privacy-preserving medical AI development and offering actionable guidelines for mitigating reconstruction-based privacy risks in clinical deep learning applications.
📝 Abstract
AI models present a wide range of applications in the field of medicine. However, achieving optimal performance requires access to extensive healthcare data, which is often not readily available. Furthermore, the imperative to preserve patient privacy restricts the sharing of patient data with third parties and even within institutions. Recently, generative AI models have been gaining traction as a means of facilitating open-data sharing by proposing synthetic data as surrogates for real patient data. Despite this promise, some of these models are susceptible to patient data memorization, where models generate copies of patient data instead of novel synthetic samples. Considering the importance of the problem, it has surprisingly received relatively little attention in the medical imaging community. To this end, we assess memorization in unconditional latent diffusion models. We train latent diffusion models on CT, MR, and X-ray datasets for synthetic data generation, then detect the amount of memorized training data using our novel self-supervised copy detection approach and further investigate various factors that can influence memorization. Our findings show a surprisingly high degree of patient data memorization across all datasets. A comparison with non-diffusion generative models, such as autoencoders and generative adversarial networks, indicates that while latent diffusion models are more susceptible to memorization, they overall outperform non-diffusion models in synthesis quality. Further analyses reveal that augmentation strategies, smaller architectures, and larger training datasets can reduce memorization, while over-training the models can increase it. Collectively, our results emphasize the importance of carefully training generative models on private medical imaging datasets and of examining synthetic data to ensure patient privacy before sharing it for medical research and applications.
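The abstract does not spell out the mechanics of the copy-detection step, but approaches of this kind typically embed both training and synthetic images with a self-supervised encoder and flag synthetic samples whose nearest training neighbor exceeds a similarity threshold. The sketch below illustrates that final nearest-neighbor stage only; the function name `flag_memorized`, the toy 2-D "embeddings", and the threshold value are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def flag_memorized(train_emb, synth_emb, threshold=0.95):
    """Flag synthetic samples that are near-copies of training samples.

    Both embedding matrices are assumed L2-normalized (rows are unit
    vectors), so the dot product is cosine similarity. The threshold is a
    hypothetical value; in practice it would be calibrated on held-out data.
    """
    # Cosine similarity of every synthetic sample to every training sample:
    # shape (n_synth, n_train).
    sims = synth_emb @ train_emb.T
    nearest_sim = sims.max(axis=1)     # best-match similarity per synthetic image
    nearest_idx = sims.argmax(axis=1)  # index of the closest training image
    return nearest_sim > threshold, nearest_sim, nearest_idx

def l2norm(x):
    """Normalize each row to unit length."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Toy example: the first synthetic "image" is almost identical to a
# training sample, the second is clearly distinct.
train = l2norm(np.array([[1.0, 0.0], [0.0, 1.0]]))
synth = l2norm(np.array([[0.99, 0.01], [0.5, 0.5]]))
flags, sims, idx = flag_memorized(train, synth, threshold=0.95)
```

In a real setting the encoder producing these embeddings would itself be trained with a self-supervised objective (e.g. contrastive learning) so that near-duplicate images map to nearby vectors, which is what makes the simple nearest-neighbor test meaningful.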