🤖 AI Summary
This work addresses the critical reliability concerns of generative MRI reconstruction models, which can produce hallucinated anatomical structures under minute input perturbations, posing a significant risk of clinical misdiagnosis. For the first time, the study systematically employs adversarial perturbations to actively induce and quantify such hallucinations, revealing the models' extreme sensitivity to subtle input changes. Experiments on the fastMRI dataset, conducted on both UNet and end-to-end VarNet architectures using adversarial example generation techniques, demonstrate that current models readily produce clinically unreliable hallucinated content. Notably, conventional image quality metrics fail to reliably detect these errors. This research establishes a novel paradigm for evaluating and enhancing the trustworthiness of medical image reconstruction systems.
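The failure of conventional metrics noted above can be illustrated with a toy calculation (not taken from the paper): a small, localized "hallucination" barely moves a global score such as PSNR, even though it would be clinically significant. The image sizes and patch here are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def psnr(ref, img):
    # Peak signal-to-noise ratio for images scaled to [0, 1];
    # higher values conventionally indicate "better" reconstructions.
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(1.0 / mse)

# Stand-in "ground truth" image and a copy with a bright 10x10
# hallucinated patch -- a tiny fraction of the 256x256 pixels.
clean = rng.uniform(0.0, 1.0, (256, 256))
hallucinated = clean.copy()
hallucinated[100:110, 100:110] += 0.5
hallucinated = np.clip(hallucinated, 0.0, 1.0)

# Despite the spurious structure, the global PSNR stays high
# (well above 30 dB), so the metric alone would not flag it.
print(psnr(clean, hallucinated))
```

Because the error is averaged over the whole image, any global fidelity metric dilutes a localized hallucination in the same way, which is consistent with the paper's observation that such metrics are unreliable detectors.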
📝 Abstract
Generative models are increasingly used to improve the quality of medical imaging, such as the reconstruction of magnetic resonance (MR) and computed tomography (CT) images. However, it is well known that such models are susceptible to hallucinations: they may insert features into the reconstructed image that are not actually present in the original. In a medical setting, such hallucinations may endanger patient health, as they can lead to incorrect diagnoses. In this work, we aim to quantify the extent to which state-of-the-art generative models suffer from hallucinations in the context of MR image reconstruction. Specifically, we craft adversarial perturbations, resembling random noise, of the unprocessed input images that induce hallucinations when the images are reconstructed by a generative model. We perform this evaluation on the brain and knee images from the fastMRI dataset, using UNet and end-to-end VarNet architectures to reconstruct the images. Our results show that these models are highly susceptible to small perturbations and can easily be coaxed into producing hallucinations. This fragility may partially explain why hallucinations occur in the first place and suggests that a carefully constructed adversarial training routine may reduce their prevalence. Moreover, these hallucinations cannot be reliably detected using traditional image quality metrics. Novel approaches will therefore need to be developed to detect when hallucinations have occurred.
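The abstract does not spell out the attack procedure, but perturbation crafting of this kind is typically done with projected gradient ascent under a small norm budget. The sketch below is a minimal, self-contained illustration, not the paper's method: the learned reconstruction network (UNet or VarNet) is replaced by a toy linear map `R` so the gradient can be written analytically, and the loss, the region of interest `roi`, and the L-infinity budget `eps` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned reconstruction model: a fixed
# linear map on the flattened measurement vector. A real UNet/VarNet
# is nonlinear, and autodiff would supply the gradient instead.
n = 64
R = rng.standard_normal((n, n)) / np.sqrt(n)

x = rng.standard_normal(n)           # clean (undersampled) measurement
roi = np.zeros(n)
roi[:8] = 1.0                        # region where we try to induce change

def recon(z):
    return R @ z

def loss(delta):
    # Objective: maximize how much the perturbed reconstruction
    # deviates from the clean one inside the region of interest.
    return np.sum((roi * (recon(x + delta) - recon(x))) ** 2)

def grad(delta):
    # Analytic gradient of the quadratic loss for the linear model:
    # 2 R^T D^2 R delta, with D = diag(roi).
    return 2.0 * R.T @ (roi ** 2 * (R @ delta))

# Projected gradient ascent with an L-infinity budget eps, so the
# perturbation stays visually indistinguishable from noise.
eps, step, iters = 0.01, 0.005, 100
delta = rng.uniform(-eps, eps, n)    # random start inside the budget
for _ in range(iters):
    delta = delta + step * np.sign(grad(delta))
    delta = np.clip(delta, -eps, eps)   # project back onto the budget
```

The key design point this sketch captures is that the attacker maximizes a *localized* deviation rather than global error, which is what turns an imperceptible input perturbation into a spurious anatomical structure in the output.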