🤖 AI Summary
To address severe noise, structural distortion, and cross-modal inconsistency in low-dose PET/CT image reconstruction, this paper proposes a multi-branch variational autoencoder (MB-VAE) framework for synergistic, learned reconstruction. The trained MB-VAE is embedded in a generative-prior regularizer that jointly models PET–CT image pairs, suppressing noise while enforcing anatomical consistency across modalities at the feature level. The approach is demonstrated on both MNIST and low-dose PET/CT datasets, where it improves image quality relative to unimodal generative priors; the authors note that challenges such as patch decomposition and model limitations remain.
📝 Abstract
This paper presents a novel approach for learned synergistic reconstruction of medical images using multi-branch generative models. Leveraging variational autoencoders (VAEs), our model learns from pairs of images simultaneously, enabling effective denoising and reconstruction. Synergistic image reconstruction is achieved by incorporating the trained models in a regularizer that evaluates the distance between the images and the model. We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET)/computed tomography (CT) datasets, showcasing improved image quality for low-dose imaging. Despite challenges such as patch decomposition and model limitations, our results underscore the potential of generative models for enhancing medical imaging reconstruction.
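The core idea above — reconstruct by balancing fidelity to the measured data against the distance from the image pair to a trained generative model — can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in: the paper's trained MB-VAE is replaced by a fixed linear "decoder" `D` whose column space mimics the learned joint PET–CT manifold, the forward model is the identity (pure denoising), and `project_to_prior` plays the role of evaluating the image-to-model distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained multi-branch generative model: a linear joint
# "decoder" whose column space acts as the learned PET-CT manifold.
# (Hypothetical; the paper uses a trained MB-VAE, not a linear map.)
n, k = 16, 4                       # flattened image size per modality, latent size
D = rng.normal(size=(2 * n, k))    # joint decoder for the stacked pair [PET; CT]

def project_to_prior(x):
    """Closest point to x on the model 'manifold' (least-squares in latent space)."""
    z, *_ = np.linalg.lstsq(D, x, rcond=None)
    return D @ z

# Ground-truth pair lies on the manifold; measurements are noisy low-dose data.
x_true = D @ rng.normal(size=k)
y = x_true + 0.3 * rng.normal(size=2 * n)   # identity forward model + noise

# Synergistic reconstruction: minimize ||x - y||^2 + lam * ||x - G(x)||^2
# by gradient descent, where G(x) = project_to_prior(x).
lam, step = 5.0, 0.05
x = y.copy()
for _ in range(500):
    grad = 2 * (x - y) + 2 * lam * (x - project_to_prior(x))
    x -= step * grad

err_noisy = np.linalg.norm(y - x_true)   # error of the raw noisy data
err_recon = np.linalg.norm(x - x_true)   # error after regularized reconstruction
print(err_noisy, err_recon)
```

Because the regularizer only penalizes the component of the image pair that lies off the model manifold, noise orthogonal to the learned structure is shrunk while on-manifold content is preserved — the same mechanism, in miniature, that the learned MB-VAE prior provides for real PET/CT pairs.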