🤖 AI Summary
To address the scarcity of tau PET imaging data for cross-modal synthesis, this work proposes a method for translating 3D T1-weighted MRI into 3D tau PET images. To overcome the limitations of existing 3D/2.5D perceptual losses, namely the lack of pretrained 3D feature extractors and the difficulty of balancing loss reduction across planes, the authors introduce a cyclic 2.5D perceptual loss: 2D perceptual errors are computed alternately over axial, coronal, and sagittal slices, with the cycle length gradually shortened during training. Additionally, by-manufacturer standardization of PET intensities is applied to improve preservation of high-SUVR pathological regions and to reduce SUVR variability across scanner vendors. The framework is architecture-agnostic, working with U-Net, UNETR, SwinUNETR, CycleGAN, and Pix2Pix, and is trained with a combination of SSIM, MSE, and the proposed loss. Experiments demonstrate significant improvements in quantitative metrics (PSNR, SSIM, LPIPS), tau lesion structural fidelity, cross-scanner robustness, and clinical diagnostic consistency.
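The plane-cycling schedule described above might be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function names (`cycle_length`, `plane_for_epoch`) and the halving schedule (`initial=8` epochs per plane, halved every `decay_every=30` epochs) are assumptions chosen to show the idea of a dwell time per plane that shrinks during training.

```python
# Hypothetical sketch of the cyclic plane schedule; parameter values
# (initial=8, decay_every=30) are illustrative assumptions.

PLANES = ("axial", "coronal", "sagittal")

def cycle_length(epoch, initial=8, min_len=1, decay_every=30):
    """Epochs spent on one plane; halves every `decay_every` epochs."""
    return max(min_len, initial >> (epoch // decay_every))

def plane_for_epoch(epoch, initial=8, min_len=1, decay_every=30):
    """Plane whose 2D perceptual loss is computed at this epoch."""
    length = cycle_length(epoch, initial, min_len, decay_every)
    return PLANES[(epoch // length) % len(PLANES)]
```

Under this toy schedule, epochs 0-7 compute the loss on axial slices, epochs 8-15 on coronal slices, and so on, with each plane's dwell time shrinking as training progresses.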
📝 Abstract
There is a demand for medical image synthesis or translation to generate synthetic images of missing modalities from available data. This need stems from challenges such as restricted access to high-cost imaging devices, government regulations, or failure to follow up with patients or study participants. In medical imaging, preserving high-level semantic features is often more critical than achieving pixel-level accuracy. Perceptual loss functions are widely employed to train medical image synthesis or translation models, as they quantify differences in high-level image features using a pre-trained feature extraction network. While 3D and 2.5D perceptual losses are used in 3D medical image synthesis, they face challenges such as the lack of pre-trained 3D models or difficulties in balancing loss reduction across different planes. In this work, we focus on synthesizing 3D tau PET images from 3D T1-weighted MR images. We propose a cyclic 2.5D perceptual loss that computes the average 2D perceptual loss over the axial, coronal, and sagittal planes in turn across epochs, with the cycle duration gradually decreasing. Additionally, we process tau PET images with by-manufacturer standardization to enhance the preservation of high-SUVR regions indicative of tau pathology and mitigate SUVR variability caused by inter-manufacturer differences. We combine the proposed loss with SSIM and MSE losses and demonstrate its effectiveness in improving both quantitative and qualitative performance across various generative models, including U-Net, UNETR, SwinUNETR, CycleGAN, and Pix2Pix.
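The slice-averaged 2D perceptual term for one plane might look like the following minimal NumPy sketch. This is an assumption-laden illustration, not the paper's code: `toy_features` is a crude 2x2 average-pooling stand-in for the pre-trained 2D feature extraction network (e.g., a VGG-style backbone), and `slice_perceptual_loss` simply averages the feature-space MSE over all 2D slices along the chosen plane.

```python
import numpy as np

def toy_features(img):
    # Stand-in for a pre-trained 2D feature extractor (assumption):
    # 2x2 average pooling as a crude "feature map".
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def slice_perceptual_loss(pred, target, plane, features):
    """Average feature-space MSE over all 2D slices along one plane.

    pred, target: 3D volumes of equal shape; plane selects the slicing axis.
    """
    axis = {"axial": 0, "coronal": 1, "sagittal": 2}[plane]
    n = pred.shape[axis]
    total = 0.0
    for i in range(n):
        p = np.take(pred, i, axis=axis)
        t = np.take(target, i, axis=axis)
        total += np.mean((features(p) - features(t)) ** 2)
    return total / n
```

Under the cyclic schedule, only one plane's slice loss is evaluated per epoch, so the cost stays close to a plain 2D perceptual loss while all three orientations are covered over time.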