🤖 AI Summary
The scarcity of FDG PET data severely limits unsupervised brain abnormality detection. Method: We propose a conditional GAN (cGAN) cross-modal synthesis framework that generates high-fidelity synthetic FDG PET images from T1-weighted MRI and, as a novel step, integrates them directly into an unsupervised anomaly detection pipeline. The cGAN uses a U-Net generator, adds a deep feature-space reconstruction loss, and introduces a SPADE-based self-supervised anomaly scoring mechanism, enabling knowledge transfer without ground-truth PET annotations. Results: On the ADNI dataset, the method improves AUC by 12.3% over baselines that use either MRI alone or real PET. A radiologist-blinded evaluation confirms the clinical-grade fidelity of the synthesized PET images. By removing the need for paired PET ground truth, this work establishes a generalizable cross-modal augmentation paradigm for anomaly detection in low-resource medical imaging.
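To make the objective concrete, here is a minimal, dependency-free sketch of the two ingredients the summary names: a generator loss combining an adversarial term with a deep feature-space reconstruction term, and a residual-based anomaly score comparing an observed PET to the PET synthesized from MRI. This is an illustrative assumption, not the authors' code: the feature extractor, the weight `lam`, and the simple mean-absolute-residual score stand in for the paper's actual SPADE-based scoring.

```python
# Illustrative sketch (assumed, not the paper's implementation) of a
# cGAN generator objective with a feature-space reconstruction term,
# and a simple residual-based anomaly score. Inputs are flat float
# vectors standing in for images / deep feature maps.

import math

def l1(a, b):
    """Mean absolute error between two flat vectors (pixels or deep features)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def adversarial_loss(d_fake):
    """Non-saturating generator loss: -log D(G(mri)), with D's output in (0, 1]."""
    eps = 1e-12  # numerical guard against log(0)
    return -math.log(d_fake + eps)

def generator_loss(d_fake, feat_fake, feat_real, lam=10.0):
    """L_G = L_adv + lam * L_feat, where L_feat compares deep features of the
    synthesized PET against those of the real PET (lam is an assumed weight)."""
    return adversarial_loss(d_fake) + lam * l1(feat_fake, feat_real)

def anomaly_score(pet_observed, pet_synth):
    """Per-image score: mean absolute residual between the observed PET and
    the PET synthesized from the patient's MRI. Large residuals flag regions
    the MRI-conditioned generator cannot explain."""
    return l1(pet_observed, pet_synth)

# Usage: a perfectly fooled discriminator with matched features gives ~0 loss;
# a mismatched observed/synthetic PET pair gets a nonzero anomaly score.
loss = generator_loss(1.0, [0.0, 1.0], [0.0, 1.0])
score = anomaly_score([1.0, 2.0], [1.0, 3.0])
```

At inference time, no PET annotations are needed: the score depends only on the observed PET and the MRI-conditioned synthesis, which is what lets the pipeline remain unsupervised.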