MRI Cross-Modal Synthesis: A Comparative Study of Generative Models for T1-to-T2 Reconstruction

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of efficiently and accurately synthesizing T2-weighted MRI images from T1-weighted scans to reduce clinical acquisition time while preserving diagnostic information. Within a unified experimental framework, it presents the first systematic comparison of Pix2Pix GAN, CycleGAN, and variational autoencoders (VAEs) for cross-modal synthesis on the BraTS 2020 dataset. Quantitative evaluation demonstrates that CycleGAN achieves the highest performance in terms of PSNR (32.28 dB) and SSIM (0.9008), while Pix2Pix GAN yields the lowest mean squared error (MSE = 0.005846). Although the VAE exhibits slightly lower quantitative metrics, it demonstrates strong capabilities in latent space modeling. This work establishes a comprehensive benchmark and offers practical guidance for selecting models in MRI cross-modal synthesis tasks.

📝 Abstract
MRI cross-modal synthesis involves generating images from one acquisition protocol using another, offering considerable clinical value by reducing scan time while maintaining diagnostic information. This paper presents a comprehensive comparison of three state-of-the-art generative models for T1-to-T2 MRI reconstruction: Pix2Pix GAN, CycleGAN, and Variational Autoencoder (VAE). Using the BraTS 2020 dataset (11,439 training and 2,000 testing slices), we evaluate these models based on established metrics including Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM). Our experiments demonstrate that all models can successfully synthesize T2 images from T1 inputs, with CycleGAN achieving the highest PSNR (32.28 dB) and SSIM (0.9008), while Pix2Pix GAN provides the lowest MSE (0.005846). The VAE, though showing lower quantitative performance (MSE: 0.006949, PSNR: 24.95 dB, SSIM: 0.6573), offers advantages in latent space representation and sampling capabilities. This comparative study provides valuable insights for researchers and clinicians selecting appropriate generative models for MRI synthesis applications based on their specific requirements and data constraints.
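The abstract evaluates reconstructions with MSE, PSNR, and SSIM. As a minimal sketch of how the first two are typically computed for normalized image slices (the function names and the toy data below are illustrative, not taken from the paper; SSIM is more involved and is usually obtained from a library such as scikit-image's `structural_similarity`):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images (float arrays in [0, 1])."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    err = mse(x, y)
    if err == 0.0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / err))

# Toy example: a "ground-truth" T2 slice vs. a slightly perturbed "synthesized" one.
rng = np.random.default_rng(0)
gt = rng.random((240, 240))
pred = np.clip(gt + rng.normal(0.0, 0.02, gt.shape), 0.0, 1.0)
print(f"MSE:  {mse(gt, pred):.6f}")
print(f"PSNR: {psnr(gt, pred):.2f} dB")
```

Note that PSNR is a monotone transform of MSE for a fixed data range, which is why a model can lead on MSE (as Pix2Pix does here) while another leads on PSNR only if the metrics are averaged differently, e.g. per-slice versus over the whole test set.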
Problem

Research questions and friction points this paper is trying to address.

MRI cross-modal synthesis
T1-to-T2 reconstruction
generative models
medical image synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

MRI cross-modal synthesis
generative models
T1-to-T2 reconstruction
CycleGAN
variational autoencoder
Ali Alqutayfi
Information and Computer Science Department, SDAIA-KFUPM Joint Research Center for Artificial Intelligence, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
Sadam Al-Azani
Research Scientist, SDAIA-KFUPM Joint Research Center for AI, KFUPM
Artificial Intelligence · Arabic NLP · Multimodal Learning · Video Analytics · Social Computing