MAISY: Motion-Aware Image SYnthesis for Medical Image Motion Correction

📅 2025-05-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address motion-induced blurring, artifacts, and organ deformation during medical image acquisition, this paper proposes a motion-aware image synthesis framework that first models the motion distribution and then performs targeted motion correction. It integrates the Segment Anything Model (SAM) to dynamically localize motion-sensitive regions along anatomical boundaries while preserving fine-grained pathological detail, and it introduces a variance-selective structural similarity (VS-SSIM) loss that improves robustness to intensity heterogeneity and local intensity variance without sacrificing global structural fidelity. The end-to-end generative adversarial network (GAN) framework outperforms state-of-the-art methods on chest and head CT datasets, improving PSNR by 40%, SSIM by 10%, and the Dice coefficient by 16%, demonstrating superior motion-artifact correction and preservation of pathological information.

📝 Abstract
Patient motion during medical image acquisition causes blurring, ghosting, and organ distortion, which make image interpretation challenging. Current state-of-the-art Generative Adversarial Network (GAN)-based methods, which learn mappings between corrupted images and their ground truth via a Structural Similarity Index Measure (SSIM) loss, effectively generate motion-free images. However, we identified the following limitations: (i) they mainly focus on global structural characteristics and therefore overlook localized features that often carry critical pathological information, and (ii) the SSIM loss function struggles with images that have varying pixel intensities, luminance, and variance. In this study, we propose Motion-Aware Image SYnthesis (MAISY), which first characterizes motion and then uses it for correction by: (a) leveraging the foundation model Segment Anything Model (SAM) to dynamically learn spatial patterns along anatomical boundaries, where motion artifacts are most pronounced, and (b) introducing the Variance-Selective SSIM (VS-SSIM) loss, which adaptively emphasizes spatial regions with high pixel variance to preserve essential anatomical details during artifact correction. Experiments on chest and head CT datasets demonstrate that our model outperforms state-of-the-art counterparts, with Peak Signal-to-Noise Ratio (PSNR) increasing by 40%, SSIM by 10%, and Dice by 16%.
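The variance-selective idea in the abstract can be sketched in a few lines of NumPy: compute a window-wise SSIM, then average it only over windows whose ground-truth variance is high (detail-rich regions). This is a minimal illustration under assumptions, not the paper's implementation; the function names, the `var_quantile` parameter, and the quantile-based selection rule are all hypothetical stand-ins for whatever weighting scheme MAISY actually uses.

```python
import numpy as np

def _windows(img, win):
    """Yield every full win x win patch of a 2-D image, row-major."""
    h, w = img.shape
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            yield img[i:i + win, j:j + win]

def vs_ssim(pred, target, win=7, var_quantile=0.75,
            c1=(0.01 * 1.0) ** 2, c2=(0.03 * 1.0) ** 2):
    """Variance-selective SSIM (illustrative): mean SSIM over windows
    whose *target* variance exceeds the given quantile, so that
    high-detail regions dominate the score."""
    scores, variances = [], []
    for p, t in zip(_windows(pred, win), _windows(target, win)):
        mp, mt = p.mean(), t.mean()
        vp, vt = p.var(), t.var()
        cov = ((p - mp) * (t - mt)).mean()
        ssim = ((2 * mp * mt + c1) * (2 * cov + c2)) / \
               ((mp ** 2 + mt ** 2 + c1) * (vp + vt + c2))
        scores.append(ssim)
        variances.append(vt)
    scores, variances = np.array(scores), np.array(variances)
    mask = variances >= np.quantile(variances, var_quantile)
    return float(scores[mask].mean())
```

A training loss would then be `1 - vs_ssim(pred, target)` (in practice implemented with differentiable tensor ops rather than this loop-based NumPy sketch).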
Problem

Research questions and friction points this paper is trying to address.

Corrects motion artifacts in medical images globally and locally
Addresses limitations of SSIM loss in varying intensity images
Improves image quality via dynamic spatial pattern learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses SAM to dynamically learn spatial patterns
Introduces VS-SSIM loss for adaptive variance emphasis
Combines motion characterization with artifact correction
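One way the SAM contribution above might be operationalized: if SAM yields a binary segmentation mask, the motion-sensitive band along anatomical boundaries can be extracted as a morphological gradient (dilation minus erosion). The sketch below is a NumPy-only illustration under that assumption; `boundary_mask`, its `width` parameter, and the shift-based morphology are hypothetical, not the paper's method, and `np.roll` wraps at image edges (acceptable here because masks rarely touch the border).

```python
import numpy as np

def boundary_mask(seg, width=1):
    """Binary boundary band of a segmentation mask: pixels reachable by
    `width` steps of 4-neighbour dilation but removed by the matching
    erosion, i.e. the rim where foreground meets background."""
    seg = seg.astype(bool)
    dil, ero = seg.copy(), seg.copy()
    for _ in range(width):
        # 4-neighbour dilation/erosion via shifted copies of the mask
        shifts = [np.roll(dil, s, axis=a) for a in (0, 1) for s in (1, -1)]
        dil = dil | np.logical_or.reduce(shifts)
        shifts = [np.roll(ero, s, axis=a) for a in (0, 1) for s in (1, -1)]
        ero = ero & np.logical_and.reduce(shifts)
    return dil & ~ero
```

A correction network could then weight its loss (e.g. the VS-SSIM term) more heavily inside this band, matching the paper's claim that artifacts concentrate along anatomical boundaries.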