pyMEAL: A Multi-Encoder Augmentation-Aware Learning for Robust and Generalizable Medical Image Translation

πŸ“… 2025-05-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Poor generalization and robustness of 3D medical image analysis stem from scanning protocol variations, equipment heterogeneity, and patient motion. To address this, we propose an Enhancement-Aware Multi-Encoder Learning (EAMEL) framework. Our method treats diverse image augmentations as complementary feature sources and introduces a novel adaptive controller module (BD) that enables protocol-agnostic, structurally faithful feature fusion while preserving augmentation-specific representations. Integrating deep generative modeling, a multi-branch encoder architecture, and feature-level adaptive fusion, EAMEL is applied to CT-to-T1-MRI cross-modal translation. Experiments demonstrate statistically significant improvements over state-of-the-art baselines in PSNR (+2.1 dB) and SSIM (+0.042), alongside superior robustness to geometric transformations and input perturbations. Comprehensive ablation studies and cross-dataset evaluations validate strong generalization across diverse scanners, protocols, and anatomical domains.

Technology Category

Application Category

πŸ“ Abstract
Medical imaging is critical for diagnostics, but clinical adoption of advanced AI-driven imaging faces challenges due to patient variability, image artifacts, and limited model generalization. While deep learning has transformed image analysis, 3D medical imaging still suffers from data scarcity and inconsistencies due to acquisition protocols, scanner differences, and patient motion. Traditional augmentation uses a single pipeline for all transformations, disregarding the unique traits of each augmentation and struggling with large data volumes. To address these challenges, we propose a Multi-encoder Augmentation-Aware Learning (MEAL) framework that leverages four distinct augmentation variants processed through dedicated encoders. Three fusion strategies such as concatenation (CC), fusion layer (FL), and adaptive controller block (BD) are integrated to build multi-encoder models that combine augmentation-specific features before decoding. MEAL-BD uniquely preserves augmentation-aware representations, enabling robust, protocol-invariant feature learning. As demonstrated in a Computed Tomography (CT)-to-T1-weighted Magnetic Resonance Imaging (MRI) translation study, MEAL-BD consistently achieved the best performance on both unseen- and predefined-test data. On both geometric transformations (like rotations and flips) and non-augmented inputs, MEAL-BD outperformed other competing methods, achieving higher mean peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) scores. These results establish MEAL as a reliable framework for preserving structural fidelity and generalizing across clinically relevant variability. By reframing augmentation as a source of diverse, generalizable features, MEAL supports robust, protocol-invariant learning, advancing clinically reliable medical imaging solutions.
Problem

Research questions and friction points this paper is trying to address.

Addresses data scarcity and inconsistencies in 3D medical imaging
Improves robustness against patient variability and image artifacts
Enhances model generalization across different acquisition protocols
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-encoder framework for augmentation-aware learning
Fusion strategies combine augmentation-specific features
Preserves structural fidelity across clinical variability
πŸ”Ž Similar Papers
No similar papers found.
A
A. Ilyas
Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China
A
Adeleke Maradesa
Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China
J
Jamal Banzi
Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China; Department of Informatics, Sokoine University of Agriculture, Chuo Kikuu Morogoro, Tanzania
Jianpan Huang
Jianpan Huang
Assistant Professor, The University of Hong Kong
MRICEST MRIMedical ImagingAINeurodegenerative Diseases
H
Henry K.F. Mak
Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong, China; State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong; Alzheimer’s Disease Research Network, The University of Hong Kong, Hong Kong
K
Kannie W.Y. Chan
Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China; Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, MD, USA; City University of Hong Kong Shenzhen Research Institute, Shenzhen, China