HULFSynth: An INR-based Super-Resolution and Ultra-Low-Field MRI Synthesis via Contrast Factor Estimation

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses bidirectional unsupervised single-image synthesis between high-field (HF) and ultra-low-field (ULF) magnetic resonance (MR) images, as well as ULF-to-HF super-resolution reconstruction. We propose a physics-driven implicit neural representation (INR) framework that integrates tissue-specific signal-to-noise ratio (SNR) estimation and contrast factor modeling into the INR, yielding a differentiable forward physical model that requires no paired training data. By jointly incorporating tissue segmentation priors and signal intensity modeling, the framework enables unified cross-field-strength translation and super-resolution. Evaluated on both synthetic data and real 64-mT ULF MR acquisitions, the method improves white-matter–gray-matter contrast by 52% and 37%, respectively. Sensitivity analysis confirms robustness to noise and initialization. Our key contribution is an unpaired, physically interpretable, and bidirectionally compatible MRI field-strength translation method.

📝 Abstract
We present an unsupervised single-image bidirectional Magnetic Resonance Image (MRI) synthesizer that synthesizes an Ultra-Low-Field (ULF)-like image from a High-Field (HF) magnitude image and vice versa. Unlike existing MRI synthesis models, our approach is inspired by the physics that drives contrast changes between HF and ULF MRIs. Our forward model simulates an HF-to-ULF transformation by estimating the tissue-type Signal-to-Noise Ratio (SNR) values based on target contrast values. For the super-resolution task, we used an Implicit Neural Representation (INR) network to synthesize the HF image by simultaneously predicting tissue-type segmentations and image intensity without observed HF data. The proposed method is evaluated using synthetic ULF-like data generated from standard 3T T$_1$-weighted images for qualitative assessments and paired 3T-64mT T$_1$-weighted images for validation experiments. WM-GM contrast improved by 52% in synthetic ULF-like images and 37% in 64mT images. Sensitivity experiments demonstrated the robustness of our forward model to variations in target contrast, noise, and initial seeding.
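The forward model described in the abstract can be pictured roughly as follows: per-tissue intensities are rescaled by estimated contrast factors, and noise is added to match a target tissue SNR. This is a minimal illustrative sketch assuming a simple multiplicative-contrast, additive-Gaussian-noise model; the function name `hf_to_ulf`, the contrast-factor dictionary, and the SNR calibration below are hypothetical and not the authors' implementation.

```python
import numpy as np

def hf_to_ulf(hf_image, tissue_labels, contrast_factors, target_snr, rng=None):
    """Toy HF-to-ULF forward model (assumed form, not the paper's code).

    hf_image:         HF magnitude image, float array.
    tissue_labels:    integer segmentation map (0 = background).
    contrast_factors: {tissue_label: multiplicative contrast factor}.
    target_snr:       desired mean-signal / noise-sigma ratio.
    """
    rng = np.random.default_rng() if rng is None else rng
    ulf = np.zeros_like(hf_image, dtype=float)
    # Rescale each tissue class by its estimated contrast factor.
    for label, factor in contrast_factors.items():
        mask = tissue_labels == label
        ulf[mask] = hf_image[mask] * factor
    # Calibrate additive Gaussian noise to the target SNR.
    signal = ulf[tissue_labels > 0].mean()
    sigma = signal / target_snr
    return ulf + rng.normal(0.0, sigma, size=ulf.shape)
```

For example, scaling white matter less than gray matter would reduce WM-GM contrast, mimicking the flattened contrast seen at ultra-low field.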
Problem

Research questions and friction points this paper is trying to address.

Synthesizes Ultra-Low Field MRI from High-Field images bidirectionally
Estimates tissue-type contrast factors for physics-driven MRI transformation
Uses Implicit Neural Representation for super-resolution without observed HF data
Innovation

Methods, ideas, or system contributions that make the work stand out.

INR network synthesizes HF images without observed data
Forward model estimates tissue SNR for HF-ULF transformation
Unsupervised bidirectional MRI synthesis via contrast factor estimation
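The INR idea summarized above can be sketched as a coordinate network that maps a spatial position jointly to an image intensity and tissue-class logits, so segmentation and intensity are predicted simultaneously. The toy forward pass below assumes a plain tanh MLP with randomly initialized weights; the layer sizes, activation, and function names are illustrative assumptions, since the paper's actual architecture and losses are not reproduced here.

```python
import numpy as np

def init_weights(rng, sizes):
    """Random placeholder weights for a small MLP (not trained parameters)."""
    return [(rng.normal(0.0, 0.5, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def inr_forward(coords, weights, n_tissues=3):
    """Toy coordinate-MLP: normalized (x, y, z) -> (intensity, tissue logits).

    coords:  (N, 3) array of spatial coordinates in [-1, 1].
    weights: list of (W, b) pairs; the last layer emits 1 + n_tissues values.
    """
    h = coords
    for W, b in weights[:-1]:
        h = np.tanh(h @ W + b)      # hidden layers
    W, b = weights[-1]
    out = h @ W + b                 # final linear layer
    intensity = out[:, 0]           # predicted image intensity per coordinate
    tissue_logits = out[:, 1:1 + n_tissues]  # per-tissue class scores
    return intensity, tissue_logits
```

Querying such a network at a denser coordinate grid than the observed voxels is what makes super-resolution natural for INRs: the representation is continuous in space rather than tied to the acquisition grid.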
Pranav Indrakanti
LILI Lab, Department of Informatics, University of Sussex, Brighton, UK.
Ivor Simpson
University of Sussex
Computer vision · Generative Models · Machine learning · Medical image analysis