SwinCCIR: An end-to-end deep network for Compton camera imaging reconstruction

📅 2025-12-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Compton camera reconstruction suffers from intrinsic artifacts, geometric distortions, and system calibration errors, severely limiting imaging quality. This paper proposes the first end-to-end deep learning framework that directly reconstructs the spatial distribution of radioactive sources from list-mode events, bypassing the conventional two-stage paradigm. Its core innovation lies in the first application of the Swin Transformer to Compton imaging, integrated with event encoding and a transposed convolutional generator to enable physics-informed mapping from the event domain to the image domain—implicitly compensating for model inaccuracies and system imperfections. Trained jointly on simulated and experimental data, the method substantially suppresses artifacts and improves both spatial resolution and quantitative accuracy, outperforming state-of-the-art iterative and supervised learning approaches. It demonstrates strong potential for clinical and field-deployable applications.

📝 Abstract
Compton cameras (CCs) are gamma cameras designed to determine the directions of incident gamma rays from Compton scattering. However, CC reconstruction suffers from severe artifacts and deformation because it is fundamentally based on back-projecting Compton cones. In addition, some systematic errors originating from device performance are difficult to remove through calibration, further degrading imaging quality. Iterative algorithms and deep-learning methods have been widely used to improve reconstruction, but most of them only refine the results of back-projection. We therefore propose SwinCCIR, an end-to-end deep learning framework for CC imaging. By combining Swin Transformer blocks with a transposed-convolution-based image generation module, it learns a direct mapping from list-mode events to the radioactive source distribution. SwinCCIR was trained and validated on both simulated and experimental datasets. The results indicate that SwinCCIR effectively overcomes the problems of conventional CC imaging and is promising for practical applications.
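As context for the back-projection baseline the abstract critiques, here is a minimal pure-Python sketch of cone back-projection in a toy 2-D geometry. All event values and the grid setup are illustrative assumptions, not data or code from the paper; it only shows why overlapping cones leave a halo of artifacts around the true source that the end-to-end network aims to remove.

```python
import math

def backproject(events, grid_size=64, extent=10.0, tol=0.05):
    """Toy 2-D list-mode back-projection of Compton cones.

    Each event is (apex_x, apex_y, axis_angle, cone_angle): the
    scatter position (cone apex), the cone-axis direction, and the
    Compton scattering angle. A pixel is incremented when the
    apex-to-pixel direction deviates from the cone axis by roughly
    the cone angle, i.e. the pixel lies on the back-projected cone.
    """
    img = [[0.0] * grid_size for _ in range(grid_size)]
    step = 2.0 * extent / grid_size
    for ax, ay, axis, theta in events:
        ux, uy = math.cos(axis), math.sin(axis)  # cone-axis unit vector
        for j in range(grid_size):
            y = -extent + (j + 0.5) * step
            for i in range(grid_size):
                x = -extent + (i + 0.5) * step
                dx, dy = x - ax, y - ay
                r = math.hypot(dx, dy)
                if r == 0.0:
                    continue
                cos_ang = max(-1.0, min(1.0, (dx * ux + dy * uy) / r))
                if abs(math.acos(cos_ang) - theta) < tol:
                    img[j][i] += 1.0
    return img

# Three synthetic events whose cones all pass through the origin,
# mimicking a point source seen by detectors on the -y side:
events = [
    (0.0, -12.0, math.pi / 2 + 0.3, 0.3),
    (8.0, -12.0, math.atan2(12.0, -8.0) - 0.4, 0.4),
    (-8.0, -12.0, math.atan2(12.0, 8.0) + 0.4, 0.4),
]
img = backproject(events)
```

All three cones overlap only at the source pixel, but each cone also smears counts along its entire surface; those residual traces are the intrinsic artifacts the abstract refers to.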
Problem

Research questions and friction points this paper is trying to address.

Reconstructs Compton camera images with reduced artifacts
Addresses systematic errors from device performance limitations
Establishes direct event-to-distribution mapping via deep learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end deep learning framework for Compton camera imaging
Swin-transformer blocks for event-to-image relationship
Transposed convolution module generates radioactive source distribution
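The image-generation module listed above relies on transposed convolution to upsample encoded event features into an image. As an illustration of that primitive only (the paper's module is 2-D with learned kernels; this toy is 1-D with a fixed kernel and is not the authors' code):

```python
def conv_transpose_1d(x, kernel, stride=2):
    """Minimal 1-D transposed convolution: each input element scatters
    a scaled copy of the kernel into the output, so a length-n input
    grows to length (n - 1) * stride + len(kernel)."""
    out = [0.0] * ((len(x) - 1) * stride + len(kernel))
    for i, v in enumerate(x):
        for j, w in enumerate(kernel):
            out[i * stride + j] += v * w
    return out

result = conv_transpose_1d([1.0, 2.0], [1.0, 1.0, 1.0])
print(result)  # a length-2 input is upsampled to length 5
```

Scattering (rather than gathering, as in ordinary convolution) is what lets a compact latent representation grow into a full-resolution source-distribution image.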
👥 Authors

Minghao Dong, Xidian University
Xinyang Luo, Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
Xujian Ouyang, Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
Yongshun Xiao, Department of Engineering Physics, Tsinghua University, Beijing, 100084, China