🤖 AI Summary
This work proposes a deep generative prior (DGP) framework that integrates diffusion generative models with iterative optimization to address artifacts and distortions in X-ray CT reconstruction under sparse-view or limited-angle conditions. By incorporating a diffusion model into the DGP formulation and co-designing the image generation process, network architecture, and optimization strategy, the method substantially enhances reconstruction quality under extremely sparse sampling. The approach effectively balances the expressive power of generative models with the interpretability of iterative optimization, enabling high-fidelity CT image reconstruction while significantly suppressing artifacts and structural distortions in highly undersampled scenarios.
📝 Abstract
The reconstruction of X-ray CT images from sparse-view or limited-angle geometries is a highly challenging task. The lack of data typically produces artifacts in the reconstructed image and may even distort the imaged object. For this reason, deep generative models are of great interest in this context and hold strong potential. In the Deep Generative Prior (DGP) framework, a diffusion-based generative model is combined with an iterative optimization algorithm to reconstruct CT images from sinograms acquired under sparse geometries, preserving the explainability of a model-based approach while introducing the generative power of a neural network. Several aspects of this framework can be further investigated to improve reconstruction quality, namely the image generation process, the network model, and the iterative algorithm used to solve the minimization problem; we propose modifications to existing approaches for each of them. The results obtained even under highly sparse geometries are very promising, although further research is clearly needed in this direction.
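To make the DGP idea concrete, the core loop can be sketched as follows: fix a pretrained generator, then optimize its latent code so that the re-projected image matches the measured sinogram. This is only a minimal illustrative sketch, not the paper's method: a linear map `W` stands in for the diffusion-based prior, a random matrix `A` stands in for the sparse-view projection operator, and all sizes and names are assumptions made for the example.

```python
import numpy as np

# Illustrative stand-ins (hypothetical): W replaces the generative prior,
# A replaces the undersampled Radon transform of a sparse-view acquisition.
rng = np.random.default_rng(0)
n_pix, n_lat, n_meas = 64, 16, 24         # image, latent, measurement sizes

W = rng.standard_normal((n_pix, n_lat))   # toy "generator": G(z) = W @ z
A = rng.standard_normal((n_meas, n_pix))  # toy sparse forward operator

z_true = rng.standard_normal(n_lat)       # ground truth lies in the prior's range
x_true = W @ z_true
y = A @ x_true                            # noise-free undersampled sinogram

def dgp_reconstruct(y, A, W, steps=5000):
    """Gradient descent on the data fidelity f(z) = ||A G(z) - y||^2, G(z) = W z."""
    M = A @ W
    lr = 0.9 / (2.0 * np.linalg.norm(M, 2) ** 2)  # step size below 1/L (L = 2*sigma_max^2)
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        grad = 2.0 * M.T @ (M @ z - y)    # gradient of the data-fidelity term
        z -= lr * grad
    return W @ z                          # decode the optimized latent

x_rec = dgp_reconstruct(y, A, W)
rel_misfit = np.linalg.norm(A @ x_rec - y) / np.linalg.norm(y)
print(f"relative data misfit: {rel_misfit:.1e}")
```

In the actual framework the generator is a diffusion model and the forward operator is the CT projection geometry, so the optimization is nonconvex and the choice of iterative algorithm matters, which is precisely one of the aspects the work investigates.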