🤖 AI Summary
In industrial cone-beam CT sparse-view reconstruction, challenges persist, including slow imaging of high-density materials, severe beam-hardening artifacts, and loss of fine structural details. To address these, this paper proposes a plug-and-play (PnP) reconstruction framework that uses a 2.5D convolutional neural network (CNN) as a learnable artifact-suppression prior. Unlike conventional 2D priors, the approach explicitly models 3D structural correlations by integrating spatial context across adjacent slices, eliminating the need for raw-domain preprocessing and enabling zero-shot, cross-domain transfer without fine-tuning. Extensive validation on both synthetic and real-world datasets demonstrates that the method significantly improves pore-morphology fidelity and defect-detection accuracy over 2D-prior-based approaches, achieving state-of-the-art reconstruction quality while maintaining computational efficiency.
📝 Abstract
Cone-beam X-ray computed tomography (XCT) is an essential imaging technique for generating 3D reconstructions of internal structures, with applications ranging from medical to industrial imaging. Producing high-quality reconstructions typically requires many X-ray measurements; this process can be slow and expensive, especially for dense materials. Recent work incorporating artifact reduction priors within a plug-and-play (PnP) reconstruction framework has shown promising results in improving image quality from sparse-view XCT scans while enhancing the generalizability of deep learning-based solutions. However, this method uses a 2D convolutional neural network (CNN) for artifact reduction, which captures only slice-independent information from the 3D reconstruction, limiting performance. In this paper, we propose a PnP reconstruction method that uses a 2.5D artifact reduction CNN as the prior. This approach leverages inter-slice information from adjacent slices, capturing richer spatial context while remaining computationally efficient. We show that this 2.5D prior not only improves the quality of reconstructions but also enables the model to directly suppress commonly occurring XCT artifacts (such as beam hardening), eliminating the need for artifact correction pre-processing. Experiments on both experimental and synthetic cone-beam XCT data demonstrate that the proposed method better preserves fine structural details, such as pore size and shape, leading to more accurate defect detection compared to 2D priors. In particular, we demonstrate strong performance on experimental XCT data using a 2.5D artifact reduction prior trained entirely on simulated scans, highlighting the proposed method's ability to generalize across domains.
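The core idea above can be sketched in a few lines: each axial slice is stacked with its neighbors along a channel axis to form the 2.5D input to the artifact-reduction CNN, and a PnP scheme alternates a data-fidelity update with this learned prior. This is a minimal illustrative sketch, not the paper's implementation; the function names, the edge-replication choice for boundary slices, and the simple gradient-step data update are all assumptions.

```python
import numpy as np

def make_25d_inputs(volume: np.ndarray, k: int = 1) -> np.ndarray:
    """Stack each axial slice with its k neighbors on either side along a
    channel axis, replicating edge slices (an assumed boundary handling).

    volume: (Z, H, W) reconstruction -> (Z, 2k+1, H, W) CNN inputs."""
    z_idx = np.arange(volume.shape[0])
    # (Z, 2k+1) neighbor indices, clipped so boundary slices repeat.
    neighbor_idx = np.clip(z_idx[:, None] + np.arange(-k, k + 1),
                           0, volume.shape[0] - 1)
    return volume[neighbor_idx]

def apply_25d_prior(volume, denoise_slice, k: int = 1) -> np.ndarray:
    """Run the 2.5D prior slice by slice: each (2k+1, H, W) stack is mapped
    by the network to a cleaned (H, W) center slice."""
    stacks = make_25d_inputs(volume, k)
    return np.stack([denoise_slice(s) for s in stacks])

def pnp_reconstruct(x0, grad_data, denoise_slice,
                    step: float = 0.5, k: int = 1, n_iter: int = 10):
    """Plug-and-play iteration: a gradient step on the data-fidelity term,
    then the learned 2.5D artifact-reduction prior in place of a proximal
    operator (a standard PnP pattern; details here are illustrative)."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x - step * grad_data(x)               # consistency with projections
        x = apply_25d_prior(x, denoise_slice, k)  # suppress artifacts in 3D context
    return x
```

In practice `denoise_slice` would be the trained 2.5D CNN and `grad_data` the gradient of the cone-beam projection fit; here any callables with matching shapes (e.g. a channel-wise mean as a stand-in denoiser) exercise the plumbing.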