🤖 AI Summary
Existing implicit neural representation methods suffer from insufficient geometric accuracy when reconstructing 3D anatomical surfaces from 2D X-ray images. To address this, we propose Neural Attenuation Surfaces (NeAS), the first implicit representation framework jointly modeling surface geometry and X-ray attenuation coefficients. NeAS incorporates a signed distance function (SDF) as a geometric prior and couples it with differentiable X-ray volume rendering to enable joint geometric–physical optimization. Evaluated on both synthetic and real sparse-view X-ray data, NeAS achieves sub-millimeter surface reconstruction accuracy (<1 mm) using only a small number of projections (≤5 views), significantly outperforming state-of-the-art methods. This work introduces SDF-based geometric priors into X-ray implicit reconstruction for the first time, establishing a new paradigm for low-dose, high-fidelity 3D anatomical modeling.
📝 Abstract
Reconstructing three-dimensional (3D) structures from two-dimensional (2D) X-ray images is a valuable and efficient technique in medical applications that requires less radiation exposure than computed tomography scans. Recent approaches that use implicit neural representations have enabled the synthesis of novel views from sparse X-ray images. However, although image synthesis has improved the accuracy, the accuracy of surface shape estimation remains insufficient. Therefore, we propose a novel approach for reconstructing 3D scenes using a Neural Attenuation Surface (NeAS) that simultaneously captures the surface geometry and attenuation coefficient fields. NeAS incorporates a signed distance function (SDF), which defines the attenuation field and aids in extracting the 3D surface within the scene. We conducted experiments using simulated and authentic X-ray images, and the results demonstrated that NeAS could accurately extract 3D surfaces within a scene using only 2D X-ray images.