AI Summary
This paper addresses the underexplored problem of reconstructing internal object structures. We propose the first 3D Gaussian Splatting (3DGS) method tailored for internal scenes. Unlike conventional 3DGS, which relies on external viewpoints and precise camera poses, our approach factorizes 3D Gaussian lattices to directly map sparse volumetric slices (e.g., CT or MRI) into a continuous density field, enabling pose-agnostic, plug-and-play internal rendering. Our key contributions are: (1) a continuous density representation that eliminates dependence on camera pose; (2) a cross-modality-compatible voxelized Gaussian modeling framework; and (3) a CUDA-accelerated implementation achieving high-fidelity, fine-grained reconstruction with stable rendering quality even from extremely sparse input. The method demonstrates robustness across diverse medical imaging modalities and enables real-time GPU inference. The source code is publicly available.
Abstract
3D Gaussian Splatting (3DGS) has recently gained popularity for efficient scene rendering by representing scenes as explicit sets of anisotropic 3D Gaussians. However, most existing work focuses primarily on modeling external surfaces. In this work, we target the reconstruction of internal scenes, which is crucial for applications that require a deep understanding of an object's interior. By directly modeling a continuous volumetric density through the inner 3D Gaussian distribution, our model effectively reconstructs smooth and detailed internal structures from sparse sliced data. Our approach eliminates the need for camera poses, is plug-and-play, and is inherently compatible with any data modality. We provide a CUDA implementation at: https://github.com/Shuxin-Liang/InnerGS.
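To make the core idea concrete, here is a minimal NumPy sketch of a continuous density field represented as a weighted mixture of anisotropic 3D Gaussians, evaluated directly at query points (e.g., voxel centers of a CT/MRI slice). This is an illustrative assumption of the general technique, not the paper's actual CUDA implementation; all function and parameter names are hypothetical.

```python
import numpy as np

def gaussian_density(points, means, inv_covs, weights):
    """Evaluate a continuous density field as a weighted sum of
    anisotropic 3D Gaussians (illustrative sketch, not the InnerGS code).

    points   : (N, 3) query positions, e.g. voxel centers of a slice
    means    : (K, 3) Gaussian centers
    inv_covs : (K, 3, 3) inverse covariance matrices (anisotropy)
    weights  : (K,)   non-negative per-Gaussian densities
    returns  : (N,)   density at each query point
    """
    # Pairwise offsets between every query point and every Gaussian center.
    d = points[:, None, :] - means[None, :, :]            # (N, K, 3)
    # Squared Mahalanobis distance d^T * Sigma^{-1} * d per (point, Gaussian).
    m2 = np.einsum('nki,kij,nkj->nk', d, inv_covs, d)     # (N, K)
    # Weighted Gaussian kernels summed over all K Gaussians.
    return (weights[None, :] * np.exp(-0.5 * m2)).sum(axis=1)

# Example: a single isotropic Gaussian at the origin.
means = np.array([[0.0, 0.0, 0.0]])
inv_covs = np.eye(3)[None]          # identity covariance
weights = np.array([1.0])
query = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
vals = gaussian_density(query, means, inv_covs, weights)
# Density peaks at the Gaussian center and decays away from it.
```

Because the field is evaluated at arbitrary 3D coordinates rather than rasterized from a camera, this formulation is pose-free by construction, which is the property the abstract highlights; fitting `means`, `inv_covs`, and `weights` to sparse slice data would be done by gradient-based optimization against the observed slices.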