AI Summary
Existing Gaussian splatting methods regress Gaussian parameters via pixel- or point-cloud-level correspondences, causing Gaussians to overfit the supervision signal rather than faithfully represent the underlying geometry and texture, which leads to redundant modeling and degraded fidelity. This paper introduces LeanGaussian, which decouples Gaussian parameter learning from explicit pixel or point correspondences by treating each query in a deformable Transformer as one 3D Gaussian ellipsoid. It defines each Gaussian center as a learned 3D reference point and drives deformable attention solely through its 2D projection, eliminating explicit one-to-one pixel-Gaussian constraints. Built on an iterative refinement framework, LeanGaussian uses image features as keys and values for efficient, geometry-aware modeling. Evaluated on ShapeNet SRN and Google Scanned Objects, it achieves PSNRs of 25.44 and 22.36, outperforming prior state-of-the-art methods by about 6.1%. It also attains 7.2 FPS for 3D reconstruction and 500 FPS for rendering.
Abstract
Recently, Gaussian splatting has demonstrated significant success in novel view synthesis. Current methods often regress Gaussians with pixel or point cloud correspondence, linking each Gaussian with a pixel or a 3D point. This leads to redundant Gaussians being used to overfit the correspondence rather than the objects represented by the 3D Gaussians themselves, consequently wasting resources and lacking accurate geometries or textures. In this paper, we introduce LeanGaussian, a novel approach that treats each query in a deformable Transformer as one 3D Gaussian ellipsoid, breaking the pixel or point cloud correspondence constraints. We leverage a deformable decoder to iteratively refine the Gaussians layer by layer, with the image features as keys and values. Notably, the center of each 3D Gaussian is defined as a 3D reference point, which is then projected onto the image for deformable attention in 2D space. On both the ShapeNet SRN dataset (category level) and the Google Scanned Objects dataset (open-category level, trained with the Objaverse dataset), our approach outperforms prior methods by approximately 6.1%, achieving PSNRs of 25.44 and 22.36, respectively. Additionally, our method achieves a 3D reconstruction speed of 7.2 FPS and a rendering speed of 500 FPS. Code is available at https://github.com/jwubz123/LeanGaussian.
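The core mechanism described above, projecting each Gaussian's 3D center (the query's reference point) onto the image and sampling features around that projection for deformable attention, can be sketched as follows. This is a minimal NumPy illustration under assumed names (`project_points`, `deformable_sample`) and a simplified nearest-neighbor sampler in place of the bilinear interpolation a real deformable attention layer would use; it is not the authors' implementation.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Pinhole projection of 3D reference points (Gaussian centers) to 2D pixels.

    points_3d: (N, 3) world coordinates; K: (3, 3) intrinsics;
    R: (3, 3) rotation, t: (3,) translation (world -> camera).
    """
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                     # camera -> homogeneous image coords
    return uv[:, :2] / uv[:, 2:3]      # perspective divide -> (N, 2) pixels

def deformable_sample(feat, ref_2d, offsets):
    """Average image features at each reference point plus learned offsets.

    feat: (H, W, C) feature map (keys/values); ref_2d: (N, 2) projected centers;
    offsets: (N, S, 2) per-query sampling offsets (learned in a real model).
    Uses nearest-neighbor lookup for brevity instead of bilinear sampling.
    """
    H, W, C = feat.shape
    out = np.zeros((ref_2d.shape[0], C))
    for i, (u, v) in enumerate(ref_2d):
        for du, dv in offsets[i]:
            x = int(np.clip(round(u + du), 0, W - 1))
            y = int(np.clip(round(v + dv), 0, H - 1))
            out[i] += feat[y, x]
    return out / offsets.shape[1]      # (N, C) sampled features per query
```

In the iterative decoder, each layer would use such sampled features to update the query embedding and refine the Gaussian's parameters (center, scale, rotation, opacity, color), with the updated center re-projected for the next layer.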