Gaussian Splashing: Direct Volumetric Rendering Underwater

📅 2024-11-29
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
🤖 AI Summary
Underwater light scattering severely attenuates visual features, rendering mainstream 3D reconstruction methods—such as NeRF and 3D Gaussian Splatting (3DGS)—ineffective. To address this, we propose the first underwater 3D Gaussian Splatting framework explicitly incorporating a physics-based underwater imaging model, accounting for wavelength-dependent attenuation, forward scattering, and backward scattering. Our method introduces three key innovations: (i) a scattering-adaptive direct volume rendering pipeline; (ii) a differentiable depth estimation module; and (iii) a scattering-aware joint geometry-appearance loss function. It supports both monocular and multi-view inputs, reconstructs scenes in minutes—over 100× faster than underwater NeRF—and renders at 140 FPS, achieving, for the first time, NeRF-level reconstruction fidelity with real-time rendering capability. Extensive evaluations on public and newly collected underwater datasets demonstrate significant improvements in long-range detail clarity and structural fidelity.

📝 Abstract
In underwater images, most useful features are occluded by water. The extent of the occlusion depends on imaging geometry and can vary even across a sequence of burst images. As a result, 3D reconstruction methods robust on in-air scenes, like Neural Radiance Field methods (NeRFs) or 3D Gaussian Splatting (3DGS), fail on underwater scenes. While a recent underwater adaptation of NeRFs achieved state-of-the-art results, it is impractically slow: reconstruction takes hours and its rendering rate, in frames per second (FPS), is less than 1. Here, we present a new method that takes only a few minutes for reconstruction and renders novel underwater scenes at 140 FPS. Named Gaussian Splashing, our method unifies the strengths and speed of 3DGS with an image formation model for capturing scattering, introducing innovations in the rendering and depth estimation procedures and in the 3DGS loss function. Despite the complexities of underwater adaptation, our method produces images at unparalleled speeds with superior details. Moreover, it reveals distant scene details with far greater clarity than other methods, dramatically improving reconstructed and rendered images. We demonstrate results on existing datasets and a new dataset we have collected. Additional visual results are available at: https://bgu-cs-vil.github.io/gaussiansplashingUW.github.io/.
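The image formation model the abstract refers to is, in the standard underwater-imaging literature, typically written with separate wavelength-dependent coefficients for direct attenuation and backscatter, so that a clean scene radiance is both dimmed and veiled as a function of range. A minimal sketch of that general model follows; function and parameter names here are illustrative, and the paper's exact formulation and how it is fused into the 3DGS rendering pipeline may differ:

```python
import numpy as np

def underwater_image(J, z, beta_D, beta_B, B_inf):
    """Sketch of a standard underwater image formation model:
        I_c = J_c * exp(-beta_D_c * z) + B_inf_c * (1 - exp(-beta_B_c * z))
    J      : clean (unattenuated) scene radiance, one value per color channel
    z      : range from camera to scene point along the viewing ray
    beta_D : wavelength-dependent direct-attenuation coefficients
    beta_B : wavelength-dependent backscatter coefficients
    B_inf  : veiling light, i.e. backscatter at infinite range
    """
    direct = J * np.exp(-beta_D * z)                    # signal, attenuated with range
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z))   # haze, growing with range
    return direct + backscatter
```

Two sanity checks follow directly from the equation: at `z = 0` the model returns the clean radiance `J` unchanged, and as `z` grows the image converges to the veiling light `B_inf`, which is why distant details wash out unless the model is inverted during reconstruction.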
Problem

Research questions and friction points this paper is trying to address.

Rendering underwater scenes with volumetric scattering effects
Accelerating 3D reconstruction and novel view synthesis
Enhancing clarity of distant details in underwater imagery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies 3DGS speed with underwater scattering model
Introduces innovations in rendering and depth estimation
Modifies 3DGS loss function for underwater adaptation
Nir Mualem
Ben-Gurion University
Roy Amoyal
Computer Vision PhD Candidate, Ben Gurion University
Computer Vision · Deep Learning · 3D Reconstruction · VSLAM · 2D/3D Registration/Alignment
O. Freifeld
Ben-Gurion University
D. Akkaynak
The Inter-University Institute for Marine Sciences and the University of Haifa