🤖 AI Summary
Existing underwater simulators suffer from insufficient physical modeling fidelity and suboptimal rendering efficiency, resulting in a pronounced sim-to-real gap. This paper introduces the first real-time physically based rendering framework tailored for imaging sonar, integrating physics-based light–water medium interaction models with CUDA-accelerated GPU parallel rendering to enable joint simulation of optical cameras and imaging sonar. Our approach achieves millisecond-level sonar image generation and accelerates synthetic data production by 100×. Evaluated on real underwater scenes, it demonstrates high qualitative and quantitative fidelity—including accurate sonar echo structure and consistent optical scattering attenuation—significantly reducing distributional discrepancies between simulated and real-world sensor data. The framework establishes a high-fidelity, trustworthy simulation foundation for training and evaluating perception algorithms in underwater robotics.
📝 Abstract
Underwater simulators offer support for building robust underwater perception solutions. Significant work has recently been done to develop new simulators and to advance the performance of existing underwater simulators. Still, there remains room for improvement on physics-based underwater sensor modeling and rendering efficiency. In this paper, we propose OceanSim, a high-fidelity GPU-accelerated underwater simulator to address this research gap. We propose advanced physics-based rendering techniques to reduce the sim-to-real gap for underwater image simulation. We develop OceanSim to fully leverage the computing advantages of GPUs and achieve real-time imaging sonar rendering and fast synthetic data generation. We evaluate the capabilities and realism of OceanSim using real-world data to provide qualitative and quantitative results. The project page for OceanSim is https://umfieldrobotics.github.io/OceanSim.