🤖 AI Summary
Motion blur—caused by low-light conditions or long exposure—severely degrades the localization and mapping performance of NeRF- and 3D Gaussian Splatting (3DGS)-based SLAM systems. To address this, we propose the first end-to-end dense SLAM framework that explicitly models the physical motion blur process. Our method jointly optimizes time-varying camera exposure trajectories alongside neural radiance fields or 3D Gaussian splatting representations, integrating motion-blur-aware photometric tracking with continuous-time motion modeling to enable simultaneous blur compensation and geometry-appearance co-optimization. Crucially, we introduce the first differentiable forward motion blur model directly embedded into the SLAM pipeline, unifying pose estimation and scene reconstruction for blurred imagery. Evaluated on both synthetic and real-world blurred datasets, our approach reduces pose estimation error by 32% and improves reconstruction PSNR by 4.8 dB, while demonstrating strong generalization across both blurred and sharp image sequences.
📝 Abstract
Emerging 3D scene representations, such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), have demonstrated their effectiveness in Simultaneous Localization and Mapping (SLAM) for photo-realistic rendering, particularly when using high-quality video sequences as input. However, existing methods struggle with motion-blurred frames, which are common in real-world scenarios like low-light or long-exposure conditions. This often results in a significant reduction in both camera localization accuracy and map reconstruction quality. To address this challenge, we propose a dense visual SLAM pipeline (i.e. MBA-SLAM) to handle severe motion-blurred inputs. Our approach integrates an efficient motion blur-aware tracker with either neural radiance fields or Gaussian Splatting based mapper. By accurately modeling the physical image formation process of motion-blurred images, our method simultaneously learns 3D scene representation and estimates the cameras' local trajectory during exposure time, enabling proactive compensation for motion blur caused by camera movement. In our experiments, we demonstrate that MBA-SLAM surpasses previous state-of-the-art methods in both camera localization and map reconstruction, showcasing superior performance across a range of datasets, including synthetic and real datasets featuring sharp images as well as those affected by motion blur, highlighting the versatility and robustness of our approach.