🤖 AI Summary
Existing 3D Gaussian Splatting (3DGS) reconstruction methods fail under severe motion blur: they rely on sharp images for camera pose estimation and on COLMAP for initialization, and both become unreliable under extreme blur because feature matching degrades.
Method: We propose the first end-to-end 3DGS reconstruction framework that requires no sharp-image prior. The approach uses VGGSfM for pose estimation and point cloud generation directly from blurred inputs, initializes the scene with 3DGS-MCMC, which treats Gaussians as samples from a probability distribution and avoids heuristic densification and pruning, and jointly optimizes camera trajectories and Gaussian parameters (see the sketch after this summary). An extended variant, GeMS-E, incorporates event-camera data via Event-based Double Integral (EDI) deblurring for progressive refinement.
Contribution/Results: The method achieves state-of-the-art reconstruction quality and stability on synthetic and real-world motion-blurred datasets, significantly outperforming prior approaches without requiring any sharp-image supervision.
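To make the joint optimization concrete, here is a minimal PyTorch sketch of how blur-aware trajectory-and-scene optimization is typically set up in pipelines of this kind (e.g., BAD-Gaussians-style blur synthesis): a blurred frame is modeled as the average of sharp renders along the intra-exposure camera trajectory, and a photometric loss against the observed blurred image back-propagates to both the Gaussians and the trajectory. The helper names and the linear pose interpolation are assumptions for illustration, not the paper's exact parameterization.

```python
import torch

def interpolate_pose(pose_a, pose_b, t):
    # Placeholder: linear interpolation stands in for proper SE(3) interpolation
    # (a real pipeline would interpolate rotations, e.g. via quaternions or the
    # Lie algebra, between trajectory control points).
    return (1.0 - t) * pose_a + t * pose_b

def synthesize_blur(render_fn, gaussians, pose_start, pose_end, n_samples=8):
    """Average sharp renders along the intra-exposure trajectory to model blur."""
    renders = [
        render_fn(gaussians, interpolate_pose(pose_start, pose_end, i / (n_samples - 1)))
        for i in range(n_samples)
    ]
    return torch.stack(renders).mean(dim=0)

def blur_aware_step(render_fn, gaussians, pose_start, pose_end, observed, optimizer):
    # Joint optimization step: gradients from the photometric loss flow to both
    # the Gaussian parameters and the per-image trajectory endpoints, so scene
    # and poses are refined together against the blurred observation.
    optimizer.zero_grad()
    pred = synthesize_blur(render_fn, gaussians, pose_start, pose_end)
    loss = torch.nn.functional.mse_loss(pred, observed)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `render_fn` stands in for a differentiable 3DGS rasterizer; any renderer that maps (Gaussians, pose) to an image with gradients would slot in.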
📝 Abstract
We introduce GeMS, a framework for 3D Gaussian Splatting (3DGS) designed to handle severely motion-blurred images. State-of-the-art deblurring methods for extreme blur, such as ExBluRF, as well as Gaussian Splatting-based approaches like Deblur-GS, typically assume access to sharp images for camera pose estimation and point cloud generation, an assumption that rarely holds in practice. Methods relying on COLMAP initialization, such as BAD-Gaussians, also fail due to unreliable feature correspondences under severe blur. To address these challenges, we propose GeMS, a 3DGS framework that reconstructs scenes directly from extremely blurred images. GeMS integrates: (1) VGGSfM, a deep learning-based Structure-from-Motion pipeline that estimates poses and generates point clouds directly from blurred inputs; (2) 3DGS-MCMC, which enables robust scene initialization by treating Gaussians as samples from a probability distribution, eliminating heuristic densification and pruning; and (3) joint optimization of camera trajectories and Gaussian parameters for stable reconstruction. While this pipeline produces strong results, inaccuracies may remain when all inputs are severely blurred. To mitigate this, we propose GeMS-E, which integrates a progressive refinement step using events: (4) Event-based Double Integral (EDI) deblurring restores sharper images that are then fed into GeMS, improving pose estimation, point cloud generation, and overall reconstruction. Both GeMS and GeMS-E achieve state-of-the-art performance on synthetic and real-world datasets. To our knowledge, this is the first framework to address extreme motion blur within 3DGS directly from severely blurred inputs.
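For reference, the EDI model used in step (4) comes from the event-deblurring literature (Pan et al., CVPR 2019). In its standard formulation, a blurred frame $B$ over exposure $T$ is the time average of latent sharp frames $L(t)$, and the event stream $e(s)$ with contrast threshold $c$ links each latent frame to the reference frame $L(f)$; the paper's exact usage may differ from this sketch:

$$
B \;=\; \frac{1}{T}\int_{f-\frac{T}{2}}^{f+\frac{T}{2}} L(t)\,dt,
\qquad
L(t) \;=\; L(f)\,\exp\!\left(c\int_{f}^{t} e(s)\,ds\right),
$$

which gives the latent sharp frame in closed form:

$$
L(f) \;=\; \frac{B}{\dfrac{1}{T}\displaystyle\int_{f-\frac{T}{2}}^{f+\frac{T}{2}} \exp\!\left(c\int_{f}^{t} e(s)\,ds\right) dt}.
$$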
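A toy NumPy discretization of that closed form, assuming the reference time $f$ sits at the start of the exposure and `event_frames` holds signed per-interval event counts; the contrast threshold `c` is a sensor-dependent assumption, and the paper's implementation may discretize differently:

```python
import numpy as np

def edi_deblur(blurred, event_frames, c=0.2):
    """Recover a latent sharp frame at the exposure start via a discretized EDI model.

    blurred:      (H, W) blurred intensity image, values in (0, 1].
    event_frames: (N, H, W) signed event counts per sub-interval of the exposure.
    c:            event contrast threshold (sensor-dependent assumption).
    """
    # E(t): cumulative log-intensity change from the reference time (exposure
    # start) to each sampled instant; E = 0 at the reference time itself.
    cum = np.concatenate(
        [np.zeros((1,) + event_frames.shape[1:]), np.cumsum(event_frames, axis=0)],
        axis=0,
    )  # (N+1, H, W)
    # Discretize (1/T) * integral over the exposure of exp(c * E(t)) dt.
    denom = np.exp(c * cum).mean(axis=0)
    # L(f) = B / ((1/T) * integral of exp(c * E(t)) dt)
    return blurred / np.clip(denom, 1e-6, None)
```

The sharper frames recovered this way are what GeMS-E feeds back into the GeMS pipeline to improve pose estimation and point cloud generation.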