DBMovi-GS: Dynamic View Synthesis from Blurry Monocular Video via Sparse-Controlled Gaussian Splatting

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Novel view synthesis from blurry monocular video of dynamic scenes remains challenging because existing methods rely on static-scene assumptions and high-resolution inputs, which results in poor robustness under motion blur and non-rigid motion, inaccurate geometry reconstruction, and low visual fidelity. Method: We propose DBMovi-GS, a sparse-controlled Gaussian splatting framework that jointly models camera and object motion through dynamic 3D Gaussian representations and motion-aware sparse control. DBMovi-GS explicitly decouples and co-optimizes motion deblurring and geometry reconstruction, eliminating the dependence on static priors or high-quality inputs. Contribution/Results: DBMovi-GS achieves stable, high-fidelity novel view synthesis directly from low-resolution, motion-blurred monocular videos. Evaluated on multiple dynamic-blur benchmarks, it significantly improves geometric accuracy (e.g., lower depth RMSE) and visual quality (e.g., higher PSNR and lower LPIPS) over prior art, setting a new state of the art for novel view synthesis from blurry monocular video and offering both theoretical novelty, through explicit motion-aware 3D Gaussian optimization, and practical deployability.
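
The summary describes co-optimizing deblurring and reconstruction rather than treating them as separate stages. The paper's implementation is not reproduced here; the following is a minimal PyTorch sketch of the generic blur-formation idea that such methods build on, in which a blurry frame is modelled as the average of sharp renderings at several latent sub-frame poses, so that scene and motion parameters receive gradients from a single photometric loss. All names (render_sharp, cam_shift, K) and the toy 2D splatting renderer are illustrative assumptions, not the authors' method.

```python
# Hedged sketch (not the authors' code): a blurry frame is modelled as the
# average of sharp renderings at K latent sub-frame poses, so deblurring and
# reconstruction are co-optimized with one photometric loss.
import torch

H = W = 32          # tiny image for illustration
K = 5               # number of latent sub-frame poses (assumption)
N = 50              # 2D Gaussians standing in for the 3D scene

# Scene parameters to optimize: Gaussian centres, log-scales, and colours.
means   = torch.nn.Parameter(torch.rand(N, 2) * W)
log_sig = torch.nn.Parameter(torch.zeros(N))
colors  = torch.nn.Parameter(torch.rand(N, 3))

# Latent per-sub-frame camera offset during the exposure (assumption: a simple
# 2D shift; the real method models full camera and object motion).
cam_shift = torch.nn.Parameter(torch.zeros(K, 2))

ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32), indexing="ij")
pix = torch.stack([xs, ys], dim=-1)                       # (H, W, 2)

def render_sharp(shift):
    """Splat isotropic 2D Gaussians shifted by one sub-frame camera offset."""
    mu = means + shift                                     # (N, 2)
    d2 = ((pix[None] - mu[:, None, None]) ** 2).sum(-1)   # (N, H, W)
    w = torch.exp(-0.5 * d2 / torch.exp(log_sig)[:, None, None] ** 2)
    img = torch.einsum("nhw,nc->hwc", w, colors)
    return img / (w.sum(0)[..., None] + 1e-6)

blurry_obs = torch.rand(H, W, 3)   # stand-in for one blurry input frame

opt = torch.optim.Adam([means, log_sig, colors, cam_shift], lr=1e-2)
for step in range(200):
    # Blur formation: average the K sharp renderings over the exposure window.
    synth_blur = torch.stack([render_sharp(cam_shift[k]) for k in range(K)]).mean(0)
    loss = torch.nn.functional.mse_loss(synth_blur, blurry_obs)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Averaging sharp renders over the exposure is what lets sharpness be recovered without a separate deblurring network; the actual framework additionally models per-object motion rather than a single camera shift.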

📝 Abstract
Novel view synthesis is the task of generating scenes from unseen perspectives; however, synthesizing dynamic scenes from blurry monocular videos remains an unresolved challenge. Existing novel view synthesis methods are often constrained by their reliance on high-resolution images or strong assumptions about static geometry and rigid scene priors. Consequently, these approaches lack robustness in real-world environments with dynamic object and camera motion, leading to instability and degraded visual fidelity. To address this, we propose Motion-aware Dynamic View Synthesis from Blurry Monocular Video via Sparse-Controlled Gaussian Splatting (DBMovi-GS), a method designed for dynamic view synthesis from blurry monocular videos. Our model generates dense 3D Gaussians, restoring sharpness from blurry videos and reconstructing the detailed 3D geometry of scenes affected by dynamic motion variations. Our model achieves robust performance in novel view synthesis under dynamic blurry scenes and sets a new benchmark in realistic novel view synthesis for blurry monocular video inputs.
Problem

Research questions and friction points this paper is trying to address.

Synthesizing dynamic scenes from blurry monocular videos
Overcoming reliance on high-resolution or static scene assumptions
Restoring sharpness and 3D geometry from motion-affected blurry videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse-controlled Gaussian splatting for dynamic scenes (illustrated in the sketch after this list)
Restores sharpness from blurry monocular videos
Generates dense 3D Gaussians for motion variations
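
As a rough illustration of the sparse control mentioned above, the sketch below shows one common way such control is realized: a small set of control points carries per-frame motion offsets, and every dense Gaussian is moved by a distance-weighted blend of its nearest control points. This is a hedged stand-in under assumed names (deform, ctrl_xyz, NEIGHBORS), not the paper's released implementation, which learns these weights and motions rather than hard-coding them.

```python
# Hedged sketch (assumptions, not the released implementation): sparse control
# points carry per-frame motion; each dense Gaussian is deformed by an
# inverse-distance-weighted blend of its nearest control points' offsets.
import torch

N_GAUSS, N_CTRL, NEIGHBORS = 10_000, 256, 4   # sizes are illustrative

gauss_xyz = torch.rand(N_GAUSS, 3)            # canonical Gaussian centres
ctrl_xyz  = torch.rand(N_CTRL, 3)             # sparse control-point positions
ctrl_dxyz = torch.randn(N_CTRL, 3) * 0.01     # per-frame control-point offsets
                                              # (predicted/optimized in practice)

def deform(gauss_xyz, ctrl_xyz, ctrl_dxyz, k=NEIGHBORS):
    """Move each Gaussian by a weighted blend of the offsets of its k nearest
    control points (a simple stand-in for learned skinning weights)."""
    d = torch.cdist(gauss_xyz, ctrl_xyz)            # (N_GAUSS, N_CTRL)
    knn_d, knn_i = d.topk(k, largest=False)         # nearest control points
    w = 1.0 / (knn_d + 1e-6)
    w = w / w.sum(dim=1, keepdim=True)              # normalized blend weights
    knn_off = ctrl_dxyz[knn_i]                      # (N_GAUSS, k, 3)
    return gauss_xyz + (w.unsqueeze(-1) * knn_off).sum(dim=1)

deformed_xyz = deform(gauss_xyz, ctrl_xyz, ctrl_dxyz)
```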
🔎 Similar Papers
No similar papers found.