OUGS: Active View Selection via Object-aware Uncertainty Estimation in 3DGS

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing active view selection methods for object-centric high-fidelity 3D reconstruction in complex scenes rely on scene-level uncertainty metrics, making them susceptible to background interference and computationally inefficient. Method: We propose an object-aware uncertainty estimation framework based on 3D Gaussian splatting. For the first time, we derive an interpretable uncertainty model by propagating covariance through the physical parameters of Gaussians—position, scale, and orientation—and incorporating the rendering Jacobian. Semantic segmentation masks enable explicit decoupling of uncertainty between target objects and background. Contribution/Results: Our approach establishes the first object-centric active view selection mechanism. Evaluated on public benchmarks, it significantly improves reconstruction efficiency and target-object fidelity while preserving robust, globally consistent uncertainty estimation across the entire scene.
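The covariance propagation described above is a standard first-order (delta-method) construction: the covariance of each Gaussian's physical parameters is pushed through the rendering Jacobian to obtain an output covariance whose diagonal serves as a per-pixel uncertainty. A minimal sketch follows; the function name, shapes, and the toy numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def propagate_uncertainty(J, sigma_params):
    """First-order covariance propagation: Sigma_out = J @ Sigma_params @ J.T.

    J: (P, D) Jacobian of P rendered pixel values w.r.t. the D Gaussian
       parameters (position, scale, orientation).
    sigma_params: (D, D) covariance of those parameters.
    Returns the (P, P) output covariance; its diagonal is a per-pixel
    uncertainty estimate.
    """
    return J @ sigma_params @ J.T

# Toy example (hypothetical numbers): 2 pixels, 3 parameters.
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 2.0]])
sigma = np.diag([0.1, 0.2, 0.05])   # assumed independent parameters
cov = propagate_uncertainty(J, sigma)
pixel_uncertainty = np.diag(cov)    # per-pixel variance
```

In the paper's setting the Jacobian would come from differentiating the 3DGS rasterizer, which is what makes the resulting uncertainty interpretable in terms of the Gaussians' physical parameters.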

📝 Abstract
Recent advances in 3D Gaussian Splatting (3DGS) have achieved state-of-the-art results for novel view synthesis. However, efficiently capturing high-fidelity reconstructions of specific objects within complex scenes remains a significant challenge. A key limitation of existing active reconstruction methods is their reliance on scene-level uncertainty metrics, which are often biased by irrelevant background clutter and lead to inefficient view selection for object-centric tasks. We present OUGS, a novel framework that addresses this challenge with a more principled, physically-grounded uncertainty formulation for 3DGS. Our core innovation is to derive uncertainty directly from the explicit physical parameters of the 3D Gaussian primitives (e.g., position, scale, rotation). By propagating the covariance of these parameters through the rendering Jacobian, we establish a highly interpretable uncertainty model. This foundation then allows us to seamlessly integrate semantic segmentation masks and produce a targeted, object-aware uncertainty score that effectively disentangles the object from its environment. The result is a more effective active view selection strategy that prioritizes views critical to improving object fidelity. Experimental evaluations on public datasets demonstrate that our approach significantly improves the efficiency of the 3DGS reconstruction process and achieves higher quality for targeted objects compared to existing state-of-the-art methods, while also serving as a robust uncertainty estimator for the global scene.
Problem

Research questions and friction points this paper is trying to address.

Addresses inefficient object reconstruction in 3DGS by overcoming scene-level uncertainty bias
Develops object-aware uncertainty using Gaussian parameters and semantic segmentation
Improves active view selection for targeted object fidelity in complex scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Derives uncertainty from 3D Gaussian physical parameters
Integrates semantic masks for object-aware uncertainty scoring
Enables active view selection to improve object fidelity
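The three points above combine into a simple selection loop: render each candidate view, restrict its uncertainty map to the segmented object, and pick the view where the object is least certain. A hedged sketch under assumed shapes (the function names and aggregation by mean are illustrative choices, not the paper's exact scoring rule):

```python
import numpy as np

def object_aware_score(uncertainty_map, object_mask, eps=1e-8):
    """Mean per-pixel uncertainty restricted to the target object.

    uncertainty_map: (H, W) per-pixel uncertainty for a candidate view.
    object_mask: (H, W) binary semantic mask (1 = target object).
    Masking out background pixels keeps clutter from biasing the score.
    """
    masked = uncertainty_map * object_mask
    return masked.sum() / (object_mask.sum() + eps)

def select_next_view(uncertainty_maps, masks):
    """Return the index of the candidate view whose object region
    is most uncertain, i.e. the most informative view to capture next."""
    scores = [object_aware_score(u, m)
              for u, m in zip(uncertainty_maps, masks)]
    return int(np.argmax(scores))

# Toy example: view 0 has high uncertainty on its object pixel.
u0 = np.array([[0.9, 0.1], [0.1, 0.1]]); m0 = np.array([[1, 0], [0, 0]])
u1 = np.array([[0.1, 0.1], [0.1, 0.5]]); m1 = np.array([[0, 0], [0, 1]])
best = select_next_view([u0, u1], [m0, m1])
```

Note that a scene-level score (averaging over all pixels) could rank these two views differently, which is precisely the background-bias failure mode the object-aware formulation avoids.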