Data Augmentation for NeRFs in the Low Data Limit

πŸ“… 2025-03-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address hallucination and model collapse in Neural Radiance Fields (NeRF) training under sparse and incomplete input views, this paper proposes a posterior uncertainty-guided view augmentation method. First, it jointly models voxel-wise uncertainty estimation and spatial coverage quantification to construct a scene’s posterior uncertainty distribution. Then, a rejection sampling mechanism draws high-informativeness, low-uncertainty novel views from this distribution to augment training data. This is the first NeRF framework that performs view augmentation via posterior uncertainty modeling in the low-data regime, avoiding geometric drift inherent in heuristic augmentation strategies. On standard reconstruction benchmarks, the method achieves an average PSNR improvement of 39.9% and an 87.5% reduction in PSNR variance, significantly enhancing reconstruction robustness and cross-view consistency. It establishes an interpretable and generalizable theoretical and technical foundation for resource-constrained robotic scene reconstruction.

Technology Category

Application Category

πŸ“ Abstract
Current methods based on Neural Radiance Fields fail in the low data limit, particularly when training on incomplete scene data. Prior works augment training data only in next-best-view applications, which lead to hallucinations and model collapse with sparse data. In contrast, we propose adding a set of views during training by rejection sampling from a posterior uncertainty distribution, generated by combining a volumetric uncertainty estimator with spatial coverage. We validate our results on partially observed scenes; on average, our method performs 39.9% better with 87.5% less variability across established scene reconstruction benchmarks, as compared to state of the art baselines. We further demonstrate that augmenting the training set by sampling from any distribution leads to better, more consistent scene reconstruction in sparse environments. This work is foundational for robotic tasks where augmenting a dataset with informative data is critical in resource-constrained, a priori unknown environments. Videos and source code are available at https://murpheylab.github.io/low-data-nerf/.
Problem

Research questions and friction points this paper is trying to address.

Neural Radiance Fields fail in low data scenarios
Prior data augmentation methods cause hallucinations and model collapse
Proposed method improves scene reconstruction in sparse environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rejection sampling from posterior uncertainty distribution
Combining volumetric uncertainty with spatial coverage
Augmenting training set for consistent scene reconstruction
πŸ”Ž Similar Papers
No similar papers found.
Ayush Gaggar
Ayush Gaggar
Northwestern University
RoboticsEmbodied LearningSingle Shot Learning
T
Todd D. Murphey
Department of Mechanical Engineering, Northwestern University, Evanston, IL 60208 USA