🤖 AI Summary
Existing NeRF and 3D Gaussian Splatting (3DGS) methods degrade significantly in close-range novel view synthesis, primarily because training data rarely covers near-field viewpoints, which leads to poor generalization. To address this, we propose a pseudo-label-driven supervised learning framework and introduce the first dedicated benchmark for close-range synthesis. Our approach extends NeRF and 3DGS with self-generated pseudo-labels, multi-view geometric constraints, and data augmentation targeted at near-field scenarios. Extensive experiments on the new benchmark show substantial gains in PSNR and SSIM, higher fidelity of fine near-field detail, and markedly better generalization to unseen viewpoints. Together, these contributions constitute the first systematic solution and standardized evaluation benchmark for close-range novel view synthesis.
📝 Abstract
Recent methods such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have demonstrated remarkable capabilities in novel view synthesis. However, while they produce high-quality images from viewpoints similar to those seen during training, they struggle to generate detailed images from viewpoints that deviate significantly from the training set, particularly close-up views. The primary challenge stems from the lack of training data for close-up views, which current methods therefore fail to render accurately. To address this issue, we introduce a novel pseudo-label-based learning strategy that derives pseudo-labels from the existing training data to provide targeted supervision across a wide range of close-up viewpoints. Recognizing the absence of benchmarks for this specific challenge, we also present a new dataset designed to assess the effectiveness of both current and future methods in this setting. Our extensive experiments demonstrate the efficacy of our approach.
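To make the pseudo-label idea concrete, below is a minimal sketch of one plausible instantiation, not the paper's actual implementation. It assumes pseudo-labels for an unseen close-up viewpoint can be approximated by center-cropping and upsampling a training image (equivalent to narrowing the camera's field of view), and it assumes a generic renderer exposing a hypothetical `model.render(K, pose)` API; `zoomed_intrinsics`, `make_pseudo_label`, and `pseudo_label_loss` are illustrative names, not from the paper.

```python
# Sketch: pseudo-label supervision for close-up views (assumptions noted above).
import torch
import torch.nn.functional as F

def zoomed_intrinsics(K, zoom):
    """Scale focal lengths to simulate zooming in (narrower field of view)."""
    K = K.clone()
    K[0, 0] *= zoom  # fx
    K[1, 1] *= zoom  # fy
    return K

def make_pseudo_label(image, zoom):
    """Center-crop a CxHxW training image by 1/zoom and upsample it back.

    The result approximates what the (unseen) close-up camera would observe
    and serves as a pseudo ground-truth target.
    """
    _, h, w = image.shape
    ch, cw = int(h / zoom), int(w / zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[:, top:top + ch, left:left + cw]
    return F.interpolate(crop[None], size=(h, w), mode="bilinear",
                         align_corners=False)[0]

def pseudo_label_loss(model, image, K, pose, zoom=2.0):
    """Photometric loss between a rendered close-up view and its pseudo-label."""
    target = make_pseudo_label(image, zoom)  # pseudo ground truth from real data
    K_zoom = zoomed_intrinsics(K, zoom)      # matching zoomed-in camera
    pred = model.render(K_zoom, pose)        # hypothetical renderer API
    return F.mse_loss(pred, target)
```

In practice, a term like this would be weighted and combined with the standard reconstruction loss, and, per the summary above, with multi-view geometric constraints and near-field data augmentation.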