Multi-Space Neural Radiance Fields

📅 2023-05-07
🏛️ Computer Vision and Pattern Recognition
📈 Citations: 16
Influential: 4
🤖 AI Summary
Existing NeRF methods struggle to accurately model complex light transport phenomena—such as specular reflection and refraction—leading to blurred or distorted renderings. To address this, we propose Multi-Space Neural Radiance Fields (MS-NeRF), the first framework introducing a parallel multi-subspace radiance field architecture that integrates seamlessly with mainstream NeRF variants—including NeRF, Mip-NeRF, and Mip-NeRF 360—without modifying their backbone networks. Leveraging a lightweight multi-head feature-space design and joint optimization, MS-NeRF significantly improves geometric and appearance modeling of reflective and refractive objects. Evaluated on 25 synthetic and 7 real-world 360° reflective/refractive scenes, MS-NeRF achieves substantial PSNR and SSIM improvements over single-space baselines, yielding markedly enhanced rendering fidelity. Crucially, it incurs less than a 5% increase in training and inference overhead.
📝 Abstract
Existing Neural Radiance Fields (NeRF) methods suffer from the existence of reflective objects, often resulting in blurry or distorted rendering. Instead of calculating a single radiance field, we propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces, which leads to a better understanding of the neural network toward the existence of reflective and refractive objects. Our multi-space scheme works as an enhancement to existing NeRF methods, with only small computational overheads needed for training and inferring the extra-space outputs. We demonstrate the superiority and compatibility of our approach using three representative NeRF-based models, i.e., NeRF, Mip-NeRF, and Mip-NeRF 360. Comparisons are performed on a novelly constructed dataset consisting of 25 synthetic scenes and 7 real captured scenes with complex reflection and refraction, all having 360-degree viewpoints. Extensive experiments show that our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes concerned with complex light paths through mirror-like objects. Our code and dataset will be publicly available at https://zx-yin.github.io/msnerf.
Problem

Research questions and friction points this paper is trying to address.

Existing NeRF methods render reflective and refractive objects blurry or distorted
A single radiance field poorly captures complex light paths through mirror-like surfaces
Enhancements should plug into existing NeRF methods without large computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel sub-space feature fields improve modeling of reflection and refraction
Lightweight multi-head design keeps training and inference overhead small
Compatible with NeRF, Mip-NeRF, and Mip-NeRF 360 without backbone changes
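The core idea above — decoding a group of parallel sub-space outputs with lightweight heads and compositing them into one pixel color — can be sketched as follows. This is a minimal NumPy illustration of the compositing scheme only; the dimensions, the linear decoder and gate heads, and the softmax weighting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4        # number of parallel sub-spaces (hypothetical setting)
N = 1024     # rays in a batch
F = 8        # per-sub-space feature dimension (hypothetical)

# Backbone output: one feature vector per ray per sub-space.
feats = rng.normal(size=(N, K, F))

# Hypothetical lightweight heads: a shared linear decoder to RGB
# and a gate producing one scalar logit per sub-space.
W_rgb = rng.normal(size=(F, 3)) * 0.1
W_gate = rng.normal(size=(F, 1)) * 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rgb_k = sigmoid(feats @ W_rgb)            # (N, K, 3) per-sub-space colors
gate = softmax(feats @ W_gate, axis=1)    # (N, K, 1) weights sum to 1 over K

# Final pixel color: gate-weighted blend of the sub-space renderings.
rgb = (gate * rgb_k).sum(axis=1)          # (N, 3)
```

Because the extra heads are small linear maps on top of an unchanged backbone, this kind of design is consistent with the paper's claim of only small added training and inference cost.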