Neural Cone Radiosity for Interactive Global Illumination with Glossy Materials

📅 2025-09-09
🤖 AI Summary
Existing neural radiance field methods struggle to model high-frequency, strongly view-dependent outgoing radiance distributions under glossy materials, leading to highlight distortion and noise in real-time global illumination. To address this, we propose Neural Cone Radiosity—a novel representation that explicitly captures directional reflectance properties. Our approach introduces three key innovations: (1) a reflection-aware ray-cone encoding scheme that models anisotropic reflectance; (2) a pre-filtered multi-resolution hash grid that jointly encodes view-dependent reflectance and enables unified representation across the full gloss spectrum—from specular highlights to diffuse matte regions; and (3) continuous-space aggregation for efficient rendering. Evaluated on diverse glossy scenes, our method achieves real-time, noise-free, high-fidelity rendering. It significantly outperforms state-of-the-art baselines both qualitatively—delivering superior visual realism—and quantitatively—achieving higher PSNR and SSIM and lower LPIPS scores.

📝 Abstract
Modeling high-frequency outgoing radiance distributions has long been a key challenge in rendering, particularly for glossy materials. Such distributions concentrate radiative energy within a narrow lobe and are highly sensitive to changes in view direction. However, existing neural radiosity methods, which primarily rely on positional feature encoding, exhibit notable limitations in capturing these high-frequency, strongly view-dependent radiance distributions. To address this, we propose a highly efficient approach, named neural cone radiosity, that builds reflectance-aware ray-cone encoding on the neural radiosity framework. The core idea is to employ a pre-filtered multi-resolution hash grid to accurately approximate the glossy BSDF lobe, embedding view-dependent reflectance characteristics directly into the encoding process through continuous spatial aggregation. Our design not only significantly improves the network's ability to model high-frequency reflection distributions but also effectively handles surfaces with a wide range of glossiness levels, from highly glossy to low-gloss finishes. Meanwhile, our method reduces the network's burden in fitting complex radiance distributions, allowing the overall architecture to remain compact and efficient. Comprehensive experimental results demonstrate that our method consistently produces high-quality, noise-free renderings in real time under various glossiness conditions, and delivers superior fidelity and realism compared to baseline approaches.
Problem

Research questions and friction points this paper is trying to address.

Modeling high-frequency outgoing radiance distributions for glossy materials
Capturing view-dependent radiance with neural radiosity methods
Handling surfaces with wide glossiness levels efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reflectance-aware ray cone encoding for glossy materials
Pre-filtered multi-resolution hash grid approximation
Continuous spatial aggregation for view-dependent characteristics
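The pre-filtered multi-resolution hash grid in the contributions above builds on the standard multi-resolution hash encoding popularized by instant-NGP. The paper's reflectance-aware, cone-filtered variant is not public, so the sketch below shows only the base encoding it extends; all function names, table sizes, and parameters here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a multi-resolution hash-grid encoding (instant-NGP
# style). Assumptions: 3D points in [0,1)^3, per-level hash tables of
# learnable features, trilinear interpolation at each resolution level.
import numpy as np

def hash_coords(ix, iy, iz, table_size):
    # Spatial hash of integer grid coordinates using large primes.
    primes = (1, 2654435761, 805459861)
    return (ix * primes[0] ^ iy * primes[1] ^ iz * primes[2]) % table_size

def encode(x, tables, base_res=16, growth=1.5):
    """Concatenate per-level features obtained by trilinearly
    interpolating hashed grid-vertex features around point x."""
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        p = x * res
        i0 = np.floor(p).astype(int)
        f = p - i0                      # fractional offset within the cell
        acc = np.zeros(table.shape[1])
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    # Trilinear weight for this cell corner.
                    w = ((f[0] if dx else 1 - f[0])
                         * (f[1] if dy else 1 - f[1])
                         * (f[2] if dz else 1 - f[2]))
                    idx = hash_coords(i0[0] + dx, i0[1] + dy, i0[2] + dz,
                                      table.shape[0])
                    acc += w * table[idx]
        feats.append(acc)
    return np.concatenate(feats)

# Usage: 4 levels, hash tables of 2**14 entries, 2 features per entry.
rng = np.random.default_rng(0)
tables = [rng.normal(size=(2**14, 2)) for _ in range(4)]
z = encode(np.array([0.3, 0.7, 0.5]), tables)
print(z.shape)  # (8,): 4 levels x 2 features each
```

The paper's pre-filtering would, conceptually, replace the point lookup with a footprint-dependent aggregation over each ray cone, so that glossier (narrower) lobes read from finer levels and rougher lobes from coarser, pre-averaged ones.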
Jierui Ren
College of Future Technology, Peking University, China
Haojie Jin
School of Computer Science, Peking University, China
Bo Pang
School of Computer Science, Peking University, China
Yisong Chen
Associate Professor of Computer Science, Peking University
Guoping Wang
School of Computer Science, Peking University, China; National Key Laboratory of Intelligent Parallel Technology
Sheng Li
School of Computer Science, Peking University, China; National Key Laboratory of Intelligent Parallel Technology