Stochastic Preconditioning for Neural Field Optimization

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural fields suffer from slow convergence and poor robustness during optimization, especially when explicit multi-scale or frequency-domain priors are absent. To address this, we propose an implicit preconditioning mechanism grounded in spatial stochasticity: Gaussian offset sampling is modeled as an expectation-based implicit blurring operation, requiring no changes to the network architecture or loss function and no hand-crafted hierarchical representations. The framework applies uniformly to mainstream neural field representations, including coordinate MLPs, hash grids, and triplanes, and supports boundary-aware and spatially varying blur, enabling plug-and-play adaptation to arbitrary neural field architectures. Experiments demonstrate significantly accelerated convergence and improved training stability in both surface reconstruction and radiance field tasks. Notably, the method matches or surpasses the performance of custom hierarchical designs where they exist, while delivering substantial quality improvements in settings lacking any inherent hierarchy.
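The core identity behind the summary above, that evaluating a field at Gaussian-perturbed coordinates queries a Gaussian-blurred version of that field in expectation, can be checked numerically. A minimal sketch (the function names and the toy 1D `field` are illustrative, not from the paper's code):

```python
import numpy as np

def field(x):
    # Stand-in for a trained neural field: a sharp, high-frequency 1D signal.
    return np.sign(np.sin(8.0 * x))

def blurred_query(x, sigma, n_samples=20000, seed=0):
    # Monte-Carlo estimate of E[field(x + eps)] with eps ~ N(0, sigma^2):
    # sampling with Gaussian offsets evaluates, in expectation, a
    # Gaussian-blurred version of the field -- no change to `field` itself.
    eps = np.random.default_rng(seed).normal(0.0, sigma, size=n_samples)
    return field(x + eps).mean()

print(field(0.1))                          # sharp field value: 1.0
print(abs(blurred_query(0.1, 0.5)) < 0.1)  # heavy blur averages out the detail
```

With a small `sigma` the blurred query reduces to the sharp field, which is why the perturbation can be annealed away by the end of training.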

📝 Abstract
Neural fields are a highly effective representation across visual computing. This work observes that fitting these fields is greatly improved by incorporating spatial stochasticity during training, and that this simple technique can replace or even outperform custom-designed hierarchies and frequency space constructions. The approach is formalized as implicitly operating on a blurred version of the field, evaluated in-expectation by sampling with Gaussian-distributed offsets. Querying the blurred field during optimization greatly improves convergence and robustness, akin to the role of preconditioners in numerical linear algebra. This implicit, sampling-based perspective fits naturally into the neural field paradigm, comes at no additional cost, and is extremely simple to implement. We describe the basic theory of this technique, including details such as handling boundary conditions, and extending to a spatially-varying blur. Experiments demonstrate this approach on representations including coordinate MLPs, neural hashgrids, triplanes, and more, across tasks including surface reconstruction and radiance fields. In settings where custom-designed hierarchies have already been developed, stochastic preconditioning nearly matches or improves their performance with a simple and unified approach; in settings without existing hierarchies it provides an immediate boost to quality and robustness.
Problem

Research questions and friction points this paper is trying to address.

Improving neural field optimization via spatial stochasticity
Enhancing convergence and robustness with blurred field queries
Unified stochastic preconditioning for diverse neural field tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates spatial stochasticity during training
Uses Gaussian-distributed offsets for sampling
Improves convergence with blurred field queries
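The bullets above describe a plug-and-play training pattern. A hedged sketch of what that pattern could look like, assuming an illustrative linear annealing schedule; the names `anneal_sigma` and `preconditioned_query` are hypothetical, not the paper's API:

```python
import numpy as np

def anneal_sigma(step, total_steps, sigma0=0.1):
    # Illustrative schedule: decay the blur scale linearly to zero, so
    # optimization ends on the sharp, unperturbed field.
    return sigma0 * max(0.0, 1.0 - step / total_steps)

def preconditioned_query(field_fn, coords, sigma, rng):
    # One stochastic sample of the implicitly blurred field: a single
    # Gaussian-perturbed evaluation per query, so the per-query cost
    # is the same as an ordinary field evaluation.
    return field_fn(coords + rng.normal(0.0, sigma, size=coords.shape))

# Toy usage with np.sin standing in for a neural field.
rng = np.random.default_rng(0)
coords = np.linspace(0.0, 1.0, 5)
for step in (0, 500, 1000):
    sigma = anneal_sigma(step, total_steps=1000)
    values = preconditioned_query(np.sin, coords, sigma, rng)
    print(step, sigma, values.round(3))
```

Because the perturbation wraps only the query coordinates, the same few lines apply unchanged to coordinate MLPs, hash grids, or triplanes.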