🤖 AI Summary
Neural fields suffer from slow convergence and poor robustness during optimization, especially when explicit multi-scale or frequency-domain priors are absent. To address this, we propose an implicit preconditioning mechanism grounded in spatial stochasticity: Gaussian offset sampling is modeled as an expectation-based implicit blurring operation, requiring no modifications to the network architecture, loss function, or hand-crafted hierarchical representations. Our framework applies uniformly to mainstream neural field representations (including coordinate MLPs, hash grids, and triplanes) and supports boundary-aware and spatially varying blur, enabling plug-and-play adoption in arbitrary neural field architectures. Experiments demonstrate significantly accelerated convergence and enhanced training stability in both surface reconstruction and radiance field tasks. Notably, our method matches or surpasses the performance of predefined hierarchical designs, while delivering substantial quality improvements in scenarios lacking any inherent hierarchy.
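The core mechanism can be illustrated with a minimal sketch: instead of querying the field at a coordinate `x`, query it at `x + ε` with Gaussian-distributed `ε`, so that in expectation the optimizer sees a Gaussian-blurred version of the field. The toy 1-D `field`, and the names `blurred_query`, `sigma`, and `n_samples`, are illustrative assumptions, not the paper's code:

```python
import numpy as np

def field(x):
    # Toy stand-in for a neural field: a sharp, high-frequency 1-D signal.
    return np.sin(20.0 * x)

def blurred_query(x, sigma, n_samples=1, rng=None):
    """Query the field at Gaussian-offset coordinates.

    In expectation this evaluates the field convolved with a Gaussian of
    standard deviation `sigma` -- the implicit blur that acts as a
    preconditioner during optimization. In practice a single sample per
    query (n_samples=1) is used inside the training loop; the Monte Carlo
    average here just makes the expectation visible.
    """
    rng = np.random.default_rng() if rng is None else rng
    offsets = rng.normal(0.0, sigma, size=n_samples)
    return field(x + offsets).mean()
```

For the sinusoid above, blurring with standard deviation `sigma` attenuates the amplitude by `exp(-(20*sigma)**2 / 2)`, so a sampled estimate at a peak of the signal should approach that factor as `n_samples` grows; with `sigma = 0` the query reduces to an ordinary field evaluation.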
📝 Abstract
Neural fields are a highly effective representation across visual computing. This work observes that fitting these fields is greatly improved by incorporating spatial stochasticity during training, and that this simple technique can replace or even outperform custom-designed hierarchies and frequency space constructions. The approach is formalized as implicitly operating on a blurred version of the field, evaluated in-expectation by sampling with Gaussian-distributed offsets. Querying the blurred field during optimization greatly improves convergence and robustness, akin to the role of preconditioners in numerical linear algebra. This implicit, sampling-based perspective fits naturally into the neural field paradigm, comes at no additional cost, and is extremely simple to implement. We describe the basic theory of this technique, including details such as handling boundary conditions, and extending to a spatially-varying blur. Experiments demonstrate this approach on representations including coordinate MLPs, neural hashgrids, triplanes, and more, across tasks including surface reconstruction and radiance fields. In settings where custom-designed hierarchies have already been developed, stochastic preconditioning nearly matches or improves their performance with a simple and unified approach; in settings without existing hierarchies it provides an immediate boost to quality and robustness.
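One detail the abstract mentions is handling boundary conditions: Gaussian offsets can push query points outside the field's domain. A simple option is to reflect out-of-domain samples back inside, sketched below for a unit interval. The mirror-boundary rule and the name `reflect_into_unit` are illustrative assumptions; other boundary treatments (clamping, periodic wrap) are equally plausible:

```python
import numpy as np

def reflect_into_unit(x):
    """Map arbitrary coordinates into [0, 1] by mirror reflection.

    np.mod(x, 2) folds the real line onto [0, 2); values above 1 are then
    mirrored back, producing a triangle-wave mapping that keeps
    Gaussian-offset queries inside the field's domain.
    """
    x = np.mod(x, 2.0)
    return np.where(x > 1.0, 2.0 - x, x)
```

An in-domain point is left unchanged, while a point perturbed past either boundary lands at its mirror image, e.g. `1.2 -> 0.8` and `-0.3 -> 0.3`.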