Importance Sampling via Score-based Generative Models

📅 2025-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of retraining models for each importance-weighting function in multi-task biased sampling. We propose a **training-free importance sampling framework**, leveraging a pre-trained score-based generative model (SGM). By combining its score function with an arbitrary importance weight function, we formulate importance sampling as a **controllable reverse diffusion process**, requiring neither fine-tuning nor retraining. Our key contribution is the first fully training-free importance sampling method, featuring an explicitly derived weight-adaptive reverse stochastic differential equation (SDE) grounded in the score function. Extensive experiments on industrial and natural image datasets demonstrate high scalability and effectiveness: the approach significantly reduces computational and training overhead in multi-task settings while enabling flexible adaptation of a single base distribution to diverse bias objectives.
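The core idea in the summary above can be written out as a short derivation. The notation below is ours (a standard diffusion-model convention), and the paper's exact weight-adaptive SDE may differ in detail; this is a sketch of the general construction:

```latex
% Target: the tilted density \tilde p(x) \propto w(x)\, p(x).
% Its score decomposes additively, so no new network is needed:
\nabla_x \log \tilde p(x) = \nabla_x \log p(x) + \nabla_x \log w(x)

% Plugging this into the standard reverse-time SDE of an SGM yields a
% weight-adaptive reverse diffusion (\bar{\mathbf{w}} denotes reverse-time
% Brownian motion; f and g are the forward drift and diffusion coefficients):
d\mathbf{x} = \Big[ f(\mathbf{x}, t)
  - g(t)^2 \big( \nabla_{\mathbf{x}} \log p_t(\mathbf{x})
  + \nabla_{\mathbf{x}} \log w_t(\mathbf{x}) \big) \Big]\, dt
  + g(t)\, d\bar{\mathbf{w}}
```

The first term of the corrected score comes from the pre-trained SGM and the second from the user-specified weight function, which is why no fine-tuning or retraining is required.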

📝 Abstract
Importance sampling, which involves sampling from a probability density function (PDF) proportional to the product of an importance weight function and a base PDF, is a powerful technique with applications in variance reduction, biased or customized sampling, data augmentation, and beyond. Inspired by the growing availability of score-based generative models (SGMs), we propose an entirely training-free importance sampling framework that relies solely on an SGM for the base PDF. Our key innovation is realizing the importance sampling process as a backward diffusion process, expressed in terms of the score function of the base PDF and the specified importance weight function, both readily available, eliminating the need for any additional training. We conduct a thorough analysis demonstrating the method's scalability and effectiveness across diverse datasets and tasks, including importance sampling for industrial and natural images with neural importance weight functions. The training-free aspect of our method is particularly compelling in real-world scenarios where a single base distribution underlies multiple biased sampling tasks, each requiring a different importance weight function. To the best of our knowledge, our approach is the first importance sampling framework to achieve this.
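As a sanity check of the idea in the abstract, the following is a minimal sketch in a toy setting of our own construction (not the paper's implementation or experiments): a 1-D standard Gaussian base PDF, an exponential importance weight `w(x) = exp(a*x)`, and Euler-Maruyama integration of the resulting weight-adaptive reverse VP-SDE. Because everything is Gaussian here, the diffused tilted score is available in closed form; in general it must be approximated from the base score and the weight.

```python
import numpy as np

def sample_tilted_reverse_sde(a=2.0, n=20000, T=5.0, n_steps=500, seed=0):
    """Euler-Maruyama integration of a reverse-time VP-SDE whose score is
    tilted by an importance weight w(x) = exp(a*x).

    Toy setting (our assumption, chosen so everything is analytic):
      base PDF      p(x)   = N(0, 1)   -> score d/dx log p_t(x) = -x
      weight        w(x)   = exp(a*x)  -> d/dx log w(x) = a
      tilted target p~(x) ∝ w(x) p(x) = N(a, 1)
    The diffused tilted score has the closed form
      d/dx log p~_t(x) = -x + a * alpha_t,  alpha_t = exp(-t/2).
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = rng.standard_normal(n)          # prior ~ N(0,1), close to p~_T for large T
    for k in range(n_steps):
        t = T - k * dt                  # integrate backward from t = T to t = 0
        alpha_t = np.exp(-t / 2.0)
        score = -x + a * alpha_t        # base score plus diffused weight term
        # reverse VP-SDE step: dx = [f - g^2 * score] dt + g dw,  f = -x/2, g = 1,
        # with dt negative since we move backward in time
        x = x - (-0.5 * x - score) * dt + np.sqrt(dt) * rng.standard_normal(n)
    return x

samples = sample_tilted_reverse_sde()
print(samples.mean(), samples.std())    # should land near a = 2 and 1
```

The sampler never touches the weight during training: the tilt enters only through the extra score term at sampling time, which is the training-free property the abstract emphasizes.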
Problem

Research questions and friction points this paper is trying to address.

Retraining or fine-tuning a generative model for every importance-weighting function is costly when one base distribution serves many biased-sampling tasks.
Importance sampling targets a PDF proportional to the product of a weight function and a base PDF, but drawing from that product typically requires task-specific training.
No prior framework offers fully training-free importance sampling from a pre-trained model with arbitrary weight functions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free importance sampling framework built on a pre-trained score-based generative model (SGM)
Importance sampling realized as a controllable backward (reverse) diffusion process combining the base score with the weight function
Explicitly derived weight-adaptive reverse SDE, requiring no fine-tuning or retraining