$Psi$-Sampler: Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing score-based inference-time reward alignment methods typically initialize particles with a Gaussian prior, which poorly covers high-reward regions and leads to low sampling efficiency. To address this, we propose a reward-aware sequential Monte Carlo (SMC) initialization framework. Our approach introduces the preconditioned Crank–Nicolson Langevin (pCNL) algorithm—previously unexplored in this context—to sample from the reward posterior in high-dimensional latent spaces, achieving dimension-robust, gradient-driven initialization. By integrating score-based denoising with gradient-guided MCMC, our method significantly improves both alignment quality and sampling efficiency across diverse tasks, including layout-to-image generation, quantity-aware synthesis, and aesthetic preference modeling. Experiments demonstrate its generality, scalability, and practical effectiveness, establishing a new state-of-the-art for reward-aligned generative modeling.

Technology Category

Application Category

📝 Abstract
We introduce $Psi$-Sampler, an SMC-based framework incorporating pCNL-based initial particle sampling for effective inference-time reward alignment with a score-based generative model. Inference-time reward alignment with score-based generative models has recently gained significant traction, following a broader paradigm shift from pre-training to post-training optimization. At the core of this trend is the application of Sequential Monte Carlo (SMC) to the denoising process. However, existing methods typically initialize particles from the Gaussian prior, which inadequately captures reward-relevant regions and results in reduced sampling efficiency. We demonstrate that initializing from the reward-aware posterior significantly improves alignment performance. To enable posterior sampling in high-dimensional latent spaces, we introduce the preconditioned Crank-Nicolson Langevin (pCNL) algorithm, which combines dimension-robust proposals with gradient-informed dynamics. This approach enables efficient and scalable posterior sampling and consistently improves performance across various reward alignment tasks, including layout-to-image generation, quantity-aware generation, and aesthetic-preference generation, as demonstrated in our experiments.
Problem

Research questions and friction points this paper is trying to address.

Improves reward alignment in score-based generative models
Enhances sampling efficiency via reward-aware posterior initialization
Introduces pCNL for scalable high-dimensional posterior sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

SMC-based framework with pCNL sampling
Reward-aware posterior initialization
Dimension-robust gradient-informed dynamics
🔎 Similar Papers
No similar papers found.