🤖 AI Summary
This work addresses a limitation of diffusion-based text-to-image generation: aligning outputs with user preferences typically requires either model retraining or differentiable reward functions. We propose a training-free, gradient-free inference-time alignment method that modifies neither model parameters nor training objectives. Our approach introduces a stochastic-optimization-based noise distribution modulation mechanism that dynamically steers the denoising trajectory toward high-reward regions during sampling. Crucially, it supports non-differentiable rewards, including outputs from vision-language model APIs and human annotations, without backpropagation. To our knowledge, this is the first fully training-free and gradient-free preference alignment method for diffusion models, and it remains compatible with any pre-trained diffusion model. Experiments demonstrate significant improvements in the aesthetic quality scores of generated images. The implementation is publicly available.
📝 Abstract
Aligning diffusion models with user preferences has been a key challenge. Existing methods for aligning diffusion models either require retraining or are limited to differentiable reward functions. To address these limitations, we propose a stochastic optimization approach, dubbed Demon, that guides the denoising process at inference time without backpropagation through reward functions or model retraining. Our approach controls the noise distribution in the denoising steps to concentrate density on high-reward regions through stochastic optimization. We provide comprehensive theoretical and empirical evidence to support and validate our approach, including experiments that use non-differentiable sources of reward such as Vision-Language Model (VLM) APIs and human judgments. To the best of our knowledge, the proposed approach is the first inference-time, backpropagation-free preference alignment method for diffusion models. Our method can be easily integrated with existing diffusion models without further training. Our experiments show that the proposed approach significantly improves average aesthetics scores for text-to-image generation. The implementation is available at https://github.com/aiiu-lab/DemonSampling.
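The core idea of controlling the noise distribution at inference time can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's actual algorithm: the reverse-diffusion model is replaced by a simple drift toward the origin, the reward is a synthetic black-box function, and Demon's stochastic optimization of the noise distribution is reduced to best-of-k noise selection per step. The names `reward`, `denoise_step`, and `aligned_sample` are illustrative only. What the sketch does preserve is the key property from the abstract: the reward is only ever evaluated, never differentiated.

```python
import numpy as np

def reward(x):
    # Stand-in for a non-differentiable black-box reward (e.g. a VLM API
    # score or a human judgment): here it simply prefers samples near (2, 2).
    return -float(np.sum((x - 2.0) ** 2))

def denoise_step(x, t, noise):
    # Toy stand-in for one reverse-diffusion update. A real sampler would
    # use a pretrained model's noise prediction; a drift toward the origin
    # plays that role here, plus time-scaled injected noise.
    return x - 0.1 * x + 0.1 * np.sqrt(t) * noise

def aligned_sample(steps=50, candidates=8, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)  # start from pure Gaussian noise
    for step in range(steps):
        t = 1.0 - step / steps
        # Draw several candidate noises and score the state each leads to.
        noises = rng.standard_normal((candidates, dim))
        scores = [reward(denoise_step(x, t, n)) for n in noises]
        # Keep the noise whose next state looks best under the reward;
        # no gradient of `reward` is ever taken (gradient-free alignment).
        x = denoise_step(x, t, noises[int(np.argmax(scores))])
    return x

print(reward(aligned_sample(candidates=8)))  # reward-guided sample
print(reward(aligned_sample(candidates=1)))  # unguided baseline
```

With `candidates=1` the loop degenerates to ordinary sampling, so comparing the two runs shows how biasing the injected noise toward high-reward regions improves the final reward without touching the (toy) model.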