Efficient Diversity-Preserving Diffusion Alignment via Gradient-Informed GFlowNets

📅 2024-12-10
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address three key challenges in reward-guided finetuning of diffusion models (reduced sample diversity, erosion of pretrained priors, and slow convergence), this paper proposes Nabla-GFlowNet, the first method to incorporate reward-gradient signals into Generative Flow Networks (GFlowNets). It introduces a gradient-informed alignment objective (∇-DB) and a residual variant (Res∇-DB) that jointly target diversity preservation and prior retention, combining text-conditioned diffusion sampling with reward-guided flow modeling. Extensive experiments across diverse realistic reward functions show that Nabla-GFlowNet converges markedly faster than prior reward-finetuning methods while better preserving sample diversity, distribution fidelity, and the pretrained priors of models such as Stable Diffusion.
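For context, the detailed-balance (DB) condition that GFlowNet objectives enforce over a sampling trajectory can be written as below; how ∇-DB uses gradients of this condition is an assumed reading of the summary, not a formula taken from the paper.

```latex
% Standard GFlowNet detailed-balance condition over the denoising chain
% s_T -> ... -> s_0 of a diffusion model, with flows F and forward/backward
% policies p_F, p_B; the terminal flow is tied to the reward, F(s_0) = R(s_0):
F(s_t)\, p_F(s_{t-1} \mid s_t) \;=\; F(s_{t-1})\, p_B(s_t \mid s_{t-1}).
% A plain DB objective squares the log-residual of this equation:
\mathcal{L}_{\mathrm{DB}} = \Big( \log F(s_t) + \log p_F(s_{t-1} \mid s_t)
  - \log F(s_{t-1}) - \log p_B(s_t \mid s_{t-1}) \Big)^2.
% A gradient-informed variant such as \nabla\text{-DB} would additionally
% match state-gradients of this residual, so that \nabla_x \log R enters as
% a learning signal at every step (assumed form for illustration).
```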

📝 Abstract
While one commonly trains large diffusion models by collecting datasets on target downstream tasks, it is often desirable to align and finetune pretrained diffusion models with reward functions that are either designed by experts or learned from small-scale datasets. Existing post-training methods for reward finetuning of diffusion models typically suffer from a lack of diversity in generated samples, a lack of prior preservation, and/or slow convergence. Inspired by recent successes of generative flow networks (GFlowNets), a class of probabilistic models that sample proportionally to an unnormalized reward density, we propose a novel GFlowNet method dubbed Nabla-GFlowNet (abbreviated as ∇-GFlowNet), the first GFlowNet method that leverages the rich signal in reward gradients, together with an objective called ∇-DB and its variant Res∇-DB, designed for prior-preserving diffusion finetuning. We show that our proposed method achieves fast yet diversity- and prior-preserving finetuning of Stable Diffusion, a large-scale text-conditioned image diffusion model, on different realistic reward functions.
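To make the detailed-balance idea behind such objectives concrete, here is a minimal toy sketch in plain Python. The function names (`db_residual`, `nabla_db_loss`) and the exact form of the gradient-matching term are assumptions for illustration only; the paper's actual ∇-DB and Res∇-DB objectives are defined over the denoising chain of Stable Diffusion and are not reproduced here.

```python
def db_residual(log_flow_t, log_flow_prev, logp_forward, logp_backward):
    """Log-space detailed-balance residual for one transition s_t -> s_{t-1}:
    log F(s_t) + log p_F(s_{t-1}|s_t) - log F(s_{t-1}) - log p_B(s_t|s_{t-1}).
    Zero exactly when the flows and policies satisfy detailed balance."""
    return log_flow_t + logp_forward - log_flow_prev - logp_backward


def nabla_db_loss(grad_log_flow_t, grad_log_flow_prev,
                  grad_logp_forward, grad_logp_backward):
    """Hypothetical gradient-informed variant: penalize the state-gradient of
    the DB residual, so that reward gradients (entering through the terminal
    flow F(s_0) = R(s_0)) can supervise every denoising step. Assumed form,
    not the paper's exact objective."""
    g = (grad_log_flow_t + grad_logp_forward
         - grad_log_flow_prev - grad_logp_backward)
    return g * g


# Toy check: a transition whose log-quantities balance has zero residual,
# while a perturbed backward log-probability breaks the balance.
balanced = db_residual(1.0, 0.7, -0.5, -0.2)    # 1.0 - 0.5 - 0.7 + 0.2 = 0.0
unbalanced = db_residual(1.0, 0.7, -0.5, 0.1)
```

In a real finetuning loop, `logp_forward` would come from the finetuned denoiser, `logp_backward` from the fixed noising process, and the flow gradients from a learned network plus the differentiable reward.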
Problem

Research questions and friction points this paper is trying to address.

Align pretrained diffusion models with reward functions
Preserve diversity and prior in generated samples
Achieve fast convergence in finetuning diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-informed GFlowNets for diffusion alignment
Prior-preserving finetuning with the ∇-DB objective
Fast, diversity-preserving finetuning of Stable Diffusion
👥 Authors

Zhen Liu · Mila, Université de Montréal; Max Planck Institute for Intelligent Systems, Tübingen
Tim Z. Xiao · University of Tübingen; International Max Planck Research School for Intelligent Systems (IMPRS-IS)
Weiyang Liu · CUHK; Max Planck Institute for Intelligent Systems
Y. Bengio · Mila, Université de Montréal
Dinghuai Zhang · Mila, Université de Montréal