Inference-Time Alignment of Diffusion Models with Evolutionary Algorithms

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models often generate samples that violate downstream safety constraints or domain validity requirements, while existing alignment methods rely on gradients, internal model access, or substantial computational resources. This paper proposes a black-box alignment framework that operates solely at inference time and, for the first time, systematically applies evolutionary algorithms (EAs) to search the latent space of diffusion models, requiring no gradients, no access to model parameters, and no differentiability of the objective function. The method drastically reduces resource consumption relative to gradient-based approaches: GPU memory usage drops by 55–76% and runtime by 72–80%. It also achieves superior alignment scores within just 50 optimization steps compared to both gradient-based and existing gradient-free approaches. Evaluated on the DrawBench and Open Image Preferences benchmarks, it attains state-of-the-art performance, demonstrating strong efficiency, generality across objectives, and ease of deployment.

📝 Abstract
Diffusion models are state-of-the-art generative models in various domains, yet their samples often fail to satisfy downstream objectives such as safety constraints or domain-specific validity. Existing techniques for alignment require gradients, internal model access, or large computational budgets. We introduce an inference-time alignment framework based on evolutionary algorithms. We treat diffusion models as black boxes and search their latent space to maximize alignment objectives. Our method enables efficient inference-time alignment for both differentiable and non-differentiable alignment objectives across a range of diffusion models. On the DrawBench and Open Image Preferences benchmarks, our EA methods outperform state-of-the-art gradient-based and gradient-free inference-time methods. In terms of memory consumption, we require 55% to 76% lower GPU memory than gradient-based methods. In terms of running time, we are 72% to 80% faster than gradient-based methods. We achieve higher alignment scores over 50 optimization steps on Open Image Preferences than gradient-based and gradient-free methods.
Problem

Research questions and friction points this paper is trying to address.

Align diffusion models with downstream objectives efficiently
Overcome limitations of gradient-based alignment methods
Optimize non-differentiable objectives via evolutionary algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolutionary algorithms optimize diffusion model alignment
Black-box latent space search maximizes objectives
Lower memory and faster than gradient methods
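The black-box latent search described above can be sketched as a minimal elitist evolution strategy. The sketch below is illustrative, not the paper's implementation: `score_fn` is a hypothetical stand-in for the full pipeline of decoding a latent with a frozen diffusion model and scoring the resulting image against an alignment objective, and all hyperparameter names and values are assumptions.

```python
import numpy as np

def evolve_latents(score_fn, latent_dim, pop_size=8, n_steps=50,
                   sigma=0.1, seed=0):
    """Elitist (1+lambda) evolutionary search over diffusion latents.

    score_fn is a black-box alignment objective (higher is better).
    In the paper's setting it would decode a latent with the frozen
    diffusion model and score the image; no gradients are required,
    so the objective may be non-differentiable.
    """
    rng = np.random.default_rng(seed)
    best = rng.standard_normal(latent_dim)   # initial Gaussian latent
    best_score = score_fn(best)
    for _ in range(n_steps):
        # Mutate: Gaussian perturbations of the current best latent.
        children = best + sigma * rng.standard_normal((pop_size, latent_dim))
        scores = np.array([score_fn(c) for c in children])
        i = scores.argmax()
        if scores[i] > best_score:           # keep the elite
            best, best_score = children[i], scores[i]
    return best, best_score

# Toy objective standing in for a reward model on decoded images:
# pull the latent toward a fixed target vector.
target = np.ones(16)
best, s = evolve_latents(lambda z: -np.sum((z - target) ** 2), latent_dim=16)
```

Because selection is elitist, the best score is non-decreasing over steps, which mirrors the paper's claim of reaching strong alignment within a fixed budget of 50 optimization steps.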
👥 Authors
Purvish Jajal, Purdue University (Deep Learning)
N. Eliopoulos, Purdue University
Benjamin Shiue-Hal Chou, PhD student, Purdue University (Music and Artificial Intelligence; Computer Vision)
G. Thiruvathukal, Loyola University Chicago
James C. Davis, Purdue University
Yung-Hsiang Lu, Purdue University