🤖 AI Summary
To address bottlenecks in text-to-image (T2I) diffusion models—including limited prompt adherence, suboptimal image quality, and the scarcity of high-quality preference data—this paper proposes a fully AI-driven Direct Preference Optimization (DPO) framework. The method introduces a vision-language model (VLM)-based closed-loop preference generation paradigm that automatically produces multi-dimensional feedback (e.g., style fidelity, semantic coherence, aesthetic quality) without human annotation, enabling scalable alignment. The framework integrates VLM-based automatic evaluation and scoring, DPO optimization, and fine-tuning of Stable Diffusion v1.4, v1.5, and SDXL-base. Extensive experiments on the TIFA and HPSv2 benchmarks demonstrate significant improvements: the approach achieves higher VQA accuracy and aesthetic scores than strong baselines. All code and datasets are publicly released.
📝 Abstract
Text-to-Image (T2I) diffusion models have achieved remarkable success in image generation. Despite this progress, challenges remain in prompt-following ability, image quality, and the lack of high-quality datasets, which are essential for refining these models. As acquiring labeled data is costly, we introduce AGFSync, a framework that enhances T2I diffusion models through Direct Preference Optimization (DPO) in a fully AI-driven approach. AGFSync utilizes Vision-Language Models (VLMs) to assess image quality across style, coherence, and aesthetics, generating feedback data within an AI-driven loop. By applying AGFSync to leading T2I models such as SD v1.4, v1.5, and SDXL-base, our extensive experiments on the TIFA dataset demonstrate notable improvements in VQA scores, aesthetic evaluations, and performance on the HPSv2 benchmark, consistently outperforming the base models. AGFSync's method of refining T2I diffusion models paves the way for scalable alignment techniques. Our code and dataset are publicly available at https://anjingkun.github.io/AGFSync.
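The DPO objective at the heart of this pipeline can be sketched in a few lines. The following is a minimal illustration of the standard DPO pairwise loss, not the paper's exact diffusion-adapted formulation; the function name and the choice of `beta` are ours:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair.

    logp_w / logp_l: policy log-likelihoods of the preferred ("winner")
    and dispreferred ("loser") samples; ref_* are the frozen reference
    model's log-likelihoods. Minimizing the loss widens the policy's
    winner-vs-loser margin relative to the reference model.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)), computed stably as softplus(-margin)
    return math.log1p(math.exp(-margin)) if margin > -30 else -margin

# When policy and reference agree, the margin is 0 and the loss is log 2.
print(round(dpo_loss(-1.0, -2.0, -1.0, -2.0), 4))  # → 0.6931
```

In AGFSync's setting, the winner/loser pairs come not from human raters but from VLM scores over style, coherence, and aesthetics, which is what makes the loop fully AI-driven.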