AI Summary
This work addresses two key challenges in autoregressive image generation: weak chain-of-thought (CoT) reasoning capability and the difficulty of jointly optimizing text-image alignment and aesthetic quality. We present the first systematic comparison of Direct Preference Optimization (DPO) and Group Relative Policy Optimization (GRPO) within this paradigm. To enhance CoT reasoning, we propose a reinforcement learning framework grounded in a multi-dimensional learnable reward model, incorporating a dynamic scaling strategy and three scalable training methods to balance in-domain stability and cross-domain generalization. Experiments demonstrate that GRPO significantly improves cross-domain generalization, whereas DPO achieves superior in-domain convergence stability. Our method attains state-of-the-art performance on multiple text-to-image generation benchmarks, empirically validating the critical role of reward model generalization in RL-based image generation.
Abstract
Recent advancements underscore the significant role of Reinforcement Learning (RL) in enhancing the Chain-of-Thought (CoT) reasoning capabilities of large language models (LLMs). Two prominent RL algorithms, Direct Preference Optimization (DPO) and Group Relative Policy Optimization (GRPO), are central to these developments, each exhibiting distinct strengths and weaknesses. Autoregressive image generation, also interpretable as a sequential CoT reasoning process, presents unique challenges distinct from LLM-based CoT reasoning. These encompass ensuring text-image consistency, improving image aesthetic quality, and designing sophisticated reward models, rather than relying on simpler rule-based rewards. While recent efforts have extended RL to this domain, these explorations typically lack an in-depth analysis of the domain-specific challenges and the characteristics of different RL strategies. To bridge this gap, we provide the first comprehensive investigation of the GRPO and DPO algorithms in autoregressive image generation, evaluating their in-domain performance and out-of-domain generalization, while scrutinizing the impact of different reward models on their respective capabilities. Our findings reveal that GRPO and DPO exhibit distinct advantages, and crucially, that reward models possessing stronger intrinsic generalization capabilities can potentially enhance the generalization potential of the applied RL algorithms. Furthermore, we systematically explore three prevalent scaling strategies to enhance both their in-domain and out-of-domain proficiency, deriving unique insights into efficiently scaling performance for each paradigm. We hope our study paves a new path for inspiring future work on developing more effective RL algorithms to achieve robust CoT reasoning in the realm of autoregressive image generation. Code is released at https://github.com/ZiyuGuo99/Image-Generation-CoT
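To make the contrast between the two algorithms concrete, the sketch below shows the textbook forms of the DPO preference loss and the GRPO group-normalized advantage, written in plain Python. This is a minimal illustration under standard assumptions (sequence-level log-probabilities, per-prompt reward groups), not the paper's training code; all function and variable names are ours.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO: push the policy's log-prob margin between the chosen (w) and
    rejected (l) sample above that of a frozen reference policy.
    Loss = -log sigmoid(beta * implied reward margin)."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def grpo_advantages(rewards, eps=1e-8):
    """GRPO: normalize scalar rewards within a group of samples drawn for
    the same prompt, replacing a learned value critic with group statistics."""
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))
    return [(r - mean) / (std + eps) for r in rewards]
```

In the image-generation setting discussed above, the rewards would come from a learned (e.g. text-image alignment or aesthetic) reward model rather than a rule-based check, which is exactly why reward-model generalization matters for GRPO's out-of-domain behavior.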