🤖 AI Summary
Existing reasoning-based image generation methods either restrict reasoning to a single modality (image or text) or rely on high-quality reasoning data for fine-tuning. This paper proposes MILR, a test-time, training-free framework that reasons jointly over images and text in a shared latent space. MILR searches over vector representations of discrete image and text tokens, optimizing them via policy gradient under the guidance of an image quality critic. It is instantiated within a unified multimodal understanding and generation (MUG) framework that natively supports language reasoning before image synthesis; the model's intermediate outputs serve as the unified latent space, so the method operates entirely at test time. MILR achieves state-of-the-art results on GenEval, T2I-CompBench, and the knowledge-intensive WISE benchmark, attaining an overall WISE score of 0.63, an 80% improvement over the baseline.
📝 Abstract
Reasoning-augmented machine learning systems have shown improved performance in various domains, including image generation. However, existing reasoning-based methods for image generation either restrict reasoning to a single modality (image or text) or rely on high-quality reasoning data for fine-tuning. To tackle these limitations, we propose MILR, a test-time method that jointly reasons over image and text in a unified latent vector space. Reasoning in MILR is performed by searching through vector representations of discrete image and text tokens. Practically, this is implemented via the policy gradient method, guided by an image quality critic. We instantiate MILR within the unified multimodal understanding and generation (MUG) framework that natively supports language reasoning before image synthesis and thus facilitates cross-modal reasoning. The intermediate model outputs, which are to be optimized, serve as the unified latent space, enabling MILR to operate entirely at test time. We evaluate MILR on GenEval, T2I-CompBench, and WISE, achieving state-of-the-art results on all benchmarks. Notably, on knowledge-intensive WISE, MILR attains an overall score of 0.63, improving over the baseline by 80%. Our further analysis indicates that joint reasoning in the unified latent space is the key to its strong performance. Moreover, our qualitative studies reveal MILR's non-trivial ability in temporal and cultural reasoning, highlighting the efficacy of our reasoning method.
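The test-time search described in the abstract — policy-gradient optimization of discrete token representations against a quality critic — can be illustrated with a minimal toy sketch. Everything below is a hypothetical stand-in: the vocabulary, sequence length, `critic` function, and hyperparameters are illustration-only assumptions, not MILR's actual latent space, critic, or optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for test-time latent search: optimize logits over a sequence
# of discrete tokens with REINFORCE, guided by a reward from a "critic".
# V (vocabulary size) and L (sequence length) are arbitrary toy values.
V, L = 8, 4
logits = rng.normal(size=(L, V))     # the latent representation being optimized
target = np.array([3, 1, 4, 1])      # tokens this toy critic happens to prefer

def critic(tokens):
    """Hypothetical quality critic: reward = fraction of tokens matching target."""
    return float(np.mean(tokens == target))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

lr, n_samples = 0.5, 64
for step in range(300):
    probs = softmax(logits)
    samples = [np.array([rng.choice(V, p=probs[i]) for i in range(L)])
               for _ in range(n_samples)]
    rewards = np.array([critic(s) for s in samples])
    adv = rewards - rewards.mean()   # mean baseline reduces gradient variance
    grad = np.zeros_like(logits)
    for s, a in zip(samples, adv):
        grad += a * (np.eye(V)[s] - probs)   # grad of log-prob for a softmax categorical
    logits += lr * grad / n_samples  # gradient ascent on expected reward

print(softmax(logits).argmax(axis=1))  # should recover the critic-preferred tokens
```

The key property this toy shares with the method in the abstract is that no model weights are trained: only the latent token distribution is updated at "test time", driven solely by reward signals from the critic.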