Have we unified image generation and understanding yet? An empirical study of GPT-4o's image generation ability

📅 2025-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study challenges the “generation-as-understanding” assumption underlying GPT-4o’s world-knowledge-guided semantic synthesis. Method: We systematically evaluate its unified competence across three dimensions—global instruction adherence, fine-grained editing fidelity, and post-generation reasoning—using a multidimensional, human-curated benchmark. Our assessment integrates instruction robustness analysis, knowledge-constraint consistency verification, and conditional reasoning validation. Contribution/Results: We provide the first empirical evidence of GPT-4o’s fundamental deficiency in dynamic knowledge integration: it consistently exhibits literal misinterpretation, inconsistent domain-knowledge application, and failures in conditional reasoning. These findings indicate that current multimodal large language models lack a truly closed-loop generation–understanding mechanism, thereby challenging prevailing assumptions about their cross-modal semantic coherence and knowledge-driven compositional capabilities.

Technology Category

Application Category

📝 Abstract
OpenAI's multimodal GPT-4o has demonstrated remarkable capabilities in image generation and editing, yet its ability to achieve world knowledge-informed semantic synthesis--seamlessly integrating domain knowledge, contextual reasoning, and instruction adherence--remains unproven. In this study, we systematically evaluate these capabilities across three critical dimensions: (1) Global Instruction Adherence, (2) Fine-Grained Editing Precision, and (3) Post-Generation Reasoning. While existing benchmarks highlight GPT-4o's strong capabilities in image generation and editing, our evaluation reveals GPT-4o's persistent limitations: the model frequently defaults to literal interpretations of instructions, inconsistently applies knowledge constraints, and struggles with conditional reasoning tasks. These findings challenge prevailing assumptions about GPT-4o's unified understanding and generation capabilities, exposing significant gaps in its dynamic knowledge integration. Our study calls for the development of more robust benchmarks and training strategies that go beyond surface-level alignment, emphasizing context-aware and reasoning-grounded multimodal generation.
Problem

Research questions and friction points this paper is trying to address.

Evaluating GPT-4o's image generation and semantic synthesis capabilities
Assessing limitations in knowledge integration and conditional reasoning
Identifying gaps in unified understanding and generation benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates GPT-4o's image generation and editing
Tests global instruction adherence and precision
Identifies gaps in knowledge integration