Characterizing Photorealism and Artifacts in Diffusion Model-Generated Images

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how accurately humans can distinguish diffusion-model-generated images from authentic photographs and the cognitive mechanisms underlying that judgment. Method: We collected 749,828 crowdsourced discrimination trials on images from state-of-the-art diffusion models available in 2024, augmented with 34,675 fine-grained participant comments, and systematically analyzed how scene complexity, artifact type, stimulus presentation duration, and manual curation of AI-generated images affect discrimination accuracy. Contribution/Results: We introduce the first taxonomy of AI-generated artifacts tailored to contemporary diffusion models, identifying core dimensions including structural disharmony, texture anomalies, and semantic contradictions. Integrating statistical modeling with qualitative analysis, we quantify the contribution of each factor and uncover a two-stage cognitive pattern: low-level visual cues dominate early-stage discrimination, while high-level semantic inconsistencies improve late-stage accuracy. Our work establishes the first large-scale empirical framework for trustworthy assessment of AI-generated content, providing both statistically grounded metrics and interpretable, human-centered discriminative criteria.
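The paper does not release analysis code here, but a minimal sketch can illustrate what "statistical modeling of multivariate contributions" to per-trial accuracy might look like: a logistic regression of trial-level correctness on the manipulated factors. All column names, factor levels, and the simulated data below are illustrative assumptions, not the authors' dataset or model.

```python
# Hypothetical sketch: regress per-trial detection accuracy on the factors the
# study manipulates (scene complexity, artifact type, display time, curation).
# Data and column names are simulated placeholders, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000  # one row per discrimination trial

trials = pd.DataFrame({
    "correct": rng.integers(0, 2, n),                       # 1 = correct real/AI judgment
    "scene_complexity": rng.uniform(0, 1, n),               # e.g., normalized clutter score
    "artifact_type": rng.choice(
        ["none", "structural", "texture", "semantic"], n),  # taxonomy category (illustrative)
    "display_ms": rng.choice([100, 500, 2000], n),          # stimulus presentation duration
    "curated": rng.integers(0, 2, n),                       # manually filtered AI image?
})

# Logistic regression: how does each experimental factor shift the odds of a
# correct judgment? Coefficients on the simulated data are near zero by design.
model = smf.logit(
    "correct ~ scene_complexity + C(artifact_type) + np.log(display_ms) + curated",
    data=trials,
).fit(disp=False)
print(model.summary())
```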

📝 Abstract
Diffusion model-generated images can appear indistinguishable from authentic photographs, but these images often contain artifacts and implausibilities that reveal their AI-generated provenance. Given the challenge to public trust in media posed by photorealistic AI-generated images, we conducted a large-scale experiment measuring human detection accuracy on 450 diffusion-model-generated images and 149 real images. Drawing on 749,828 observations and 34,675 comments from 50,444 participants, we find that the scene complexity of an image, the artifact types within it, its display time, and human curation of AI-generated images all play significant roles in how accurately people distinguish real from AI-generated images. Additionally, we propose a taxonomy characterizing artifacts that often appear in images generated by diffusion models. Our empirical observations and taxonomy offer nuanced insights into the capabilities and limitations of diffusion models to generate photorealistic images in 2024.
Problem

Research questions and friction points this paper is trying to address.

Identifying AI-generated image artifacts
Measuring human detection accuracy
Proposing taxonomy for diffusion model artifacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale human detection experiment
Taxonomy of diffusion model artifacts
Factors influencing image authenticity perception