GPT4o-Receipt: A Dataset and Human Study for AI-Generated Document Forensics

📅 2026-03-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study investigates the comparative capabilities of humans and multimodal large language models (MLLMs) in detecting AI-generated financial receipts. Leveraging a benchmark dataset of 1,235 real and GPT-4o-generated receipts, the authors conduct a crowdsourced experiment with 30 participants alongside systematic evaluations of five leading MLLMs, including Claude Sonnet 4 and Gemini 2.5 Flash. Results reveal that while humans can perceive visual anomalies, they struggle to reliably distinguish authentic from synthetic receipts. In contrast, MLLMs significantly outperform humans in both detection accuracy and F1 score, with performance and calibration varying across models. Notably, the study identifies arithmetic errors as the most critical yet visually imperceptible forensic cue in AI-generated receipts. The authors publicly release the dataset, evaluation framework, and full results to advance research in AI document forensics.
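The summary notes that detection accuracy and F1 can tell different stories, and the paper argues plain accuracy is insufficient for detector selection. A minimal sketch (plain Python, not the paper's evaluation code) of how the two metrics diverge when a detector is conservative about flagging the "ai" class:

```python
def accuracy_and_f1(y_true, y_pred, positive="ai"):
    """Compute accuracy and F1 for binary labels, e.g. 'ai' vs. 'real'.

    F1 is the harmonic mean of precision and recall on the positive class,
    so a detector that almost never flags 'ai' can score decent accuracy
    while its F1 collapses.
    """
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1


# A timid detector: catches only 1 of 5 AI receipts, never false-alarms.
y_true = ["ai"] * 5 + ["real"] * 5
y_pred = ["ai"] + ["real"] * 9
acc, f1 = accuracy_and_f1(y_true, y_pred)  # acc = 0.6, f1 = 1/3
```

The gap between 0.6 accuracy and 0.33 F1 in this toy case is the kind of disparity the paper reports across models.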

๐Ÿ“ Abstract
Can humans detect AI-generated financial documents better than machines? We present GPT4o-Receipt, a benchmark of 1,235 receipt images pairing GPT-4o-generated receipts with authentic ones from established datasets, evaluated by five state-of-the-art multimodal LLMs and a 30-annotator crowdsourced perceptual study. Our findings reveal a striking paradox: humans are better at seeing AI artifacts, yet worse at detecting AI documents. Human annotators exhibit the largest visual discrimination gap of any evaluator, yet their binary detection F1 falls well below Claude Sonnet 4 and below Gemini 2.5 Flash. This paradox resolves once the mechanism is understood: the dominant forensic signals in AI-generated receipts are arithmetic errors, invisible to visual inspection but systematically verifiable by LLMs. Humans cannot perceive that a subtotal is incorrect; LLMs verify it in milliseconds. Beyond the human-LLM comparison, our five-model evaluation reveals dramatic performance disparities and calibration differences that render simple accuracy metrics insufficient for detector selection. GPT4o-Receipt, the evaluation framework, and all results are released publicly to support future research in AI document forensics.
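The abstract's central mechanism is that arithmetic errors are verifiable rather than visible. A minimal sketch of such a consistency check over extracted receipt fields; the dict schema (`items`, `qty`, `unit_price`, `line_total`, `subtotal`, `tax`, `total`) is illustrative, not the paper's actual pipeline:

```python
def arithmetic_flags(receipt, tol=0.01):
    """Flag arithmetic inconsistencies in an extracted receipt.

    Assumed (hypothetical) schema: receipt["items"] is a list of dicts
    with 'qty', 'unit_price', 'line_total'; receipt also carries
    'subtotal', 'tax', 'total'. Returns a list of human-readable flags;
    an empty list means the arithmetic is internally consistent.
    """
    flags = []
    computed_subtotal = 0.0
    for i, item in enumerate(receipt["items"]):
        expected = item["qty"] * item["unit_price"]
        if abs(expected - item["line_total"]) > tol:
            flags.append(f"line {i}: total {item['line_total']} != {expected:.2f}")
        computed_subtotal += item["line_total"]
    if abs(computed_subtotal - receipt["subtotal"]) > tol:
        flags.append(f"subtotal {receipt['subtotal']} != {computed_subtotal:.2f}")
    expected_total = receipt["subtotal"] + receipt["tax"]
    if abs(expected_total - receipt["total"]) > tol:
        flags.append(f"total {receipt['total']} != {expected_total:.2f}")
    return flags
```

A receipt whose line items sum to 9.00 but whose printed subtotal reads 9.50 looks visually flawless yet fails this check immediately, which is exactly the kind of cue the paper reports humans missing and MLLMs catching.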
Problem

Research questions and friction points this paper is trying to address.

AI-generated document forensics
receipt detection
human perception
multimodal LLMs
document authenticity
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-generated document forensics
multimodal LLM evaluation
arithmetic error detection
human vs. AI detection
receipt benchmark dataset
Authors: Yan Zhang, Simiao Ren, Ankit Raj, En Wei, Dennis Ng, Alex Shen, Jiayue Xu, Yuxin Zhang, Evelyn Marotta