AI Summary
This study investigates the comparative capabilities of humans and multimodal large language models (MLLMs) in detecting AI-generated financial receipts. Leveraging a benchmark dataset of 1,235 real and GPT-4o-generated receipts, the authors conduct a crowdsourced experiment with 30 participants alongside systematic evaluations of five leading MLLMs, including Claude Sonnet 4 and Gemini 2.5 Flash. Results reveal that while humans can perceive visual anomalies, they struggle to reliably distinguish authentic from synthetic receipts. In contrast, MLLMs significantly outperform humans in both detection accuracy and F1 score, with performance and calibration varying across models. Notably, the study identifies arithmetic errors as the most critical yet visually imperceptible forensic cue in AI-generated receipts. The authors publicly release the dataset, evaluation framework, and full results to advance research in AI document forensics.
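For reference, the detection accuracy and F1 score used to compare humans and MLLMs above can be computed from binary real-vs-generated labels. This is a minimal sketch of those standard metrics; the label convention (1 = AI-generated) is an assumption for illustration, not taken from the paper.

```python
def accuracy_f1(y_true, y_pred):
    """Accuracy and F1 for binary detection labels (1 = AI-generated)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, f1
```

Because F1 combines precision and recall, a detector that calls everything "generated" scores high accuracy on an imbalanced split but poor F1, which is one reason the study reports both.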
Abstract
Can humans detect AI-generated financial documents better than machines? We present GPT4o-Receipt, a benchmark of 1,235 receipt images pairing GPT-4o-generated receipts with authentic ones from established datasets, evaluated by five state-of-the-art multimodal LLMs and a 30-annotator crowdsourced perceptual study. Our findings reveal a striking paradox: humans are better at seeing AI artifacts, yet worse at detecting AI documents. Human annotators exhibit the largest visual discrimination gap of any evaluator, yet their binary detection F1 falls well below that of Claude Sonnet 4 and Gemini 2.5 Flash. This paradox resolves once the mechanism is understood: the dominant forensic signals in AI-generated receipts are arithmetic errors, invisible to visual inspection but systematically verifiable by LLMs. Humans cannot perceive that a subtotal is incorrect; LLMs verify it in milliseconds. Beyond the human-LLM comparison, our five-model evaluation reveals dramatic performance disparities and calibration differences that render simple accuracy metrics insufficient for detector selection. GPT4o-Receipt, the evaluation framework, and all results are released publicly to support future research in AI document forensics.
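The arithmetic verification described above can be sketched directly: line items should sum to the printed subtotal, and subtotal plus tax should equal the printed total. This is a hedged illustration of the general idea, not the paper's pipeline; the receipt field names (`line_items`, `subtotal`, `tax`, `total`) are assumed for this example.

```python
from decimal import Decimal

def arithmetic_consistent(receipt, tolerance=Decimal("0.01")):
    """Check whether a receipt's printed amounts are internally consistent.

    `receipt` is assumed to be a dict with:
      - "line_items": list of (quantity, unit_price) pairs
      - "subtotal", "tax", "total": printed amounts as strings
    """
    items_sum = sum(Decimal(str(qty)) * Decimal(price)
                    for qty, price in receipt["line_items"])
    subtotal = Decimal(receipt["subtotal"])
    tax = Decimal(receipt["tax"])
    total = Decimal(receipt["total"])

    # Both checks must hold within a small rounding tolerance.
    return (abs(items_sum - subtotal) <= tolerance
            and abs(subtotal + tax - total) <= tolerance)

# A synthetic receipt whose subtotal does not match its line items
# (2 * 3.50 + 4.25 = 11.25, not 11.75) fails the check, even though
# nothing about it looks visually wrong.
fake = {"line_items": [(2, "3.50"), (1, "4.25")],
        "subtotal": "11.75", "tax": "0.90", "total": "12.65"}
```

This is exactly the kind of cue the abstract highlights: a one-line arithmetic check catches an inconsistency that no amount of visual inspection can reveal.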