🤖 AI Summary
This study investigates the feasibility of leveraging multimodal large language models (MLLMs) as zero-shot deepfake image detectors. We systematically evaluate 12 state-of-the-art MLLMs—including Gemini 2 Flash Thinking, Qwen2.5-VL, and Claude 3.5 Sonnet—on real-world deepfake datasets. Contrary to prior assumptions, several models achieve zero-shot detection accuracy surpassing conventional methods (up to 89.7%), while most perform no better than random guessing. Our analysis reveals only a weak positive correlation between model parameter count and detection performance; reasoning capability and version updates exhibit negligible impact. To enhance interpretability and generalizability, we combine prompt tuning with an analysis of the models' decision pathways, yielding insight into detection behavior and cross-distribution generalization. These findings support MLLM-assisted media verification and provide empirical grounding for future research in zero-shot deepfake detection.
📝 Abstract
Deepfake detection remains a critical challenge in the era of advanced generative models, particularly as synthetic media becomes more sophisticated. In this study, we explore the potential of state-of-the-art multimodal (reasoning) large language models (LLMs) for deepfake image detection, including OpenAI o1/4o, Gemini 2 Flash Thinking, DeepSeek Janus, Grok 3, Llama 3.2, Qwen2/2.5-VL, Mistral Pixtral, and Claude 3.5/3.7 Sonnet. We benchmark 12 of the latest multimodal LLMs against traditional deepfake detection methods across multiple datasets, including recently published real-world deepfake imagery. To enhance performance, we employ prompt tuning and conduct an in-depth analysis of the models' reasoning pathways to identify the key contributing factors in their decision-making process. Our findings indicate that the best multimodal LLMs achieve competitive zero-shot performance with promising generalization, even surpassing traditional deepfake detection pipelines on out-of-distribution datasets, while the remaining LLM families perform poorly, some worse than random guessing. Furthermore, we find that newer model versions and reasoning capabilities do not improve performance on the niche task of deepfake detection, while model size does help in some cases. This study highlights the potential of integrating multimodal reasoning into future deepfake detection frameworks and provides insights into model interpretability for robustness in real-world scenarios.
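The zero-shot setup described above can be sketched as a simple query-and-score loop: prompt a vision-language model with an image and a binary real/fake question, normalize its free-form reply, and compare against ground-truth labels. The sketch below is illustrative only; `query_mllm` is a hypothetical stand-in for whatever MLLM API is used (it is stubbed here so the scoring logic is runnable), and the prompt wording is an assumption, not the paper's tuned prompt.

```python
# Minimal sketch of zero-shot deepfake detection with an MLLM.
# Assumptions: `query_mllm` stands in for a real vision-language API call
# (OpenAI, Gemini, etc.); the prompt text is illustrative, not the paper's.

PROMPT = (
    "Is this image a real photograph or an AI-generated/deepfake image? "
    "Answer with exactly one word: 'real' or 'fake'."
)

def query_mllm(image_id: str, prompt: str) -> str:
    # Stub returning canned replies; replace with an actual API call.
    canned = {"img_001": "Fake.", "img_002": "real", "img_003": "fake"}
    return canned[image_id]

def parse_verdict(reply: str) -> str:
    # Normalize a free-form model reply to a binary label.
    return "fake" if "fake" in reply.strip().lower() else "real"

def zero_shot_accuracy(samples: dict) -> float:
    # samples maps image id -> ground-truth label ("real" or "fake").
    hits = sum(
        parse_verdict(query_mllm(img, PROMPT)) == label
        for img, label in samples.items()
    )
    return hits / len(samples)

acc = zero_shot_accuracy({"img_001": "fake", "img_002": "real", "img_003": "fake"})
print(acc)  # 1.0 on this toy stub
```

Prompt tuning in this setting amounts to varying `PROMPT` (wording, requested output format, added reasoning instructions) and re-measuring accuracy with the same loop.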