🤖 AI Summary
Existing methods for detecting AI-generated images often generalize poorly when confronted with unseen generative models or cross-dataset scenarios. To address this challenge, this work proposes a data-diversity-driven training-sample selection strategy that broadens the coverage of the training data distribution by filtering samples based on feature-domain similarity. Additionally, a dual-branch network is introduced that fuses CLIP features from the pixel and frequency domains to jointly model semantic and structural cues. This approach significantly improves robustness to unseen generative models and adversarial perturbations, achieving state-of-the-art performance on cross-model and cross-dataset detection benchmarks.
📝 Abstract
The rapid proliferation of AI-generated images, powered by generative adversarial networks (GANs), diffusion models, and other synthesis techniques, has raised serious concerns about misinformation, copyright violations, and digital security. However, detecting such images in a generalized and robust manner remains a major challenge due to the vast diversity of generative models and data distributions. In this work, we present **Diversity Matters**, a novel framework that emphasizes data diversity and feature-domain complementarity for AI-generated image detection. The proposed method introduces a feature-domain similarity filtering mechanism that discards redundant or highly similar samples across both inter-class and intra-class distributions, ensuring a more diverse and representative training set. Furthermore, we propose a dual-branch network that combines CLIP features from the pixel domain and the frequency domain to jointly capture semantic and structural cues, leading to improved generalization against unseen generative models and adversarial conditions. Extensive experiments on benchmark datasets demonstrate that the proposed approach significantly improves cross-model and cross-dataset performance compared to existing methods. **Diversity Matters** highlights the critical role of data and feature diversity in building reliable and robust detectors against the rapidly evolving landscape of synthetic content.
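The similarity-filtering idea described in the abstract can be illustrated with a minimal sketch. The paper does not publish its exact algorithm here; the greedy cosine-similarity filter, the `threshold` value, and the `diversity_filter` name below are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def diversity_filter(features: np.ndarray, threshold: float = 0.95) -> list:
    """Hypothetical greedy filter: keep a sample only if its cosine
    similarity to every already-kept sample stays below `threshold`,
    so near-duplicate feature vectors are discarded."""
    # L2-normalise rows so dot products equal cosine similarities.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    kept = []
    for i in range(len(normed)):
        if all(normed[i] @ normed[j] < threshold for j in kept):
            kept.append(i)
    return kept

# Toy example: three nearly identical feature vectors and one distinct one.
feats = np.array([[1.0, 0.0], [0.99, 0.01], [0.98, 0.02], [0.0, 1.0]])
print(diversity_filter(feats))  # → [0, 3]: the two near-duplicates are dropped
```

In practice the feature vectors would come from a pretrained encoder (e.g. CLIP embeddings of training images), and the filter would be applied both within and across classes, matching the inter-/intra-class filtering the abstract describes.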
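The dual-branch design can likewise be sketched in miniature. Everything below is a stand-in under stated assumptions: the `DualBranchHead` class, the linear+ReLU branches, the feature dimension of 512, and the random placeholder features are illustrative only; the actual method encodes pixel- and frequency-domain views with CLIP before fusion:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

class DualBranchHead:
    """Toy two-branch fusion head: one linear+ReLU branch per feature
    domain, concatenated and mapped to a single real/fake logit."""
    def __init__(self, feat_dim=512, hidden=128):
        self.w_pix = rng.standard_normal((feat_dim, hidden)) * 0.02
        self.w_frq = rng.standard_normal((feat_dim, hidden)) * 0.02
        self.w_out = rng.standard_normal((2 * hidden, 1)) * 0.02

    def forward(self, pixel_feat, freq_feat):
        # Each branch encodes its own domain; the outputs are fused.
        fused = np.concatenate([relu(pixel_feat @ self.w_pix),
                                relu(freq_feat @ self.w_frq)], axis=-1)
        return fused @ self.w_out  # one logit per sample

# A frequency-domain view of an image can be derived from its magnitude
# spectrum before encoding (the encoder itself is omitted here):
image = rng.standard_normal((224, 224))
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))

# Stand-in 512-d features for a batch of 4 images.
head = DualBranchHead()
logits = head.forward(rng.standard_normal((4, 512)),
                      rng.standard_normal((4, 512)))
print(logits.shape)  # (4, 1)
```

The design intuition, per the abstract, is that the pixel branch captures semantic cues while the frequency branch exposes structural artifacts (e.g. spectral patterns left by generators), and fusing both improves generalization to unseen models.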