🤖 AI Summary
The increasing photorealism of AI-generated images poses significant challenges for reliable detection. Method: This paper proposes a robust detection framework that jointly models uncertainty from three complementary sources—Fisher information, MC Dropout entropy, and predictive variance from deep kernel Gaussian processes—and employs particle swarm optimization (PSO) for dynamic uncertainty weighting and adaptive rejection thresholding to mitigate distribution shifts across generators and adversarial perturbations (e.g., FGSM/PGD). Contribution/Results: Evaluated across diverse generative models (GLIDE, Midjourney, etc.), the framework intercepts about 70% of misclassified images from unseen generators and rejects 61% of successful adversarial attacks overall (up to 80% for the GP sub-module alone), while maintaining high-confidence acceptance of natural images and in-distribution AI-generated content. The framework is also retrainable, enabling continual adaptation to evolving generative models.
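The fusion step described above can be sketched compactly. The snippet below is an illustrative reconstruction, not the paper's code: it fuses three per-sample uncertainty scores (Fisher information, MC Dropout entropy, GP predictive variance) with learned weights and rejects predictions whose combined uncertainty exceeds a threshold. In the paper the weights and threshold are found by PSO; here `w` and `tau` are placeholder values.

```python
import numpy as np

# Placeholder fusion weights and rejection threshold; in the paper
# these are optimised by particle swarm optimization (PSO).
w = np.array([0.5, 0.3, 0.2])   # hypothetical weights (sum to 1)
tau = 0.6                        # hypothetical rejection threshold

def combined_uncertainty(fisher, entropy, gp_var):
    # Stack the three signals per sample and min-max normalise each
    # column so no single source dominates purely through scale.
    u = np.stack([fisher, entropy, gp_var], axis=-1)
    u = (u - u.min(0)) / (u.max(0) - u.min(0) + 1e-12)
    return u @ w

def accept(fisher, entropy, gp_var):
    # True where the prediction is trusted, False where it is rejected.
    return combined_uncertainty(fisher, entropy, gp_var) <= tau
```

Samples with uniformly low uncertainty across all three sources are accepted; a high score in any heavily weighted source pushes the sample toward rejection, which matches the conservative behaviour the abstract describes.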
📝 Abstract
As AI-generated images become increasingly photorealistic, distinguishing them from natural images poses a growing challenge. This paper presents a robust detection framework that leverages multiple uncertainty measures to decide whether to trust or reject a model's predictions. We focus on three complementary techniques: Fisher Information, which captures the sensitivity of model parameters to input variations; entropy-based uncertainty from Monte Carlo Dropout, which reflects predictive variability; and predictive variance from a Deep Kernel Learning framework using a Gaussian Process classifier. To integrate these diverse uncertainty signals, Particle Swarm Optimisation is used to learn optimal weightings and determine an adaptive rejection threshold. The model is trained on Stable Diffusion-generated images and evaluated on GLIDE, VQDM, Midjourney, BigGAN, and StyleGAN3, each introducing significant distribution shifts. While standard metrics such as prediction probability and Fisher-based measures perform well in distribution, their effectiveness degrades under shift. In contrast, the Combined Uncertainty measure consistently rejects approximately 70 percent of incorrect predictions on unseen generators, successfully filtering most misclassified AI samples. Although the system occasionally rejects correct predictions from newer generators, this conservative behaviour is acceptable, as rejected samples can support retraining. The framework maintains high acceptance of accurate predictions for natural images and in-domain AI data. Under adversarial attacks using FGSM and PGD, the Combined Uncertainty method rejects around 61 percent of successful attacks, while GP-based uncertainty alone achieves up to 80 percent. Overall, the results demonstrate that multi-source uncertainty fusion provides a resilient and adaptive solution for AI-generated image detection.
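Of the three signals, the Monte Carlo Dropout term is the most self-contained to illustrate. The sketch below is a hedged illustration only: `stochastic_forward` stands in for one forward pass of the detector with dropout kept active, mocked here with noisy logits since the actual network is not given, and the predictive entropy is taken over the mean of `T` stochastic predictive distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x):
    # Mock of one dropout-active forward pass: fixed two-class logits
    # (natural vs AI-generated) perturbed by dropout-like noise,
    # mapped to probabilities with a numerically stable softmax.
    logits = np.array([2.0, 0.5]) + rng.normal(0.0, 0.5, size=2)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def mc_dropout_entropy(x, T=50):
    # Average T stochastic predictive distributions, then return the
    # entropy of the mean distribution (higher = more uncertain).
    p_mean = np.mean([stochastic_forward(x) for _ in range(T)], axis=0)
    return float(-np.sum(p_mean * np.log(p_mean + 1e-12)))
```

For a two-class detector the entropy lies in [0, log 2]; inputs far from the training distribution (e.g. images from an unseen generator) would typically yield values near the upper end, which is what drives the rejection behaviour reported above.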