🤖 AI Summary
To address weak out-of-distribution generalization in AI-generated image detection, this paper proposes a feature robustness enhancement method based on semantically guided positive-incentive noise. The core method introduces a learnable noise generator that injects such noise into the CLIP visual feature space; cross-attention is employed to fuse visual and textual features, thereby suppressing reliance on spurious shortcuts and actively shaping discriminative feature structures. A variational training framework enables end-to-end joint optimization of noise injection and the detection network. Evaluated on an open-world benchmark covering 42 generative models, the approach achieves state-of-the-art average detection accuracy—improving upon the previous best by 5.4 percentage points—and demonstrates significantly enhanced cross-model generalization.
📝 Abstract
The rapid advancement of generative models has made real and synthetic images increasingly indistinguishable. Although extensive efforts have been devoted to detecting AI-generated images, out-of-distribution generalization remains a persistent challenge. We trace this weakness to spurious shortcuts exploited during training, and we observe that small feature-space perturbations can mitigate shortcut dominance. To address this problem in a more controllable manner, we propose Positive-Incentive Noise for CLIP (PiN-CLIP), which jointly trains a noise generator and a detection network under a variational positive-incentive principle. Specifically, we construct positive-incentive noise in the feature space via cross-attention fusion of visual and categorical semantic features. During optimization, the noise is injected into the feature space to fine-tune the visual encoder, suppressing shortcut-sensitive directions while amplifying stable forensic cues, thereby enabling the extraction of more robust and generalized artifact representations. Comparative experiments are conducted on an open-world dataset comprising synthetic images generated by 42 distinct generative models. Our method achieves new state-of-the-art performance, improving average accuracy by 5.4 percentage points over existing approaches.
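The abstract's core mechanism—cross-attention fusing visual and categorical text features to produce a variationally trained noise that perturbs the visual features—could be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the module name, feature dimensions, and the Gaussian reparameterization head are assumptions, with only the overall structure (cross-attention fusion, feature-space noise injection, variational parameters for joint optimization) taken from the text above.

```python
# Hypothetical sketch of the noise generator described in the abstract.
# All names and shapes are assumed; CLIP embeddings are stand-ins here.
import torch
import torch.nn as nn

class PositiveIncentiveNoise(nn.Module):
    """Produce feature-space noise from cross-attention between visual
    features (queries) and categorical text features (keys/values)."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Predict a Gaussian over the noise so it can be trained under
        # a variational objective jointly with the detector.
        self.to_mu = nn.Linear(dim, dim)
        self.to_logvar = nn.Linear(dim, dim)

    def forward(self, visual_feats: torch.Tensor, text_feats: torch.Tensor):
        # visual_feats: (B, 1, dim) image embeddings
        # text_feats:   (B, T, dim) category/semantic text embeddings
        fused, _ = self.cross_attn(visual_feats, text_feats, text_feats)
        mu, logvar = self.to_mu(fused), self.to_logvar(fused)
        # Reparameterization trick keeps the sampling step differentiable.
        noise = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Perturbed features go to the detection network; mu/logvar feed
        # the variational (e.g. KL) term of the training loss.
        return visual_feats + noise, mu, logvar
```

In training, the perturbed features would be fed to the detection head while a variational regularizer on `(mu, logvar)` keeps the noise informative rather than arbitrary, matching the paper's stated goal of suppressing shortcut-sensitive directions.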