Improving Deepfake Detection with Reinforcement Learning-Based Adaptive Data Augmentation

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited generalizability of deepfake detection models amid rapid advances in deepfake generation techniques, this paper proposes Curriculum Reinforcement-Learning Data Augmentation (CRDA). CRDA formulates data augmentation as a dynamic decision-making process: a reinforcement learning agent adaptively selects augmentation policies based on the current state of the detector; a configurable forgery operation pool, combined with causal-guided action-space perturbation, generates curriculum-style adversarial samples; and causal inference explicitly suppresses spurious correlations, steering the model toward robust, causally invariant features. Evaluated on multiple cross-domain benchmarks, CRDA consistently outperforms existing state-of-the-art methods, improving generalization to unseen forgery types and resilience to distribution shifts.
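The adaptive selection loop described in the summary can be pictured as a simple multi-armed bandit: the agent favors whichever forgery operation the detector currently finds hardest (highest recent loss). This is only an illustrative sketch; the operation names, the epsilon-greedy policy, and the loss-based value estimate are assumptions, not the paper's actual agent.

```python
import random

# Hypothetical forgery-style augmentation operations (names assumed).
def blend_faces(img): return img       # placeholder: face-blending forgery
def warp_landmarks(img): return img    # placeholder: facial warping
def swap_expression(img): return img   # placeholder: expression manipulation

class AugmentationAgent:
    """Epsilon-greedy bandit that prefers the augmentation whose samples
    the detector currently finds hardest (highest running loss estimate)."""
    def __init__(self, ops, epsilon=0.1):
        self.ops = ops
        self.epsilon = epsilon
        self.value = {op.__name__: 0.0 for op in ops}  # running mean loss per op
        self.count = {op.__name__: 0 for op in ops}

    def select(self):
        if random.random() < self.epsilon:  # explore a random operation
            return random.choice(self.ops)
        # exploit: pick the operation the detector struggles with most
        return max(self.ops, key=lambda op: self.value[op.__name__])

    def update(self, op, detector_loss):
        """Feed back the detector's loss on samples produced by `op`."""
        name = op.__name__
        self.count[name] += 1
        # incremental mean of observed losses for this operation
        self.value[name] += (detector_loss - self.value[name]) / self.count[name]

agent = AugmentationAgent([blend_faces, warp_landmarks, swap_expression])
op = agent.select()                      # pick an augmentation for this batch
agent.update(op, detector_loss=0.7)      # report how hard the batch was
```

In the paper's framing the agent additionally perturbs the action space under causal guidance; the bandit above captures only the adaptive-selection idea.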

📝 Abstract
The generalization capability of deepfake detectors is critical for real-world use. Data augmentation via synthetic fake face generation effectively enhances generalization, yet current SOTA methods rely on fixed strategies, raising a key question: Is a single static augmentation sufficient, or does the diversity of forgery features demand dynamic approaches? We argue existing methods overlook the evolving complexity of real-world forgeries (e.g., facial warping, expression manipulation), which fixed policies cannot fully simulate. To address this, we propose CRDA (Curriculum Reinforcement-Learning Data Augmentation), a novel framework guiding detectors to progressively master multi-domain forgery features from simple to complex. CRDA synthesizes augmented samples via a configurable pool of forgery operations and dynamically generates adversarial samples tailored to the detector's current learning state. Central to our approach is integrating reinforcement learning (RL) and causal inference. An RL agent dynamically selects augmentation actions based on detector performance to efficiently explore the vast augmentation space, adapting to increasingly challenging forgeries. Simultaneously, the agent introduces action space variations to generate heterogeneous forgery patterns, guided by causal inference to mitigate spurious correlations, suppressing task-irrelevant biases and focusing on causally invariant features. This integration ensures robust generalization by decoupling synthetic augmentation patterns from the model's learned representations. Extensive experiments show our method significantly improves detector generalizability, outperforming SOTA methods across multiple cross-domain datasets.
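The simple-to-complex curriculum the abstract describes can be illustrated with a minimal hand-written difficulty schedule: as the detector's accuracy on augmented data rises, harder forgery levels are unlocked. The function name, the accuracy-to-level bucketing, and the fixed number of levels are assumptions for illustration; in CRDA the schedule is driven by the learned RL policy rather than a fixed rule.

```python
def curriculum_difficulty(detector_acc, n_levels=5):
    """Map the detector's current accuracy on augmented samples to a
    forgery difficulty level in [0, n_levels - 1]: simple forgeries
    first, complex ones only once the detector handles the easy ones.
    Illustrative stand-in for a learned curriculum policy."""
    acc = min(max(detector_acc, 0.0), 1.0)   # clamp accuracy into [0, 1]
    return min(int(acc * n_levels), n_levels - 1)  # bucket into stages
```

A training loop would call this once per epoch and restrict the forgery operation pool to operations at or below the returned level.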
Problem

Research questions and friction points this paper is trying to address.

Dynamic data augmentation adapts to evolving deepfake forgery complexity
Reinforcement learning optimizes augmentation strategies for detector generalization
Causal inference eliminates spurious correlations in synthetic training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning dynamically selects augmentation actions
Causal inference mitigates spurious correlations in features
Curriculum learning progresses from simple to complex forgeries
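The spurious-correlation point in the bullets above can be sketched with a generic causal-inference-style reweighting: if a nuisance attribute of the synthetic data (e.g., JPEG quality or blending artifact type) is correlated with the real/fake label, inverse-frequency weights over (label, nuisance) pairs remove that shortcut signal. This is a standard balancing-weights sketch under assumed attributes, not the paper's actual estimator.

```python
from collections import Counter

def balancing_weights(labels, nuisance):
    """Per-sample weights proportional to 1 / count(label, nuisance), so
    total weight is spread uniformly over observed (label, nuisance)
    pairs and the nuisance attribute carries no label information."""
    pair_counts = Counter(zip(labels, nuisance))
    n, n_pairs = len(labels), len(pair_counts)
    return [n / (n_pairs * pair_counts[(y, z)])
            for y, z in zip(labels, nuisance)]

# e.g. fakes that happen to be heavily compressed get down-weighted,
# discouraging the detector from using compression as a fake cue
weights = balancing_weights([1, 1, 1, 0], ["jpeg", "jpeg", "jpeg", "png"])
```

Each (label, nuisance) pair receives equal total weight, so loss gradients no longer reward the detector for keying on the nuisance attribute.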