Towards Robust Protective Perturbation against DeepFake Face Swapping

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
DeepFake face swapping poses severe privacy and security threats, yet existing defense methods based on imperceptible perturbations are highly vulnerable to common image transformations, such as compression and scaling, that degrade perturbation efficacy. To address this, we propose EOLT (Expectation Over Learned distribution of Transformation), a robust defense framework that explicitly models the transformation distribution as a learnable component. Unlike conventional Expectation-over-Transformation (EOT) approaches that rely on uniform sampling, EOLT employs a policy network trained via reinforcement learning to adaptively select the critical transformations that best expose defense bottlenecks, enabling instance-aware perturbation generation and optimization. Extensive experiments demonstrate that EOLT achieves an average robustness improvement of 26% over state-of-the-art methods, with gains reaching up to 30% under strong adversarial transformations.

📝 Abstract
DeepFake face swapping enables highly realistic identity forgeries, posing serious privacy and security risks. A common defense embeds invisible perturbations into images, but these are fragile and often destroyed by basic transformations such as compression or resizing. In this paper, we first conduct a systematic analysis of 30 transformations across six categories and show that protection robustness is highly sensitive to the choice of training transformations, making the standard Expectation over Transformation (EOT) with uniform sampling fundamentally suboptimal. Motivated by this, we propose Expectation Over Learned distribution of Transformation (EOLT), a framework that treats the transformation distribution as a learnable component rather than a fixed design choice. Specifically, EOLT employs a policy network that learns to automatically prioritize critical transformations and adaptively generate instance-specific perturbations via reinforcement learning, enabling explicit modeling of defensive bottlenecks while maintaining broad transferability. Extensive experiments demonstrate that our method achieves substantial improvements over state-of-the-art approaches, with 26% higher average robustness and up to 30% gains on challenging transformation categories.
Problem

Research questions and friction points this paper is trying to address.

Enhances robustness of protective perturbations against DeepFake face swapping
Addresses fragility of existing defenses to image transformations like compression
Learns optimal transformation distributions to improve adversarial protection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns transformation distribution via policy network
Generates instance-specific perturbations adaptively
Models defensive bottlenecks with reinforcement learning
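The core idea above, replacing EOT's uniform sampling with a learned categorical distribution over transformations trained by policy gradient, can be sketched in a toy form. This is not the paper's implementation: the `DEGRADATION` scores are a hypothetical stand-in for the true reward (how much each transformation weakens the protective perturbation against the face-swap model), and the bandit-style REINFORCE loop is a simplified assumption about the training setup.

```python
import numpy as np

# Hypothetical proxy reward: how much each transformation degrades a
# protective perturbation (higher = more damaging). In EOLT this signal
# would come from evaluating the perturbed image under the face-swap
# model; here it is a fixed illustrative stand-in.
DEGRADATION = {"jpeg": 0.9, "resize": 0.6, "blur": 0.4, "crop": 0.2}

def learn_transform_policy(steps=2000, lr=0.1, seed=0):
    """REINFORCE on a categorical distribution over transformations.

    The policy is rewarded for picking transformations that most degrade
    the current perturbation, so probability mass concentrates on the
    defense bottlenecks instead of staying uniform as in plain EOT.
    """
    rng = np.random.default_rng(seed)
    names = list(DEGRADATION)
    logits = np.zeros(len(names))            # uniform start = vanilla EOT
    baseline = 0.0                           # running reward baseline
    for _ in range(steps):
        p = np.exp(logits - logits.max()); p /= p.sum()
        a = rng.choice(len(names), p=p)      # sample a transformation
        r = DEGRADATION[names[a]] + rng.normal(0.0, 0.05)  # noisy reward
        baseline += 0.05 * (r - baseline)
        grad = -p; grad[a] += 1.0            # grad of log p(a) w.r.t. logits
        logits += lr * (r - baseline) * grad
    p = np.exp(logits - logits.max()); p /= p.sum()
    return dict(zip(names, p))

probs = learn_transform_policy()
```

Under these assumed rewards, the learned distribution puts most of its mass on the most damaging transformation (here `jpeg`), which is the behavior the Innovation bullets describe: critical transformations are automatically prioritized rather than sampled uniformly.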
Hengyang Yao
University of Birmingham
Lin Li
University of Oxford
Ke Sun
Xiamen University
Jianing Qiu
Assistant Professor, Mohamed bin Zayed University of Artificial Intelligence (Medical Foundation Model, Agentic Medical AI, Human-AI Interaction/Collaboration)
Huiping Chen
University of Birmingham