A Brief Review for Compression and Transfer Learning Techniques in DeepFake Detection

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of efficiently training and deploying deepfake detection models on resource-constrained edge devices, this paper explores an optimization framework that combines model compression with transfer learning. It systematically evaluates the domain generalization limits of aggressive pruning (90% sparsity), quantization, knowledge distillation, and adapter-based fine-tuning across diverse generative models (e.g., StyleGAN, diffusion models). The analysis points to feature distribution shift and discriminative bottlenecks as primary causes of performance degradation in cross-model settings. Experiments show that in-distribution accuracy is preserved even under heavy compression, while testing on generators unseen during training exposes a clear domain generalization gap. The work provides an empirically validated, edge-deployable baseline for lightweight deepfake detection and a reproducible optimization recipe grounded in joint compression–transfer co-design.

📝 Abstract
Training and deploying deepfake detection models on edge devices offers the advantage of maintaining data privacy and confidentiality by processing the data close to its source. However, this approach is constrained by the limited computational and memory resources available at the edge. To address this challenge, we explore compression techniques to reduce computational demands and inference time, alongside transfer learning methods to minimize training overhead. Using the Synthbuster, RAISE, and ForenSynths datasets, we evaluate the effectiveness of pruning, knowledge distillation (KD), quantization, fine-tuning, and adapter-based techniques. Our experimental results demonstrate that both compression and transfer learning can be applied effectively, even at a high compression level of 90%, while maintaining the same performance level when the training and validation data originate from the same DeepFake model. However, when the testing dataset is generated by DeepFake models not present in the training set, a domain generalization issue becomes evident.
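The compression pipeline named in the abstract (pruning to 90% sparsity, followed by quantization) can be sketched in a few lines; the snippet below is a minimal PyTorch illustration, assuming a ResNet-18 backbone purely for concreteness. The paper does not tie its results to this architecture or to these exact API calls.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

# Illustrative detector backbone (the paper does not fix an architecture).
model = models.resnet18(num_classes=2)  # real vs. fake

# Unstructured L1 magnitude pruning at 90% sparsity, matching the
# "high compression level of 90%" evaluated in the paper.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Post-training dynamic quantization of the remaining linear layers to int8,
# reducing model size and speeding up CPU inference at the edge.
model_int8 = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```

In practice the pruned model is fine-tuned before quantization to recover accuracy; the order shown here (prune, then quantize) is one common recipe, not necessarily the exact pipeline used in the paper.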
Problem

Research questions and friction points this paper is trying to address.

Reduce computational demands for edge-based deepfake detection
Minimize training overhead using transfer learning methods (see the adapter sketch after this list)
Address domain generalization in unseen DeepFake models
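As a hedged illustration of the adapter-based transfer learning mentioned above: the standard recipe inserts small trainable bottleneck modules into a frozen pretrained backbone, so only a tiny fraction of parameters is updated. The module below and the freezing helper are illustrative sketches; the bottleneck width and naming convention are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project,
    plus a residual connection. Only these weights are trained."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

def freeze_except_adapters(model: nn.Module) -> None:
    """Freeze the pretrained backbone so that only adapter (and,
    typically, classifier-head) parameters receive gradients."""
    for name, param in model.named_parameters():
        param.requires_grad = ("adapter" in name) or ("head" in name)
```

Because the backbone stays frozen, training touches only the adapters and the head, which is what keeps the training overhead low on edge hardware.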
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compression techniques reduce computational demands
Transfer learning minimizes training overhead
Evaluated pruning, KD, quantization, fine-tuning, and adapters (a distillation-loss sketch follows this list)
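For the knowledge distillation (KD) technique referenced above, a minimal sketch of the standard Hinton-style objective is shown below in PyTorch. The temperature and mixing weight are illustrative defaults, not values reported by the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            labels: torch.Tensor,
            T: float = 4.0,
            alpha: float = 0.5) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    distributions, blended with cross-entropy on the hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude is stable across T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

Here the large pretrained detector acts as the teacher and the compressed model as the student, so the student inherits the teacher's decision boundaries at a fraction of the compute cost.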
Andreas Karathanasis
Information Technologies Institute, Centre for Research & Technology Hellas, Thessaloniki, 57001, Greece
John Violos
École de Technologie Supérieure | Université du Québec
Data Science · Machine Learning · Ambient Intelligence · Geospatial Data Analysis
Ioannis Kompatsiaris
Information Technologies Institute, Centre for Research & Technology Hellas, Thessaloniki, 57001, Greece
Symeon Papadopoulos
Information Technologies Institute (ITI)
Artificial Intelligence · Media Verification · AI Fairness · Web Mining · Multimedia Retrieval