Decoupling Bias, Aligning Distributions: Synergistic Fairness Optimization for Deepfake Detection

📅 2025-11-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Deepfake detection models exhibit demographic bias—particularly across gender and race—leading to systematic misclassifications and exacerbating digital inequity; existing fairness-enhancement methods often compromise detection accuracy. To address this, we propose a dual-mechanism co-optimization framework that innovatively integrates *sensitive-channel disentanglement* at the model architecture level (to decouple bias) with *inter-class distribution alignment* at the feature level (to foster globally fair representations), jointly improving both cross-group and intra-group fairness. Extensive experiments on multi-domain benchmarks demonstrate that our method maintains state-of-the-art detection performance (AUC > 98.5%) while significantly enhancing fairness: average equalized odds and demographic parity disparities (ΔEO/ΔDP) decrease by 37.2%, and intra-group variance drops by 29.6%. To the best of our knowledge, this is the first approach to achieve simultaneous high accuracy and strong fairness in deepfake detection.

Technology Category

Application Category

📝 Abstract
Fairness is a core element in the trustworthy deployment of deepfake detection models, especially in the field of digital identity security. Biases in detection models toward different demographic groups, such as gender and race, may lead to systemic misjudgments, exacerbating the digital divide and social inequities. However, current fairness-enhanced detectors often improve fairness at the cost of detection accuracy. To address this challenge, we propose a dual-mechanism collaborative optimization framework. Our proposed method innovatively integrates structural fairness decoupling and global distribution alignment: decoupling channels sensitive to demographic groups at the model architectural level, and subsequently reducing the distance between the overall sample distribution and the distributions corresponding to each demographic group at the feature level. Experimental results demonstrate that, compared with other methods, our framework improves both inter-group and intra-group fairness while maintaining overall detection accuracy across domains.
Problem

Research questions and friction points this paper is trying to address.

Addresses demographic bias in deepfake detection models across gender and race
Improves fairness without sacrificing detection accuracy through dual optimization
Reduces distribution disparities between demographic groups while maintaining performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples bias through structural fairness channels
Aligns distributions by reducing demographic group distances
Maintains detection accuracy while enhancing group fairness
🔎 Similar Papers
No similar papers found.
Feng Ding
Feng Ding
Suzhou Laboratory
PhysicsChemistryMaterial Science
W
Wenhui Yi
Nanchang University
Y
Yunpeng Zhou
Nanchang University
Xinan He
Xinan He
Nanchang University MS student
DeepFakesMultimedia ForensicsAIGC Detection
H
Hong Rao
Nanchang University
S
Shu Hu
Purdue University