🤖 AI Summary
Face morphing attacks severely compromise the security of face verification systems. Existing supervised morphing attack detection (MAD) methods suffer from poor generalization to unseen generation techniques, while unsupervised approaches, though more generalizable, exhibit high false-positive rates because they struggle to model subtle morphing artifacts. To address these limitations, we propose SelfMAD, the first self-supervised framework for MAD. SelfMAD introduces a novel paradigm featuring: (i) simulation of universal fusion artifacts, (ii) artifact-aware data augmentation, (iii) unsupervised feature disentanglement, and (iv) cross-domain consistency regularization. This enables learning robust, generation-agnostic decision boundaries without relying on labeled morphed samples. In cross-method zero-shot evaluation against unknown attacks, SelfMAD reduces the equal error rate (EER) by over 64% compared to the strongest unsupervised baseline and by over 66% relative to the best supervised method.
📝 Abstract
With the continuous advancement of generative models, face morphing attacks have become a significant challenge for existing face verification systems due to their potential use in identity fraud and other malicious activities. Contemporary Morphing Attack Detection (MAD) approaches frequently rely on supervised, discriminative models trained on examples of bona fide and morphed images. These models typically perform well on morphs generated with techniques seen during training, but often perform sub-optimally when confronted with novel, unseen morphing techniques. While unsupervised models have been shown to generalize better, they typically yield higher error rates, as they struggle to effectively capture the features of subtle artifacts. To address these shortcomings, we present SelfMAD, a novel self-supervised approach that simulates general morphing attack artifacts, allowing classifiers to learn generic and robust decision boundaries without overfitting to the specific artifacts induced by particular face morphing methods. Through extensive experiments on widely used datasets, we demonstrate that SelfMAD significantly outperforms current state-of-the-art MAD methods, reducing the detection error in terms of EER by more than 64% compared to the strongest unsupervised competitor, and by more than 66% compared to the best-performing discriminative MAD model, both tested in cross-morph settings. The source code for SelfMAD is available at https://github.com/LeonTodorov/SelfMAD.
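The central idea, simulating general morphing artifacts on bona fide images so a classifier never sees real morphs during training, can be sketched as blending a face image with a slightly warped copy of itself, which produces the ghosting and fusion artifacts characteristic of morphs. The function below is a minimal illustrative stand-in, not the authors' actual pipeline; the warp, mask, and parameter names are assumptions for the sketch.

```python
import numpy as np

def simulate_morph_artifacts(image: np.ndarray, alpha: float = 0.5,
                             shift: int = 3) -> np.ndarray:
    """Create a pseudo-morph by blending an image with a warped copy
    of itself (illustrative sketch of artifact simulation; the real
    SelfMAD augmentation pipeline is more elaborate)."""
    # Cheap "warp": translate the image by a few pixels so the two
    # copies are misaligned, as the two identities are in a real morph.
    warped = np.roll(image, shift=(shift, shift), axis=(0, 1))
    # Soft Gaussian blending mask concentrated on the inner face region,
    # where morphing tools typically fuse the two source faces.
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.exp(-(((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
                    / (2 * (0.3 * min(h, w)) ** 2)))
    mask = alpha * mask
    if image.ndim == 3:
        mask = mask[..., None]  # broadcast over color channels
    # Convex blend: center pixels mix original and warped content,
    # yielding subtle ghosting artifacts; the result is labeled "morph".
    return (1 - mask) * image + mask * warped

# Usage: turn a bona fide sample into a self-supervised "morph" sample.
img = np.random.default_rng(1).random((64, 64, 3))
pseudo_morph = simulate_morph_artifacts(img)
```

Training a binary classifier on (bona fide, pseudo-morph) pairs produced this way is what lets the decision boundary remain agnostic to any particular morph generator.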