🤖 AI Summary
This study addresses the unified modeling of deepfake generation and detection, using authenticity verification of coin-flip sequences as a controlled benchmark task. We propose the Markov Observation Model (MOM), a paradigm that jointly models the generative and discriminative processes within a single probabilistic framework. Under a controlled evaluation benchmark, MOM outperforms Generative Adversarial Networks (GANs), Support Vector Machines (SVMs), Branching Particle Filters (BPF), and human judgment on both detection accuracy and generation fidelity: it achieves the best detection performance while producing sequences whose statistical properties most closely match the true distribution. SVM supports detection only; BPF attains intermediate performance; GANs exhibit pronounced generation artifacts; and human observers yield the lowest accuracy. This work establishes an interpretable, probability-based foundation for deepfake modeling and synthetic content analysis.
📝 Abstract
New and existing methods for generating, and especially detecting, deepfakes are investigated and compared on the simple problem of authenticating coin-flip data. Importantly, an alternative approach to deepfake generation and detection, based on a Markov Observation Model (MOM), is introduced and compared on detection ability with the traditional Generative Adversarial Network (GAN) approach, as well as with Support Vector Machine (SVM), Branching Particle Filter (BPF), and human alternatives. MOM was also compared on generative and discriminative ability with the GAN, filtering, and human alternatives (SVM was excluded from this comparison, as it lacks generative ability). Humans are shown to perform the worst, followed in order by GAN, SVM, BPF, and MOM, which was the best at detecting deepfakes. Unsurprisingly, the same ordering holds on the generation problem, with SVM removed since it cannot generate.
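To make the benchmark task concrete, the following is a minimal illustrative sketch (not the paper's MOM formulation) of likelihood-based authentication of coin-flip data: fit first-order Markov transition probabilities separately on genuine and synthetic sequences, then classify a new sequence by a log-likelihood ratio. The "fake" generator with excess persistence between flips is an invented stand-in for a deepfake source.

```python
import math
import random

def transition_probs(seqs):
    """Estimate a 2x2 transition matrix from binary sequences (Laplace-smoothed)."""
    counts = [[1.0, 1.0], [1.0, 1.0]]
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1.0
    return [[c / sum(row) for c in row] for row in counts]

def log_likelihood(seq, P):
    """Log-likelihood of the transitions in seq under transition matrix P."""
    return sum(math.log(P[a][b]) for a, b in zip(seq, seq[1:]))

def looks_real(seq, P_real, P_fake):
    """Likelihood-ratio test: True if seq is more probable under the real model."""
    return log_likelihood(seq, P_real) >= log_likelihood(seq, P_fake)

rng = random.Random(0)

def real_seq(n=200):
    # Genuine data: independent fair coin flips.
    return [rng.randint(0, 1) for _ in range(n)]

def fake_seq(n=200, stay=0.7):
    # Hypothetical "deepfake" generator: flips persist too often.
    s = [rng.randint(0, 1)]
    for _ in range(n - 1):
        s.append(s[-1] if rng.random() < stay else 1 - s[-1])
    return s

# Fit one transition model per class, then score held-out sequences.
P_real = transition_probs([real_seq() for _ in range(100)])
P_fake = transition_probs([fake_seq() for _ in range(100)])
tests = [(real_seq(), True) for _ in range(50)] + [(fake_seq(), False) for _ in range(50)]
acc = sum(looks_real(s, P_real, P_fake) == y for s, y in tests) / len(tests)
print(f"detection accuracy: {acc:.2f}")
```

Because the fake generator's persistence shows up directly in the estimated transition matrix, the likelihood-ratio test separates the two classes easily at this sequence length; this is only meant to illustrate why Markov-style probabilistic models are a natural fit for the coin-flip benchmark.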