🤖 AI Summary
Existing single-frame face anti-spoofing (FAS) methods neglect temporal dynamic cues, leading to erroneous classifications in scenarios where photometric features are ambiguous but motion patterns are discriminative. To address this, we propose a video-based collaborative modeling framework that jointly analyzes facial images and keypoint graph structures to capture photometric and dynamic anomalies simultaneously. Our method introduces a novel Kronecker temporal attention mechanism to expand the temporal receptive field, and a graph-guided spatiotemporal fusion paradigm that leverages low-level keypoint motion to steer high-level expression dynamics modeling. Integrating graph neural networks, a video Vision Transformer (ViT), and spatiotemporally decoupled attention, the framework enables multi-scale feature aggregation. Evaluated on nine mainstream benchmarks, our approach achieves state-of-the-art performance across all datasets, particularly improving detection accuracy in motion-dominant scenarios and demonstrating strong generalization capability.
📝 Abstract
In videos containing spoofed faces, we may uncover the spoofing evidence from photometric abnormality, dynamic abnormality, or a combination of both. Prevailing face anti-spoofing (FAS) approaches generally concentrate on the single-frame scenario; however, such purely photometric-driven methods overlook the dynamic spoofing clues that may be exposed over time. This may lead FAS systems to reach incorrect judgments, especially in cases where a spoof is easy to distinguish in terms of dynamics but challenging to discern in terms of photometrics. To this end, we propose the Graph Guided Video Vision Transformer (G$^2$V$^2$former), which combines faces with facial landmarks for photometric and dynamic feature fusion. We factorize the attention into space and time, and fuse them via a spatiotemporal block. Specifically, we design a novel temporal attention called Kronecker temporal attention, which has a wider receptive field and is beneficial for capturing dynamic information. Moreover, we leverage the low-semantic motion of facial landmarks to guide the high-semantic change of facial expressions, motivated by the observation that regions containing landmarks may reveal more dynamic clues. Extensive experiments on nine benchmark datasets demonstrate that our method achieves superior performance under various scenarios. The code will be released soon.
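To make the factorized space-time design concrete, below is a minimal NumPy sketch of spatial-then-temporal attention over a clip of video tokens. The paper's exact Kronecker temporal attention is not specified in this abstract, so the temporal step here is an assumption: we interpret "wider receptive field" as letting each token attend to all spatial positions of all frames (a Kronecker-style expansion of the attention pattern), rather than only to the same position across frames. All function and variable names are illustrative, not from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention, batched over leading axes
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def factorized_spatiotemporal_attention(x):
    """x: (T, N, D) tokens -- T frames, N spatial tokens per frame, D channels.

    1) Spatial attention within each frame (an N x N pattern per frame).
    2) Kronecker-style temporal attention (our assumption about the design):
       tokens are flattened to one (T*N)-long sequence so every token can
       attend across all frames and positions, widening the temporal
       receptive field versus same-position-only temporal attention.
    """
    T, N, D = x.shape
    s = attention(x, x, x)                    # per-frame spatial attention
    flat = s.reshape(T * N, D)                # flatten space-time
    t = attention(flat, flat, flat)           # cross-frame temporal attention
    return t.reshape(T, N, D)

# toy usage: 4 frames, 16 spatial tokens, 8 channels
out = factorized_spatiotemporal_attention(np.random.randn(4, 16, 8))
print(out.shape)  # (4, 16, 8)
```

A single fused spatiotemporal attention over all T*N tokens costs O((TN)^2); the factorized form above keeps the spatial step at O(T·N^2), which is why video transformers typically decouple the two axes.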