🤖 AI Summary
To address the challenge of simultaneously achieving high pose accuracy and faithful facial detail preservation in talking-head video generation, this paper proposes a motion-appearance dual-codebook joint modeling framework. We introduce, for the first time, a Transformer-based cross-codebook multi-scale retrieval and compensation mechanism that jointly models motion dynamics and source-image appearance via coordinated multi-scale vector-quantized codebooks. An end-to-end differentiable compensation module enables fine-grained co-optimization under dynamic poses. The method incorporates a joint decoding network for motion flow and appearance features, significantly improving identity consistency, lip-sync accuracy, and texture fidelity. Extensive evaluations demonstrate state-of-the-art performance across multiple benchmarks: our approach achieves the best scores in FID, LPIPS, and keypoint error metrics. Both qualitative and quantitative results validate its effectiveness.
📝 Abstract
Talking head video generation aims to synthesize a realistic talking head video that preserves the person's identity from a source image and the motion from a driving video. Despite the promising progress made in the field, it remains a challenging and critical problem to generate videos with accurate poses and fine-grained facial details simultaneously. Facial motion is inherently complex and difficult to model precisely, and the one-shot source face image cannot provide sufficient appearance guidance during generation due to dynamic pose changes. To tackle this problem, we propose to jointly learn motion and appearance codebooks and perform multi-scale codebook compensation to effectively refine both the facial motion conditions and appearance features for talking face image decoding. Specifically, the designed multi-scale motion and appearance codebooks are learned simultaneously in a unified framework to store representative global facial motion flow and appearance patterns. Then, we present a novel multi-scale motion and appearance compensation module, which utilizes a transformer-based codebook retrieval strategy to query complementary information from the two codebooks for joint motion and appearance compensation. The entire process produces more flexible motion flows and less distorted appearance features across different scales, resulting in a high-quality talking head video generation framework. Extensive experiments on various benchmarks validate the effectiveness of our approach and demonstrate superior generation results from both qualitative and quantitative perspectives when compared to state-of-the-art competitors.
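To make the two retrieval mechanisms mentioned above concrete, the following is a minimal NumPy sketch, not the paper's actual architecture: hard vector-quantized lookup (each feature is replaced by its nearest codebook entry) versus a transformer-style soft retrieval (each degraded feature queries the codebook via scaled dot-product attention and is compensated by a weighted mix of entries). All names, shapes, and sizes here are illustrative assumptions; the paper applies such retrieval per scale to both motion-flow and appearance codebooks.

```python
import numpy as np

def quantize(features, codebook):
    """Hard VQ lookup: replace each feature vector with its nearest
    codebook entry (standard vector quantization)."""
    # features: (N, d), codebook: (K, d)
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)                     # index of nearest entry per feature
    return codebook[idx], idx

def attention_compensate(queries, codebook):
    """Soft codebook retrieval via scaled dot-product attention:
    each degraded query is compensated by a convex mix of entries."""
    d = queries.shape[-1]
    logits = queries @ codebook.T / np.sqrt(d)  # similarity of each query to each entry
    logits -= logits.max(axis=1, keepdims=True) # numerical stability before softmax
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)           # attention weights, rows sum to 1
    return w @ codebook                         # weighted combination of entries

# Illustrative sizes: K=16 learned patterns of dimension d=8,
# and 4 "degraded" features (e.g. warped appearance features).
rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))
degraded = rng.normal(size=(4, 8))
hard, idx = quantize(degraded, codebook)
soft = attention_compensate(degraded, codebook)
```

The soft attention variant is differentiable in the codebook and the queries, which is what allows the compensation module to be trained end-to-end, whereas the hard argmin lookup requires a straight-through estimator or similar trick during training.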