🤖 AI Summary
This paper addresses audio-driven expressive talking-head generation, jointly modeling lip motion, facial expression, and head pose. Methodologically, it introduces a unified motion-modeling framework built on a conditional Motion Diffusion Transformer, decoupling phoneme-level audio features (which drive lip articulation) from text transcriptions (which drive facial expression and head pose). The model operates on 3D facial motion representations and extracts audio features at multiple granularities. Evaluated on VoxCeleb2 and HDTF, the method outperforms state-of-the-art approaches across all major metrics: lip motion accuracy (LMD), facial dynamics diversity (FDD), and head motion realism (HMD). Qualitative results further demonstrate superior visual fidelity and temporal coherence in the generated videos.
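The decoupled conditioning described above lends itself to a compact illustration. The following is a minimal sketch, assuming PyTorch; the module names (`ConditionEncoder`, `phoneme_emb`, `word_emb`, `ref_proj`), vocabulary sizes, and dimensions are illustrative assumptions rather than the paper's implementation. It shows one way phoneme tokens (for lip articulation), transcript tokens (for expression and head pose), and a reference-face embedding could be assembled into a single condition sequence.

```python
# Hedged sketch of the two conditioning streams. All names, vocab sizes,
# and dimensions are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class ConditionEncoder(nn.Module):
    def __init__(self, n_phonemes=80, n_words=10000, d_model=256):
        super().__init__()
        # Phoneme-level stream: drives lip articulation.
        self.phoneme_emb = nn.Embedding(n_phonemes, d_model)
        # Transcript (word-level) stream: drives expression and head pose.
        self.word_emb = nn.Embedding(n_words, d_model)
        # Reference facial image is assumed pre-encoded into a single vector.
        self.ref_proj = nn.Linear(512, d_model)

    def forward(self, phoneme_ids, word_ids, ref_face_feat):
        lip_cond = self.phoneme_emb(phoneme_ids)                # (B, T_ph, d)
        expr_pose_cond = self.word_emb(word_ids)                # (B, T_w, d)
        identity_cond = self.ref_proj(ref_face_feat)[:, None]   # (B, 1, d)
        # Concatenate along the token axis into one condition sequence.
        return torch.cat([identity_cond, lip_cond, expr_pose_cond], dim=1)

# Toy usage with random inputs.
enc = ConditionEncoder()
cond = enc(torch.randint(0, 80, (2, 50)),
           torch.randint(0, 10000, (2, 12)),
           torch.randn(2, 512))
print(cond.shape)  # torch.Size([2, 63, 256])
```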
📝 Abstract
We propose Dimitra, a novel framework for audio-driven talking-head generation designed to jointly learn lip motion, facial expression, and head pose. Specifically, we train a conditional Motion Diffusion Transformer (cMDT) that models facial motion sequences in a 3D representation. The cMDT is conditioned on only two input signals: an audio sequence and a reference facial image. By extracting additional features directly from the audio, Dimitra increases the quality and realism of the generated videos; in particular, phoneme sequences contribute to the realism of lip motion, whereas the text transcript contributes to the realism of facial expression and head pose. Quantitative and qualitative experiments on two widely used datasets, VoxCeleb2 and HDTF, show that Dimitra outperforms existing approaches in generating realistic talking heads with respect to lip motion, facial expression, and head pose.
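To make the motion-diffusion side concrete, here is a minimal training-step sketch under stated assumptions: the denoiser name (`MotionDenoiser`), the epsilon-prediction objective, the linear noise schedule, and all dimensions are hypothetical and not taken from the paper's cMDT. It illustrates how a transformer could denoise a noised 3D facial-motion sequence given condition tokens such as those produced by the encoder sketched above.

```python
# Hedged sketch of a conditional motion-diffusion training step.
# Shapes, the noise schedule, and epsilon-prediction are assumptions.
import torch
import torch.nn as nn

class MotionDenoiser(nn.Module):
    def __init__(self, motion_dim=70, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.in_proj = nn.Linear(motion_dim, d_model)
        self.t_emb = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(),
                                   nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.out_proj = nn.Linear(d_model, motion_dim)

    def forward(self, noisy_motion, t, cond_tokens):
        # Prepend a timestep token and the condition tokens to the motion tokens.
        x = self.in_proj(noisy_motion)                     # (B, T, d)
        t_tok = self.t_emb(t.float()[:, None, None])       # (B, 1, d)
        h = self.backbone(torch.cat([t_tok, cond_tokens, x], dim=1))
        # Keep only the positions corresponding to motion frames.
        return self.out_proj(h[:, -noisy_motion.size(1):])

# One DDPM-style training step with a linear beta schedule (assumption).
T_steps = 1000
betas = torch.linspace(1e-4, 0.02, T_steps)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

model = MotionDenoiser()
motion = torch.randn(2, 100, 70)   # (batch, frames, 3D facial motion params)
cond = torch.randn(2, 63, 256)     # condition tokens (e.g. from ConditionEncoder)
t = torch.randint(0, T_steps, (2,))
noise = torch.randn_like(motion)
a_bar = alphas_bar[t][:, None, None]
noisy = a_bar.sqrt() * motion + (1 - a_bar).sqrt() * noise
loss = nn.functional.mse_loss(model(noisy, t, cond), noise)
loss.backward()
print(float(loss))
```

At inference time, such a denoiser would be applied iteratively from pure noise to produce a 3D motion sequence, which a separate renderer would then turn into video frames conditioned on the reference image; that rendering stage is outside the scope of this sketch.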