AI Summary
Speech-driven 3D facial animation has long suffered from inadequate emotional expressiveness and the entanglement of speech content and emotional semantics. To address this, we propose a disentangled framework featuring an emotion adapter and a dual-VAE architecture that separately models the upper and lower facial regions; within the latent space, a diffusion model explicitly decouples phonetic content from emotional semantics. We further introduce 3D-BEF, the first industrial-grade 3D blendshape dataset for emotionally expressive talking faces. Our method integrates iPhone LiveLinkFace dynamic capture with variational autoencoder priors, significantly enhancing expression realism and emotional consistency. Quantitative evaluations across multiple objective metrics (e.g., LPIPS, FID, emotion classification accuracy) and comprehensive user studies demonstrate consistent superiority over state-of-the-art methods. The framework enables high-fidelity, fine-grained, and emotion-controllable 3D facial animation generation.
Abstract
Speech-driven 3D facial animation seeks to produce lifelike facial expressions that are synchronized with both the speech content and its emotional nuances, with applications across multimedia fields. However, previous methods often overlook emotional facial expressions or fail to disentangle them effectively from the speech content. To address these challenges, we present EmoDiffusion, a novel approach that disentangles different emotions in speech to generate rich 3D emotional facial expressions. Specifically, our method employs two Variational Autoencoders (VAEs) to separately generate the upper face region and the mouth region, thereby learning a more refined representation of the facial sequence. Unlike traditional methods that use diffusion models to connect facial expression sequences directly with audio inputs, we perform the diffusion process in the latent space. Furthermore, we introduce an Emotion Adapter to evaluate upper face movements accurately. Given the paucity of 3D emotional talking face data in the animation industry, we capture facial expressions under the guidance of animation experts using LiveLinkFace on an iPhone. This effort results in the creation of an innovative 3D blendshape emotional talking face dataset (3D-BEF) used to train our network. Extensive experiments and perceptual evaluations validate the effectiveness of our approach, confirming its superiority in generating realistic and emotionally rich facial animations.
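The data flow described above can be sketched as follows. This is a minimal structural illustration, not the authors' implementation: the region and latent dimensions, the linear "VAE" maps, and the `denoise()` placeholder are all assumptions made for clarity. It only shows the shape of the pipeline: split a blendshape frame into upper-face and mouth regions, encode each with its own VAE, run a denoising step on the joint latent conditioned on speech content and an emotion embedding (standing in for the Emotion Adapter output), then decode per region.

```python
import random

random.seed(0)

def rand_matrix(rows, cols):
    # Random linear map standing in for learned weights.
    return [[random.gauss(0.0, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    # Row-vector times matrix: len(v) inputs -> len(m[0]) outputs.
    return [sum(v[i] * m[i][j] for i in range(len(v))) for j in range(len(m[0]))]

class TinyVAE:
    """Toy linear encoder/decoder standing in for one region's VAE."""
    def __init__(self, dim_in, dim_z):
        self.enc = rand_matrix(dim_in, dim_z)
        self.dec = rand_matrix(dim_z, dim_in)
    def encode(self, x):
        return matvec(self.enc, x)
    def decode(self, z):
        return matvec(self.dec, z)

# Assumed sizes: 52 ARKit-style blendshapes split into two regions,
# 8-dimensional latent per region.
N_UPPER, N_MOUTH, DIM_Z = 26, 26, 8
upper_vae = TinyVAE(N_UPPER, DIM_Z)
mouth_vae = TinyVAE(N_MOUTH, DIM_Z)

def denoise(z, audio_feat, emotion_feat):
    # Placeholder for the latent diffusion process; the real model would
    # iteratively denoise conditioned on audio content and emotion features.
    shift = 0.01 * (sum(audio_feat) / len(audio_feat)
                    + sum(emotion_feat) / len(emotion_feat))
    return [zi + shift for zi in z]

frame = [random.gauss(0.0, 1.0) for _ in range(N_UPPER + N_MOUTH)]
z = upper_vae.encode(frame[:N_UPPER]) + mouth_vae.encode(frame[N_UPPER:])
z = denoise(z, audio_feat=[0.2] * 16, emotion_feat=[0.5] * 4)
out = upper_vae.decode(z[:DIM_Z]) + mouth_vae.decode(z[DIM_Z:])
assert len(out) == len(frame)
```

Keeping the diffusion in this compact joint latent, rather than over raw blendshape sequences, is what lets the method condition the upper-face latent on emotion while the mouth latent tracks phonetic content.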