Style-Preserving Lip Sync via Audio-Aware Style Reference

πŸ“… 2024-08-10
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 2
✨ Influential: 1
πŸ€– AI Summary
Audio-driven lip-sync methods often fail to preserve speaker-specific lip shapes and speaking styles, leading to distorted outputs. To address this, we propose an audio-aware style reference mechanism that jointly models semantic correlations between input and reference audio, enabling integrated representation of phonetic content and individual lip-motion dynamics. We further design a two-stage generation framework: the first stage employs a Transformer with cross-modal cross-attention for accurate lip-shape prediction; the second stage integrates conditional latent diffusion with modulated convolutions and spatial cross-attention to enhance fine-grained visual fidelity. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks, achieving significant improvements in lip-sync accuracy and style consistency. The method generates high-fidelity, natural-looking talking-face videos with preserved speaker identity and articulatory characteristics.

πŸ“ Abstract
Audio-driven lip sync has recently drawn significant attention due to its widespread application in the multimedia domain. Individuals exhibit distinct lip shapes when speaking the same utterance, owing to their unique speaking styles, which poses a notable challenge for audio-driven lip sync. Earlier methods for this task often bypassed the modeling of personalized speaking styles, resulting in sub-optimal lip sync conforming to generic styles. Recent lip sync techniques attempt to guide the lip sync for arbitrary audio by aggregating information from a style reference video, yet they cannot preserve the speaking styles well due to their inaccuracy in style aggregation. This work proposes an innovative audio-aware style reference scheme that effectively leverages the relationships between input audio and reference audio from the style reference video to address style-preserving audio-driven lip sync. Specifically, we first develop an advanced Transformer-based model adept at predicting lip motion corresponding to the input audio, augmented by style information aggregated through cross-attention layers from the style reference video. Afterwards, to better render the lip motion into realistic talking face video, we devise a conditional latent diffusion model, integrating lip motion through modulated convolutional layers and fusing reference facial images via spatial cross-attention layers. Extensive experiments validate the efficacy of the proposed approach in achieving precise lip sync, preserving speaking styles, and generating high-fidelity, realistic talking face videos.
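The page carries no code, but the core mechanism in the abstract, aggregating style from a reference video via cross-attention between input-audio queries and reference-audio keys, can be sketched minimally. This is an illustrative numpy sketch, not the authors' implementation; all shapes and variable names (`audio_feats`, `ref_audio`, `ref_motion`) are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: input-audio queries attend
    over reference-audio keys to aggregate reference lip-motion values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (Tq, Tk) similarity matrix
    weights = softmax(scores, axis=-1)       # each query row sums to 1
    return weights @ values                  # style-aggregated features

rng = np.random.default_rng(0)
audio_feats = rng.standard_normal((5, 16))  # input-audio tokens (hypothetical shapes)
ref_audio   = rng.standard_normal((8, 16))  # reference-audio tokens (keys)
ref_motion  = rng.standard_normal((8, 16))  # reference lip-motion tokens (values)

style = cross_attention(audio_feats, ref_audio, ref_motion)
print(style.shape)  # (5, 16): one style vector per input-audio token
```

The "audio-aware" idea is visible here: because the attention weights come from audio-to-audio similarity, each input frame pulls style from reference frames with similar phonetic content, rather than averaging the whole reference clip.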
Problem

Research questions and friction points this paper is trying to address.

Preserve unique speaking styles in audio-driven lip sync
Improve accuracy of style aggregation from reference videos
Generate high-fidelity talking face videos with precise lip motion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Audio-aware style reference for lip sync
Transformer-based model with cross-attention layers
Conditional latent diffusion model for realistic rendering
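The innovation list mentions integrating lip motion through modulated convolutional layers. One common reading of "modulated convolution" (in the StyleGAN2 sense; the paper's exact formulation is not given here) is that the conditioning vector scales the convolution weights per input channel, followed by demodulation. A toy 1-D numpy sketch under that assumption:

```python
import numpy as np

def modulated_conv1d(x, weight, style_scale, eps=1e-8):
    """Toy 1-D modulated convolution: the conditioning vector scales the
    weights per input channel (modulation), then weights are re-normalized
    per output channel (demodulation) to keep activations well-scaled."""
    # weight: (out_ch, in_ch, k); style_scale: (in_ch,); x: (in_ch, T_in)
    w = weight * style_scale[None, :, None]                    # modulate
    demod = 1.0 / np.sqrt((w ** 2).sum(axis=(1, 2)) + eps)
    w = w * demod[:, None, None]                               # demodulate
    out_ch, in_ch, k = w.shape
    T = x.shape[-1] - k + 1                                    # valid convolution
    y = np.zeros((out_ch, T))
    for o in range(out_ch):
        for t in range(T):
            y[o, t] = (w[o] * x[:, t:t + k]).sum()
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 10))    # (in_ch, time): hypothetical feature map
w = rng.standard_normal((8, 4, 3))  # (out_ch, in_ch, kernel)
s = rng.standard_normal(4)          # per-channel scale from the lip-motion condition
y = modulated_conv1d(x, w, s)
print(y.shape)  # (8, 8)
```

In the paper's second stage this kind of layer would inject the predicted lip motion into the diffusion decoder, while spatial cross-attention separately fuses the reference facial images.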
Wei-Tao Zhong
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
Jichang Li
Assistant Researcher@Pengcheng Lab
Agentic Vision, Embodied AI, Visual Content Understanding, Weakly-supervised Learning
Yinqi Cai
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
Liang Lin
Fellow of IEEE/IAPR, Professor of Computer Science, Sun Yat-sen University
Embodied AI, Causal Inference and Learning, Multimodal Data Analysis
Guanbin Li
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China