Audio-visual Controlled Video Diffusion with Masked Selective State Spaces Modeling for Natural Talking Head Generation

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing talking head generation methods predominantly rely on a single driving modality (e.g., audio), limiting flexible, fine-grained, multi-source collaborative control. To address this, we propose a multi-signal-driven (audio/video) talking head generation framework. Our method introduces a parallel Mamba-based multi-branch architecture coupled with a gated masking mechanism, enabling conflict-free, modality-specific control over distinct facial regions. Furthermore, we integrate a video diffusion model with cross-temporal-spatial feature modulation to ensure high visual naturalness, precise lip-sync accuracy, and strong spatiotemporal consistency. Extensive experiments demonstrate that our approach significantly outperforms single-modality baselines, achieving fine-grained controllability and conflict-free multimodal control, with potential for real-time interaction.
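
To make the gated multi-branch idea concrete, here is a minimal sketch of how per-signal branches could be fused through a learned gate. It is an illustrative assumption, not the authors' code: the `GatedParallelBranches` module, the linear stand-ins for the Mamba branches, and the softmax gating are all hypothetical choices.

```python
import torch
import torch.nn as nn

class GatedParallelBranches(nn.Module):
    """Hypothetical gated fusion of per-signal control branches (not the paper's code)."""

    def __init__(self, dim: int, num_branches: int):
        super().__init__()
        # One branch per driving signal (e.g., audio, expression video);
        # plain linear layers stand in for the paper's Mamba blocks.
        self.branches = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_branches)])
        # Gate maps pooled features to one weight per branch.
        self.gate = nn.Linear(dim, num_branches)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim) spatio-temporal video feature tokens.
        weights = torch.softmax(self.gate(x.mean(dim=1)), dim=-1)  # (batch, num_branches)
        out = torch.zeros_like(x)
        for i, branch in enumerate(self.branches):
            # Weight each branch's contribution; a hard 0/1 gate would
            # switch a modality off entirely for single-signal control.
            out = out + weights[:, i, None, None] * branch(x)
        return out

# Tiny usage check: two driving signals over 16 tokens of width 64.
fuse = GatedParallelBranches(dim=64, num_branches=2)
fused = fuse(torch.randn(1, 16, 64))  # -> (1, 16, 64)
```

In this reading, the gate is what lets the same model run under either multi-signal or single-signal control, matching the framework's claim of flexible control over generation.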

📝 Abstract
Talking head synthesis is vital for virtual avatars and human-computer interaction. However, most existing methods are limited to accepting control from a single primary modality, restricting their practical utility. To this end, we introduce ACTalker, an end-to-end video diffusion framework that supports both multi-signal and single-signal control for talking head video generation. For multi-signal control, we design a parallel Mamba structure with multiple branches, each utilizing a separate driving signal to control specific facial regions. A gate mechanism is applied across all branches, providing flexible control over video generation. To ensure that the controlled video is naturally coordinated both temporally and spatially, we employ the Mamba structure, which enables driving signals to manipulate feature tokens across both dimensions in each branch. Additionally, we introduce a mask-drop strategy that allows each driving signal to independently control its corresponding facial region within the Mamba structure, preventing control conflicts. Experimental results demonstrate that our method produces natural-looking facial videos driven by diverse signals and that the Mamba layer seamlessly integrates multiple driving modalities without conflict.
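
As a rough illustration of the mask-drop strategy from the abstract, the sketch below restricts a driving signal's influence to its assigned facial region by masking feature tokens before the branch processes them. The `mask_drop` helper and its zeroing behavior are assumptions; the paper's exact implementation (zeroing versus removing tokens from the sequence) is not specified here.

```python
import torch

def mask_drop(tokens: torch.Tensor, region_mask: torch.Tensor) -> torch.Tensor:
    """Restrict a driving signal's control to its facial region (assumed behavior).

    tokens:      (batch, num_tokens, dim) feature tokens entering one branch.
    region_mask: (batch, num_tokens) boolean mask, True where this signal may
                 exert control (e.g., mouth-area tokens for the audio branch).
    """
    # Zero out tokens outside the region so the branch's state-space scan only
    # carries information from its assigned area; an alternative reading is to
    # drop the masked tokens from the sequence entirely.
    return tokens * region_mask.unsqueeze(-1).to(tokens.dtype)
```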
Problem

Research questions and friction points this paper is trying to address.

Generate talking head videos under multi-signal control
Ensure natural coordination across temporal and spatial dimensions
Prevent control conflicts among diverse driving signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel Mamba structure for multi-signal control (see the selective-scan sketch after this list)
Gate mechanism for flexible control over video generation
Mask-drop strategy to prevent control conflicts
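
Below is a minimal, illustrative stand-in for the selective state-space (Mamba-style) scan referenced above, showing how a single recurrence can propagate driving-signal information over feature tokens flattened across both temporal and spatial dimensions. Real Mamba layers derive their parameters from learned projections of the input and use optimized parallel scans; the `ssm_scan` function, its signature, and the naive loop are assumptions for clarity.

```python
import torch

def ssm_scan(x: torch.Tensor, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Naive selective-scan stand-in over flattened spatio-temporal tokens.

    x, a, b: (batch, seq_len, dim), where seq_len = T * H * W after flattening
    video features, and a / b are input-dependent decay and input gates.
    """
    h = torch.zeros_like(x[:, 0])  # running state, shape (batch, dim)
    outs = []
    for t in range(x.size(1)):
        # Selective recurrence: a controls how much past state is kept,
        # b controls how strongly the current token is written in.
        h = a[:, t] * h + b[:, t] * x[:, t]
        outs.append(h)
    return torch.stack(outs, dim=1)
```

Because the scan runs over one sequence covering all frames and spatial positions, a driving signal injected into `a` and `b` can shape tokens in both dimensions at once, which is one way to read the abstract's claim of temporal and spatial coordination.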