DynamiCtrl: Rethinking the Basic Structure and the Role of Text for High-quality Human Image Animation

📅 2025-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Human image animation faces two key challenges: performance bottlenecks inherent to U-Net architectures and insufficient exploitation of textual information. This paper introduces the first text-enhanced pose-control framework based on MM-DiT, overcoming U-Net limitations to enable high-fidelity animation with robust identity preservation and joint controllability over background and motion. Key contributions include: (1) Pose-adaptive LayerNorm (PadaLN), a novel normalization scheme that dynamically fuses sparse pose features; (2) the first support for fine-grained, synchronous control of both background and motion under text guidance; and (3) elimination of a separate pose encoder in favor of a shared VAE for unified encoding of images and pose videos, augmented with cross-modal text–vision alignment and full-attention mechanisms. Our method achieves state-of-the-art results across multiple benchmarks, significantly improving identity consistency, heterogeneous character driving capability, background controllability, and overall synthesis quality.

📝 Abstract
Human image animation has recently gained significant attention due to advancements in generative models. However, existing methods still face two major challenges: (1) architectural limitations, as most models rely on U-Net, which underperforms compared to MM-DiT; and (2) the neglect of textual information, which can enhance controllability. In this work, we introduce DynamiCtrl, a novel framework that not only explores different pose-guided control structures in MM-DiT, but also reemphasizes the crucial role of text in this task. Specifically, we employ a shared VAE encoder for both reference images and driving pose videos, eliminating the need for an additional pose encoder and simplifying the overall framework. To incorporate pose features into the full attention blocks, we propose Pose-adaptive Layer Norm (PadaLN), which utilizes adaptive layer normalization to encode sparse pose features. The encoded features are directly added to the visual input, preserving the spatiotemporal consistency of the backbone while effectively introducing pose control into MM-DiT. Furthermore, within the full attention mechanism, we align textual and visual features to enhance controllability. By leveraging text, we not only enable fine-grained control over the generated content but also, for the first time, achieve simultaneous control over both background and motion. Experimental results verify the superiority of DynamiCtrl on benchmark datasets, demonstrating its strong identity preservation, heterogeneous character driving, background controllability, and high-quality synthesis. The project page is available at https://gulucaptain.github.io/DynamiCtrl/.
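The abstract describes PadaLN as an adaptive layer norm that encodes sparse pose features, whose output is added directly to the visual tokens so the backbone's token layout is untouched. The following is a minimal numpy sketch of that idea, not the authors' implementation: the conditioning vector `cond`, the projection matrices `W_scale`/`W_shift`, and the choice of what drives the adaptive modulation (e.g. a timestep embedding) are all assumptions for illustration.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the last (channel) dimension, no learned affine.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def pada_ln(visual_tokens, pose_tokens, cond, W_scale, W_shift):
    """Pose-adaptive LayerNorm (sketch).

    Pose tokens are layer-normalized, then modulated by a scale/shift
    pair regressed from a conditioning vector `cond` (assumed here to be
    something like a timestep embedding), and finally added to the
    visual tokens as a residual, so the MM-DiT backbone sees the same
    spatiotemporal token layout as before.
    """
    scale = cond @ W_scale  # (d,) adaptive gain
    shift = cond @ W_shift  # (d,) adaptive bias
    encoded = layer_norm(pose_tokens) * (1.0 + scale) + shift
    return visual_tokens + encoded  # residual pose injection
```

With zero pose features and zero modulation weights, the function reduces to the identity on the visual tokens, which is the usual zero-init trick for injecting a new control signal without disturbing a pretrained backbone.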
Problem

Research questions and friction points this paper is trying to address.

Overcoming U-Net limitations with MM-DiT for human animation
Integrating text for enhanced controllability in image animation
Simplifying pose encoding with Shared VAE and PadaLN
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shared VAE encoder simplifies framework
Pose-adaptive Layer Norm enhances pose control
Text-visual alignment improves controllability
Authors
Haoyu Zhao
Fudan University, China
Zhongang Qi
Huawei Noah’s Ark Lab, China
Cong Wang
Sun Yat-sen University, China
Qingping Zheng
Unknown affiliation
Guansong Lu
ByteDance
Fei Chen
Huawei Noah’s Ark Lab, China
Hang Xu
Huawei Noah’s Ark Lab, China
Zuxuan Wu
Fudan University