Mitigating Error Accumulation in Co-Speech Motion Generation via Global Rotation Diffusion and Multi-Level Constraints

📅 2025-11-13
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing speech-driven motion generation methods model local joint rotations, leading to hierarchical error accumulation manifested as end-effector jitter and motion distortion. To address this, we propose GlobalDiff, the first diffusion-based framework operating directly in the global joint rotation space, thereby eliminating conventional parent-child joint dependencies and suppressing error propagation at its source. GlobalDiff incorporates three structural constraints: (i) joint topology regularization guided by virtual anchor points, (ii) skeletal angle consistency constraints, and (iii) temporal dynamic modeling via a multi-scale variational encoder. Evaluated on standard benchmarks, GlobalDiff achieves a substantial 46.0% improvement over state-of-the-art methods, yielding significantly smoother, more accurate, and temporally coherent motions. Moreover, it demonstrates strong generalization to multi-speaker scenarios without speaker-specific fine-tuning.

๐Ÿ“ Abstract
Reliable co-speech motion generation requires precise motion representation and consistent structural priors across all joints. Existing generative methods typically operate on local joint rotations, which are defined hierarchically based on the skeleton structure. This leads to cumulative errors during generation, manifesting as unstable and implausible motions at the end-effectors. In this work, we propose GlobalDiff, a diffusion-based framework that, for the first time, operates directly in the space of global joint rotations, fundamentally decoupling each joint's prediction from upstream dependencies and alleviating hierarchical error accumulation. To compensate for the absence of structural priors in the global rotation space, we introduce a multi-level constraint scheme. Specifically, a joint structure constraint introduces virtual anchor points around each joint to better capture fine-grained orientation. A skeleton structure constraint enforces angular consistency across bones to maintain structural integrity. A temporal structure constraint utilizes a multi-scale variational encoder to align the generated motion with ground-truth temporal patterns. These constraints jointly regularize the global diffusion process and reinforce structural awareness. Extensive evaluations on standard co-speech benchmarks show that GlobalDiff generates smooth and accurate motions, improving performance by 46.0% over the current SOTA across multiple speaker identities.
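The hierarchical error accumulation the abstract describes comes from how local joint rotations are composed along the kinematic chain: each joint's global orientation is the product of all rotations from the root down to it, so any error in a parent contaminates every descendant. A minimal sketch of that composition (the skeleton, function name, and array shapes are illustrative, not from the paper):

```python
import numpy as np

def local_to_global(local_rots, parents):
    """Compose per-joint local rotation matrices into global rotations.

    local_rots: (J, 3, 3) local rotation matrix per joint
    parents:    parent joint index per joint (-1 for the root)

    Because globals_[j] multiplies in every ancestor's rotation, an
    error in a parent propagates to all descendants -- the accumulation
    that predicting global rotations directly avoids.
    """
    globals_ = np.empty_like(local_rots)
    for j, p in enumerate(parents):
        globals_[j] = local_rots[j] if p < 0 else globals_[p] @ local_rots[j]
    return globals_
```

For a three-joint chain with a 90° rotation at the root and identity local rotations below it, every downstream joint inherits the root's rotation, which is exactly why a small local error near the root shows up as large drift at the end-effectors.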
Problem

Research questions and friction points this paper is trying to address.

Mitigating cumulative errors in co-speech motion generation
Addressing unstable motions at end-effectors via global rotations
Compensating for structural prior absence with multi-level constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Global rotation diffusion for joint prediction decoupling
Multi-level constraints for structural integrity
Virtual anchor points for fine-grained orientation capture
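One way to read the skeleton structure constraint is as a penalty on the angles between bone directions: the generated motion should preserve the same inter-bone angles as the ground truth. The sketch below is an assumed formulation (the function, its signature, and the cosine-based penalty are hypothetical, not the paper's actual loss):

```python
import numpy as np

def bone_angle_loss(pred_dirs, gt_dirs, pairs):
    """Hypothetical skeletal angle consistency penalty.

    pred_dirs, gt_dirs: (B, 3) unit bone-direction vectors
    pairs: (i, j) index pairs of adjacent bones whose angle should match

    Penalizes the squared difference between predicted and ground-truth
    cosines of each inter-bone angle, averaged over all pairs.
    """
    loss = 0.0
    for i, j in pairs:
        cos_pred = pred_dirs[i] @ pred_dirs[j]
        cos_gt = gt_dirs[i] @ gt_dirs[j]
        loss += (cos_pred - cos_gt) ** 2
    return loss / len(pairs)
```

A penalty of this shape is zero when the prediction reproduces the ground-truth bone angles exactly, and grows as the skeleton's internal geometry deviates, which matches the stated goal of maintaining structural integrity without relying on parent-child rotation dependencies.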
Xiangyue Zhang
Tongyi Lab, Alibaba Group
Jianfang Li
Tongyi Lab, Alibaba Group
Jianqiang Ren
Tongyi Lab, Alibaba Group
Jiaxu Zhang
Wuhan University
computer vision · generative AI · 2D/3D character animation · MLLM