AI Summary
Existing speech-driven motion generation methods model local joint rotations, leading to hierarchical error accumulation that manifests as end-effector jitter and motion distortion. To address this, we propose GlobalDiff, the first diffusion-based framework operating directly in the global joint rotation space, thereby eliminating conventional parent-child joint dependencies and suppressing error propagation at its source. GlobalDiff incorporates three structural constraints: (i) joint topology regularization guided by virtual anchor points, (ii) skeletal angle consistency constraints, and (iii) temporal dynamic modeling via a multi-scale variational encoder. Evaluated on standard benchmarks, GlobalDiff achieves a 46.0% improvement over state-of-the-art methods, yielding smoother, more accurate, and temporally coherent motions. Moreover, it generalizes to multi-speaker scenarios without speaker-specific fine-tuning.
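The hierarchical error accumulation motivating GlobalDiff can be illustrated with a minimal sketch: when each joint's rotation is expressed locally, relative to its parent, per-joint prediction errors compose multiplicatively down the kinematic chain, whereas predicting a joint's global rotation directly keeps the error at the single-joint level. The chain depth, per-joint noise magnitude, and use of planar (z-axis) rotations below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rot_z(theta):
    """3x3 rotation matrix about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

depth = 10    # joints in the kinematic chain (illustrative)
noise = 0.05  # per-joint angular error in radians (illustrative)

# Local parameterization: each joint's rotation is composed with all of
# its ancestors, so per-joint errors accumulate along the chain.
true_global = np.eye(3)
noisy_global = np.eye(3)
for _ in range(depth):
    true_global = true_global @ rot_z(0.1)
    noisy_global = noisy_global @ rot_z(0.1 + noise)

def angular_error(R_pred, R_true):
    """Geodesic angle between two rotation matrices."""
    cos_err = (np.trace(R_pred.T @ R_true) - 1.0) / 2.0
    return np.arccos(np.clip(cos_err, -1.0, 1.0))

# End-effector error under the local parameterization: errors sum
# along the chain (0.05 rad per joint * 10 joints).
local_err = angular_error(noisy_global, true_global)

# Global parameterization: the end-effector rotation is predicted
# directly, so its error stays at the single-joint level.
direct = rot_z(0.1 * depth + noise)
global_err = angular_error(direct, true_global)

print(round(local_err, 3), round(global_err, 3))  # → 0.5 0.05
```

This is the effect the abstract describes as "cumulative errors during generation": ten joints with 0.05 rad of local error each yield 0.5 rad of drift at the end-effector, while a direct global prediction with the same per-joint error keeps it at 0.05 rad.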
Abstract
Reliable co-speech motion generation requires precise motion representation and consistent structural priors across all joints. Existing generative methods typically operate on local joint rotations, which are defined hierarchically according to the skeleton structure. This leads to cumulative errors during generation, manifesting as unstable and implausible motions at end-effectors. In this work, we propose GlobalDiff, a diffusion-based framework that, for the first time, operates directly in the space of global joint rotations, fundamentally decoupling each joint's prediction from upstream dependencies and alleviating hierarchical error accumulation. To compensate for the absence of structural priors in the global rotation space, we introduce a multi-level constraint scheme. Specifically, a joint structure constraint introduces virtual anchor points around each joint to better capture fine-grained orientation. A skeleton structure constraint enforces angular consistency across bones to maintain structural integrity. A temporal structure constraint utilizes a multi-scale variational encoder to align the generated motion with ground-truth temporal patterns. These constraints jointly regularize the global diffusion process and reinforce structural awareness. Extensive evaluations on standard co-speech benchmarks show that GlobalDiff generates smooth and accurate motions, improving performance by 46.0% over the current state of the art across multiple speaker identities.
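The skeleton structure constraint, enforcing angular consistency across bones, could take the form of a loss that matches the angles between adjacent bone direction vectors in generated and ground-truth motion. The sketch below is an illustrative assumption: the function name, the `(batch, time, bones, 3)` array layout, and the choice of adjacent-pair angles are hypothetical, not the paper's implementation.

```python
import numpy as np

def bone_angle_loss(pred_dirs, gt_dirs, eps=1e-8):
    """Penalize deviation in the angles between adjacent bones.

    pred_dirs, gt_dirs: arrays of shape (batch, time, bones, 3) holding
    bone direction vectors (hypothetical layout). The loss compares the
    cosine of the angle between each pair of consecutive bones in the
    prediction against the same quantity in the ground truth.
    """
    def adjacent_cosines(dirs):
        a, b = dirs[..., :-1, :], dirs[..., 1:, :]  # consecutive bone pairs
        num = (a * b).sum(axis=-1)
        den = (np.linalg.norm(a, axis=-1)
               * np.linalg.norm(b, axis=-1) + eps)
        return num / den

    # Mean squared difference of inter-bone cosines; zero when the
    # generated skeleton reproduces the ground-truth bone angles.
    return np.mean((adjacent_cosines(pred_dirs)
                    - adjacent_cosines(gt_dirs)) ** 2)
```

Because the loss depends only on angles between bones, it is invariant to the skeleton's overall global orientation, which is a natural fit for regularizing a model that predicts rotations in a global frame.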