🤖 AI Summary
Existing motion generation models struggle to maintain kinematic continuity at transition boundaries when concatenating multi-semantic motion segments, resulting in jerky artifacts. To address this, we propose Compositional Phase Diffusion, a framework comprising a Semantic Phase Diffusion Module (SPDM) and a Transitional Phase Diffusion Module (TPDM). Operating within the latent frequency domain of the pre-trained Action-Centric Motion Phase Autoencoder (ACT-PAE), our approach jointly models semantic consistency and transition-phase continuity. The framework enables end-to-end long-sequence generation, controllable motion transitions, and motion inbetweening under fixed phase conditions. Experiments demonstrate significant improvements in semantic alignment and transition naturalness for composite motion sequences, validating the framework's generality and practical effectiveness across compositional generation and inbetweening tasks.
📝 Abstract
Recent research on motion generation has shown significant progress in generating motion that is semantically aligned with a single semantic condition. However, when employing these models to create composite sequences containing multiple semantically generated motion clips, they often struggle to preserve the continuity of motion dynamics at the transition boundaries between clips, resulting in awkward transitions and abrupt artifacts. To address these challenges, we present Compositional Phase Diffusion, which leverages the Semantic Phase Diffusion Module (SPDM) and the Transitional Phase Diffusion Module (TPDM) to progressively incorporate semantic guidance and phase details from adjacent motion clips into the diffusion process. Specifically, SPDM and TPDM operate within the latent motion frequency domain established by the pre-trained Action-Centric Motion Phase Autoencoder (ACT-PAE). This allows them to learn semantically important and transition-aware phase information from variable-length motion clips during training. Experimental results demonstrate the competitive performance of our proposed framework in generating compositional motion sequences that align semantically with the input conditions, while preserving phase transitional continuity between preceding and succeeding motion clips. Additionally, the motion inbetweening task is made possible by keeping the phase parameters of the input motion sequences fixed throughout the diffusion process, showcasing the potential for extending the proposed framework to accommodate various application scenarios. Code is available at https://github.com/asdryau/TransPhase.
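The fixed-phase inbetweening idea in the abstract can be illustrated with a minimal, hypothetical sketch: an inpainting-style sampling loop that re-imposes the known latent phase entries after every denoising step, so the model only synthesizes the unknown in-between portion. The denoiser, latent shapes, and mask layout below are illustrative placeholders, not the paper's actual ACT-PAE latent space or trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(z, t):
    # Placeholder denoiser: stands in for the trained phase-diffusion
    # model's reverse step; here it just shrinks the latent toward zero.
    return z * 0.9

def sample_with_fixed_phase(z_known, fixed_mask, num_steps=50):
    """Inpainting-style sampling: latent entries under `fixed_mask` are
    reset to the known input after every step, so only the unmasked
    (in-between) portion is actually generated."""
    z = rng.standard_normal(z_known.shape)
    for t in range(num_steps, 0, -1):
        z = denoise_step(z, t)
        # Keep the phase parameters of the input sequences fixed.
        z = np.where(fixed_mask, z_known, z)
    return z

# Toy latent: 8 frames x 4 phase channels; the first and last two
# frames play the role of known keyframe clips.
z_known = rng.standard_normal((8, 4))
mask = np.zeros((8, 4), dtype=bool)
mask[:2] = True
mask[-2:] = True
out = sample_with_fixed_phase(z_known, mask)
```

The masked frames survive sampling unchanged while the middle frames are freshly synthesized, which is the essence of holding phase parameters fixed during diffusion.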