🤖 AI Summary
This study addresses fundamental challenges in human motion generation, specifically motion representation design and loss function formulation. We propose vMDM, a lightweight surrogate diffusion model, to systematically evaluate six mainstream motion representations across multiple datasets in terms of generation quality, diversity, and training efficiency. Crucially, we are the first to introduce v-loss, a unified prediction objective, into this setting. Through controlled ablation experiments, we show that the choice of motion representation critically governs latent-space distribution modeling and conditional generation performance. Notably, representations combining joint velocities with rotation matrices yield substantial improvements: up to 2.3× faster convergence and significantly lower FID scores. Our findings provide both theoretical insights and empirical guidelines for motion representation selection and loss optimization in diffusion-based motion generation frameworks.
📝 Abstract
Diffusion models have emerged as a widely used and successful methodology for human motion synthesis. Task-oriented diffusion models have significantly advanced action-to-motion, text-to-motion, and audio-to-motion applications. In this paper, we investigate fundamental questions about motion representations and loss functions in a controlled study, and we enumerate the impact of various design decisions in the workflow of a generative motion diffusion model. To answer these questions, we conduct empirical studies on a proxy motion diffusion model (MDM). We apply v-loss as the prediction objective on MDM (vMDM), where v is a weighted sum of the motion data and the noise. We aim to deepen the understanding of latent data distributions and provide a foundation for improving the state of conditional motion diffusion models. First, we evaluate six common motion representations from the literature and compare their performance on quality and diversity metrics. Second, we compare training time under various configurations to shed light on how to speed up the training of motion diffusion models. Finally, we conduct an evaluation on a large motion dataset. Our experiments reveal clear performance differences across motion representations on diverse datasets. The results also demonstrate the impact of distinct configurations on model training and underscore the importance and effectiveness of these decisions for the outcomes of motion diffusion models.
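The abstract defines v only loosely as a weighted sum of the motion data and the noise. A minimal sketch of one common instantiation, assuming the standard v-parameterization (v = √ᾱ_t·ε − √(1−ᾱ_t)·x₀, with ᾱ_t the cumulative noise-schedule coefficient) rather than the paper's exact weighting, which is not specified here:

```python
import numpy as np

def v_target(x0, noise, alpha_bar_t):
    """v-prediction target: a weighted sum of clean data x0 and noise.

    Assumes the common parameterization v = sqrt(alpha_bar_t) * noise
    - sqrt(1 - alpha_bar_t) * x0; the paper's exact weights may differ.
    """
    a = np.sqrt(alpha_bar_t)        # weight on the noise term
    s = np.sqrt(1.0 - alpha_bar_t)  # weight on the data term
    return a * noise - s * x0

def v_loss(v_pred, x0, noise, alpha_bar_t):
    """Mean-squared error between the model's v prediction and the target."""
    return float(np.mean((v_pred - v_target(x0, noise, alpha_bar_t)) ** 2))
```

Under this parameterization, the target interpolates between pure noise (early timesteps, ᾱ_t near 1) and negated data (late timesteps, ᾱ_t near 0), which is often credited with stabilizing training relative to pure noise prediction.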