MotionLab: Unified Human Motion Generation and Editing via the Motion-Condition-Motion Paradigm

📅 2025-02-04
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing motion generation and editing methods are typically designed in isolation for specific tasks, lacking a unified framework that supports fine-grained editing, cross-task knowledge sharing, and coordination of multimodal conditions. To address this, the authors propose MotionLab, the first unified framework to integrate motion generation and editing. It introduces the novel Motion-Condition-Motion paradigm, which models diverse tasks via a ternary structure comprising source motion, conditional signal, and target motion. Methodologically, the framework combines the MotionFlow Transformer, Aligned Rotational Position Encoding, Task Specified Instruction Modulation, and Motion Curriculum Learning, and formulates the motion mapping via rectified flows. Extensive experiments on multiple benchmarks demonstrate significant improvements in generalization and inference efficiency. MotionLab supports zero-shot editing, cross-modal conditional generation, and other complex scenarios. Code and demonstration videos are publicly available.
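Concretely, the rectified-flow objective named in the summary regresses a straight-line velocity between source and target motion and samples by integrating the learned ODE. The sketch below is a minimal illustration of that formulation, assuming a generic `velocity_net(x_t, t, cond)` and (batch, frames, features) tensors; it is not MotionLab's actual implementation.

```python
import torch

def rectified_flow_loss(velocity_net, x_src, x_tgt, cond):
    """Regress the constant velocity (x_tgt - x_src) along the straight
    path between source and target motion, as in rectified flow."""
    b = x_src.shape[0]
    t = torch.rand(b, 1, 1, device=x_src.device)   # per-sample time in [0, 1]
    x_t = (1.0 - t) * x_src + t * x_tgt            # linear interpolation
    v_pred = velocity_net(x_t, t, cond)
    return ((v_pred - (x_tgt - x_src)) ** 2).mean()

@torch.no_grad()
def sample(velocity_net, x_src, cond, steps=50):
    """Integrate the learned ODE from source to target with Euler steps;
    few steps suffice because rectified-flow paths are nearly straight."""
    x, dt = x_src.clone(), 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0], 1, 1), i * dt, device=x.device)
        x = x + dt * velocity_net(x, t, cond)
    return x
```

The near-straight paths are what make the reported inference-efficiency gains plausible: a handful of Euler steps can replace the long sampling chains that diffusion-based motion models require.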

📝 Abstract
Human motion generation and editing are key components of computer graphics and vision. However, current approaches in this field tend to offer isolated solutions tailored to specific tasks, which can be inefficient and impractical for real-world applications. While some efforts have aimed to unify motion-related tasks, these methods simply use different modalities as conditions to guide motion generation. Consequently, they lack editing capabilities and fine-grained control, and fail to facilitate knowledge sharing across tasks. To address these limitations and provide a versatile, unified framework capable of handling both human motion generation and editing, we introduce a novel paradigm: Motion-Condition-Motion, which enables the unified formulation of diverse tasks with three concepts: source motion, condition, and target motion. Based on this paradigm, we propose a unified framework, MotionLab, which incorporates rectified flows to learn the mapping from source motion to target motion, guided by the specified conditions. In MotionLab, we introduce the 1) MotionFlow Transformer to enhance conditional generation and editing without task-specific modules; 2) Aligned Rotational Position Encoding to guarantee the time synchronization between source motion and target motion; 3) Task Specified Instruction Modulation; and 4) Motion Curriculum Learning for effective multi-task learning and knowledge sharing across tasks. Notably, our MotionLab demonstrates promising generalization capabilities and inference efficiency across multiple benchmarks for human motion. Our code and additional video results are available at: https://diouo.github.io/motionlab.github.io/.
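The abstract describes Aligned Rotational Position Encoding as guaranteeing time synchronization between source and target motion. A plausible reading is that both sequences are rotated with the same frame indices, so attention treats tokens at the same frame as temporally aligned. The sketch below illustrates that idea with a standard rotary encoding; `rope_rotate` and the index-sharing scheme are our assumptions, not MotionLab's code.

```python
import torch

def rope_rotate(x, pos, base=10000.0):
    """Apply rotary position encoding to x of shape (batch, seq, dim),
    using explicit per-token positions `pos` of shape (batch, seq)."""
    half = x.shape[-1] // 2
    freqs = base ** (-torch.arange(half, device=x.device) / half)  # (half,)
    angles = pos[..., None] * freqs                                # (batch, seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Source and target motion reuse identical frame indices, so a source token
# at frame k and a target token at frame k share the same rotary phase.
frames = torch.arange(60, dtype=torch.float32)[None]  # (1, 60) shared indices
src_tokens = rope_rotate(torch.randn(1, 60, 64), frames)
tgt_tokens = rope_rotate(torch.randn(1, 60, 64), frames)
```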
Problem

Research questions and friction points this paper is trying to address.

Motion generation and editing are handled by isolated, task-specific methods.
Existing unified approaches lack editing capability and fine-grained control.
Knowledge is not shared across diverse motion tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Motion-Condition-Motion paradigm (see the sketch after this list)
MotionFlow Transformer enhances conditional generation and editing without task-specific modules
Aligned Rotational Position Encoding ensures time synchronization between source and target motion
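To make the ternary structure concrete, the sketch below casts several motion tasks into (source motion, condition, target motion) triples. The `MCMTask` container, the task names, and the use of `None` for "no source motion" are illustrative assumptions, not MotionLab's actual interface.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class MCMTask:
    source_motion: Optional[Any]  # None when generating from scratch
    condition: Any                # e.g. text, trajectory, keyframes
    target_motion: Any            # the motion the model should produce

# String placeholders stand in for real motion and condition data.
SRC, TGT, TRAJ, KEYS = "src_motion", "tgt_motion", "trajectory", "keyframes"

tasks = {
    "text-to-motion":     MCMTask(None, "a person waves", TGT),
    "text-based editing": MCMTask(SRC, "raise the left arm higher", TGT),
    "trajectory control": MCMTask(None, TRAJ, TGT),
    "in-betweening":      MCMTask(None, KEYS, TGT),
}
```

Because every task shares this one formulation, a single model can be trained across all of them, which is what enables the cross-task knowledge sharing and zero-shot editing highlighted above.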
👥 Authors
Ziyan Guo
Singapore University of Technology and Design, Singapore
Zeyu Hu
LightSpeed Studios, Singapore
Na Zhao
Singapore University of Technology and Design, Singapore
De Wen Soh
Singapore University of Technology and Design, Singapore
machine learning · computer vision · natural language processing · AI · network algorithms