SoPo: Text-to-Motion Generation Using Semi-Online Preference Optimization

📅 2024-12-06
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Text-to-motion generation suffers from insufficient motion coherence, realism, and misalignment with human preferences. To address these challenges, the authors analyze offline and online Direct Preference Optimization (DPO) and identify an inherent limitation of each: overfitting in the offline setting and biased sampling in the online setting. They then propose Semi-online Preference Optimization (SoPo), which introduces a "semi-online" pairwise data paradigm: online-generated low-quality motions are paired with offline-curated high-quality ones so that each setting compensates for the other's weakness. Built on the DPO framework, SoPo combines online policy sampling, offline static preference modeling, and gradient-coordinated parameter updates. Evaluated on MLD and MDM, SoPo improves MM-Dist by 3.25% and 2.91%, respectively, significantly outperforming MoDiPO. After fine-tuning MLD, SoPo attains state-of-the-art performance in both R-precision and MM-Dist.

📝 Abstract
Text-to-motion generation is essential for advancing the creative industry but often presents challenges in producing consistent, realistic motions. To address this, we focus on fine-tuning text-to-motion models to consistently favor high-quality, human-preferred motions, a critical yet largely unexplored problem. In this work, we theoretically investigate DPO under both online and offline settings, and reveal their respective limitations: overfitting in offline DPO, and biased sampling in online DPO. Building on our theoretical insights, we introduce Semi-online Preference Optimization (SoPo), a DPO-based method for training text-to-motion models using "semi-online" data pairs, consisting of unpreferred motions from the online distribution and preferred motions from offline datasets. This method leverages both online and offline DPO, allowing each to compensate for the other's limitations. Extensive experiments demonstrate that SoPo outperforms other preference alignment methods, with an MM-Dist improvement of 3.25% (vs. 0.76% for MoDiPO) on the MLD model and 2.91% (vs. 0.66% for MoDiPO) on the MDM model. Additionally, the MLD model fine-tuned by our SoPo surpasses the SoTA model in terms of R-precision and MM-Dist. Visualization results also show the efficacy of our SoPo in preference alignment. Our project page is https://sopo-motion.github.io.
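For readers unfamiliar with the objective being adapted, the standard per-pair DPO loss that SoPo builds on can be sketched as follows. The pairing comments encode the "semi-online" idea from the abstract; the log-probability values and function names are illustrative only, not the paper's implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(logp_w_policy, logp_w_ref, logp_l_policy, logp_l_ref, beta=0.1):
    # Standard per-pair DPO objective:
    #   L = -log sigmoid( beta * [ (log pi(y_w) - log pi_ref(y_w))
    #                              - (log pi(y_l) - log pi_ref(y_l)) ] )
    margin = beta * ((logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref))
    return -math.log(sigmoid(margin))

# Semi-online pairing, as described in the abstract:
#   y_w (preferred)   : a high-quality motion taken from an offline, curated dataset
#   y_l (unpreferred) : a low-quality motion sampled online from the current policy
# Example with illustrative log-probabilities (not real model outputs):
loss = dpo_loss(logp_w_policy=-4.0, logp_w_ref=-5.0,
                logp_l_policy=-3.0, logp_l_ref=-3.5, beta=0.1)
```

A zero margin gives a loss of log 2; the loss decreases as the policy assigns relatively more probability to the preferred motion than the reference model does.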
Problem

Research questions and friction points this paper is trying to address.

Improving text-to-motion generation for realistic motions
Addressing limitations of online and offline DPO methods
Enhancing preference alignment in motion generation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-online data pair optimization
Combines online and offline DPO
Improves motion quality and preference
Xiaofeng Tan
Research Intern at Tencent; Master's student at Southeast University; dual BSc at Shenzhen University.
AIGC, RLHF
Hongsong Wang
Department of Computer Science and Engineering, Southeast University, Nanjing, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications
Xin Geng
School of Computer Science and Engineering, Southeast University
Artificial Intelligence, Pattern Recognition, Machine Learning
Pan Zhou
Singapore Management University