🤖 AI Summary
Existing CLIP text encoders are trained solely on image–text pairs and therefore lack explicit modeling capacity for the temporal dynamics and kinematic structure of human motion, limiting the accuracy and dynamism of text-to-human-motion generation. To address this, we propose the first motion-aware CLIP fine-tuning paradigm, built on three components: (1) a dedicated motion encoding head; (2) a tethering loss that aligns textual semantics with motion structural priors while keeping the fine-tuned encoder close to the original CLIP embedding space; and (3) an integrated training strategy combining contrastive multimodal fine-tuning, motion sequence encoding, and CLIP knowledge distillation. On text-to-motion retrieval, our method achieves significant improvements in Top-1/2/3 accuracy; Fréchet Inception Distance (FID) remains competitive, while text–motion semantic alignment improves markedly. This work constitutes the first successful adaptation of CLIP's powerful semantic representation capability to the domain of temporally coherent, physically grounded human motion generation.
📝 Abstract
Human motion generation is essential for fields such as animation, robotics, and virtual reality, requiring models that effectively capture motion dynamics from text descriptions. Existing approaches often rely on Contrastive Language-Image Pretraining (CLIP)-based text encoders, but their training on text–image pairs constrains their ability to capture the temporal and kinematic structure inherent in human motion. This work introduces MoCLIP, a fine-tuned CLIP model with an additional motion encoding head, trained on motion sequences with a contrastive objective and a tethering loss. By explicitly incorporating motion-aware representations, MoCLIP enhances motion fidelity while remaining compatible with existing CLIP-based pipelines, allowing it to integrate seamlessly into various CLIP-based methods. Experiments demonstrate that MoCLIP improves Top-1, Top-2, and Top-3 retrieval accuracy while maintaining a competitive FID, leading to improved text-to-motion alignment. These results highlight MoCLIP's versatility and effectiveness, establishing it as a robust framework for enhancing motion generation.
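The training objective described above — a contrastive text–motion term plus a tethering term that keeps the fine-tuned text embeddings near the original CLIP space — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the symmetric InfoNCE form of the contrastive term, the squared-L2 form of the tethering term, and the temperature value are all assumptions made for the sketch.

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(text_embs, motion_embs, temperature=0.07):
    # symmetric InfoNCE over matched (text_i, motion_i) pairs:
    # each text should retrieve its own motion, and vice versa
    n = len(text_embs)
    sims = [[cosine(t, m) / temperature for m in motion_embs] for t in text_embs]
    loss = 0.0
    for i in range(n):
        row = sims[i]                          # text i vs. all motions
        col = [sims[j][i] for j in range(n)]   # motion i vs. all texts
        loss += -math.log(math.exp(row[i]) / sum(math.exp(s) for s in row))
        loss += -math.log(math.exp(col[i]) / sum(math.exp(s) for s in col))
    return loss / (2 * n)

def tethering_loss(finetuned_text_embs, frozen_clip_embs):
    # assumed squared-L2 tether: penalize drift of the fine-tuned text
    # embeddings away from the frozen original-CLIP embeddings, so the
    # encoder stays usable in existing CLIP-based pipelines
    n = len(finetuned_text_embs)
    return sum(
        sum((a - b) ** 2 for a, b in zip(f, c))
        for f, c in zip(finetuned_text_embs, frozen_clip_embs)
    ) / n

def total_loss(text_embs, motion_embs, frozen_clip_embs, tether_weight=0.1):
    # tether_weight is a hypothetical hyperparameter balancing the two terms
    return (contrastive_loss(text_embs, motion_embs)
            + tether_weight * tethering_loss(text_embs, frozen_clip_embs))
```

With matched pairs the contrastive term is small, and with shuffled pairs it grows, which is the behavior the retrieval metrics (Top-1/2/3 accuracy) probe; the tether term vanishes when the fine-tuned embeddings coincide with the frozen CLIP ones.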