Lang2Motion: Bridging Language and Motion through Joint Embedding Spaces

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper introduces the first language-guided 3D point trajectory generation framework for generic objects, addressing the novel problem of synthesizing arbitrary object motion trajectories directly from natural language descriptions. Methodologically, it employs a frozen CLIP encoder to align textual and rendered-trajectory visual representations in a shared embedding space, modeling the motion manifold via a Transformer-based autoencoder; dual supervision (text and rendered trajectories) and motion priors extracted from point tracking in real videos further enhance fidelity. Its key contribution lies in extending language-to-motion generation beyond human-centric or video-based settings to generic objects, enabling cross-domain transfer, style transfer, semantic interpolation, and latent-space editing. Experiments demonstrate state-of-the-art performance: text-to-trajectory retrieval Recall@1 reaches 34.2% (+12.5 points over prior video-based methods), average displacement error is 12.4 (versus 18.3–25.3 for video-generation baselines), and action recognition Top-1 accuracy achieves 88.3%.
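The summary describes dual supervision that maps trajectory and text representations into CLIP's shared embedding space. Below is a minimal sketch of the kind of symmetric contrastive (InfoNCE) alignment loss commonly used for this; the function name, temperature value, and batch layout are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def clip_alignment_loss(traj_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss between trajectory and text embeddings.

    traj_emb, text_emb: (N, D) arrays; row i of each is a matched pair.
    This is a generic CLIP-style alignment sketch, not the paper's code.
    """
    # L2-normalize so the similarity is cosine similarity, as in CLIP
    t = traj_emb / np.linalg.norm(traj_emb, axis=1, keepdims=True)
    x = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = t @ x.T / temperature      # (N, N) similarity matrix
    labels = np.arange(len(logits))     # matched pairs sit on the diagonal

    def cross_entropy(lg, lb):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lb)), lb].mean()

    # average the trajectory->text and text->trajectory directions
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

With perfectly matched embeddings the loss approaches zero; shuffling one side raises it, which is the signal that pulls matched trajectory/text pairs together in the joint space.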

📝 Abstract
We present Lang2Motion, a framework for language-guided point trajectory generation by aligning motion manifolds with joint embedding spaces. Unlike prior work focusing on human motion or video synthesis, we generate explicit trajectories for arbitrary objects using motion extracted from real-world videos via point tracking. Our transformer-based auto-encoder learns trajectory representations through dual supervision: textual motion descriptions and rendered trajectory visualizations, both mapped through CLIP's frozen encoders. Lang2Motion achieves 34.2% Recall@1 on text-to-trajectory retrieval, outperforming video-based methods by 12.5 points, and improves motion accuracy by 33-52% (12.4 ADE vs 18.3-25.3) compared to video generation baselines. We demonstrate 88.3% Top-1 accuracy on human action recognition despite training only on diverse object motions, showing effective transfer across motion domains. Lang2Motion supports style transfer, semantic interpolation, and latent-space editing through CLIP-aligned trajectory representations.
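The abstract reports motion accuracy as ADE (average displacement error). Conventionally, ADE is the Euclidean distance between predicted and ground-truth points, averaged over all points and timesteps; a minimal sketch below, where the `(T, N, 3)` array layout (T timesteps, N tracked 3D points) is an assumption for illustration.

```python
import numpy as np

def average_displacement_error(pred, gt):
    """ADE: mean Euclidean distance between predicted and ground-truth points.

    pred, gt: (T, N, 3) arrays of N tracked 3D points over T timesteps.
    Returns the displacement averaged over all points and timesteps.
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()
```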
Problem

Research questions and friction points this paper is trying to address.

Generating explicit object motion trajectories from natural language
Aligning motion manifolds with text via joint embedding spaces
Transferring learned motion representations across domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns motion manifolds with joint embedding spaces
Uses transformer auto-encoder with dual CLIP supervision
Generates explicit trajectories from real-world video tracking
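The semantic interpolation and latent-space editing mentioned in the abstract are often implemented with spherical interpolation (slerp) between two latent codes, so that intermediate codes stay near the embedding manifold rather than cutting through it. A hedged sketch under that assumption, not the paper's implementation:

```python
import numpy as np

def slerp(z0, z1, alpha):
    """Spherical interpolation between two latent codes (generic sketch).

    Assumes the latent manifold is roughly hyperspherical, so walking the
    arc between codes preserves vector norm better than a straight line.
    """
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))  # angle between codes
    if np.isclose(omega, 0.0):
        return (1 - alpha) * z0 + alpha * z1  # nearly parallel: fall back to lerp
    return (np.sin((1 - alpha) * omega) * z0
            + np.sin(alpha * omega) * z1) / np.sin(omega)
```

Decoding the interpolated code through the trajectory autoencoder would then yield motions that blend the semantics of the two endpoints.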