🤖 AI Summary
This paper introduces the first language-guided 3D point trajectory generation framework for generic objects, addressing the novel problem of synthesizing arbitrary object motion trajectories directly from natural language descriptions. Methodologically, it employs CLIP's frozen text and image encoders to align textual and rendered-trajectory visual representations in a shared embedding space, modeling the motion manifold via a Transformer-based autoencoder; dual supervision (text and rendered trajectories) and motion priors extracted from point tracking in real videos further enhance fidelity. Its key contribution lies in extending language-to-motion generation beyond human-centric or video-based settings to generic objects, enabling cross-domain transfer, style transfer, semantic interpolation, and latent-space editing. Experiments demonstrate state-of-the-art performance: text-to-trajectory retrieval Recall@1 reaches 34.2% (+12.5 points over prior video-based methods), average displacement error (ADE) is 12.4 (significantly lower than the 18.3–25.3 of video-generation baselines), and human action recognition Top-1 accuracy reaches 88.3%.
📝 Abstract
We present Lang2Motion, a framework for language-guided point trajectory generation that aligns motion manifolds with a joint embedding space. Unlike prior work focused on human motion or video synthesis, we generate explicit trajectories for arbitrary objects using motion extracted from real-world videos via point tracking. Our transformer-based autoencoder learns trajectory representations through dual supervision: textual motion descriptions and rendered trajectory visualizations, both mapped through CLIP's frozen encoders. Lang2Motion achieves 34.2% Recall@1 on text-to-trajectory retrieval, outperforming video-based methods by 12.5 points, and improves motion accuracy by 33-52% (12.4 ADE vs. 18.3-25.3) over video-generation baselines. Despite training only on diverse object motions, Lang2Motion reaches 88.3% Top-1 accuracy on human action recognition, demonstrating effective transfer across motion domains. It also supports style transfer, semantic interpolation, and latent-space editing through its CLIP-aligned trajectory representations.
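The dual-supervision objective described above pairs each trajectory embedding with a CLIP embedding of its text description (and of its rendered visualization). A minimal sketch of the core alignment term is a CLIP-style symmetric contrastive loss over a batch of matched pairs; everything below (function names, the toy embeddings, the temperature value) is illustrative and not taken from the paper's actual implementation:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere, as CLIP does before scoring."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_alignment_loss(traj_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss between trajectory and text embeddings.

    traj_emb, text_emb: (B, D) arrays where row i of each is a matched pair,
    e.g. the autoencoder's latent for trajectory i and the frozen CLIP
    embedding of its caption. This is a sketch of CLIP-style joint-embedding
    alignment, not Lang2Motion's released code.
    """
    t = l2_normalize(traj_emb)
    x = l2_normalize(text_emb)
    logits = t @ x.T / temperature  # (B, B) cosine similarities, scaled
    labels = np.arange(logits.shape[0])  # matched pairs sit on the diagonal

    def xent(lg):
        # numerically stable cross-entropy with diagonal targets
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the trajectory->text and text->trajectory directions
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy usage: perfectly aligned pairs score lower than shuffled pairs.
rng = np.random.default_rng(0)
emb = l2_normalize(rng.normal(size=(8, 16)))
aligned_loss = clip_alignment_loss(emb, emb)
shuffled_loss = clip_alignment_loss(emb, np.roll(emb, 1, axis=0))
```

In the paper's setup this term would be applied twice, once against text embeddings and once against embeddings of rendered trajectory images, alongside the autoencoder's reconstruction loss.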