🤖 AI Summary
To address the scarcity of action-labeled data in robotic manipulation, this paper proposes a generative pre-training framework that leverages large-scale unlabeled video. Methodologically, it introduces (1) a Latent Motion Tokenizer, which encodes video dynamics as a hardware-agnostic, semantically interpretable "motion language" learned in an unsupervised manner; and (2) Moto-GPT, an autoregressive Transformer pre-trained on these motion tokens and then co-fine-tuned to bridge latent motion token prediction and real robot control. On robot manipulation benchmarks, the fine-tuned model shows superior robustness and sample efficiency, producing semantically interpretable motion tokens, predicting plausible motion trajectories, and assessing trajectory rationality via output likelihood. Crucially, it transfers motion priors embedded in videos to downstream control tasks without requiring explicit action labels during pre-training.
📝 Abstract
Recent developments in Large Language Models pre-trained on extensive corpora have shown significant success in various natural language processing tasks with minimal fine-tuning. This success offers new promise for robotics, which has long been constrained by the high cost of action-labeled data. We ask: given the abundant video data containing interaction-related knowledge available as a rich "corpus", can a similar generative pre-training approach be effectively applied to enhance robot learning? The key challenge is to identify an effective representation for autoregressive pre-training that benefits robot manipulation tasks. Inspired by the way humans learn new skills through observing dynamic environments, we propose that effective robotic learning should emphasize motion-related knowledge, which is closely tied to low-level actions and is hardware-agnostic, facilitating the transfer of learned motions to actual robot actions. To this end, we introduce Moto, which converts video content into latent Motion Token sequences by a Latent Motion Tokenizer, learning a bridging "language" of motion from videos in an unsupervised manner. We pre-train Moto-GPT through motion token autoregression, enabling it to capture diverse visual motion knowledge. After pre-training, Moto-GPT demonstrates the promising ability to produce semantically interpretable motion tokens, predict plausible motion trajectories, and assess trajectory rationality through output likelihood. To transfer learned motion priors to real robot actions, we implement a co-fine-tuning strategy that seamlessly bridges latent motion token prediction and real robot control. Extensive experiments show that the fine-tuned Moto-GPT exhibits superior robustness and efficiency on robot manipulation benchmarks, underscoring its effectiveness in transferring knowledge from video data to downstream visual manipulation tasks.
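The pipeline the abstract describes — discretize video motion into tokens, pre-train an autoregressive model on those tokens, then score trajectories by their likelihood — can be sketched in miniature. The abstract does not specify the tokenizer or Moto-GPT architectures, so everything below is an illustrative assumption: vector quantization of frame differences stands in for the Latent Motion Tokenizer, and a smoothed bigram model stands in for the autoregressive Transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 8, 4                       # codebook size and feature dim (assumed)
codebook = rng.normal(size=(K, D))  # stand-in for learned motion codes

def tokenize_motion(frames):
    """Map consecutive frame differences to discrete motion tokens
    via nearest-codebook-entry lookup (a VQ-style stand-in for the
    Latent Motion Tokenizer)."""
    diffs = frames[1:] - frames[:-1]                      # crude motion feature
    dists = ((diffs[:, None, :] - codebook[None]) ** 2).sum(-1)
    return dists.argmin(axis=1)                           # shape (T-1,)

# Toy "video": T frames of D-dim visual features
frames = rng.normal(size=(10, D))
tokens = tokenize_motion(frames)

# Stand-in for autoregressive pre-training: fit next-token statistics,
# which already supports the likelihood-based trajectory assessment the
# abstract attributes to Moto-GPT.
counts = np.ones((K, K))                                  # Laplace smoothing
for a, b in zip(tokens[:-1], tokens[1:]):
    counts[a, b] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

def trajectory_log_likelihood(seq):
    """Score a motion-token trajectory under the learned transitions."""
    return sum(np.log(probs[a, b]) for a, b in zip(seq[:-1], seq[1:]))

print(tokens.shape, trajectory_log_likelihood(tokens))
```

The real system would replace the bigram counts with a Transformer trained by next-token prediction, and the co-fine-tuning stage would add a robot-action head alongside the motion-token head; neither is detailed in the abstract.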