🤖 AI Summary
To address weak latent action representations and limited cross-task generalization in Visual-Language-Action (VLA) models, this paper proposes villa-X, a Visual-Language-Latent-Action (ViLLA) framework that improves both how latent actions, abstract representations of the visual change between two frames, are learned and how they are incorporated into VLA pre-training. Compared with existing approaches, villa-X produces more robust latent-space action encodings and transfers better across scenarios. Experiments demonstrate superior task success rates and generalization on simulation benchmarks (SIMPLER and LIBERO) and on two real-world robot setups (gripper and dexterous-hand manipulation). The ViLLA paradigm offers a scalable, unified approach to action modeling in VLA systems, bridging semantic intent with executable motor control through structured multimodal representation learning.
📝 Abstract
Visual-Language-Action (VLA) models have emerged as a popular paradigm for learning robot manipulation policies that can follow language instructions and generalize to novel scenarios. Recent work has begun to explore the incorporation of latent actions, an abstract representation of the visual change between two frames, into VLA pre-training. In this paper, we introduce villa-X, a novel Visual-Language-Latent-Action (ViLLA) framework that advances latent action modeling for learning generalizable robot manipulation policies. Our approach improves both how latent actions are learned and how they are incorporated into VLA pre-training. Together, these contributions enable villa-X to achieve superior performance across simulated environments, including SIMPLER and LIBERO, as well as on two real-world robot setups: gripper-based and dexterous-hand manipulation. We believe the ViLLA paradigm holds significant promise, and that our villa-X provides a strong foundation for future research.
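To make the notion of a latent action concrete, here is a minimal, hypothetical sketch of the generic recipe this line of work builds on: an inverse dynamics encoder compresses the visual change between two frames into a discrete latent action via vector quantization, supervised only by a forward model that reconstructs the later frame. This is not villa-X's actual architecture; all names (`LatentActionModel`, `VectorQuantizer`), dimensions, and loss weights are illustrative assumptions.

```python
# A minimal sketch of self-supervised latent action learning (VQ-VAE-style
# inverse/forward dynamics), NOT the paper's implementation. Trained on
# unlabeled video: no robot action labels are needed.

import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    """Nearest-neighbor vector quantization with a straight-through estimator."""

    def __init__(self, num_codes: int = 64, dim: int = 128):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z_e):
        # z_e: (B, dim) continuous latent from the inverse dynamics encoder.
        dists = torch.cdist(z_e, self.codebook.weight)  # (B, num_codes)
        idx = dists.argmin(dim=-1)                      # discrete latent action token
        z_q = self.codebook(idx)                        # (B, dim)
        codebook_loss = F.mse_loss(z_q, z_e.detach())   # pull codes toward encoder outputs
        commit_loss = F.mse_loss(z_e, z_q.detach())     # commit encoder to chosen codes
        # Straight-through: gradients flow to z_e as if quantization were identity.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx, codebook_loss + 0.25 * commit_loss


class LatentActionModel(nn.Module):
    """IDM encodes (o_t, o_{t+k}) -> latent action z; FDM predicts o_{t+k} from (o_t, z)."""

    def __init__(self, obs_dim: int = 512, dim: int = 128):
        super().__init__()
        self.idm = nn.Sequential(nn.Linear(2 * obs_dim, 256), nn.GELU(), nn.Linear(256, dim))
        self.vq = VectorQuantizer(dim=dim)
        self.fdm = nn.Sequential(nn.Linear(obs_dim + dim, 256), nn.GELU(), nn.Linear(256, obs_dim))

    def forward(self, obs_t, obs_tk):
        z_e = self.idm(torch.cat([obs_t, obs_tk], dim=-1))
        z_q, idx, vq_loss = self.vq(z_e)
        pred = self.fdm(torch.cat([obs_t, z_q], dim=-1))  # reconstruct the later frame
        return F.mse_loss(pred, obs_tk) + vq_loss, idx


# Usage: obs_t / obs_tk stand in for frame features (e.g. ViT embeddings).
model = LatentActionModel()
obs_t, obs_tk = torch.randn(8, 512), torch.randn(8, 512)
loss, latent_actions = model(obs_t, obs_tk)
loss.backward()
```

Under these assumptions, the discrete indices (`latent_actions` above) serve as latent action tokens that a VLA policy could be pre-trained to predict from vision and language before being decoded into executable robot actions, which is the general role latent actions play in ViLLA-style pre-training.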