🤖 AI Summary
This work addresses the challenge of enabling language-guided robots to manipulate diverse objects efficiently and safely in resource-constrained environments. To this end, the authors propose a lightweight vision-language-action (VLA) model that, for the first time, integrates a deep state space model into the VLA architecture in place of the conventional Transformer backbone. This design enables efficient processing of multimodal input sequences and rapid generation of action trajectories. The proposed approach significantly reduces computational overhead while maintaining strong task generalization. In real-world experiments, it achieves a task success rate 21 percentage points higher than that of a leading large-scale VLA model, with inference approximately three times faster.
📝 Abstract
In this study, we address the problem of language-guided robotic manipulation, where a robot must manipulate a wide range of objects based on visual observations and natural language instructions. This task is essential for service robots operating in human environments and requires safety, efficiency, and task-level generality. Although Vision-Language-Action models (VLAs) have demonstrated strong performance on this task, deploying them in resource-constrained environments remains challenging because of the computational cost of standard Transformer backbones. To overcome this limitation, we propose AnoleVLA, a lightweight VLA that uses a deep state space model to process multimodal sequences efficiently. Its lightweight, fast sequential state modeling processes visual and textual inputs, allowing the robot to generate trajectories efficiently. We evaluated the proposed method in both simulation and physical experiments. Notably, in real-world evaluations, AnoleVLA outperformed a representative large-scale VLA by 21 points in task success rate while achieving inference approximately three times faster.
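The efficiency argument above rests on a basic property of state space models: they process a token sequence with a linear-time recurrence rather than the quadratic pairwise interactions of self-attention. The following minimal sketch illustrates that recurrence; it is not the AnoleVLA implementation, and all dimensions, matrices, and the stand-in "multimodal tokens" are illustrative assumptions.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal linear state-space scan: h_t = A h_{t-1} + B x_t, y_t = C h_t.

    Runs in O(T) time with a fixed-size hidden state per step, versus the
    O(T^2) cost of self-attention over the same sequence -- the property
    that makes SSM backbones attractive on resource-constrained hardware.
    """
    T, _ = x.shape
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(T):  # sequential recurrence over the token sequence
        h = A @ h + B @ x[t]
        ys.append(C @ h)
    return np.stack(ys)

rng = np.random.default_rng(0)
T, d_in, d_state, d_out = 16, 8, 32, 8
# Stable diagonal state matrix (eigenvalues in (0, 1)) keeps the recurrence bounded.
A = np.diag(rng.uniform(0.1, 0.9, d_state))
B = 0.1 * rng.standard_normal((d_state, d_in))
C = 0.1 * rng.standard_normal((d_out, d_state))
x = rng.standard_normal((T, d_in))  # stand-in for fused vision + language tokens
y = ssm_scan(x, A, B, C)
print(y.shape)  # (16, 8)
```

Deep SSM architectures stack many such blocks with learned, input-dependent parameters and nonlinearities between them; this sketch only shows why a single scan is cheap relative to attention.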