AnoleVLA: Lightweight Vision-Language-Action Model with Deep State Space Models for Mobile Manipulation

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of enabling language-guided robots to manipulate diverse objects efficiently and safely in resource-constrained environments. To this end, the authors propose a lightweight vision-language-action (VLA) model that, for the first time, integrates a deep state space model into the VLA architecture in place of the conventional Transformer backbone. This design enables efficient processing of multimodal input sequences and rapid generation of action trajectories. The proposed approach significantly reduces computational overhead while maintaining strong task generalization. In real-world experiments, it achieves a 21-percentage-point higher task success rate than a representative large-scale VLA model while running inference approximately three times faster.

📝 Abstract
In this study, we address the problem of language-guided robotic manipulation, where a robot is required to manipulate a wide range of objects based on visual observations and natural language instructions. This task is essential for service robots that operate in human environments, and requires safety, efficiency, and task-level generality. Although Vision-Language-Action models (VLAs) have demonstrated strong performance on this task, their deployment in resource-constrained environments remains challenging because of the computational cost of standard Transformer backbones. To overcome this limitation, we propose AnoleVLA, a lightweight VLA that uses a deep state space model to process multimodal sequences efficiently. The model leverages its lightweight and fast sequential state modeling to process visual and textual inputs, which allows the robot to generate trajectories efficiently. We evaluated the proposed method in both simulation and physical experiments. Notably, in real-world evaluations, AnoleVLA outperformed a representative large-scale VLA by 21 points in task success rate while achieving an inference speed approximately three times faster.
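The abstract's central design choice is replacing the quadratic-cost Transformer backbone with a deep state space model (SSM), whose linear-time recurrence is what makes multimodal sequence processing cheap. The paper's actual layer equations are not reproduced on this page, so the sketch below is only an illustrative minimal diagonal linear SSM scan (all names, values, and dimensions are invented for illustration); it shows why processing a length-L sequence costs O(L) with O(1) state, rather than attention's O(L²):

```python
def ssm_scan(u, A, B, C):
    """Minimal diagonal linear SSM (illustrative sketch, not AnoleVLA itself).

    Recurrence over a scalar input sequence u:
        x_t = A ⊙ x_{t-1} + B · u_t   (element-wise state update)
        y_t = Σ_i C_i · x_t[i]        (scalar readout)
    One pass over the sequence; the state x is the only memory carried.
    """
    x = [0.0] * len(A)              # hidden state, one channel per entry of A
    ys = []
    for ut in u:                    # O(L) sequential scan
        x = [a * xi + b * ut for a, xi, b in zip(A, x, B)]
        ys.append(sum(c * xi for c, xi in zip(C, x)))
    return ys

# Tiny demo with a 2-channel state (hypothetical numbers).
u = [1.0, 0.5, -0.2, 0.0]           # stand-in for a token/feature sequence
A = [0.9, 0.8]                      # stable per-channel decay
B = [1.0, 0.5]
C = [0.3, -0.1]
y = ssm_scan(u, A, B, C)
print(y[0])  # 0.3*1.0 + (-0.1)*0.5 = 0.25
```

In a full VLA stack, such scans would run per feature channel with learned (often input-dependent) A, B, C; the point here is only the constant-memory, linear-time contrast with self-attention.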
Problem

Research questions and friction points this paper is trying to address.

language-guided robotic manipulation
Vision-Language-Action models
resource-constrained deployment
mobile manipulation
computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action
State Space Model
Lightweight Robotics
Mobile Manipulation
Multimodal Sequence Modeling
Yusuke Takagi
Keio University, 3-14-1 Hiyoshi, Kohoku, Yokohama, Kanagawa 223-8522, Japan
Motonari Kambara
Keio University, 3-14-1 Hiyoshi, Kohoku, Yokohama, Kanagawa 223-8522, Japan
Daichi Yashima
Keio University, 3-14-1 Hiyoshi, Kohoku, Yokohama, Kanagawa 223-8522, Japan
Koki Seno
Keio University, 3-14-1 Hiyoshi, Kohoku, Yokohama, Kanagawa 223-8522, Japan
Kento Tokura
Keio University, 3-14-1 Hiyoshi, Kohoku, Yokohama, Kanagawa 223-8522, Japan
Komei Sugiura
Professor, Keio University
Multimodal AI · Robot Learning · Embodied AI · Machine Learning