LiteVLA-Edge: Quantized On-Device Multimodal Control for Embedded Robotics

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of deploying vision-language-action (VLA) models on embedded robotic platforms, where high computational overhead and inference latency typically prevent real-time operation. The authors propose an efficient edge-oriented VLA inference pipeline that enables fully offline, closed-loop perception-reasoning-action integration within ROS 2 on a Jetson Orin. Their approach combines FP32 image-to-action fine-tuning, 4-bit GGUF post-training quantization, and GPU-accelerated inference via llama.cpp, achieving substantial model compression while preserving modular interfaces. The resulting system attains a mean end-to-end latency of 150.5 ms (≈6.6 Hz), demonstrating for the first time the feasibility of real-time language-conditioned control on this class of embedded hardware and establishing a reproducible baseline for edge-based VLA deployment.
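The 4-bit post-training quantization step can be illustrated with a simplified sketch. GGUF-style formats in llama.cpp quantize weights in fixed-size blocks, storing one scale per block plus 4-bit integer codes. The block size of 32 and the symmetric scaling below follow the spirit of those formats, but the exact packing and scale encoding used in the paper's pipeline are assumptions, not the actual GGUF implementation:

```python
import random

BLOCK = 32  # GGUF-style quantization operates on fixed-size blocks of weights

def quantize_block(xs):
    """Symmetric 4-bit quantization of one block: q in [-8, 7], x ~= q * scale."""
    scale = max(max(abs(x) for x in xs), 1e-12) / 7.0
    qs = [max(-8, min(7, round(x / scale))) for x in xs]
    return qs, scale

def dequantize_block(qs, scale):
    """Reconstruct approximate FP32 weights from 4-bit codes and the block scale."""
    return [q * scale for q in qs]

# Round-trip a few blocks of synthetic weights and measure reconstruction error.
random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(4 * BLOCK)]
blocks = [weights[i:i + BLOCK] for i in range(0, len(weights), BLOCK)]

recon = []
for b in blocks:
    qs, s = quantize_block(b)
    recon.extend(dequantize_block(qs, s))

# Rounding error is bounded by half the quantization step (scale / 2) per block.
max_err = max(abs(a, ) if False else abs(a - b) for a, b in zip(weights, recon))
```

Storing 4 bits per weight plus one FP32 scale per 32 weights in place of FP32 everywhere gives roughly 6-7x compression, which is the mechanism behind the "significant model compression" the summary refers to.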

📝 Abstract
Vision-Language-Action (VLA) models provide a unified framework for perception, language conditioning, and action generation, but many existing systems remain difficult to deploy in embedded robotic settings because of their computational requirements and inference latency. In this paper, we present LiteVLA-Edge, a deployment-oriented VLA pipeline for fully on-device inference on Jetson Orin-class hardware. Our approach combines supervised image-to-action fine-tuning in FP32 with post-training 4-bit GGUF quantization and GPU-accelerated inference through the llama.cpp runtime. Under our deployment configuration, LiteVLA-Edge achieves a mean end-to-end latency of 150.5 ms (approximately 6.6 Hz) while operating entirely offline within a ROS 2-integrated perception-reasoning-action pipeline. Rather than introducing a new policy objective, our contribution is a practical systems path for executing compact multimodal control models locally on embedded hardware while preserving modular interfaces between perception, reasoning, and actuation. These results establish timing feasibility for reactive language-conditioned control and provide a reproducible baseline for future task-level evaluation of on-device VLAs in robotics.
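The reported end-to-end latency translates directly into a control rate. A minimal sketch of the closed perception-reasoning-action loop and its timing, where the per-stage durations are illustrative placeholders (the paper reports only the 150.5 ms end-to-end mean, not a stage breakdown):

```python
import time

REPORTED_MEAN_S = 0.1505  # mean end-to-end latency from the paper (150.5 ms)

# Illustrative stage budgets (seconds); NOT the paper's measured breakdown.
STAGES = {"perception": 0.020, "reasoning": 0.120, "action": 0.010}

def control_cycle():
    """One closed-loop iteration: camera -> VLA inference -> actuation command."""
    for name, budget in STAGES.items():
        time.sleep(budget)  # stand-in for the real stage work

start = time.perf_counter()
control_cycle()
latency = time.perf_counter() - start
rate_hz = 1.0 / latency  # achievable control frequency for a serial pipeline
```

With a serial pipeline, control frequency is simply the reciprocal of end-to-end latency: 1 / 0.1505 s ≈ 6.6 Hz, matching the figure quoted in the abstract.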
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
on-device inference
embedded robotics
computational latency
multimodal control
Innovation

Methods, ideas, or system contributions that make the work stand out.

on-device inference
4-bit quantization
vision-language-action models
embedded robotics
llama.cpp