On Robustness of Vision-Language-Action Model against Multi-Modal Perturbations

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current Vision-Language-Action (VLA) models lack robustness to multi-modal perturbations, i.e., disturbances in actions, instructions, environments, and observations, with the action modality proving the most fragile. Method: This work presents the first systematic evaluation of 17 cross-modal perturbations and proposes RobustVLA, a joint input-output robustness optimization framework: (i) a multi-armed bandit-based adaptive mechanism that automatically selects the most harmful noise; (ii) offline robust optimization against worst-case action noise, which acts simultaneously as adversarial training, label smoothing under the flow-matching objective, and outlier penalization; and (iii) input-consistency constraints that enforce stable actions across semantics-preserving input variations. Contribution/Results: On the LIBERO benchmark, the method achieves absolute success-rate gains over baselines of 12.6% on the pi0 backbone and 10.4% on the OpenVLA backbone, with 50.6x faster inference than existing visual-robust VLAs. On a real robot under perturbations of all four modalities, task success improves by 65.6%, substantially advancing the practical deployability and robustness of VLA models.

📝 Abstract
In Vision-Language-Action (VLA) models, robustness to real-world perturbations is critical for deployment. Existing methods target simple visual disturbances, overlooking the broader multi-modal perturbations that arise in actions, instructions, environments, and observations. Here, we first evaluate the robustness of mainstream VLAs under 17 perturbations across four modalities. We find (1) actions are the most fragile modality, (2) existing visual-robust VLAs do not gain robustness in other modalities, and (3) pi0 demonstrates superior robustness with a diffusion-based action head. To build multi-modal robust VLAs, we propose RobustVLA against perturbations in VLA inputs and outputs. For output robustness, we perform offline robust optimization against worst-case action noise that maximizes the mismatch in the flow-matching objective. This can be seen as adversarial training, label smoothing, and outlier penalization. For input robustness, we enforce consistent actions across input variations that preserve task semantics. To account for multiple perturbations, we formulate robustness as a multi-armed bandit problem and apply an upper confidence bound algorithm to automatically identify the most harmful noise. Experiments on LIBERO demonstrate that RobustVLA delivers absolute gains over baselines of 12.6% on the pi0 backbone and 10.4% on the OpenVLA backbone across all 17 perturbations, achieves 50.6x faster inference than existing visual-robust VLAs, and yields a 10.4% gain under mixed perturbations. RobustVLA is particularly effective on a real-world FR5 robot with limited demonstrations, showing absolute gains of 65.6% under perturbations of four modalities.
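The bandit formulation in the abstract treats each perturbation type as an arm and uses an upper confidence bound (UCB) rule to focus training on the most harmful noise. A minimal sketch of that selection loop is below; the class name, the exploration constant, and the use of induced loss as the reward signal are illustrative assumptions, not the paper's implementation:

```python
import math
import random

class UCBPerturbationSelector:
    """Treat each perturbation type as a bandit arm.
    Reward = the loss that perturbation induces, so the arm pulled
    most often converges to the most harmful noise."""

    def __init__(self, arms):
        self.arms = list(arms)
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}  # running mean reward
        self.t = 0

    def select(self, c=2.0):
        self.t += 1
        # Play every arm once before applying the UCB bonus.
        for a in self.arms:
            if self.counts[a] == 0:
                return a
        # UCB1: mean reward + exploration bonus shrinking with pull count.
        return max(
            self.arms,
            key=lambda a: self.values[a]
            + c * math.sqrt(math.log(self.t) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n  # incremental mean
```

Run against a toy setting where action noise induces the highest loss, the selector concentrates its pulls on the "action" arm while still occasionally probing the others.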
Problem

Research questions and friction points this paper is trying to address.

Evaluating VLA model robustness against multi-modal perturbations
Developing RobustVLA to handle input and output perturbations
Improving real-world robot performance under diverse disturbances
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion-based action head enhances robustness
Offline robust optimization against worst-case action noise
Multi-armed bandit identifies harmful perturbations automatically
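The "worst-case action noise" idea above can be illustrated with a one-step (FGSM-style) perturbation of the action labels. This sketch substitutes a plain MSE mismatch for the paper's flow-matching objective, so the function name and the closed-form maximizer are illustrative assumptions only:

```python
import numpy as np

def worst_case_action_noise(pred, target, eps=0.1):
    """One-step approximation of the worst-case noise on action labels.
    For the surrogate loss L(d) = ||pred - (target + d)||^2 with
    ||d||_inf <= eps, the maximizer is eps * sign of the gradient at d=0.
    (Illustrative: the paper optimizes a flow-matching objective instead.)"""
    grad = 2.0 * (target - pred)  # dL/d(noise) evaluated at noise = 0
    return eps * np.sign(grad)

def mse(pred, noisy_target):
    """Surrogate mismatch between predicted and (noised) label actions."""
    return float(np.mean((pred - noisy_target) ** 2))
```

Because the surrogate loss is a separable quadratic, this one step lands exactly on the worst vertex of the L-infinity ball, so the adversarial noise always hurts at least as much as random sign noise of the same magnitude; training against it is the adversarial-training view mentioned above.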
Jianing Guo
School of Artificial Intelligence, Beihang University
Zhenhong Wu
School of Computer Science and Engineering, Beihang University
Chang Tu
Department of Computer Science and Engineering, The Chinese University of Hong Kong
Yiyao Ma
Department of Computer Science and Engineering, The Chinese University of Hong Kong
Xiangqi Kong
School of Computer Science and Engineering, Beihang University
Zhiqian Liu
School of Computer Science and Engineering, Beihang University
Jiaming Ji
Institute of Artificial Intelligence, Peking University
Shuning Zhang
Tsinghua University
Yuanpei Chen
South China University of Technology
Kai Chen
Department of Computer Science and Engineering, The Chinese University of Hong Kong
Xianglong Liu
School of Computer Science and Engineering, Beihang University
Qi Dou
Department of Computer Science and Engineering, The Chinese University of Hong Kong
Yaodong Yang
Institute of Artificial Intelligence, Peking University
Huijie Zhao
School of Artificial Intelligence, Beihang University
Weifeng Lv
School of Computer Science and Engineering, Beihang University
Simin Li
School of Computer Science and Engineering, Beihang University