Selective Perception for Robot: Task-Aware Attention in Multimodal VLA

πŸ“… 2026-02-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the inefficiency and robustness limitations of existing vision-language-action (VLA) models that employ static fusion strategies for multi-view inputs, often introducing task-irrelevant noise and computational redundancy. Inspired by human active perception, the authors propose a dynamic information fusion framework featuring a lightweight adaptive routing network that evaluates the task relevance of each view in real time based on textual instructions and wrist-mounted camera observations. This mechanism dynamically selects salient visual features while suppressing redundant computations. The study innovatively integrates a task-aware dynamic attention mechanism into multimodal VLA modeling and introduces a vision-language model–driven automated annotation pipeline to reduce data labeling costs. Experiments on real-world robotic manipulation tasks demonstrate significant improvements in inference efficiency and control performance, validating the effectiveness and practicality of dynamic fusion for resource-constrained real-time robotic control.
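The paper does not publish code, so the following is a minimal sketch of what such a task-aware view router could look like; all module names, dimensions, and the 0.5 gating threshold are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an adaptive view-routing network: score the
# task relevance of each external camera view from the instruction
# embedding and the wrist-camera feature, then gate low-utility views.
import torch
import torch.nn as nn

class ViewRouter(nn.Module):
    """Predicts a per-view relevance score in [0, 1]."""

    def __init__(self, text_dim=512, wrist_dim=512, num_views=3, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(text_dim + wrist_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_views),
        )

    def forward(self, text_emb, wrist_feat):
        # Concatenate the two conditioning signals and emit one
        # relevance logit per external camera view.
        logits = self.mlp(torch.cat([text_emb, wrist_feat], dim=-1))
        return torch.sigmoid(logits)

router = ViewRouter()
text_emb = torch.randn(1, 512)    # instruction embedding (e.g., from the VLA text encoder)
wrist_feat = torch.randn(1, 512)  # pooled wrist-camera feature
relevance = router(text_emb, wrist_feat)

# Views below the threshold would skip their visual encoder entirely,
# so compute scales with predicted task relevance.
keep = relevance > 0.5
print(relevance, keep)
```

In this reading, the router runs before the heavy per-view vision encoders, which is what makes the saved computation proportional to how many views are gated off.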

πŸ“ Abstract
In robotics, Vision-Language-Action (VLA) models that integrate diverse multimodal signals from multi-view inputs have emerged as an effective approach. However, most prior work adopts static fusion that processes all visual inputs uniformly, which incurs unnecessary computational overhead and allows task-irrelevant background information to act as noise. Inspired by the principles of human active perception, we propose a dynamic information fusion framework designed to maximize the efficiency and robustness of VLA models. Our approach introduces a lightweight adaptive routing architecture that analyzes the current text prompt and observations from a wrist-mounted camera in real time to predict the task relevance of multiple camera views. By conditionally attenuating computation for views with low informational utility and selectively providing only essential visual features to the policy network, our framework achieves computational efficiency proportional to task relevance. Furthermore, to efficiently obtain large-scale annotated data for router training, we established an automated labeling pipeline utilizing Vision-Language Models (VLMs) to minimize data collection and annotation costs. Experimental results in real-world robotic manipulation scenarios demonstrate that the proposed approach achieves significant improvements in both inference efficiency and control performance compared to existing VLA models, validating the effectiveness and practicality of dynamic information fusion in resource-constrained, real-time robot control environments.
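The abstract's second contribution, the VLM-driven labeling pipeline, could be realized roughly as below. This is a hypothetical sketch: `query_vlm` is a placeholder for whatever captioning/VQA API is used, and the view names, prompt wording, and yes/no parsing are all assumptions rather than the paper's actual pipeline.

```python
# Hypothetical sketch of VLM-based auto-labeling for router training:
# for each recorded frame, ask a vision-language model whether each
# camera view helps with the instruction, and store binary targets.
import json

VIEWS = ["overhead", "side", "front"]  # assumed view names

def label_frame(query_vlm, instruction, view_images):
    labels = {}
    for view, image in zip(VIEWS, view_images):
        prompt = (
            f"Instruction: '{instruction}'. "
            f"Is this {view} camera view useful for completing the task? "
            "Answer yes or no."
        )
        answer = query_vlm(image=image, prompt=prompt)
        labels[view] = 1 if answer.strip().lower().startswith("yes") else 0
    return labels

def build_dataset(query_vlm, episodes, out_path="router_labels.jsonl"):
    # Each episode pairs an instruction with a list of per-view image
    # tuples, one tuple per frame; labels are written as JSON lines.
    with open(out_path, "w") as f:
        for instruction, frames in episodes:
            for view_images in frames:
                record = {
                    "instruction": instruction,
                    "labels": label_frame(query_vlm, instruction, view_images),
                }
                f.write(json.dumps(record) + "\n")
```

The appeal of this design, as the abstract argues, is that the router's supervision comes almost entirely from an off-the-shelf VLM rather than human annotators.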
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
multimodal fusion
task-irrelevant information
computational overhead
robotic perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

dynamic information fusion
task-aware attention
adaptive routing
Vision-Language-Action (VLA)
efficient robotic perception
πŸ”Ž Similar Papers
No similar papers found.
👥 Authors
Young-Chae Son, Dongguk University
Jung-Woo Lee, Dongguk University
Yoon-Ji Choi, Dongguk University
Dae-Kwan Ko, Dongguk University
Soo-Chul Lim, Interactive Robotics Lab, Dongguk University
robot, haptics, deep learning, human-robot interaction, reinforcement learning