🤖 AI Summary
Existing vision-language-action (VLA) approaches for humanoid robots often suffer from unstable dynamic task execution, caused by inefficient reasoning or insufficient semantic guidance in whole-body coordination control. To address this, the work proposes a semantic motor-intention-guided, physics-aware multi-brain VLA framework that, for the first time, integrates multi-brain latent flow matching with physics-based constraint modeling. By robustly fusing visual, linguistic, and motor signals through intention-aligned tracking, the method enables efficient and semantically grounded whole-body coordination, significantly improving the stability and reliability of humanoid robots performing dynamic tasks under vision-language guidance.
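The summary names latent flow matching as the action-generation backbone but gives no implementation details. As a rough, hedged illustration of how a conditional flow-matching policy head can be trained and sampled, the sketch below uses the standard linear-interpolation path and few-step Euler integration; the latent dimension, conditioning size, network shape, and every identifier here are assumptions made for illustration, not the paper's actual architecture.

```python
# Illustrative sketch of conditional flow matching for latent action generation.
# Assumptions (not from the paper): LATENT_DIM, COND_DIM, the MLP velocity
# field, and a fused vision-language-motor conditioning vector.
import torch
import torch.nn as nn

LATENT_DIM = 32   # assumed latent action dimension
COND_DIM = 256    # assumed fused vision-language-motor embedding size


class VelocityField(nn.Module):
    """Predicts the flow velocity v(x_t, t | cond) for latent actions."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + COND_DIM + 1, 512),
            nn.SiLU(),
            nn.Linear(512, 512),
            nn.SiLU(),
            nn.Linear(512, LATENT_DIM),
        )

    def forward(self, x_t, t, cond):
        # t is a (batch, 1) tensor of interpolation times in [0, 1].
        return self.net(torch.cat([x_t, cond, t], dim=-1))


def flow_matching_loss(model, x1, cond):
    """Conditional flow-matching loss with a linear interpolation path.

    x1:   (batch, LATENT_DIM) target latents (e.g. encoded expert actions)
    cond: (batch, COND_DIM) semantic / motor-intention conditioning
    """
    x0 = torch.randn_like(x1)          # noise endpoint of the path
    t = torch.rand(x1.size(0), 1)      # random time along the path
    x_t = (1 - t) * x0 + t * x1        # linear interpolant between endpoints
    v_target = x1 - x0                 # constant velocity of the linear path
    v_pred = model(x_t, t, cond)
    return ((v_pred - v_target) ** 2).mean()


@torch.no_grad()
def sample(model, cond, steps=10):
    """Generate latent actions by Euler-integrating the learned velocity field."""
    x = torch.randn(cond.size(0), LATENT_DIM)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((cond.size(0), 1), i * dt)
        x = x + dt * model(x, t, cond)
    return x
```

Few-step Euler sampling is what makes flow matching attractive where inference efficiency matters, as in real-time humanoid control: a handful of integration steps can stand in for the long denoising chains of diffusion-based policies.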
📝 Abstract
In humanoid robot control, fusing vision-language-action (VLA) models with whole-body control is essential for semantically guided execution of real-world tasks. However, existing methods suffer from low VLA inference efficiency or lack effective semantic guidance for whole-body control, resulting in instability in dynamic limb-coordination tasks. To bridge this gap, we present a semantic motor-intention-guided, physics-aware multi-brain VLA framework for humanoid whole-body control. We conducted a series of experiments to evaluate the proposed framework; the results demonstrate that it enables reliable vision-language-guided whole-body coordination for humanoid robots.
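The abstract's "physics-aware" component is likewise unspecified. One simple, common form of physics-based constraint modeling is projecting decoded commands onto joint position and velocity limits before execution; the sketch below illustrates only that generic idea, with hypothetical limit values and control period (`Q_MIN`, `Q_MAX`, `QD_MAX`, `DT` are all assumptions, not values from the paper).

```python
# Illustrative physics-constraint layer: clamp a decoded whole-body joint
# command to position and velocity limits before it reaches the robot.
import numpy as np

Q_MIN, Q_MAX = -2.0, 2.0   # assumed joint position limits (rad)
QD_MAX = 4.0               # assumed joint velocity limit (rad/s)
DT = 0.02                  # assumed control period (s)


def constrain_action(q_cmd: np.ndarray, q_prev: np.ndarray) -> np.ndarray:
    """Project a commanded joint target into the feasible set.

    Velocity limits are enforced by bounding how far the target may move
    from the previous command in one control step; position limits are
    enforced by a direct clamp.
    """
    max_step = QD_MAX * DT                                   # |dq| per step
    q_cmd = np.clip(q_cmd, q_prev - max_step, q_prev + max_step)
    return np.clip(q_cmd, Q_MIN, Q_MAX)
```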