3D-Mix for VLA: A Plug-and-Play Module for Integrating VGGT-based 3D Information into Vision-Language-Action Models

📅 2026-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing Vision-Language-Action (VLA) models suffer from limited three-dimensional spatial awareness due to their reliance on 2D training data, which constrains their performance in robotic manipulation tasks. This work presents the first systematic evaluation of nine strategies for integrating 3D information into VLA systems and introduces a novel fusion mechanism based on semantic conditional gating. The authors further design 3D-Mix, a lightweight, plug-and-play module that enhances spatial intelligence without requiring modifications to existing multimodal large language models (MLLMs) or action policies. Compatible with mainstream VLA architectures such as GR00T-style and π-style, 3D-Mix demonstrates consistent improvements across MLLM variants ranging from 2B to 8B parameters. Evaluated on the SIMPLER and LIBERO benchmarks, the method achieves an average 7.0% absolute increase in success rate on out-of-domain SIMPLER tasks.

📝 Abstract
Vision-Language-Action (VLA) models leverage Multimodal Large Language Models (MLLMs) for robotic control, but recent studies reveal that MLLMs exhibit limited spatial intelligence due to training predominantly on 2D data, resulting in inadequate 3D perception for manipulation tasks. While recent approaches incorporate specialized 3D vision models such as VGGT to enhance spatial understanding, they employ diverse integration mechanisms without systematic investigation, leaving the optimal fusion strategy unclear. We conduct a comprehensive pilot study comparing nine VGGT integration schemes on standardized benchmarks and find that semantic-conditioned gated fusion, which adaptively balances 2D semantic and 3D geometric features based on task context, achieves the strongest performance. We present 3D-Mix, a plug-and-play module that integrates into diverse VLA architectures (GR00T-style and $π$-style) without modifying existing MLLM or action expert components. Experiments across six MLLM series (nine model variants, 2B--8B parameters) on SIMPLER and LIBERO show that 3D-Mix delivers consistent performance gains, averaging +7.0% on the out-of-domain (OOD) SIMPLER benchmark across all nine GR00T-style variants, establishing a principled approach for enhancing spatial intelligence in VLA systems.
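The abstract's key mechanism, semantic-conditioned gated fusion, can be illustrated with a short sketch. The paper does not specify the exact formulation here, so the following NumPy code is a minimal, hypothetical version in which a sigmoid gate is predicted from the 2D semantic features and used to blend 2D semantic and 3D geometric (e.g. VGGT-derived) features element-wise; all names and shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(feat_2d, feat_3d, W_gate, b_gate):
    """Illustrative semantic-conditioned gated fusion (not the paper's exact design).

    A per-dimension gate in (0, 1) is predicted from the 2D semantic
    features; the output interpolates between 2D and 3D features.
    """
    gate = sigmoid(feat_2d @ W_gate + b_gate)      # (tokens, dim), values in (0, 1)
    return gate * feat_2d + (1.0 - gate) * feat_3d  # convex blend per dimension

# Toy example: 4 tokens with 8-dim features from each branch.
rng = np.random.default_rng(0)
d = 8
feat_2d = rng.standard_normal((4, d))   # 2D semantic features
feat_3d = rng.standard_normal((4, d))   # 3D geometric features (e.g. from VGGT)
W_gate = 0.1 * rng.standard_normal((d, d))
b_gate = np.zeros(d)

fused = gated_fusion(feat_2d, feat_3d, W_gate, b_gate)
print(fused.shape)  # same shape as either input stream
```

Because the gate is a function of the semantic features, the blend can shift toward 3D geometry for spatially demanding inputs and toward 2D semantics otherwise, which matches the abstract's description of adaptively balancing the two streams by task context.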
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action · 3D perception · spatial intelligence · VGGT · multimodal fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D-Mix · semantic-conditioned gated fusion · Vision-Language-Action · spatial intelligence · plug-and-play module
Authors: Bin Yu (HIT), Shijie Lian (ZGCA), Xiaopeng Lin (ZGCA), Zhaolong Shen (ZGCA), Yuliang Wei (HIT), Haishan Liu (SmartNews), Changti Wu (ZGCA), Hang Yuan (ZGCA), Bailing Wang (HIT), Cong Huang (University of Science and Technology of China), Kai Chen (Zhejiang University)