🤖 AI Summary
This study addresses the limited linguistic diversity and lack of systematic analysis in the instructional language of current vision–language–action (VLA) datasets. It presents the first multidimensional linguistic audit of mainstream VLA datasets, employing natural language processing techniques—including lexical statistics, semantic similarity computation, syntactic tree analysis, and duplication detection—to quantitatively assess characteristics such as lexical diversity, repetitiveness, semantic coverage, and syntactic complexity. The findings reveal that instructions in most datasets are highly templated, structurally homogeneous, and linguistically narrow. These insights provide empirical grounding and actionable directions for improving dataset documentation standards, selection strategies, and augmentation methodologies in VLA research.
📝 Abstract
Language plays a critical role in Vision-Language-Action (VLA) models, yet the linguistic characteristics of the datasets used to train and evaluate these systems remain poorly documented. In this work, we present a systematic dataset audit of several widely used VLA corpora, aiming to characterize what kinds of instructions these datasets actually contain and how much linguistic variety they provide. We quantify instruction language along complementary dimensions, including lexical variety, duplication and overlap, semantic similarity, and syntactic complexity. Our analysis shows that many datasets rely on highly repetitive, template-like commands with limited structural variation, yielding a narrow distribution of instruction forms. We position these findings as descriptive documentation of the language signal available in current VLA training and evaluation data, intended to support more detailed dataset reporting, more principled dataset selection, and targeted curation or augmentation strategies that broaden language coverage.
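The lexical-variety and duplication measurements the abstract describes can be illustrated with a minimal sketch. The function name, the exact metric definitions (type-token ratio over whitespace tokens, exact-string duplicate rate), and the toy corpus below are assumptions for illustration, not the paper's actual implementation:

```python
from collections import Counter

def audit_instructions(instructions):
    """Compute simple corpus-level statistics for a list of instruction
    strings: type-token ratio (lexical variety), the share of exact
    duplicates, and the number of distinct surface forms."""
    tokens = [tok for text in instructions for tok in text.lower().split()]
    type_token_ratio = len(set(tokens)) / len(tokens) if tokens else 0.0

    counts = Counter(instructions)
    n = len(instructions)
    # Each repeated occurrence beyond the first counts as a duplicate.
    duplicate_rate = sum(c - 1 for c in counts.values()) / n if n else 0.0

    return {
        "num_instructions": n,
        "distinct_instructions": len(counts),
        "type_token_ratio": type_token_ratio,
        "duplicate_rate": duplicate_rate,
    }

# Hypothetical, highly templated corpus of the kind the audit flags.
corpus = [
    "pick up the red block",
    "pick up the blue block",
    "pick up the red block",
    "open the drawer",
]
stats = audit_instructions(corpus)
```

On such templated data the type-token ratio stays low and the duplicate rate nonzero, which is the pattern the paper reports across mainstream VLA datasets; the semantic-similarity and syntactic-complexity dimensions would need an embedding model and a parser on top of this.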