Alignment among Language, Vision and Action Representations

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study challenges the assumption of modality-specific representations by investigating whether language, vision, and action share a common semantic structure. Leveraging the BabyAI platform, the authors train Transformer-based agents via behavioral cloning to execute natural language instructions, thereby generating language embeddings shaped by sensorimotor control. Cross-modal alignment between these action-derived representations and those of prominent models is evaluated using representational geometry analysis (precision@15). The work reports the first evidence that action representations significantly align with decoder-style large language models (LLaMA, Qwen, DeepSeek) and the vision–language model BLIP (precision@15 = 0.70–0.73), approaching the level of alignment observed among language models themselves, while showing weaker alignment with CLIP and BERT. These findings provide empirical support for a modality-agnostic organization of semantic knowledge.

📝 Abstract
A fundamental question in cognitive science and AI concerns whether different learning modalities (language, vision, and action) give rise to distinct or shared internal representations. Traditional views assume that models trained on different data types develop specialized, non-transferable representations. However, recent evidence suggests unexpected convergence: models optimized for distinct tasks may develop similar representational geometries. We investigate whether this convergence extends to embodied action learning by training a transformer-based agent to execute goal-directed behaviors in response to natural language instructions. Using behavioral cloning on the BabyAI platform, we generated action-grounded language embeddings shaped exclusively by sensorimotor control requirements. We then compared these representations with those extracted from state-of-the-art large language models (LLaMA, Qwen, DeepSeek, BERT) and vision-language models (CLIP, BLIP). Despite substantial differences in training data, modality, and objectives, we observed robust cross-modal alignment. Action representations aligned strongly with decoder-only language models and BLIP (precision@15 = 0.70–0.73), approaching the alignment observed among language models themselves. Alignment with CLIP and BERT was significantly weaker. These findings indicate that linguistic, visual, and action representations converge toward partially shared semantic structures, supporting modality-independent semantic organization and highlighting potential for cross-domain transfer in embodied AI systems.
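The abstract evaluates cross-modal alignment with a precision@15 representational-geometry score. The paper's exact procedure is not reproduced here; below is a minimal, hypothetical sketch of one common precision@k formulation, assuming cosine-similarity nearest neighbours in each embedding space and averaging the neighbour-set overlap per item. The function name `precision_at_k` and the random inputs are illustrative assumptions, not the authors' code.

```python
import numpy as np

def precision_at_k(emb_a, emb_b, k=15):
    """Average overlap between the k nearest neighbours of each item
    in two embedding spaces (cosine similarity); a value of 1.0 means
    the local neighbourhood structure is identical."""
    def knn(emb):
        # row-normalise so the dot product is cosine similarity
        normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        sim = normed @ normed.T
        np.fill_diagonal(sim, -np.inf)  # exclude each item itself
        # indices of the k most similar items per row
        return np.argsort(-sim, axis=1)[:, :k]

    nn_a, nn_b = knn(np.asarray(emb_a, dtype=float)), knn(np.asarray(emb_b, dtype=float))
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlaps))

# Illustrative check on synthetic data: identical spaces align perfectly.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 32))
print(precision_at_k(x, x))  # → 1.0
```

Under this reading, the reported 0.70–0.73 would mean that roughly 70% of each item's 15 nearest neighbours are shared between the action-derived space and the language/vision-language space being compared.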
Problem

Research questions and friction points this paper is trying to address.

alignment, language, vision, action, representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-modal alignment, embodied action learning, representational geometry, behavioral cloning, modality-independent semantics
Nicola Milano
Institute of Cognitive Sciences and Technologies, National Research Council, Roma, Italy; University of Naples “Federico II”, Natural and Artificial Cognition Laboratory “Orazio Miglino”, Napoli, Italy
Stefano Nolfi
Research Director, National Research Council
Evolutionary Robotics, Adaptive Behavior, Artificial Life, Cognitive Robotics, Embodiment