🤖 AI Summary
This study investigates whether visual grounding genuinely enhances large language models' (LLMs) understanding of embodied knowledge. Method: Drawing on perceptual theory from psychology, the authors construct the first standardized multimodal embodied knowledge benchmark covering six sensory modalities (vision, audition, touch, taste, olfaction, and interoception) and evaluate 30 state-of-the-art LMs on 1,700+ items via vector similarity matching and multiple-choice question answering. Contribution/Results: Contrary to expectations, vision-language models do not outperform text-only models; notably, all models perform worst on vision-related items. Further analysis shows that vector representations are strongly influenced by word form and lexical frequency, and that the models exhibit deficits in spatial perception, reasoning, and cross-modal integration. This work establishes the first standardized evaluation framework for multisensory embodied understanding, exposing fundamental limitations of current multimodal models at the level of embodied cognition and providing a critical diagnostic benchmark for embodied AI development.
📝 Abstract
Despite significant progress in multimodal language models (LMs), it remains unclear whether visual grounding enhances their understanding of embodied knowledge compared to text-only models. To address this question, we propose a novel embodied knowledge understanding benchmark based on perceptual theory from psychology, encompassing the five external senses (vision, audition, touch, taste, and olfaction) as well as interoception. The benchmark assesses the models' perceptual abilities across these sensory modalities through vector comparison and question-answering tasks comprising over 1,700 questions. Comparing 30 state-of-the-art LMs, we find, surprisingly, that vision-language models (VLMs) do not outperform text-only models on either task. Moreover, the models perform significantly worse on the visual dimension than on the other sensory dimensions. Further analysis reveals that the vector representations are easily influenced by word form and frequency, and that the models struggle to answer questions involving spatial perception and reasoning. Our findings underscore the need for more effective integration of embodied knowledge into LMs to enhance their understanding of the physical world.
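The abstract names two evaluation formats but does not detail them. Below is a minimal sketch of what a vector-comparison probe could look like, assuming a generic sentence encoder and invented items; the probe templates, item format, gold labels, and the use of `sentence-transformers` are all illustrative assumptions, not the authors' actual protocol.

```python
# Hypothetical sketch of a vector-comparison evaluation. Items, probe
# templates, and the encoder are assumptions for illustration only.
import numpy as np
from sentence_transformers import SentenceTransformer  # any text encoder would do

MODALITIES = ["vision", "audition", "touch", "taste", "olfaction", "interoception"]

# Each item pairs a concept with the sensory modality it is most strongly
# associated with (gold label). These example items are invented.
items = [
    {"concept": "rainbow", "gold": "vision"},
    {"concept": "thunder", "gold": "audition"},
    {"concept": "sandpaper", "gold": "touch"},
]

model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# One probe sentence per modality; the model's prediction is the modality
# whose probe embedding is closest to the concept embedding.
probe_vecs = model.encode([f"something perceived through {m}" for m in MODALITIES])

correct = 0
for item in items:
    concept_vec = model.encode(item["concept"])
    sims = [cosine(concept_vec, p) for p in probe_vecs]
    pred = MODALITIES[int(np.argmax(sims))]
    correct += pred == item["gold"]

print(f"accuracy: {correct / len(items):.2f}")
```

Under this framing, the multiple-choice task would pose the same association as a natural-language question to the model instead of comparing embeddings directly.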