See, Symbolize, Act: Grounding VLMs with Spatial Representations for Better Gameplay

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge vision-language models (VLMs) face in translating perceptual inputs into executable actions within interactive environments. The authors propose integrating raw visual frames with symbolic scene representations, and present the first systematic evaluation of how symbolic information influences VLM-based action generation. Multimodal policy experiments are conducted across the Atari, VizDoom, and AI2-THOR platforms. Results demonstrate that high-quality symbolic representations substantially enhance VLMs' decision-making performance in gameplay. However, symbols extracted autonomously by the model are often compromised by its inherent limitations and by environmental complexity, and noisy or inaccurate symbols can severely degrade action efficacy. The study identifies the reliability of symbol extraction as a critical bottleneck for achieving effective symbol grounding in embodied interactive tasks.

📝 Abstract
Vision-Language Models (VLMs) excel at describing visual scenes, yet struggle to translate perception into precise, grounded actions. We investigate whether providing VLMs with both the visual frame and the symbolic representation of the scene can improve their performance in interactive environments. We evaluate three state-of-the-art VLMs across Atari games, VizDoom, and AI2-THOR, comparing frame-only, frame with self-extracted symbols, frame with ground-truth symbols, and symbol-only pipelines. Our results indicate that all models benefit when the symbolic information is accurate. However, when VLMs extract symbols themselves, performance becomes dependent on model capability and scene complexity. We further investigate how accurately VLMs can extract symbolic information from visual inputs and how noise in these symbols affects decision-making and gameplay performance. Our findings reveal that symbolic grounding is beneficial in VLMs only when symbol extraction is reliable, and highlight perception quality as a central bottleneck for future VLM-based agents.
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
symbolic grounding
interactive environments
perception-to-action
gameplay performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

symbolic grounding
vision-language models
spatial representations
interactive environments
perception-action loop
Ashish Baghel (Lossfunk)
Paras Chopra (Independent Researcher)