FlySearch: Exploring how vision-language models explore

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the goal-directed active exploration capabilities of vision-language models (VLMs) in realistic, unstructured 3D environments. To this end, the authors introduce FlySearch, a photorealistic outdoor 3D environment for searching and navigating to objects in complex scenes, with three scenario sets of increasing difficulty that support end-to-end zero-shot evaluation of VLMs. A systematic study reveals three central failure modes in VLM-driven exploration: visual hallucination, context misunderstanding, and task-planning failure. State-of-the-art VLMs cannot reliably solve even the simplest exploration tasks, and the gap to human performance widens as tasks get harder; finetuning is shown to mitigate some of these failures. The full benchmark, scenarios, and underlying codebase are publicly released, providing a diagnostic toolkit for embodied VLM research.

📝 Abstract
The real world is messy and unstructured. Uncovering critical information often requires active, goal-driven exploration. It remains to be seen whether Vision-Language Models (VLMs), which recently emerged as a popular zero-shot tool in many difficult tasks, can operate effectively in such conditions. In this paper, we answer this question by introducing FlySearch, a 3D, outdoor, photorealistic environment for searching and navigating to objects in complex scenes. We define three sets of scenarios with varying difficulty and observe that state-of-the-art VLMs cannot reliably solve even the simplest exploration tasks, with the gap to human performance increasing as the tasks get harder. We identify a set of central causes, ranging from vision hallucination, through context misunderstanding, to task planning failures, and we show that some of them can be addressed by finetuning. We publicly release the benchmark, scenarios, and the underlying codebase.
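The zero-shot evaluation described above follows the standard embodied-agent pattern: the environment renders an observation, the VLM proposes an action, and the loop repeats until the model declares the target found or the step budget runs out. The sketch below illustrates this loop with toy stand-ins; all class and function names are hypothetical illustrations, not the benchmark's actual API.

```python
# Hypothetical sketch of a zero-shot VLM exploration loop of the kind
# FlySearch evaluates. MockEnv and mock_vlm are toy stand-ins, not the
# benchmark's real simulator or models.
from dataclasses import dataclass


@dataclass
class MockEnv:
    """Toy 1-D stand-in for a 3D search environment: the target sits at a
    fixed cell and the agent starts at position 0."""
    target: int = 3
    position: int = 0

    def observe(self) -> str:
        # A real environment would return a rendered camera image here.
        return "target here" if self.position >= self.target else "target ahead"

    def step(self, action: str) -> None:
        if action == "forward":
            self.position += 1


def mock_vlm(observation: str) -> str:
    """Stand-in for a VLM policy: maps an observation to an action."""
    return "found" if observation == "target here" else "forward"


def run_episode(env: MockEnv, policy, max_steps: int = 10) -> bool:
    """Observe -> query model -> act, repeated until the policy declares
    'found' or the step budget is exhausted. Success requires the
    declaration to happen at the target."""
    for _ in range(max_steps):
        action = policy(env.observe())
        if action == "found":
            return env.position == env.target
        env.step(action)
    return False
```

Within this framing, the failure modes the paper identifies map onto distinct loop stages: hallucination corrupts `observe`, context misunderstanding corrupts the policy's reading of the prompt, and planning failure produces an unproductive action sequence.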
Problem

Research questions and friction points this paper is trying to address.

Assessing VLMs' ability in active, goal-driven exploration
Evaluating VLMs' performance in complex 3D outdoor scenarios
Identifying and addressing key VLM limitations in exploration tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

FlySearch: 3D outdoor photorealistic environment for exploration
Shows that finetuning can mitigate identified VLM failure modes (hallucination, context misunderstanding, task planning)
Public benchmark for complex scene navigation tasks