🤖 AI Summary
This work addresses the challenge of eliciting multimodal reasoning and tool-use capabilities from frozen vision-language models (VLMs) without fine-tuning. We propose an evolutionary algorithm-based zero-shot prompt search framework that integrates XML-structured markup for explicit reasoning scaffolding and Python interpreter support for executable tool invocation. The method automatically discovers generalizable, system-level prompt templates enabling stepwise visual reasoning—including image processing and geometric computation. Our key contribution is the first demonstration of *prompt-driven emergent multi-step reasoning and tool calling* in VLMs, achieved entirely through input prompting without any model parameter updates. Evaluated on challenging spatial vision benchmarks—MathVista, M3CoT, and GeoBench-VLM—our approach achieves up to 50% relative zero-shot accuracy improvement over strong baselines, with significantly enhanced cross-task generalization.
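The summary above describes an evolutionary search over prompt templates with survival-of-the-fittest iteration. A minimal sketch of such a loop is below; the mutation operators, fitness function, and population sizes are illustrative assumptions, not the paper's actual configuration.

```python
import random

def mutate(prompt: str) -> str:
    """Toy mutation operator: append one candidate reasoning instruction.
    Real operators would be richer (rewrites, crossover, deletions)."""
    edits = [
        " Think step by step.",
        " Break the visual task into sub-steps.",
        " If helpful, emit Python code for image operations.",
    ]
    return prompt + random.choice(edits)

def evolve(seed: str, score, generations: int = 5,
           pop_size: int = 8, keep: int = 2) -> str:
    """Survival-of-the-fittest loop over prompt templates:
    rank by fitness, keep elites, refill the population with mutants."""
    population = [seed] + [mutate(seed) for _ in range(pop_size - 1)]
    for _ in range(generations):
        ranked = sorted(population, key=score, reverse=True)
        elites = ranked[:keep]
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - keep)]
    return max(population, key=score)

# Toy fitness: in practice this would be zero-shot accuracy on a
# held-out slice of the visual benchmark.
toy_score = lambda p: p.count("step")
best = evolve("Answer the visual question.", toy_score)
```

In the paper's setting, `score` would be downstream task accuracy on subtasks from MathVista, M3CoT, or GeoBench-VLM rather than this string-counting stand-in.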
📝 Abstract
We present a framework for optimizing prompts in vision-language models to elicit multimodal reasoning without model retraining. Using an evolutionary algorithm to guide prompt updates downstream of visual tasks, our approach improves upon baseline prompt-updating algorithms, which lack evolution-style "survival of the fittest" iteration. Crucially, we find this approach enables the language model to independently discover progressive problem-solving techniques over successive evolution generations. For example, the model reasons that, to "break down" visually complex spatial tasks, making a tool call to a Python interpreter for operations such as cropping, image segmentation, or saturation changes would significantly improve performance. Our experiments show that explicitly evoking this tool calling, via paired system-level XML tags, can effectively flag Python interpreter access for the same language model to generate relevant programs, yielding advanced multimodal functionality. This functionality can be crystallized into a system-level prompt that induces improved performance at inference time, and our experiments suggest up to $\approx 50\%$ relative improvement on select visual tasks. Downstream performance is trained and evaluated across subtasks from the MathVista, M3CoT, and GeoBench-VLM datasets. Importantly, our approach shows that evolutionary prompt optimization guides language models toward self-reasoning discoveries, which result in improved zero-shot generalization across tasks.
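The abstract describes flagging Python interpreter access through paired XML tags in the model's output. A minimal harness for that mechanism might look like the sketch below; the `<tool>` tag name and the execution details are assumptions for illustration, not the paper's exact markup, and real deployments would sandbox the executed code.

```python
import contextlib
import io
import re

# Matches code spans the model wraps in paired XML tags.
# The tag name "tool" is hypothetical; the paper's markup may differ.
TOOL_RE = re.compile(r"<tool>(.*?)</tool>", re.DOTALL)

def run_tool_calls(model_output: str) -> list[str]:
    """Execute each tagged code span and return its captured stdout.
    A production harness would sandbox exec() and expose image-processing
    helpers (cropping, segmentation, saturation changes) to the code."""
    results = []
    for code in TOOL_RE.findall(model_output):
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})  # NOTE: unsandboxed; sketch only
        results.append(buf.getvalue().strip())
    return results

reply = ("To measure the cropped region I compute its area.\n"
         "<tool>print(120 * 45)</tool>")
print(run_tool_calls(reply))  # → ['5400']
```

The interpreter's outputs would then be fed back to the same model so it can continue its stepwise visual reasoning with the computed results in context.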