Evolutionary Prompt Optimization Discovers Emergent Multimodal Reasoning Strategies in Vision-Language Models

📅 2025-03-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of eliciting multimodal reasoning and tool-use capabilities from frozen vision-language models (VLMs) without fine-tuning. We propose an evolutionary-algorithm-based zero-shot prompt-search framework that integrates XML-structured markup for explicit reasoning scaffolding and Python interpreter support for executable tool invocation. The method automatically discovers generalizable, system-level prompt templates that enable stepwise visual reasoning, including image processing and geometric computation. Our key contribution is the first demonstration of *prompt-driven emergent multi-step reasoning and tool calling* in VLMs, achieved entirely through input prompting without any model parameter updates. Evaluated on challenging spatial vision benchmarks (MathVista, M3CoT, and GeoBench-VLM), our approach achieves up to 50% relative zero-shot accuracy improvement over strong baselines, with significantly enhanced cross-task generalization.
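The select-and-mutate loop behind this search is straightforward to sketch. Below is a minimal, hypothetical rendering: `mutate_prompt` and `fitness` are toy stand-ins (the paper mutates prompts with a language model and scores them by zero-shot VLM accuracy on visual tasks), but the survival-of-the-fittest structure is the one the summary describes.

```python
import random

# Toy stand-ins: the real system proposes mutations with an LLM and scores
# prompts by zero-shot VLM accuracy; here both are simplified so the
# evolutionary loop itself is runnable end to end.
MUTATIONS = [
    " Think step by step.",
    " Decompose the image into regions before answering.",
    " When useful, emit Python code inside <tool>...</tool> tags.",
]

def mutate_prompt(prompt: str) -> str:
    """Toy mutation: append one candidate instruction."""
    return prompt + random.choice(MUTATIONS)

def fitness(prompt: str) -> float:
    """Toy fitness: reward tool-call scaffolding and instruction diversity."""
    return float("<tool>" in prompt) + 0.01 * len(set(prompt.split()))

def evolve(seed: str, pop_size: int = 8, generations: int = 10) -> str:
    """Evolve a population of system prompts, keeping the fittest half."""
    population = [mutate_prompt(seed) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: pop_size // 2]          # survival of the fittest
        children = [mutate_prompt(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(evolve("You are a careful visual reasoner."))
```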

📝 Abstract
We present a framework for optimizing prompts in vision-language models to elicit multimodal reasoning without model retraining. Using an evolutionary algorithm to guide prompt updates downstream of visual tasks, our approach improves upon baseline prompt-updating algorithms, which lack evolution-style "survival of the fittest" iteration. Crucially, we find that this approach enables the language model to independently discover progressive problem-solving techniques over several evolution generations. For example, the model reasons that, to "break down" visually complex spatial tasks, making a tool call to a Python interpreter to perform operations such as cropping, image segmentation, or saturation changes would significantly improve performance. Our experiments show that explicitly evoking this tool-calling behavior via paired opening and closing system-level XML tags can effectively flag Python interpreter access, allowing the same language model to generate relevant programs and yielding advanced multimodal functionality. This functionality can be crystallized into a system-level prompt that improves performance at inference time; our experiments suggest up to approximately 50% relative improvement across select visual tasks. Prompts are optimized and evaluated on subtasks from the MathVista, M3CoT, and GeoBench-VLM datasets. Importantly, our approach shows that evolutionary prompt optimization guides language models toward self-reasoning discoveries, which yield improved zero-shot generalization across tasks.
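The mechanism the abstract describes (XML tags flagging interpreter access) can be made concrete with a small, hypothetical handler: scan the model's output for paired tags, execute the enclosed Python, and return the captured output. The `<tool>` tag name is an assumption on our part (the actual tag was garbled in extraction), and a real deployment would sandbox the `exec` call.

```python
import contextlib
import io
import re

# Hypothetical tool-call handler; the <tool> tag name is assumed.
TOOL_RE = re.compile(r"<tool>(.*?)</tool>", re.DOTALL)

def run_tool_calls(model_output: str) -> list[str]:
    """Execute each <tool>...</tool> Python block and capture its stdout."""
    outputs = []
    for code in TOOL_RE.findall(model_output):
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})  # note: unsandboxed; for illustration only
        outputs.append(buf.getvalue())
    return outputs

reply = "First, measure the region.\n<tool>\nprint(640 * 480)\n</tool>"
print(run_tool_calls(reply))  # -> ['307200\n']
```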
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompts for multimodal reasoning in vision-language models
Using evolutionary algorithms to improve prompt-updating strategies
Enhancing zero-shot generalization via self-reasoning discoveries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolutionary algorithm optimizes multimodal prompts
XML tags enable Python interpreter tool calling (see the sketch after this list)
Self-reasoning discoveries improve zero-shot generalization
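As a concrete illustration of the tool-calling bullet above, the program the VLM emits inside a tool call might look like the following hypothetical sketch, which crops a region of interest and boosts its saturation before the model re-inspects it. The file name, coordinates, and enhancement factor are invented for illustration, and the sketch assumes the Pillow library.

```python
from PIL import Image, ImageEnhance

# Hypothetical tool-call body: isolate and enhance part of a task image
# so the VLM can re-examine it at higher salience.
img = Image.open("task_image.png")
region = img.crop((50, 50, 250, 250))             # crop the region of interest
region = ImageEnhance.Color(region).enhance(1.8)  # boost saturation
region.save("task_image_region.png")              # handed back to the VLM
print("saved task_image_region.png")
```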
👥 Authors
Sid Bharthulwar, Harvard College
John Rho, Harvard College
Katrina Brown, Harvard College