Bandit-Based Prompt Design Strategy Selection Improves Prompt Optimizers

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the implicit and inefficient strategy selection in large language model (LLM) prompt optimization by proposing the first explicit prompt design strategy selection mechanism. Methodologically, it introduces Thompson sampling, a multi-armed bandit algorithm, into the prompt optimization pipeline, enabling dynamic, learnable strategy scheduling within the EvoPrompt framework. Evaluated on the BIG-Bench Hard benchmark, the mechanism significantly improves performance for both Llama-3-8B-Instruct and GPT-4o mini, outperforming fixed or heuristic baseline strategies. Ablation studies confirm that Thompson sampling yields superior scheduling efficacy compared to alternative selection mechanisms. The implementation is publicly available.

📝 Abstract
Prompt optimization aims to search for effective prompts that enhance the performance of large language models (LLMs). Although existing prompt optimization methods have discovered effective prompts, they often differ from sophisticated prompts carefully designed by human experts. Prompt design strategies, representing best practices for improving prompt performance, can be key to improving prompt optimization. Recently, a method termed the Autonomous Prompt Engineering Toolbox (APET) has incorporated various prompt design strategies into the prompt optimization process. In APET, the LLM is required to implicitly select and apply the appropriate strategies because prompt design strategies can have negative effects. This implicit selection may be suboptimal due to the limited optimization capabilities of LLMs. This paper introduces Optimizing Prompts with sTrategy Selection (OPTS), which implements explicit selection mechanisms for prompt design strategies. We propose three mechanisms, including a Thompson sampling-based approach, and integrate them into EvoPrompt, a well-known prompt optimizer. Experiments optimizing prompts for two LLMs, Llama-3-8B-Instruct and GPT-4o mini, were conducted using BIG-Bench Hard. Our results show that the selection of prompt design strategies improves the performance of EvoPrompt, and the Thompson sampling-based mechanism achieves the best overall results. Our experimental code is provided at https://github.com/shiralab/OPTS.
Problem

Research questions and friction points this paper is trying to address.

Prompts found by existing optimizers lag behind expert-designed prompts
Implicit, LLM-driven strategy selection in APET can be suboptimal
Prompt design strategies can hurt performance when applied indiscriminately
Innovation

Methods, ideas, or system contributions that make the work stand out.

First explicit prompt design strategy selection mechanism (OPTS)
Thompson sampling-based strategy scheduling integrated into EvoPrompt
Consistent gains on BIG-Bench Hard for Llama-3-8B-Instruct and GPT-4o mini
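The Thompson sampling mechanism described above can be sketched as a Beta-Bernoulli bandit: each prompt design strategy keeps a Beta posterior over its probability of improving a prompt, the optimizer samples from every posterior and applies the highest-sampled strategy, then updates that posterior with a binary reward. This is an illustrative sketch only; the strategy names and the reward definition here are assumptions, not the paper's exact implementation.

```python
import random


class ThompsonStrategySelector:
    """Beta-Bernoulli Thompson sampling over prompt design strategies.

    Minimal sketch of bandit-based strategy scheduling; strategy names
    and the improvement-based reward are illustrative assumptions.
    """

    def __init__(self, strategies):
        self.strategies = list(strategies)
        # One Beta(alpha, beta) posterior per strategy; Beta(1, 1) is uniform.
        self.alpha = {s: 1.0 for s in self.strategies}
        self.beta = {s: 1.0 for s in self.strategies}

    def select(self, rng=random):
        # Sample a success probability from each posterior; pick the argmax.
        samples = {
            s: rng.betavariate(self.alpha[s], self.beta[s])
            for s in self.strategies
        }
        return max(samples, key=samples.get)

    def update(self, strategy, improved):
        # Binary reward: did applying the strategy improve the prompt's score?
        if improved:
            self.alpha[strategy] += 1.0
        else:
            self.beta[strategy] += 1.0
```

In a simulation where one (hypothetical) strategy succeeds more often than the others, the selector concentrates its choices on that strategy as evidence accumulates:

```python
rng = random.Random(0)
selector = ThompsonStrategySelector(["role_play", "step_by_step", "few_shot"])
true_p = {"role_play": 0.2, "step_by_step": 0.8, "few_shot": 0.4}  # simulated
counts = {s: 0 for s in true_p}
for _ in range(500):
    s = selector.select(rng)
    counts[s] += 1
    selector.update(s, rng.random() < true_p[s])
# The best-performing strategy dominates the selection counts.
```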