ProRefine: Inference-time Prompt Refinement with Textual Feedback

๐Ÿ“… 2025-06-05
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
In multi-agent collaborative reasoning, poorly designed prompts cause errors to propagate during inference. Method: This paper proposes an unsupervised, training-free, LLM-based self-feedback framework for inference-time prompt refinement: it dynamically parses errors from LLM-generated textual feedback, iteratively rewrites prompts, and adapts them to multi-step reasoning tasks, requiring no additional annotations or model fine-tuning. Contribution/Results: Evaluated on five mathematical reasoning benchmarks, the method improves over zero-shot chain-of-thought by 3–37 percentage points, substantially narrowing the performance gap between small and large language models. The authors present it as the first approach to enable purely text-feedback-driven online prompt optimization, improving both the reliability and scalability of multi-agent reasoning systems.
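The feedback-driven loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `prorefine` and the three callables (`run_task`, `critique`, `rewrite`) are hypothetical stand-ins for the LLM roles the summary describes (executing the task, producing textual feedback, and rewriting the prompt).

```python
from typing import Callable

def prorefine(prompt: str,
              run_task: Callable[[str], str],
              critique: Callable[[str, str], str],
              rewrite: Callable[[str, str], str],
              max_iters: int = 3) -> str:
    """Hypothetical sketch of inference-time prompt refinement.

    run_task: runs the task with the current prompt, returns the output.
    critique: returns textual feedback on (prompt, output); "OK" means stop.
    rewrite:  produces an improved prompt from (prompt, feedback).
    No training or labels are involved; only textual feedback drives the loop.
    """
    for _ in range(max_iters):
        output = run_task(prompt)
        feedback = critique(prompt, output)
        if feedback.strip().upper() == "OK":
            break  # critic is satisfied; keep the current prompt
        prompt = rewrite(prompt, feedback)
    return prompt

# Toy deterministic stand-ins for the three LLM roles:
run = lambda p: "answer"
crit = lambda p, o: "OK" if "step by step" in p else "Ask for step-by-step reasoning."
rew = lambda p, fb: p + " Think step by step."

refined = prorefine("Solve the problem.", run, crit, rew)
# refined == "Solve the problem. Think step by step."
```

In the actual method each callable would be an LLM call; the toy lambdas above only exist so the loop's control flow can be demonstrated deterministically.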

๐Ÿ“ Abstract
Agentic workflows, where multiple AI agents collaborate to accomplish complex tasks like reasoning or planning, are becoming increasingly prevalent. However, these workflows often suffer from error propagation and sub-optimal performance, largely due to poorly designed prompts that fail to effectively guide individual agents. This is a critical problem because it limits the reliability and scalability of these powerful systems. We introduce ProRefine, an innovative inference-time prompt optimization method that leverages textual feedback from large language models (LLMs) to address this challenge. ProRefine dynamically refines prompts for multi-step reasoning tasks without additional training or ground truth labels. Evaluated on five benchmark mathematical reasoning datasets, ProRefine significantly surpasses zero-shot Chain-of-Thought baselines by 3 to 37 percentage points. This approach not only boosts accuracy but also allows smaller models to match the performance of larger ones, highlighting its potential for efficient and scalable AI deployment, and democratizing access to high-performing AI.
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompts in multi-agent workflows to reduce errors
Improving reasoning accuracy without training or labeled data
Enabling smaller models to match larger ones' performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic prompt refinement using LLM feedback
No additional training or labels required
Boosts small model performance significantly
๐Ÿ”Ž Similar Papers
No similar papers found.