ReZero: Enhancing LLM search ability by trying one-more-time

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Retrieval-Augmented Generation (RAG) systems often terminate search prematurely on knowledge-intensive tasks when the initial retrieval fails, and existing approaches do not explicitly model retry behavior. Method: We propose the first reinforcement learning framework explicitly driven by a "try again" principle, treating search persistence as a learnable policy: the LLM autonomously generates reformulated queries, and the system grants an explicit reward for retrying after unfavorable retrieval feedback, such as the absence of relevant documents. Contribution/Results: Unlike prior methods that optimize only query rewriting or answer reasoning, this approach integrates post-failure retries directly into end-to-end reinforcement learning. Evaluated on a standard knowledge-retrieval benchmark, it achieves 46.88% accuracy, 21.88 percentage points above the strongest baseline, and markedly improves LLM robustness and adaptability in complex information-seeking scenarios.

📝 Abstract
Retrieval-Augmented Generation (RAG) improves Large Language Model (LLM) performance on knowledge-intensive tasks but depends heavily on initial search query quality. Current methods, often using Reinforcement Learning (RL), typically focus on query formulation or reasoning over results, without explicitly encouraging persistence after a failed search. We introduce ReZero (Retry-Zero), a novel RL framework that directly rewards the act of retrying a search query following an initial unsuccessful attempt. This incentivizes the LLM to explore alternative queries rather than prematurely halting. ReZero demonstrates significant improvement, achieving 46.88% accuracy compared to a 25% baseline. By rewarding persistence, ReZero enhances LLM robustness in complex information-seeking scenarios where initial queries may prove insufficient.
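The reward scheme the abstract describes, a base reward for a correct answer plus an explicit bonus for retrying after an unsuccessful search, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the `(query, retrieved_relevant)` event format, and the reward magnitudes are all assumptions chosen for illustration.

```python
def retry_reward(events, answer_correct,
                 correct_reward=1.0, retry_bonus=0.2):
    """Illustrative reward shaping in the spirit of ReZero (not the
    paper's actual reward function).

    `events` is a hypothetical trajectory: a list of
    (query, retrieved_relevant) pairs, one per search attempt.
    A bonus is granted each time the model issues a new query
    after an attempt that retrieved nothing relevant.
    """
    reward = correct_reward if answer_correct else 0.0
    # Walk consecutive attempts; a retry after a failed search earns a bonus.
    for (_, relevant_prev), (_query_next, _) in zip(events, events[1:]):
        if not relevant_prev:
            reward += retry_bonus
    return reward
```

Under this sketch, a trajectory that fails once, retries, and then answers correctly scores higher than one that answers correctly on the first try only because the bonus rewards the act of persisting, which is the behavior the framework is designed to reinforce.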
Problem

Research questions and friction points this paper is trying to address.

Improving LLM search persistence after failed attempts
Enhancing query retry mechanisms in retrieval-augmented generation
Boosting LLM robustness in complex information-seeking tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

RL framework rewarding retry after failed search
Encourages alternative queries instead of halting
Improves LLM robustness in complex searches
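At inference time, the behavior the list above describes, reformulating and retrying instead of halting, amounts to a simple search loop. The sketch below is a hypothetical illustration, not the paper's system: `search_fn` and `reformulate_fn` are assumed callables standing in for the retriever and the trained LLM's query-rewriting step.

```python
def search_with_retries(question, search_fn, reformulate_fn, max_retries=3):
    """Hypothetical retry loop illustrating the 'try again' behavior.

    search_fn(query) returns a list of documents (empty on failure);
    reformulate_fn(question, prev_query) proposes an alternative query.
    Returns (documents, number_of_attempts_used_before_success).
    """
    query = question
    for attempt in range(max_retries + 1):
        docs = search_fn(query)
        if docs:
            return docs, attempt  # success: stop retrying
        # Failed retrieval: reformulate and try again instead of halting.
        query = reformulate_fn(question, query)
    return [], max_retries + 1  # exhausted the retry budget
```

The point of training with a retry reward is that the policy learns when to take the reformulation branch rather than emitting an answer from an empty context.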