ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle to coordinate reasoning with multi-step external search in complex multi-hop question answering. Method: This paper proposes the first purely reinforcement-learning-driven framework that jointly models reasoning and search, explicitly embedding search operations into the chain of thought. Text-based thinking decides when and how to search, and retrieved results feed back into further reasoning; the whole loop is trained end-to-end via policy gradients and reward shaping, without any human-annotated reasoning traces or step-level supervision. Contribution/Results: Built on the Qwen2.5 series, models trained with this framework autonomously develop emergent capabilities such as reflection and self-correction. After training on a single dataset, the 32B model achieves strong zero-shot generalization across multiple cross-domain reasoning and search benchmarks, outperforming most supervised fine-tuning approaches.
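The summary describes a rollout in which generation pauses at search calls and resumes after retrieved evidence is appended. A minimal sketch of such an interleaved loop is below; the `<search>`/`<result>`/`<answer>` tag names and the `generate`/`retrieve` callables are illustrative assumptions, not the paper's exact interface.

```python
import re

# Hypothetical tag marking a search call inside the chain of thought.
SEARCH_RE = re.compile(r"<search>(.*?)</search>", re.DOTALL)

def rollout(generate, retrieve, question, max_steps=8):
    """Interleave model generation with retrieval (illustrative sketch).

    generate(prompt) -> next text chunk, which may end in a
    <search>query</search> call or an <answer>...</answer> tag.
    retrieve(query)  -> retrieved passages as a string.
    Both are stand-ins for an LLM policy and a search engine.
    """
    trace = question
    for _ in range(max_steps):
        chunk = generate(trace)
        trace += chunk
        match = SEARCH_RE.search(chunk)
        if match is None:
            break  # no search call: the model emitted a final answer
        results = retrieve(match.group(1).strip())
        trace += f"<result>{results}</result>"  # feed evidence back in
    return trace
```

Because the search results are appended to the trace before the next generation step, the model's subsequent reasoning can condition on the retrieved evidence, which is the "result-feedback-driven reasoning loop" the summary refers to.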

📝 Abstract
Large Language Models (LLMs) have shown remarkable capabilities in reasoning, exemplified by the success of OpenAI-o1 and DeepSeek-R1. However, integrating reasoning with external search processes remains challenging, especially for complex multi-hop questions requiring multiple retrieval steps. We propose ReSearch, a novel framework that trains LLMs to Reason with Search via reinforcement learning without using any supervised data on reasoning steps. Our approach treats search operations as integral components of the reasoning chain, where when and how to perform searches is guided by text-based thinking, and search results subsequently influence further reasoning. We train ReSearch on Qwen2.5-7B(-Instruct) and Qwen2.5-32B(-Instruct) models and conduct extensive experiments. Despite being trained on only one dataset, our models demonstrate strong generalizability across various benchmarks. Analysis reveals that ReSearch naturally elicits advanced reasoning capabilities such as reflection and self-correction during the reinforcement learning process.
Problem

Research questions and friction points this paper is trying to address.

Integrating reasoning with external search for LLMs
Handling complex multi-hop questions with multiple retrievals
Training LLMs to reason with search via reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning trains LLMs to reason with search
Search operations integrated into reasoning chain dynamically
Self-correction and reflection emerge during training
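The summary notes that training uses reward shaping with no step-level supervision, i.e. the reasoning chain itself is never graded, only the outcome. A minimal sketch of such an outcome-only reward is below; the `<answer>` tag, the F1 metric, and the format penalty are assumptions for illustration, not the paper's exact reward design.

```python
import re

def f1_score(pred, gold):
    """Token-level F1 between a predicted and a gold answer string."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def outcome_reward(trace, gold_answer):
    """Outcome-only reward: penalize a malformed rollout, otherwise
    score answer F1. No per-step supervision touches the chain of thought."""
    m = re.search(r"<answer>(.*?)</answer>", trace, re.DOTALL)
    if m is None:
        return -1.0  # no well-formed answer tag: format penalty
    return f1_score(m.group(1).strip(), gold_answer)
```

Because only the final answer is rewarded, intermediate behaviors such as reflecting on a bad retrieval and issuing a corrected search query are not taught directly; they can emerge when they raise the expected outcome reward, which is consistent with the self-correction the paper reports.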