SEM: Reinforcement Learning for Search-Efficient Large Language Models

📅 2025-05-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently perform redundant web searches during tool-augmented reasoning, leading to low inference efficiency and unnecessary computational overhead. Method: We propose the first explicit search-efficiency optimization framework, comprising: (i) a balanced discriminative dataset constructed by integrating MuSiQue and MMLU; (ii) a structured reasoning template that explicitly guides search decision-making; and (iii) post-training via Group Relative Policy Optimization (GRPO), guided by a novel composite reward function that jointly optimizes answer accuracy and search necessity, thereby mitigating the over-searching inherent in standard RL approaches. Contribution/Results: Experiments demonstrate substantial reductions in redundant search frequency across multiple benchmarks, while maintaining or improving answer accuracy. The framework enhances LLMs' ability to invoke external knowledge sources in a more judicious, autonomous, and context-aware manner, advancing efficient and reliable tool-augmented reasoning.

📝 Abstract
Recent advancements in Large Language Models (LLMs) have demonstrated their capabilities not only in reasoning but also in invoking external tools, particularly search engines. However, teaching models to discern when to invoke search and when to rely on their internal knowledge remains a significant challenge. Existing reinforcement learning approaches often lead to redundant search behaviors, resulting in inefficiency and excess cost. In this paper, we propose SEM, a novel post-training reinforcement learning framework that explicitly trains LLMs to optimize search usage. By constructing a balanced dataset combining MuSiQue and MMLU, we create scenarios where the model must learn to distinguish between questions it can answer directly and those requiring external retrieval. We design a structured reasoning template and employ Group Relative Policy Optimization (GRPO) to post-train the model's search behaviors. Our reward function encourages accurate answering without unnecessary search while promoting effective retrieval when needed. Experimental results demonstrate that our method significantly reduces redundant search operations while maintaining or improving answer accuracy across multiple challenging benchmarks. This framework advances the model's reasoning efficiency and extends its capability to judiciously leverage external knowledge.
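The abstract describes a structured reasoning template that makes the search decision explicit. The paper's exact template is not reproduced on this page; below is a minimal sketch of how such a template and a search-decision check might look. The tag names (`<think>`, `<search>`, `<result>`, `<answer>`) and the helper `needs_search` are assumptions for illustration, not quoted from the paper.

```python
# Hypothetical structured reasoning template: the model first reasons
# about whether its internal knowledge suffices, and only then decides
# whether to emit a search query. Tag names and layout are assumed.
TEMPLATE = (
    "<think>{reason about whether internal knowledge suffices}</think>\n"
    "<search>{query, emitted only if external retrieval is needed}</search>\n"
    "<result>{retrieved evidence, filled in by the search tool}</result>\n"
    "<answer>{final answer}</answer>"
)

def needs_search(completion: str) -> bool:
    """Check whether a model completion chose to invoke the search tool."""
    return "<search>" in completion
```

With a template like this, reward computation can inspect the completion string alone to tell whether a search was issued, which is what makes a search-necessity reward term straightforward to implement.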
Problem

Research questions and friction points this paper is trying to address.

Optimizing search usage in LLMs to reduce redundancy
Distinguishing when to search versus use internal knowledge
Improving reasoning efficiency while maintaining answer accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-training RL framework optimizes search usage
Balanced dataset trains model to distinguish queries
GRPO enhances search behavior efficiency
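The innovations above hinge on a composite reward that scores answer accuracy while penalizing unnecessary search, combined with GRPO's group-relative advantage normalization. A toy sketch follows; the weight value, function names, and reward shape are illustrative assumptions, not the paper's actual formulation.

```python
from statistics import mean, pstdev

def composite_reward(correct: bool, used_search: bool, search_needed: bool,
                     redundancy_penalty: float = 0.5) -> float:
    """Toy composite reward: full credit for a correct answer, minus a
    penalty when the model searched on a question it could have answered
    directly. The penalty weight is an assumed value, not the paper's."""
    reward = 1.0 if correct else 0.0
    if used_search and not search_needed:
        # Over-searching: the question was answerable from internal knowledge.
        reward -= redundancy_penalty
    return reward

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages as in GRPO: normalize each sampled
    completion's reward by the group's mean and standard deviation."""
    mu, sigma = mean(rewards), pstdev(rewards)
    if sigma == 0:
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]
```

Under this shaping, a correct answer without a needless search scores highest in its group, so GRPO pushes probability mass toward completions that answer directly when retrieval is unnecessary.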
Zeyang Sha (Ant Group)
Shiwen Cui (Ant Group)
Weiqiang Wang (Ant Group)