Tool-Star: Empowering LLM-Brained Multi-Tool Reasoner via Reinforcement Learning

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitation of large language models (LLMs) in autonomously invoking and coordinating multiple external tools for complex reasoning, this paper proposes Tool-Star, a reinforcement learning–based framework. Methodologically, it introduces: (1) a novel tool-integrated reasoning data synthesis pipeline; (2) a two-stage training paradigm—first, cold-start fine-tuning to bootstrap tool exploration, followed by multi-tool self-critic RL with hierarchical reward modeling to deepen collaborative reasoning; and (3) dynamic, stepwise orchestration of six categories of external tools during inference. Evaluated on more than ten challenging reasoning benchmarks, Tool-Star consistently outperforms state-of-the-art methods, demonstrating both effectiveness and efficiency. The implementation is publicly available.
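The hierarchical reward modeling is described only at a high level here. A minimal sketch of one plausible shaping scheme—a format gate, then answer correctness, then a bonus for multi-tool collaboration—is shown below; the function signature, thresholds, and weights are illustrative assumptions, not the paper's actual reward formula:

```python
def hierarchical_reward(answer_correct: bool,
                        format_valid: bool,
                        tools_used: int,
                        tool_bonus: float = 0.1) -> float:
    """Illustrative hierarchical reward for a tool-use rollout.

    Assumed hierarchy: malformed trajectories are penalized outright,
    wrong answers earn nothing, correct answers earn a base reward,
    and successful multi-tool collaboration adds a small bonus.
    """
    if not format_valid:      # level 1: reject malformed trajectories
        return -1.0
    if not answer_correct:    # level 2: wrong answers get no reward
        return 0.0
    reward = 1.0              # level 3: base reward for a correct answer
    if tools_used >= 2:       # level 4: bonus for multi-tool collaboration
        reward += tool_bonus
    return reward
```

Layering rewards this way lets the policy first learn to emit well-formed tool calls before being pushed toward correctness and collaboration.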

📝 Abstract
Recently, large language models (LLMs) have shown remarkable reasoning capabilities via large-scale reinforcement learning (RL). However, leveraging the RL algorithm to empower effective multi-tool collaborative reasoning in LLMs remains an open challenge. In this paper, we introduce Tool-Star, an RL-based framework designed to empower LLMs to autonomously invoke multiple external tools during stepwise reasoning. Tool-Star integrates six types of tools and incorporates systematic designs in both data synthesis and training. To address the scarcity of tool-use data, we propose a general tool-integrated reasoning data synthesis pipeline, which combines tool-integrated prompting with hint-based sampling to automatically and scalably generate tool-use trajectories. A subsequent quality normalization and difficulty-aware classification process filters out low-quality samples and organizes the dataset from easy to hard. Furthermore, we propose a two-stage training framework to enhance multi-tool collaborative reasoning by: (1) cold-start fine-tuning, which guides LLMs to explore reasoning patterns via tool-invocation feedback; and (2) a multi-tool self-critic RL algorithm with hierarchical reward design, which reinforces reward understanding and promotes effective tool collaboration. Experimental analyses on over 10 challenging reasoning benchmarks highlight the effectiveness and efficiency of Tool-Star. The code is available at https://github.com/dongguanting/Tool-Star.
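The abstract's "autonomously invoke multiple external tools during stepwise reasoning" follows the common pattern of interleaving generation with tool execution: tool calls embedded in the model's output are executed and their results fed back into the context for the next step. A minimal sketch, assuming a `<tool name="...">…</tool>` call syntax and a toy tool registry (the tag format and registry are illustrative, not the repository's actual API):

```python
import re
from typing import Callable

# Illustrative registry; each of the paper's six tool categories
# (e.g. search, code execution) would register a callable here.
TOOLS: dict[str, Callable[[str], str]] = {
    "calc": lambda expr: str(eval(expr)),  # toy calculator tool
}

CALL = re.compile(r'<tool name="(\w+)">(.*?)</tool>', re.S)

def step(model_output: str) -> str:
    """Replace each embedded tool call with its execution result,
    so the result can be appended to the context for the next
    reasoning step."""
    def run(m: re.Match) -> str:
        name, arg = m.group(1), m.group(2).strip()
        result = TOOLS[name](arg) if name in TOOLS else "unknown tool"
        return f"<result>{result}</result>"
    return CALL.sub(run, model_output)
```

For example, `step('area: <tool name="calc">2 + 3</tool>')` yields `'area: <result>5</result>'`, which would be concatenated onto the prompt before the model continues reasoning.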
Problem

Research questions and friction points this paper is trying to address.

Enabling LLMs to autonomously use multiple external tools during reasoning
Addressing scarcity of tool-use data via automated synthesis pipeline
Enhancing multi-tool collaboration via two-stage RL training framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

RL-based framework for multi-tool reasoning
Tool-integrated data synthesis pipeline
Two-stage training with hierarchical rewards