🤖 AI Summary
Large language models (LLMs) struggle to autonomously internalize tool-use strategies without access to annotated reasoning trajectories.
Method: We propose a reinforcement learning (RL) framework that relies solely on binary reward signals—assessing structural validity and functional correctness—eliminating the need for supervised fine-tuning or strong-model distillation. Leveraging Qwen-2.5-Instruct as the base, we develop the Nemotron-Research-Tool-N1 series (7B/14B) and employ rule-guided RL to enable end-to-end acquisition of structured reasoning and tool interaction capabilities.
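The binary reward described above can be sketched as a simple two-stage check (a minimal illustration only; the `<think>`/`<tool_call>` tags, JSON schema, and exact-match rule here are assumptions for the sketch, not the paper's actual reward implementation):

```python
import json
import re

def binary_reward(model_output: str, expected_call: dict) -> float:
    """Return 1.0 only if the output is structurally valid AND the
    extracted tool call is functionally correct; otherwise 0.0."""
    # Structural validity: a reasoning block followed by exactly one
    # tool-call block (hypothetical tags; the real template may differ).
    pattern = r"<think>.*?</think>\s*<tool_call>(.*?)</tool_call>"
    match = re.fullmatch(pattern, model_output.strip(), flags=re.DOTALL)
    if match is None:
        return 0.0
    try:
        call = json.loads(match.group(1))
    except json.JSONDecodeError:
        return 0.0
    # Functional correctness: tool name and arguments must match the
    # ground-truth invocation (exact match is an assumption here).
    return 1.0 if call == expected_call else 0.0
```

Note that the reward never inspects the content of the reasoning inside `<think>…</think>`; only the structure and the final tool call are scored, which is what lets the model internalize its own reasoning strategies.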
Contribution/Results: To our knowledge, this is the first approach to achieve internalization and generalization of tool-use policies without explicit reasoning annotations or trajectory-level supervision. Our models achieve state-of-the-art performance on BFCL and API-Bank benchmarks, significantly outperforming GPT-4o. These results demonstrate that lightweight, binary reward signals can effectively drive the emergence of advanced reasoning and tool-integration capabilities in LLMs.
📝 Abstract
Equipping large language models with external tools has become a pivotal strategy for extending their functionality beyond text generation tasks. Prior work typically enhances tool-use abilities either by applying supervised fine-tuning (SFT) to enforce tool-call correctness or by distilling reasoning traces from stronger models for SFT. However, both approaches fall short, either omitting reasoning entirely or producing imitative reasoning that limits generalization. Inspired by the success of DeepSeek-R1 in eliciting reasoning through rule-based reinforcement learning, we develop the Nemotron-Research-Tool-N1 series of tool-using language models using a similar training paradigm. Rather than rigidly supervising intermediate reasoning traces distilled from stronger models, Nemotron-Research-Tool-N1 is optimized with a binary reward that evaluates only the structural validity and functional correctness of tool invocations. This lightweight supervision allows the model to autonomously internalize reasoning strategies, without the need for annotated reasoning trajectories. Experiments on the BFCL and API-Bank benchmarks show that Nemotron-Research-Tool-N1-7B and Nemotron-Research-Tool-N1-14B, built on Qwen-2.5-7B/14B-Instruct, achieve state-of-the-art results, outperforming GPT-4o on both evaluations.