🤖 AI Summary
Existing tool-use evaluation benchmarks predominantly focus on single-turn, stateless scenarios, neglecting the dynamic state evolution and tool lifecycle dependencies inherent in multi-turn dialogues. This work introduces DialogTool, the first benchmark explicitly designed for stateful, multi-turn tool usage, alongside VirtualMobile, a configurable virtual mobile environment. DialogTool systematically models the full tool lifecycle across three phases: tool creation; tool perception, selection, and execution; and role-consistent response generation. Its innovations include multi-turn state-tracking annotations, an API simulation execution environment, and a six-task, phase-wise evaluation protocol. The authors conduct a comprehensive cross-model evaluation of 13 state-of-the-art LLMs. Results reveal significant performance degradation in long-horizon, state-sensitive tool use: all models exhibit consistent accuracy decay across dialogue turns, exposing fundamental limitations in state representation and long-range dependency modeling.
📝 Abstract
Existing benchmarks that assess Language Models (LMs) as Language Agents (LAs) for tool use primarily focus on stateless, single-turn interactions or partial evaluations, such as tool selection in a single turn, overlooking the inherently stateful nature of interactions in multi-turn applications. To fill this gap, we propose DialogTool, a multi-turn dialogue dataset with stateful tool interactions covering the whole life cycle of tool use, across six key tasks in three stages: 1) *tool creation*; 2) *tool utilization*: tool awareness, tool selection, and tool execution; and 3) *role-consistent response*: response generation and role play. Furthermore, we build VirtualMobile, an embodied virtual mobile evaluation environment that simulates API calls and assesses the robustness of the created APIs (we use the terms "tools" and "APIs" interchangeably; there is no significant difference between them in this paper). Taking advantage of these artifacts, we conduct a comprehensive evaluation of 13 distinct open- and closed-source LLMs and provide detailed analysis at each stage, revealing that existing state-of-the-art LLMs still fall short of using tools reliably over long horizons.