AI Summary
Current large language models face two key challenges in multi-turn function-calling tasks (e.g., travel planning, multi-stage analysis): (1) insufficient capability to summarize task progress from historical interactions, and (2) difficulty in aligning local actions with global objectives. To address these, we propose PARL-MT, a novel framework centered on explicit progress awareness. First, we introduce Progress Awareness Generation (PAG), a data synthesis method that constructs high-quality, progress-annotated multi-turn training data. Second, we design Progress Awareness-Guided Reinforcement Learning (PAG-RL), a reinforcement learning algorithm that explicitly models global task progression, suppresses redundant actions, and strengthens action–goal consistency. Evaluated on two public benchmarks, PARL-MT achieves significant improvements over state-of-the-art methods, demonstrating that explicit progress modeling is critical for enhancing coherence, efficiency, and robustness in multi-turn function calling.
Abstract
Large language models (LLMs) have achieved impressive success in single-turn function calling, yet real-world applications such as travel planning or multi-stage data analysis typically unfold across multi-turn conversations. In these settings, LLMs must not only issue accurate function calls at each step but also maintain progress awareness, the ability to summarize past interactions and plan future actions to ensure coherent, long-horizon task execution. Existing approaches, however, either reduce multi-turn training to isolated single-turn samples, which neglects task-level planning, or employ end-to-end reinforcement learning (RL) that struggles with redundancy and lacks explicit integration of progress awareness. To overcome these limitations, we introduce PARL-MT, a framework that explicitly incorporates progress awareness into LLM training for multi-turn function calling. PARL-MT combines (i) a Progress Awareness Generation (PAG) pipeline, which automatically constructs datasets coupling conversation summaries with future task planning, and (ii) a Progress Awareness-Guided Reinforcement Learning (PAG-RL) algorithm, which integrates progress awareness into RL training to reduce contextual redundancy and improve alignment between local actions and global task completion. Empirical results on two public benchmarks demonstrate that PARL-MT significantly outperforms existing methods, highlighting the effectiveness of progress awareness in enabling robust and efficient multi-turn function calling.
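To make the core idea concrete, the loop below is a minimal, hypothetical sketch of progress-aware multi-turn function calling: before each step, the raw dialogue history is compressed into a progress note (completed steps), and the next action is chosen against the global goal rather than the full history. All names here (`summarize_progress`, `plan_next`, `run_episode`) are illustrative stand-ins, not the paper's actual PAG/PAG-RL implementation.

```python
# Hypothetical sketch of progress-aware multi-turn function calling.
# In PARL-MT these roles would be played by the LLM (summarization and
# planning) and an RL-trained policy; here they are deterministic stubs.

def summarize_progress(history):
    """Stub: compress raw dialogue history into a short progress note."""
    completed = [turn["tool"] for turn in history if turn.get("ok")]
    return {"completed": completed}

def plan_next(progress, goal):
    """Stub: pick the next tool by aligning progress with the global goal."""
    for step in goal:
        if step not in progress["completed"]:
            return step  # first unfinished subgoal
    return None  # goal reached

def run_episode(goal):
    """Run one multi-turn episode, re-summarizing progress before each call."""
    history = []
    while True:
        progress = summarize_progress(history)   # progress awareness
        next_tool = plan_next(progress, goal)    # local action vs. global goal
        if next_tool is None:
            return [turn["tool"] for turn in history]
        # Pretend the function call succeeds; a real system would execute it.
        history.append({"tool": next_tool, "ok": True})

print(run_episode(["search_flights", "book_hotel", "plan_itinerary"]))
```

Conditioning on the compact progress note instead of the raw transcript is what (on the paper's account) reduces contextual redundancy and keeps each local action tied to task completion.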