PARL-MT: Learning to Call Functions in Multi-Turn Conversation with Progress Awareness

πŸ“… 2025-09-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Current large language models face two key challenges in multi-turn function-calling tasks (e.g., travel planning, multi-stage analysis): (1) insufficient capability to summarize task progress from historical interactions, and (2) difficulty in aligning local actions with global objectives. To address these, we propose PARL-MT, a framework centered on explicit progress awareness. First, we introduce Progress Awareness Generation (PAG), a data-synthesis method that constructs high-quality, progress-annotated multi-turn training data. Second, we design Progress Awareness-Guided Reinforcement Learning (PAG-RL), which explicitly models global task progression, suppresses redundant actions, and strengthens action–goal consistency. Evaluated on two public benchmarks, PARL-MT achieves significant improvements over state-of-the-art methods, demonstrating that explicit progress modeling is critical for coherence, efficiency, and robustness in multi-turn function calling.

πŸ“ Abstract
Large language models (LLMs) have achieved impressive success in single-turn function calling, yet real-world applications such as travel planning or multi-stage data analysis typically unfold across multi-turn conversations. In these settings, LLMs must not only issue accurate function calls at each step but also maintain progress awareness, the ability to summarize past interactions and plan future actions to ensure coherent, long-horizon task execution. Existing approaches, however, either reduce multi-turn training to isolated single-turn samples, which neglects task-level planning, or employ end-to-end reinforcement learning (RL) that struggles with redundancy and lacks explicit integration of progress awareness. To overcome these limitations, we introduce PARL-MT, a framework that explicitly incorporates progress awareness into LLM training for multi-turn function calling. PARL-MT combines (i) a Progress Awareness Generation (PAG) pipeline, which automatically constructs datasets coupling conversation summaries with future task planning, and (ii) a Progress Awareness-Guided Reinforcement Learning (PAG-RL) algorithm, which integrates progress awareness into RL training to reduce contextual redundancy and improve alignment between local actions and global task completion. Empirical results on two public benchmarks demonstrate that PARL-MT significantly outperforms existing methods, highlighting the effectiveness of progress awareness in enabling robust and efficient multi-turn function calling.
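To make the abstract's central idea concrete: "progress awareness" amounts to carrying an explicit summary of completed work and a plan of remaining sub-goals across turns, so that each function call is conditioned on task-level state rather than raw history alone. The sketch below is a minimal, hypothetical Python illustration of such a loop; the names (ProgressState, summarize, plan_next) and the stubbed tool execution are assumptions for exposition, not the paper's implementation.

```python
# Minimal sketch (assumed names, not the paper's code): a multi-turn loop that
# keeps an explicit ProgressState so each function call is conditioned on
# "what has been done" and "what remains", rather than on raw history alone.
from dataclasses import dataclass, field


@dataclass
class ProgressState:
    summary: str = ""                                 # what the dialogue has achieved so far
    plan: list[str] = field(default_factory=list)     # remaining sub-goals toward the task
    completed: set[str] = field(default_factory=set)  # sub-goals already satisfied


def summarize(history: list[dict]) -> str:
    """Stand-in for an LLM-generated conversation summary."""
    return "; ".join(turn["action"] for turn in history) or "nothing done yet"


def plan_next(state: ProgressState) -> list[str]:
    """Stand-in for LLM planning: keep only sub-goals not yet completed."""
    return [step for step in state.plan if step not in state.completed]


def run_episode(sub_goals: list[str]) -> list[dict]:
    history: list[dict] = []
    state = ProgressState(plan=list(sub_goals))
    while True:
        state.summary = summarize(history)
        remaining = plan_next(state)
        if not remaining:  # global objective reached
            break
        step = remaining[0]
        # Conditioning on (summary, remaining plan) is what suppresses
        # redundant actions: a completed step never reappears in the plan.
        history.append({"action": step, "result": f"tool called for {step!r}"})
        state.completed.add(step)
    return history


if __name__ == "__main__":
    for turn in run_episode(["search flights", "book hotel", "draft itinerary"]):
        print(turn)
```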
Problem

Research questions and friction points this paper is trying to address.

Enhancing multi-turn function calling with progress awareness
Addressing redundancy in reinforcement learning for conversations
Improving long-horizon task execution through conversation summarization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progress Awareness Generation (PAG) pipeline for constructing progress-annotated datasets (one plausible record shape is sketched after this list)
Progress Awareness-Guided Reinforcement Learning (PAG-RL) algorithm
Explicit incorporation of progress awareness into LLM training for multi-turn function calling
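Since this card describes PAG only as "coupling conversation summaries with future task planning", the sketch below shows one plausible shape for such a progress-annotated training record. All field names and the travel-planning values are hypothetical, not the released data schema.

```python
# Hypothetical PAG-style training record (assumed field names): a dialogue
# prefix paired with a progress summary, the remaining plan, and the next
# function call to supervise on.
from dataclasses import dataclass


@dataclass
class PAGSample:
    dialogue: list[dict]    # multi-turn history: user / assistant / tool messages
    progress_summary: str   # annotation: what the conversation has achieved so far
    future_plan: list[str]  # annotation: remaining sub-goals toward the task
    target_call: dict       # supervision target: the next function call to issue


sample = PAGSample(
    dialogue=[
        {"role": "user", "content": "Plan a 3-day trip to Kyoto."},
        {"role": "assistant", "tool_call": {"name": "search_flights", "args": {"dest": "KIX"}}},
        {"role": "tool", "content": "Found 5 candidate flights."},
    ],
    progress_summary="Flights to Kyoto have been searched; nothing is booked yet.",
    future_plan=["book flight", "book hotel", "draft itinerary"],
    target_call={"name": "book_flight", "args": {"flight_id": "example-123"}},
)
print(sample.progress_summary)
```

Under this assumed format, the summary, the plan, and the call can be supervised jointly, which matches the abstract's description of coupling conversation summaries with future task planning.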
Authors
Huacan Chai
Shanghai Jiao Tong University
Zijie Cao
Shanghai Jiao Tong University
Maolin Ran
Shanghai Jiao Tong University
Yingxuan Yang
Shanghai Jiao Tong University
LLM Agent, LLM-based MAS, LLM
Jianghao Lin
Shanghai Jiao Tong University
Large Language Models, AI Agents, Recommender Systems
pengxin
LongShine AI Research
Hairui Wang
LongShine AI Research
Renjie Ding
LongShine AI Research
Ziyu Wan
Shanghai Jiao Tong University
Muning Wen
Research Assistant Professor, Shanghai Jiao Tong University
(Multi-agent) reinforcement learning, language agent / LLM-based agent
Weiwen Liu
Associate Professor, Shanghai Jiao Tong University
Large language models, AI agents, recommender systems
Weinan Zhang
Shanghai Jiao Tong University, Shanghai Innovation Institute
Fei Huang
LongShine AI Research
Ying Wen
Associate Professor, Shanghai Jiao Tong University
Multi-Agent Learning, Reinforcement Learning