AgentAsk: Multi-Agent Systems Need to Ask

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-agent LLM systems often underperform single-agent baselines due to cascading edge errors in message passing. To address this, we propose AgentAsk—a lightweight, plug-and-play clarification module that interrupts error propagation by inserting minimal, contextually necessary queries into interaction chains. We introduce a fine-grained edge-error taxonomy, design a link-local intervention mechanism, and formulate a three-stage decision optimization framework governing *when*, *what*, *whom*, and *how* to query. A compact policy model—trained via failure-trajectory distillation and optimized online with an E-GRPO reinforcement learning objective—enables architecture-agnostic, zero-configuration deployment. Evaluated on mathematical reasoning, logical deduction, and code-generation benchmarks, AgentAsk achieves significant gains in accuracy and robustness, with marginal overhead: latency and computational cost increase by less than 5%, while performance approaches that of strong oracle evaluators.

📝 Abstract
Multi-agent systems built on large language models (LLMs) promise enhanced problem-solving capabilities through collaborative division of labor. However, they frequently underperform single-agent baselines due to edge-level error cascades: minor inaccuracies at one message handoff propagate across the entire chain. We propose AgentAsk, a lightweight and plug-and-play clarification module that treats every inter-agent message as a potential failure point and inserts minimally necessary questions to arrest error propagation. AgentAsk follows a three-stage pipeline: (i) distilling edge-level judgments from curated failure traces into a compact policy, (ii) supervising the policy to determine when/what/whom/how to ask, and (iii) optimizing online with E-GRPO, a reinforcement learning objective that balances accuracy, latency, and cost. The module is architecture-agnostic and easy to integrate into existing orchestration. Across math, reasoning, and coding benchmarks, AgentAsk consistently improves accuracy and robustness over public multi-agent implementations while keeping overhead minimal (latency and extra cost each increase by less than 5%), approaching the performance of a strong evaluator. Beyond empirical improvements, we contribute a principled taxonomy of edge-level errors and a practical recipe for link-local intervention, offering a scalable pathway toward more reliable LLM-based multi-agent systems.
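The abstract describes E-GRPO only at a high level, as an objective balancing accuracy, latency, and cost. As a hedged illustration of that trade-off (the exact formulation and weights are not given here, so the function name `egrpo_style_reward` and the penalty coefficients below are assumptions), a scalar reward might look like:

```python
def egrpo_style_reward(correct: bool, latency_s: float, extra_cost: float,
                       lam_latency: float = 0.1, lam_cost: float = 0.1) -> float:
    """Hypothetical scalar reward balancing task accuracy against the
    latency and cost added by clarification questions.
    lam_* are illustrative penalty weights, not values from the paper."""
    return float(correct) - lam_latency * latency_s - lam_cost * extra_cost

# A correct answer with small clarification overhead still scores
# well above an incorrect answer with zero overhead.
print(round(egrpo_style_reward(True, 0.5, 0.2), 2))   # 0.93
print(round(egrpo_style_reward(False, 0.0, 0.0), 2))  # 0.0
```

Under such an objective, the policy only asks when the expected accuracy gain outweighs the latency and cost penalties, which is consistent with the paper's reported sub-5% overhead.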
Problem

Research questions and friction points this paper is trying to address.

Preventing error propagation in multi-agent LLM systems
Reducing cascade failures through targeted clarification questions
Balancing accuracy with minimal latency and cost overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-and-play clarification module prevents error propagation
Three-stage pipeline distills policies and supervises questioning
E-GRPO optimization balances accuracy with latency and cost
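The link-local intervention described above can be sketched as a gate on every inter-agent edge: a compact policy inspects each message and, if it looks risky, routes a minimal clarification question back to the sender before forwarding. This is a toy illustration only; the class and function names (`Message`, `ask_policy`, `forward_with_gate`) and the vague-referent heuristic are assumptions standing in for the paper's trained policy model:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Message:
    sender: str
    receiver: str
    content: str

def ask_policy(msg: Message) -> Optional[str]:
    """Stand-in for the compact policy model: return a clarification
    question if the edge looks risky, else None. Here, a toy heuristic
    that flags vague referents in the message text."""
    vague_markers = ("it", "that result", "the above")
    if any(m in msg.content.lower() for m in vague_markers):
        return f"Before proceeding: what exactly does '{msg.content}' refer to?"
    return None

def forward_with_gate(msg: Message, reply_fn: Callable[[str, str], str]) -> Message:
    """Link-local intervention: if the policy asks, send the question
    back to the sender and forward the clarified answer instead."""
    question = ask_policy(msg)
    if question is not None:
        clarified = reply_fn(msg.sender, question)  # sender answers the query
        return Message(msg.sender, msg.receiver, clarified)
    return msg

# Toy usage: the sender resolves the vague referent on request.
out = forward_with_gate(
    Message("solver", "verifier", "It equals 42."),
    reply_fn=lambda agent, q: "The sum of the series equals 42.",
)
print(out.content)  # "The sum of the series equals 42."
```

Because the gate sits on the edge rather than inside any agent, it can wrap an existing orchestration loop without modifying the agents themselves, which matches the plug-and-play claim.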
Bohan Lin
University of Science and Technology of China
Kuo Yang
University of Science and Technology of China
Yingchuan Lai
Xi’an Jiaotong University
Yudong Zhang
University of Leicester, HFWLA/FIET/FEAI/FBCS/SMIEEE/SMACM/DSACM, Clarivate Highly Cited Researcher
artificial intelligence, deep learning, medical image processing
Chen Zhang
University of Science and Technology of China
Guibin Zhang
National University of Singapore
Multi-Agent System, Efficient AI
Xinlei Yu
Beijing University of Posts and Telecommunications
Stochastic Geometry
Miao Yu
University of Science and Technology of China
Xu Wang
University of Science and Technology of China
Yang Wang
University of Science and Technology of China