Parallelism Meets Adaptiveness: Scalable Documents Understanding in Multi-Agent LLM Systems

📅 2025-07-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multi-agent frameworks for complex document understanding suffer from static workflows, fixed agent roles, and inefficient inter-agent communication. To address these limitations, this paper proposes a dynamic collaborative large language model (LLM) multi-agent system. Our method introduces three core innovations: (1) confidence-driven dynamic task routing and reallocation; (2) structured bidirectional critique exchange coupled with parallel agent evaluation; and (3) a competitive result selection strategy guided by multi-dimensional quality criteria. The system adopts a modular architecture integrating LLM-based agents, structured feedback mechanisms, and adaptive coordination control. Experimental results demonstrate significant improvements over static and partially adaptive baselines across factual coverage, content coherence, and execution efficiency—validating the effectiveness of the “dynamic collaboration + structured competition” paradigm.
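The confidence-driven routing mechanism described above can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual implementation: the `Agent` class, the workload-based confidence placeholder, and the escalation threshold are all illustrative assumptions standing in for LLM-based self-assessment.

```python
# Hypothetical sketch of confidence-driven task routing with reallocation.
# Not the paper's code; confidence here is a workload-based placeholder,
# whereas a real system would query each LLM agent for a self-assessed score.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    workload: int = 0

    def confidence(self, task: str) -> float:
        # Placeholder heuristic: more queued work lowers confidence.
        return 1.0 / (1 + self.workload)

def route(task: str, agents: list[Agent], threshold: float = 0.3) -> Agent:
    """Assign the task to the most confident agent; if no agent clears the
    threshold, the framework would escalate to parallel competition instead."""
    best = max(agents, key=lambda a: a.confidence(task))
    if best.confidence(task) < threshold:
        raise RuntimeError("no confident agent; escalate to parallel competition")
    best.workload += 1  # reallocation accounts for the new assignment
    return best

agents = [Agent("summarizer"), Agent("extractor", workload=2)]
print(route("summarize section 3", agents).name)  # summarizer (lower workload)
```

The threshold-then-escalate structure mirrors the paper's split between routine routing and competition on high-ambiguity subtasks.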

📝 Abstract
Large language model (LLM) agents have shown increasing promise for collaborative task completion. However, existing multi-agent frameworks often rely on static workflows, fixed roles, and limited inter-agent communication, reducing their effectiveness in open-ended, high-complexity domains. This paper proposes a coordination framework that enables adaptiveness through three core mechanisms: dynamic task routing, bidirectional feedback, and parallel agent evaluation. The framework allows agents to reallocate tasks based on confidence and workload, exchange structured critiques to iteratively improve outputs, and crucially compete on high-ambiguity subtasks with evaluator-driven selection of the most suitable result. We instantiate these principles in a modular architecture and demonstrate substantial improvements in factual coverage, coherence, and efficiency over static and partially adaptive baselines. Our findings highlight the benefits of incorporating both adaptiveness and structured competition in multi-agent LLM systems.
Problem

Research questions and friction points this paper is trying to address.

Static workflows limit multi-agent LLM effectiveness
Open-ended tasks need dynamic task routing
High-ambiguity subtasks require evaluator-driven competition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic task routing based on confidence
Bidirectional feedback for iterative improvement
Parallel agent evaluation with competition
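The competitive selection step can likewise be sketched: parallel candidate outputs are scored along multiple quality dimensions and the best-scoring one is kept. The dimension names (coverage, coherence) echo the paper's criteria, but the toy scorers and weights below are illustrative assumptions; a real evaluator would be an LLM judge.

```python
# Hypothetical sketch of competitive result selection over parallel candidates.
# Scorers and weights are toy stand-ins for the paper's multi-dimensional
# evaluator-driven quality criteria.
def select_best(candidates: list[str], score_fns: dict, weights: dict) -> str:
    """Return the candidate maximizing the weighted sum of quality scores."""
    def total(c: str) -> float:
        return sum(weights[d] * fn(c) for d, fn in score_fns.items())
    return max(candidates, key=total)

score_fns = {
    "coverage": lambda c: len(set(c.lower().split())),   # distinct terms
    "coherence": lambda c: 1.0 / (1 + c.count("...")),   # penalize fragments
}
weights = {"coverage": 0.6, "coherence": 0.4}

candidates = [
    "The report covers revenue growth and risk factors.",
    "Revenue... risk... growth...",
]
print(select_best(candidates, score_fns, weights))
# → "The report covers revenue growth and risk factors."
```

Running candidates in parallel and keeping only the evaluator's pick is what makes the competition "structured": losing outputs are discarded rather than merged.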
Chengxuan Xia
University of California, Santa Cruz, CA, USA
Qianye Wu
Carnegie Mellon University, Pittsburgh, PA, USA
Sixuan Tian
Carnegie Mellon University, Pittsburgh, PA, USA
Yilun Hao
Massachusetts Institute of Technology
Robotics · Large Language Models · Machine Learning · Planning