Scaling Large-Language-Model-based Multi-Agent Collaboration

📅 2024-06-11
🏛️ arXiv.org
📈 Citations: 25
Influential: 2
🤖 AI Summary
Problem: It remains unclear whether multi-agent collaboration follows scaling laws analogous to those observed in large language models, particularly given inherent limitations in single-agent reasoning.
Method: We propose MacNet, a directed acyclic graph (DAG)-structured, topology-aware multi-agent collaboration network, enabling scalable, distributed cooperative inference across over one thousand agents.
Contribution/Results: We empirically discover the first multi-agent collaborative scaling law: performance exhibits logistic growth, with collaborative emergence occurring earlier than neural emergence; irregular topologies significantly outperform regular ones; and multidimensional interactive reflection is identified as the core mechanism driving collaborative enhancement. MacNet substantially improves reasoning completeness and output quality on complex tasks. The open-source framework ChatDev-MacNet has been validated in real-world deployment.

📝 Abstract
Recent breakthroughs in large language model-driven autonomous agents have revealed that multi-agent collaboration often surpasses each individual agent through collective reasoning. Inspired by the neural scaling law (increasing neurons enhances performance), this study explores whether the continuous addition of collaborative agents can yield similar benefits. Technically, we utilize directed acyclic graphs to organize agents into a multi-agent collaboration network (MacNet), upon which their interactive reasoning is topologically orchestrated for autonomous task solving. Extensive evaluations reveal that it effectively supports collaboration among over a thousand agents, with irregular topologies outperforming regular ones. We also identify a collaborative scaling law: the overall performance follows a logistic growth pattern as agents scale, with collaborative emergence occurring earlier than traditional neural emergence. We speculate this may be because scaling agents catalyzes their multidimensional considerations during interactive reflection and refinement, thereby producing more comprehensive artifacts. The code is available at https://github.com/OpenBMB/ChatDev/tree/macnet.
Problem

Research questions and friction points this paper is trying to address.

Explores benefits of scaling multi-agent collaboration using large language models.
Investigates if adding more agents improves collective reasoning and task solving.
Identifies a collaborative scaling law showing logistic performance growth with agent scaling.
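The logistic growth pattern referenced above can be written generically as follows; the symbols and parameterization here are illustrative, not the paper's fitted values:

```latex
P(N) = \frac{P_{\max}}{1 + e^{-k\,(\log N - \log N_0)}}
```

where $P(N)$ is task performance with $N$ agents, $P_{\max}$ is the saturation level, $N_0$ marks the midpoint of the rapid-growth phase (the "collaborative emergence" point), and $k$ controls how sharply performance rises before plateauing.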
Innovation

Methods, ideas, or system contributions that make the work stand out.

Directed acyclic graphs organize multi-agent collaboration
MacNet orchestrates interactive reasoning for task solving
Collaborative scaling law shows logistic performance growth
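The DAG-based orchestration described above can be sketched as follows. This is a minimal illustration, not the actual ChatDev-MacNet API: the node names, the `agent_step` callable (a stand-in for an LLM call), and the example graph are all assumptions.

```python
# Minimal sketch of DAG-ordered agent orchestration in the spirit of MacNet.
# Node names and `agent_step` are illustrative, not the ChatDev-MacNet API.
from collections import defaultdict, deque

def topological_order(edges, nodes):
    """Kahn's algorithm: return nodes so every edge points forward."""
    indegree = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indegree[v] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(nodes):
        raise ValueError("graph has a cycle; MacNet requires a DAG")
    return order

def run_network(nodes, edges, task, agent_step):
    """Each agent refines the artifacts produced by its DAG predecessors."""
    pred = defaultdict(list)
    for u, v in edges:
        pred[v].append(u)
    artifacts = {}
    for node in topological_order(edges, nodes):
        inputs = [artifacts[p] for p in pred[node]] or [task]
        artifacts[node] = agent_step(node, inputs)
    return artifacts

# Toy agent: tags and concatenates its inputs (stand-in for an LLM call).
demo = run_network(
    nodes=["planner", "coder", "tester", "reviewer"],
    edges=[("planner", "coder"), ("planner", "tester"),
           ("coder", "reviewer"), ("tester", "reviewer")],
    task="build a calculator",
    agent_step=lambda name, inputs: f"{name}({'; '.join(inputs)})",
)
print(demo["reviewer"])
```

Source nodes receive the task itself, interior nodes receive their predecessors' artifacts, and the topological order guarantees every input exists before an agent runs, which is what lets the scheme scale to large agent counts.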
Authors
Cheng Qian, Tsinghua University
Zihao Xie, Tsinghua University
Yifei Wang, Tsinghua University
Wei Liu, Tsinghua University
Yufan Dang, Tsinghua University (Natural Language Processing, Machine Learning, Artificial Intelligence)
Zhuoyun Du, Tsinghua University
Weize Chen, Tsinghua University (NLP, ML)
Cheng Yang, Beijing University of Posts and Telecommunications
Zhiyuan Liu, Tsinghua University
Maosong Sun, Professor of Computer Science and Technology, Tsinghua University (Natural Language Processing, Artificial Intelligence, Social Computing)