🤖 AI Summary
This study addresses the poorly understood mechanisms behind the diminishing and unstable returns that large language model (LLM) multi-agent systems exhibit as they scale. Through a large-scale empirical analysis of over 1.5 million interactions, the authors model collective reasoning as a cascade of coordination events and uncover three coupled regularities: coordination cascades follow a heavy-tailed distribution, preferential attachment concentrates influence in intellectual elites, and increasing system size amplifies extreme events. Building on these insights, they propose Deficit-Triggered Integration (DTI), a dynamic intervention that selectively relieves the integration bottleneck. Experiments show that DTI significantly improves performance precisely where coordination fails while preserving large-scale reasoning capability, thereby establishing quantifiable laws of collective cognition and introducing a new optimization dimension for scalable multi-agent systems.
📝 Abstract
Large Language Model (LLM) multi-agent systems are increasingly deployed as interacting agent societies, yet scaling these systems often yields diminishing or unstable returns, the causes of which remain poorly understood. We present the first large-scale empirical study of coordination dynamics in LLM-based multi-agent systems, introducing an atomic event-level formulation that reconstructs reasoning as cascades of coordination. Analyzing over 1.5 million interactions across tasks, topologies, and scales, we uncover three coupled laws: coordination follows heavy-tailed cascades, concentrates via preferential attachment into intellectual elites, and produces increasingly frequent extreme events as system size grows. We show that these effects are coupled through a single structural mechanism: an integration bottleneck, in which coordination expansion scales with system size while consolidation does not, producing large but weakly integrated reasoning processes. To test this mechanism, we introduce Deficit-Triggered Integration (DTI), which selectively increases integration under imbalance. DTI improves performance precisely where coordination fails, without suppressing large-scale reasoning. Together, our results establish quantitative laws of collective cognition and identify coordination structure as a fundamental, previously unmeasured axis for understanding and improving scalable multi-agent intelligence.
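The core idea of Deficit-Triggered Integration, as described in the abstract, is to fire an extra consolidation step only when expansion outpaces integration. The paper's actual implementation is not shown here; the following is a minimal toy sketch of such a trigger rule, where the event counters, function names, and the 0.8 threshold are all illustrative assumptions.

```python
# Toy sketch of a deficit-triggered integration rule.
# All names and the threshold value are assumptions for illustration,
# not the paper's implementation.

def integration_deficit(expansion_events: int, consolidation_events: int) -> float:
    """Fraction of coordination events that expand rather than consolidate."""
    total = expansion_events + consolidation_events
    return expansion_events / total if total else 0.0

def should_trigger_integration(expansion_events: int,
                               consolidation_events: int,
                               threshold: float = 0.8) -> bool:
    """Intervene only when expansion dominates consolidation."""
    return integration_deficit(expansion_events, consolidation_events) > threshold

# A roughly balanced cascade is left alone...
print(should_trigger_integration(50, 40))  # deficit ~0.56 -> False
# ...while a lopsided, weakly integrated one triggers consolidation.
print(should_trigger_integration(95, 5))   # deficit 0.95 -> True
```

The selectivity is the point: because the rule fires only under imbalance, it adds integration where coordination is failing without throttling large cascades that are already well consolidated.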