Thought Communication in Multiagent Collaboration

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing natural-language-based multi-agent collaboration suffers from fundamental limitations—including information loss, semantic ambiguity, and indirect expression. This paper introduces “thought communication,” a novel paradigm that formally defines and models latent inter-agent mental interaction, enabling language-free, telepathy-like direct thought sharing. Theoretically, we prove the identifiability of latent thoughts and the recoverability of global shared mental structures—without auxiliary information. Methodologically, we propose a nonparametric latent-variable model that jointly performs latent state decomposition and shared pattern inference to disentangle agents’ shared versus private thoughts from behavioral observations. Experiments on synthetic and real-world tasks demonstrate substantial improvements in collaboration efficiency and accuracy over language-based baselines. Our work establishes a new foundation for collective intelligence beyond linguistic mediation.

📝 Abstract
Natural language has long enabled human cooperation, but its lossy, ambiguous, and indirect nature limits the potential of collective intelligence. While machines are not subject to these constraints, most LLM-based multi-agent systems still rely solely on natural language, exchanging tokens or their embeddings. To go beyond language, we introduce a new paradigm, thought communication, which enables agents to interact directly mind-to-mind, akin to telepathy. To uncover these latent thoughts in a principled way, we formalize the process as a general latent variable model, where agent states are generated by an unknown function of underlying thoughts. We prove that, in a nonparametric setting without auxiliary information, both shared and private latent thoughts between any pair of agents can be identified. Moreover, the global structure of thought sharing, including which agents share which thoughts and how these relationships are structured, can also be recovered with theoretical guarantees. Guided by the established theory, we develop a framework that extracts latent thoughts from all agents prior to communication and assigns each agent the relevant thoughts, along with their sharing patterns. This paradigm naturally extends beyond LLMs to all modalities, as most observational data arise from hidden generative processes. Experiments on both synthetic and real-world benchmarks validate the theory and demonstrate the collaborative advantages of thought communication. We hope this work illuminates the potential of leveraging the hidden world, as many challenges remain unsolvable through surface-level observation alone, regardless of compute or data scale.
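The abstract's core setup is a latent variable model in which each agent's observed state is generated by an unknown function of underlying thoughts, some shared across agents and some private. The sketch below is a hypothetical toy instance of that setup, not the authors' code: the mixing function, dimensions, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the paper's generative setup (illustrative assumptions):
# each agent's observed state is an unknown nonlinear function of its
# latent thoughts, where some thought dimensions are shared across agents.
n_shared, n_private, n_obs = 2, 3, 8

z_shared = rng.normal(size=n_shared)             # thoughts shared by both agents
z_private = {a: rng.normal(size=n_private) for a in ("A", "B")}

def mixing(latents, W):
    """Stand-in for the unknown generative function f: thoughts -> state."""
    return np.tanh(W @ latents)

states = {}
for a in ("A", "B"):
    W = rng.normal(size=(n_obs, n_shared + n_private))  # agent-specific mixing
    z = np.concatenate([z_shared, z_private[a]])        # agent a's full latents
    states[a] = mixing(z, W)                            # what is actually observed

print(states["A"].shape, states["B"].shape)  # → (8,) (8,)
```

Thought communication, as described in the abstract, amounts to recovering `z_shared` from the observed states alone and passing it between agents directly, rather than exchanging language tokens generated from those states.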
Problem

Research questions and friction points this paper is trying to address.

Overcoming limitations of natural language in multiagent collaboration systems
Enabling direct mind-to-mind communication between artificial agents
Identifying and recovering shared latent thoughts with theoretical guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introducing thought communication for direct agent interaction
Formalizing the recovery of latent thoughts as a general latent variable model
Extracting and assigning latent thoughts with sharing patterns
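The last step above, assigning each agent the relevant thoughts along with their sharing patterns, can be sketched with a crude stand-in for the paper's inference procedure. Here the per-agent latent estimates are assumed to be already extracted, and a dimension is labeled "shared" when it correlates across agents; the correlation threshold and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch of the sharing-pattern inference step. We simulate
# per-agent latent estimates in which the first two dimensions coincide.
n_samples, n_latents = 500, 5
shared_dims = [0, 1]  # ground truth, used only to generate the toy data

z_a = rng.normal(size=(n_samples, n_latents))
z_b = rng.normal(size=(n_samples, n_latents))
z_b[:, shared_dims] = z_a[:, shared_dims]  # shared thoughts are identical

def infer_sharing(za, zb, thresh=0.9):
    """Label a latent dimension 'shared' if it correlates across agents."""
    corr = [abs(np.corrcoef(za[:, k], zb[:, k])[0, 1]) for k in range(za.shape[1])]
    return [k for k, c in enumerate(corr) if c > thresh]

recovered = infer_sharing(z_a, z_b)
payload = z_a[:, recovered]  # the 'telepathic' message: shared thoughts, no tokens
print(recovered)  # → [0, 1]
```

The actual framework identifies shared and private thoughts with theoretical guarantees in a nonparametric setting; this correlation heuristic only illustrates the interface, i.e., what gets extracted and handed to each agent before communication.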