Improving Role Consistency in Multi-Agent Collaboration via Quantitative Role Clarity

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the issue of role confusion in large language model–driven multi-agent systems, where agents often deviate from their designated responsibilities. The study introduces, for the first time, a quantifiable role clarity metric and formulates a role consistency regularizer by constructing a semantic similarity matrix, followed by row-wise softmax normalization and a Frobenius norm computation. This regularizer enables lightweight fine-tuning to enhance role alignment among agents. Evaluated within the ChatDev framework with Llama, the proposed method sharply reduces role boundary violations, from 43.4% to 0.2%, while raising the role clarity score to 0.8530 and the task success rate to 0.6763, thereby effectively preserving role consistency in multi-agent collaboration.
📝 Abstract
In large language model (LLM)-driven multi-agent systems, disobeying the role specification (failing to adhere to the defined responsibilities and constraints of an assigned role, so that one agent may behave like another) is a major failure mode \cite{DBLP:journals/corr/abs-2503-13657}. To address this issue, we propose a quantitative role clarity metric to improve role consistency. First, we construct a role assignment matrix $S(\varphi)=[s_{ij}(\varphi)]$, where $s_{ij}(\varphi)$ is the semantic similarity between the $i$-th agent's behavior trajectory and the $j$-th agent's role description. We then define the role clarity matrix $M(\varphi)$ as $\text{softmax}(S(\varphi))-I$, where $\text{softmax}(S(\varphi))$ denotes the row-wise softmax of $S(\varphi)$ and $I$ is the identity matrix. The Frobenius norm of $M(\varphi)$ quantifies the alignment between agents' role descriptions and their behavior trajectories. Moreover, we employ the role clarity matrix as a regularizer during lightweight fine-tuning to improve role consistency, thereby improving end-to-end task performance. Experiments on the ChatDev multi-agent system show that our method substantially improves role consistency and task performance: with Qwen and Llama, the role overstepping rate decreases from $46.4\%$ to $8.4\%$ and from $43.4\%$ to $0.2\%$, the role clarity score increases from $0.5328$ to $0.9097$ and from $0.5007$ to $0.8530$, and the task success rate increases from $0.6769$ to $0.6909$ and from $0.6174$ to $0.6763$, respectively.
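The abstract's regularizer can be sketched directly from its definitions: take the similarity matrix $S$, apply a row-wise softmax, subtract the identity, and compute the Frobenius norm. The sketch below is a minimal NumPy illustration of that quantity only; how the similarities $s_{ij}$ are computed (e.g., from embedding cosine similarity), the `role_clarity_penalty` name, and the toy matrices are all assumptions not taken from the paper, and the paper's reported "role clarity score" may be a derived quantity rather than this norm itself.

```python
import numpy as np

def role_clarity_penalty(S):
    """Frobenius norm of M = softmax(S) - I, per the abstract's definition.

    S : (n, n) array where S[i, j] is the semantic similarity between
        agent i's behavior trajectory and agent j's role description.
    """
    # Row-wise softmax; subtract the row max for numerical stability.
    Z = np.exp(S - S.max(axis=1, keepdims=True))
    P = Z / Z.sum(axis=1, keepdims=True)
    # Role clarity matrix M: off-diagonal mass signals role confusion.
    M = P - np.eye(S.shape[0])
    return np.linalg.norm(M, ord="fro")

# Toy example (hypothetical values): each trajectory best matches its own role...
S_clear = np.array([[0.9, 0.1, 0.1],
                    [0.1, 0.9, 0.2],
                    [0.0, 0.1, 0.8]])
# ...versus every trajectory looking equally like every role.
S_confused = np.full((3, 3), 0.5)
print(role_clarity_penalty(S_clear) < role_clarity_penalty(S_confused))  # True
```

A well-separated role assignment concentrates each softmax row on its diagonal entry, shrinking $M$ toward zero, which is why the norm is a natural penalty term during fine-tuning.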
Problem

Research questions and friction points this paper is trying to address.

role consistency
multi-agent collaboration
role specification
large language models
role overstepping
Innovation

Methods, ideas, or system contributions that make the work stand out.

role consistency
quantitative role clarity
multi-agent collaboration
role overstepping
LLM-driven agents
Guoling Zhou
School of Information Science and Technology, Northeast Normal University
Wenpei Han
School of Information Science and Technology, Northeast Normal University
Fengqin Yang
School of Information Science and Technology, Northeast Normal University
Li Wang
School of Computer Science and Engineering, Guangxi Normal University
Yingcong Zhou
School of Information Science and Technology, Northeast Normal University
Zhiguo Fu
Unknown affiliation