Who's the Mole? Modeling and Detecting Intention-Hiding Malicious Agents in LLM-Based Multi-Agent Systems

📅 2025-07-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses a novel security threat in LLM-driven multi-agent systems (LLM-MAS): intent-hiding malicious agents. First, it systematically models risks by designing four stealthy attack paradigms, empirically validating their high concealment and strong task-disruption capability across diverse communication topologies. Second, it proposes AgentXposed—a psychology-inspired detection framework integrating the HEXACO personality model with Reid interrogation techniques—to infer malicious intent via progressive questioning and multidimensional behavioral monitoring. Experiments across six benchmark datasets demonstrate that the proposed attacks exhibit significant destructiveness and evasiveness (achieving lower detection rates than baselines), while AgentXposed maintains consistently high identification accuracy for all intent-hiding behaviors. This work is the first to incorporate personality modeling and structured interrogation strategies into LLM-MAS security detection, uncovering dual structural and behavioral vulnerabilities in such systems.

Technology Category

Application Category

📝 Abstract
Multi-agent systems powered by Large Language Models (LLM-MAS) demonstrate remarkable capabilities in collaborative problem-solving. While LLM-MAS exhibit strong collaborative abilities, the security risks in their communication and coordination remain underexplored. We bridge this gap by systematically investigating intention-hiding threats in LLM-MAS, and design four representative attack paradigms that subtly disrupt task completion while maintaining high concealment. These attacks are evaluated in centralized, decentralized, and layered communication structures. Experiments conducted on six benchmark datasets, including MMLU, MMLU-Pro, HumanEval, GSM8K, arithmetic, and biographies, demonstrate that they exhibit strong disruptive capabilities. To identify these threats, we propose a psychology-based detection framework AgentXposed, which combines the HEXACO personality model with the Reid Technique, using progressive questionnaire inquiries and behavior-based monitoring. Experiments conducted on six types of attacks show that our detection framework effectively identifies all types of malicious behaviors. The detection rate for our intention-hiding attacks is slightly lower than that of the two baselines, Incorrect Fact Injection and Dark Traits Injection, demonstrating the effectiveness of intention concealment. Our findings reveal the structural and behavioral risks posed by intention-hiding attacks and offer valuable insights into securing LLM-based multi-agent systems through psychological perspectives, which contributes to a deeper understanding of multi-agent safety. The code and data are available at https://anonymous.4open.science/r/AgentXposed-F814.
Problem

Research questions and friction points this paper is trying to address.

Detect intention-hiding malicious agents in LLM-MAS
Evaluate attacks in various communication structures
Propose psychology-based detection framework AgentXposed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Four attack paradigms disrupt tasks subtly
Psychology-based detection with HEXACO and Reid
AgentXposed framework identifies malicious behaviors effectively
🔎 Similar Papers
No similar papers found.
Y
Yizhe Xie
The School of Data Science, City University of Macau
Congcong Zhu
Congcong Zhu
USTC
Multimedia Understanding
Xinyue Zhang
Xinyue Zhang
Southwest University of Science and Technology
Machine Learning · Multi-view clustering
M
Minghao Wang
The School of Data Science, City University of Macau
C
Chi Liu
The School of Data Science, City University of Macau
M
Minglu Zhu
The School of Information Technology, Griffith University
Tianqing Zhu
Tianqing Zhu
City University of Macau
PrivacyCyber SecurityMachine LearningAI Security