TRACE-Bot: Detecting Emerging LLM-Driven Social Bots via Implicit Semantic Representations and AIGC-Enhanced Behavioral Patterns

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for detecting large language model (LLM)-driven social bots often rely on a single modality, lack sensitivity to generative AI (AIGC) patterns, and struggle to integrate linguistic and behavioral cues effectively. To overcome these limitations, this work proposes TRACE-Bot, a dual-channel unified framework that jointly models implicit semantic representations and AIGC-enhanced multidimensional behavioral features derived from user profiles, tweet content, and interaction dynamics to construct fine-grained user representations. By integrating pretrained language models with state-of-the-art AIGC detectors and employing a lightweight classification head, TRACE-Bot enables efficient and accurate bot identification. Evaluated on two public LLM-driven social bot datasets, the method achieves accuracies of 98.46% and 97.50%, respectively, substantially outperforming current approaches and demonstrating strong robustness against sophisticated adversarial strategies.
📝 Abstract
Large Language Model-driven (LLM-driven) social bots pose a growing threat to online discourse by generating human-like content that evades conventional detection. Existing methods suffer from limited detection accuracy due to overreliance on single-modality signals, insufficient sensitivity to the specific generative patterns of Artificial Intelligence-Generated Content (AIGC), and a failure to adequately model the interplay between linguistic patterns and behavioral dynamics. To address these limitations, we propose TRACE-Bot, a unified dual-channel framework that jointly models implicit semantic representations and AIGC-enhanced behavioral patterns. TRACE-Bot constructs fine-grained representations from heterogeneous sources, including personal information data, interaction behavior data, and tweet data. A dual-channel architecture captures linguistic representations via a pretrained language model and behavioral irregularities via multidimensional activity features augmented with signals from state-of-the-art (SOTA) AIGC detectors. The fused representations are then classified through a lightweight prediction head. Experiments on two public LLM-driven social bot datasets demonstrate SOTA performance, achieving accuracies of 98.46% and 97.50%, respectively. The results further indicate strong robustness against advanced bot strategies, highlighting the effectiveness of jointly leveraging implicit semantic representations and AIGC-enhanced behavioral patterns for emerging LLM-driven social bot detection.
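The fusion step described in the abstract (a semantic channel from a pretrained language model, a behavioral channel augmented with an AIGC-detector score, concatenated and passed through a lightweight prediction head) can be sketched as below. This is a minimal illustration, not the paper's implementation: the embedding size (768), the 16 behavioral features, the single AIGC score, and the linear-plus-sigmoid head are all placeholder assumptions.

```python
import numpy as np

def fuse_and_classify(semantic_emb, behavior_feats, aigc_score, W, b):
    """Concatenate the two channels plus the AIGC-detector signal,
    then apply a lightweight linear head with a sigmoid (assumed form)."""
    x = np.concatenate([semantic_emb, behavior_feats, [aigc_score]])
    logit = x @ W + b
    return 1.0 / (1.0 + np.exp(-logit))  # probability the account is a bot

rng = np.random.default_rng(0)
sem = rng.normal(size=768)       # placeholder for a PLM sentence embedding
beh = rng.normal(size=16)        # placeholder multidimensional activity features
aigc = 0.87                      # placeholder AIGC-detector probability
W = rng.normal(size=768 + 16 + 1) * 0.01  # head weights (would be learned)
b = 0.0
p_bot = fuse_and_classify(sem, beh, aigc, W, b)
```

In practice the head's weights would be trained end-to-end on labeled accounts; the sketch only shows how the three feature sources meet in one classifier input.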
Problem

Research questions and friction points this paper is trying to address.

LLM-driven social bots
AIGC
detection accuracy
behavioral patterns
semantic representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven social bots
implicit semantic representations
AIGC-enhanced behavioral patterns
dual-channel framework
AIGC detection
Zhongbo Wang
School of Cyber Science and Engineering, Sichuan University, Sichuan 610207, China; and also with the School of Software and Microelectronics, Peking University, Beijing 102600, China
Zhiyu Lin
Beijing Jiaotong University
Zhu Wang
Law School, Sichuan University, Sichuan 610207, China
Haizhou Wang
School of Cyber Science and Engineering, Sichuan University
fake information detection
social network analysis