🤖 AI Summary
This work uncovers stealthy backdoor vulnerabilities in multimodal large language model (MLLM)-driven mobile GUI agents. These vulnerabilities reside at the interaction layer (e.g., historical action sequences and environmental state representations) and pose a novel supply-chain security threat. To exploit them, the authors propose AgentGhost, a red-teaming framework featuring a composite goal-level and interaction-level trigger mechanism that formalizes backdoor injection as a min-max optimization problem. The approach integrates supervised contrastive learning, representation-space disentanglement, and interaction-level trigger synthesis to jointly optimize stealthiness, task utility, and generalizability. Evaluated on two major mobile GUI benchmarks, AgentGhost achieves a 99.7% attack success rate while degrading task performance by only 1%. Moreover, a tailored defense reduces the attack success rate to 22.1%, significantly enhancing agent robustness against such backdoors.
📝 Abstract
Graphical user interface (GUI) agents powered by multimodal large language models (MLLMs) have shown great promise for human interaction. However, due to high fine-tuning costs, users often rely on open-source GUI agents or APIs offered by AI providers, which introduces a critical but underexplored supply-chain threat: backdoor attacks. In this work, we first unveil that MLLM-powered GUI agents naturally expose multiple interaction-level triggers, such as historical steps, environment states, and task progress. Based on this observation, we introduce AgentGhost, an effective and stealthy framework for red-teaming backdoor attacks. Specifically, we first construct composite triggers by combining goal-level and interaction-level conditions, allowing GUI agents to unintentionally activate backdoors while preserving task utility. We then formulate backdoor injection as a min-max optimization problem: supervised contrastive learning maximizes the feature difference across sample classes in the representation space, improving the flexibility of the backdoor, while supervised fine-tuning minimizes the discrepancy between backdoor and clean behavior generation, enhancing effectiveness and utility. Extensive evaluations of various agent models on two established mobile benchmarks show that AgentGhost is effective and generic, reaching 99.7% attack accuracy on three attack objectives while remaining stealthy, with only a 1% utility degradation. Furthermore, we tailor a defense against AgentGhost that reduces the attack accuracy to 22.1%. Our code is available at `anonymous`.
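To make the min-max formulation concrete, the sketch below shows a minimal NumPy version of a supervised contrastive term combined with a fine-tuning loss. This is an illustrative surrogate, not the paper's implementation: the function names (`supcon_loss`, `combined_objective`), the toy embeddings, the temperature, and the weighting `lam` are all assumptions. Minimizing the contrastive term pulls same-class (e.g., clean vs. triggered) representations together and pushes different classes apart, which corresponds to the "maximize feature difference across sample classes" step, while the `sft_loss` term stands in for the supervised fine-tuning objective.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over one batch.

    features: (N, d) embeddings; labels: (N,) integer class ids,
    e.g., 0 = clean sample, 1 = triggered sample (illustrative).
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    np.fill_diagonal(sim, -np.inf)               # exclude self-similarity
    # Log-softmax over each anchor's similarities to all other samples.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = labels[:, None] == labels[None, :]
    pos = same & ~np.eye(len(labels), dtype=bool)  # positives: same label, not self
    n_pos = pos.sum(axis=1)
    valid = n_pos > 0                            # anchors with at least one positive
    # Average log-probability over each anchor's positives, negated.
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1)
    return float(-(per_anchor[valid] / n_pos[valid]).mean())

def combined_objective(sft_loss, features, labels, lam=0.5):
    """Toy surrogate for the joint objective: minimize the fine-tuning loss
    while the contrastive term separates clean and backdoor representations.
    `lam` is an assumed trade-off weight, not a value from the paper."""
    return sft_loss + lam * supcon_loss(features, labels)
```

As a sanity check, well-clustered embeddings whose labels match the clusters should yield a lower contrastive loss than the same embeddings with mismatched labels.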