From Prompt Injections to Protocol Exploits: Threats in LLM-Powered AI Agents Workflows

📅 2025-06-29
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses security vulnerabilities in the LLM agent ecosystem arising from rapid evolution of plugins, connectors, and protocols. Methodologically, it proposes the first end-to-end unified threat model, introducing a comprehensive, four-category threat taxonomy—covering input manipulation, model tampering, system/privacy attacks, and protocol-level vulnerabilities—and formally characterizing the multi-party interaction attack surface for the first time. It identifies novel protocol-layer attack vectors, including those targeting MCP and ACP. Through combined formal analysis and empirical evaluation—including adversarial examples, backdoor injection, inference attacks, and protocol reverse engineering—it systematically assesses existing defenses. Key contributions include: (1) the first threat modeling framework tailored to LLM agent ecosystems; (2) identification of critical defense gaps; and (3) articulation of three pivotal research directions—dynamic trust management, encrypted provenance, and resilience in federated environments—providing both theoretical foundations and practical guidance for secure agent workflow design.

Technology Category

Application Category

📝 Abstract
Autonomous AI agents powered by large language models (LLMs) with structured function-calling interfaces have dramatically expanded capabilities for real-time data retrieval, complex computation, and multi-step orchestration. Yet, the explosive proliferation of plugins, connectors, and inter-agent protocols has outpaced discovery mechanisms and security practices, resulting in brittle integrations vulnerable to diverse threats. In this survey, we introduce the first unified, end-to-end threat model for LLM-agent ecosystems, spanning host-to-tool and agent-to-agent communications, formalize adversary capabilities and attacker objectives, and catalog over thirty attack techniques. Specifically, we organized the threat model into four domains: Input Manipulation (e.g., prompt injections, long-context hijacks, multimodal adversarial inputs), Model Compromise (e.g., prompt- and parameter-level backdoors, composite and encrypted multi-backdoors, poisoning strategies), System and Privacy Attacks (e.g., speculative side-channels, membership inference, retrieval poisoning, social-engineering simulations), and Protocol Vulnerabilities (e.g., exploits in Model Context Protocol (MCP), Agent Communication Protocol (ACP), Agent Network Protocol (ANP), and Agent-to-Agent (A2A) protocol). For each category, we review representative scenarios, assess real-world feasibility, and evaluate existing defenses. Building on our threat taxonomy, we identify key open challenges and future research directions, such as securing MCP deployments through dynamic trust management and cryptographic provenance tracking; designing and hardening Agentic Web Interfaces; and achieving resilience in multi-agent and federated environments. Our work provides a comprehensive reference to guide the design of robust defense mechanisms and establish best practices for resilient LLM-agent workflows.
Problem

Research questions and friction points this paper is trying to address.

Identify threats in LLM-powered AI agent workflows
Formalize adversary capabilities and attack objectives
Catalog over thirty attack techniques across four domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified threat model for LLM-agent ecosystems
Dynamic trust management for MCP security
Cryptographic provenance tracking in protocols
🔎 Similar Papers
No similar papers found.