AI Summary
This paper addresses novel security risks emerging from LLM-driven autonomous AI agents: systems capable of planning, tool invocation, and cross-environment execution. We propose the first systematic security risk taxonomy designed specifically for agent-based systems. Unlike conventional AI or software security frameworks, our taxonomy comprehensively covers multi-dimensional attack surfaces across networked, software, and physical environments. It integrates threat modeling, security evaluation benchmarks, defense mechanism analysis, and governance considerations, exposing critical risks overlooked by existing approaches. Crucially, this work is the first to structurally characterize agent security risks through both technical and governance lenses, systematically identifying practical attack/defense gaps and evaluation bottlenecks. We delineate key open challenges and provide foundational insights to guide secure-by-design agent architectures, standardization efforts, and future system evolution.
Abstract
Agentic AI systems, powered by large language models (LLMs) and endowed with planning, tool use, memory, and autonomy, are emerging as powerful, flexible platforms for automation. Their ability to autonomously execute tasks across web, software, and physical environments creates new and amplified security risks, distinct from both traditional AI safety and conventional software security. This survey presents a taxonomy of threats specific to agentic AI, reviews recent benchmarks and evaluation methodologies, and discusses defense strategies from both technical and governance perspectives. We synthesize current research and highlight open challenges, aiming to support the development of secure-by-design agent systems.