"Your AI, My Shell": Demystifying Prompt Injection Attacks on Agentic AI Coding Editors

📅 2025-09-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study is the first to systematically expose prompt injection vulnerabilities in high-privilege AI programming editors (e.g., GitHub Copilot, Cursor), wherein attackers remotely hijack AI agents by poisoning external development resources—such as code repositories, documentation, and package dependencies—to execute malicious commands, effectively weaponizing intelligent coding tools as attack vectors. Method: The authors propose the first comprehensive prompt injection attack framework and taxonomy tailored for AI editors, and develop AIShellJack—a fully automated testing platform integrating 314 distinct payloads covering 70 techniques from the MITRE ATT&CK framework. Contribution/Results: Experiments demonstrate up to 84% attack success rates across mainstream AI editors, enabling end-to-end exploitation—from initial access and system reconnaissance to credential theft and data exfiltration. This work establishes the “AI-as-Shell” security paradigm, providing a foundational threat model and empirical evaluation benchmark for building trustworthy AI-powered coding systems.

Technology Category

Application Category

📝 Abstract
Agentic AI coding editors driven by large language models have recently become more popular due to their ability to improve developer productivity during software development. Modern editors such as Cursor are designed not just for code completion, but also with more system privileges for complex coding tasks (e.g., run commands in the terminal, access development environments, and interact with external systems). While this brings us closer to the "fully automated programming" dream, it also raises new security concerns. In this study, we present the first empirical analysis of prompt injection attacks targeting these high-privilege agentic AI coding editors. We show how attackers can remotely exploit these systems by poisoning external development resources with malicious instructions, effectively hijacking AI agents to run malicious commands, turning "your AI" into "attacker's shell". To perform this analysis, we implement AIShellJack, an automated testing framework for assessing prompt injection vulnerabilities in agentic AI coding editors. AIShellJack contains 314 unique attack payloads that cover 70 techniques from the MITRE ATT&CK framework. Using AIShellJack, we conduct a large-scale evaluation on GitHub Copilot and Cursor, and our evaluation results show that attack success rates can reach as high as 84% for executing malicious commands. Moreover, these attacks are proven effective across a wide range of objectives, ranging from initial access and system discovery to credential theft and data exfiltration.
Problem

Research questions and friction points this paper is trying to address.

Analyzing prompt injection attacks on AI coding editors
Assessing security vulnerabilities in high-privilege AI systems
Evaluating remote exploitation risks through poisoned development resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated framework tests prompt injection vulnerabilities
Attack payloads cover MITRE ATT&CK techniques
Remote exploitation hijacks AI agents via poisoned resources
🔎 Similar Papers
Y
Yue Liu
Singapore Management University, Singapore
Yanjie Zhao
Yanjie Zhao
Huazhong University of Science and Technology
Software EngineeringSoftware Security
Yunbo Lyu
Yunbo Lyu
PhD Candidate, Singapore Management University
Software Engineering
T
Ting Zhang
Monash University, Australia
H
Haoyu Wang
Huazhong University of Science and Technology, China
D
David Lo
Singapore Management University, Singapore