Are AI-assisted Development Tools Immune to Prompt Injection?

📅 2026-03-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the security vulnerabilities of AI-assisted development tools that adopt the Model Context Protocol (MCP), which are susceptible to prompt injection attacks leading to bypassed safety constraints, sensitive data leakage, and unauthorized tool invocation. The work presents the first systematic empirical evaluation of seven widely used MCP clients, analyzing their defensive capabilities against tool-poisoning scenarios across multiple dimensions—including static validation, parameter visibility, injection detection, user warnings, execution sandboxing, and audit logging. The investigation uncovers novel attack surfaces such as cross-tool poisoning and exploitation of hidden parameters, revealing substantial disparities in security robustness among implementations (e.g., Claude Desktop demonstrates relative resilience, whereas Cursor exhibits high vulnerability). Based on these findings, the paper offers actionable security hardening guidelines for both MCP implementers and developers.

Technology Category

Application Category

📝 Abstract
Prompt injection is listed as the number-one vulnerability class in the OWASP Top 10 for LLM Applications that can subvert LLM guardrails, disclose sensitive data, and trigger unauthorized tool use. Developers are rapidly adopting AI-assisted development tools built on the Model Context Protocol (MCP). However, their convenience comes with security risks, especially prompt-injection attacks delivered via tool-poisoning vectors. While prior research has studied prompt injection in LLMs, the security posture of real-world MCP clients remains underexplored. We present the first empirical analysis of prompt injection with the tool-poisoning vulnerability across seven widely used MCP clients: Claude Desktop, Claude Code, Cursor, Cline, Continue, Gemini CLI, and Langflow. We identify their detection and mitigation mechanisms, as well as the coverage of security features, including static validation, parameter visibility, injection detection, user warnings, execution sandboxing, and audit logging. Our evaluation reveals significant disparities. While some clients, such as Claude Desktop, implement strong guardrails, others, such as Cursor, exhibit high susceptibility to cross-tool poisoning, hidden parameter exploitation, and unauthorized tool invocation. We further provide actionable guidance for MCP implementers and the software engineering community seeking to build secure AI-assisted development workflows.
Problem

Research questions and friction points this paper is trying to address.

prompt injection
AI-assisted development tools
Model Context Protocol
tool-poisoning
LLM security
Innovation

Methods, ideas, or system contributions that make the work stand out.

prompt injection
Model Context Protocol (MCP)
tool poisoning
AI-assisted development
security evaluation
🔎 Similar Papers
No similar papers found.