π€ AI Summary
This study addresses critical client-side security vulnerabilities in the Model Context Protocol (MCP) when integrating external tools, particularly highlighting prompt injection attacks stemming from tool poisoning. Leveraging the STRIDE and DREAD threat modeling frameworks, the work systematically analyzes the five core MCP components and reveals, for the first time, widespread security deficiencies in mainstream MCP clientsβmost notably insufficient static validation and lack of parameter visibility. To mitigate these risks, the authors propose a multi-layered defense architecture that integrates static metadata analysis, model decision-path tracing, behavioral anomaly detection, and user transparency mechanisms. The paper further offers practical, actionable recommendations for securing MCP clients, thereby addressing a significant gap in existing research on client-side protections within the MCP ecosystem.
π Abstract
The Model Context Protocol (MCP) has rapidly emerged as a universal standard for connecting AI assistants to external tools and data sources. While MCP simplifies integration between AI applications and various services, it introduces significant security vulnerabilities, particularly on the client side. In this work we conduct threat modelings of MCP implementations using STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) and DREAD (Damage, Reproducibility, Exploitability, Affected Users, Discoverability) frameworks across five key components: (1) MCP Host and Client, (2) LLM, (3) MCP Server, (4) External Data Stores, and (5) Authorization Server. This comprehensive analysis reveals tool poisoning-where malicious instructions are embedded in tool metadata-as the most prevalent and impactful client-side vulnerability. We therefore focus our empirical evaluation on this critical attack vector, providing a systematic comparison of how seven major MCP clients validate and defend against tool poisoning attacks. Our analysis reveals significant security issues with most tested clients due to insufficient static validation and parameter visibility. We propose a multi-layered defense strategy encompassing static metadata analysis, model decision path tracking, behavioral anomaly detection, and user transparency mechanisms. This research addresses a critical gap in MCP security, which has primarily focused on server-side vulnerabilities, and provides actionable recommendations and mitigation strategies for securing AI agent ecosystems.