🤖 AI Summary
This work addresses critical security vulnerabilities in the Model Context Protocol (MCP), specifically tool squatting and rug pulls—malicious practices in which unauthorized or deprecated tools masquerade as legitimate ones, or silently change or withdraw functionality after approval. We propose the Enhanced Tool Definition Interface (ETDI), a framework that integrates OAuth 2.0–based authentication with immutable, versioned tool metadata. ETDI introduces a fine-grained, runtime context-aware policy engine that goes beyond static OAuth scopes, enabling dynamic capability verification and policy-as-code access control. Experimental evaluation within representative LLM toolchains demonstrates sub-50 ms policy enforcement latency and a 99.2% reduction in privilege overgranting. The approach significantly enhances the security, controllability, and trustworthiness of external tool invocation, providing a scalable architectural foundation for high-assurance AI tool ecosystems.
📝 Abstract
The Model Context Protocol (MCP) plays a crucial role in extending the capabilities of Large Language Models (LLMs) by enabling integration with external tools and data sources. However, the standard MCP specification presents significant security vulnerabilities, notably Tool Poisoning and Rug Pull attacks. This paper introduces the Enhanced Tool Definition Interface (ETDI), a security extension designed to fortify MCP. ETDI incorporates cryptographic identity verification, immutable versioned tool definitions, and explicit permission management, often leveraging OAuth 2.0. We further propose extending MCP with fine-grained, policy-based access control, where tool capabilities are dynamically evaluated against explicit policies using a dedicated policy engine, considering runtime context beyond static OAuth scopes. This layered approach aims to establish a more secure, trustworthy, and controllable ecosystem for AI applications interacting with LLMs and external tools.
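To make the two core ideas concrete, here is a minimal sketch of (a) pinning an immutable, versioned tool definition by its cryptographic digest, so a changed definition (a rug pull) is detected, and (b) a context-aware policy check that evaluates a requested action against both declared capabilities and runtime conditions rather than static scopes alone. All names (`ToolDefinition`, `verify_pin`, `evaluate_policy`, the `fs:write`/`environment` rule) are hypothetical illustrations, not the actual ETDI specification.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolDefinition:
    """Hypothetical immutable, versioned tool definition."""
    name: str
    version: str
    permissions: frozenset  # capabilities the tool declares

    def digest(self) -> str:
        # Canonical serialization -> stable cryptographic digest
        blob = json.dumps(
            {"name": self.name, "version": self.version,
             "permissions": sorted(self.permissions)},
            sort_keys=True,
        )
        return hashlib.sha256(blob.encode()).hexdigest()


def verify_pin(tool: ToolDefinition, pinned_digest: str) -> bool:
    """Rug-pull check: the definition presented at invocation time
    must match the digest the user originally approved."""
    return tool.digest() == pinned_digest


def evaluate_policy(tool: ToolDefinition, action: str, context: dict) -> bool:
    """Context-aware check beyond static scopes: the action must be
    declared by the tool AND runtime conditions must permit it."""
    if action not in tool.permissions:
        return False  # undeclared capability -> deny
    # Example policy-as-code rule: no file writes in production
    if action == "fs:write" and context.get("environment") == "production":
        return False
    return True
```

In this sketch, a client would record `tool.digest()` when the user first approves a tool; any later version bump or permission change produces a different digest and fails `verify_pin`, forcing re-approval instead of silently inheriting trust.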