🤖 AI Summary
This study empirically challenges the common assumption that Model Context Protocol (MCP) integration inherently enhances large language model (LLM) performance. We propose MCPGAUGE, the first dedicated benchmark framework for evaluating tool-augmented LLMs along four core dimensions: proactiveness, instruction adherence, task effectiveness, and computational overhead. Our evaluation encompasses 160 prompts across 25 datasets spanning knowledge understanding, reasoning, and code generation, and involves six commercial LLMs interacting with 30 distinct MCP tool suites under one- and two-turn settings, totaling roughly 20,000 API calls at a cost of over USD 6,000. Results reveal that MCP integration frequently falls short of expectations, exposing critical bottlenecks in tool selection, invocation, and result interpretation. This work establishes a rigorous, interpretable, and controllable evaluation paradigm for tool-augmented LLMs, providing both a standardized benchmark and a methodological foundation for future research on controllable, explainable AI systems.
📝 Abstract
The Model Context Protocol (MCP) enables large language models (LLMs) to access external resources on demand. While MCP integration is commonly assumed to enhance performance, how LLMs actually leverage this capability remains poorly understood. We introduce MCPGAUGE, the first comprehensive evaluation framework for probing LLM-MCP interactions along four key dimensions: proactivity (self-initiated tool use), compliance (adherence to tool-use instructions), effectiveness (task performance after integration), and overhead (computational cost incurred). MCPGAUGE comprises a 160-prompt suite and 25 datasets spanning knowledge comprehension, general reasoning, and code generation. Our large-scale evaluation, covering six commercial LLMs, 30 MCP tool suites, and both one- and two-turn interaction settings, comprises around 20,000 API calls and over USD 6,000 in computational cost. The study yields four key findings that challenge prevailing assumptions about the effectiveness of MCP integration. These insights highlight critical limitations in current AI-tool integration and position MCPGAUGE as a principled benchmark for advancing controllable, tool-augmented LLMs.
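To make the four evaluation dimensions concrete, the sketch below shows one plausible way to aggregate them from logged interaction records. The record fields and metric definitions here are illustrative assumptions for exposition, not the actual MCPGAUGE implementation or its scoring formulas.

```python
from dataclasses import dataclass

# Hypothetical per-prompt interaction record; field names are assumptions,
# not taken from the MCPGAUGE framework itself.
@dataclass
class Interaction:
    tool_available: bool       # an MCP tool suite was exposed to the model
    tool_invoked: bool         # the model actually called a tool
    instructed_to_use: bool    # the prompt explicitly required tool use
    followed_instruction: bool # the model complied with that instruction
    correct: bool              # final answer judged correct
    tokens_used: int           # total tokens consumed by the run

def score(runs: list[Interaction]) -> dict[str, float]:
    """Aggregate four MCPGAUGE-style dimensions over a batch of runs."""
    n = len(runs)
    # Proactivity: how often the model uses an available tool unprompted.
    unprompted = [r for r in runs if r.tool_available and not r.instructed_to_use]
    proactivity = sum(r.tool_invoked for r in unprompted) / max(len(unprompted), 1)
    # Compliance: how often explicit tool-use instructions are followed.
    instructed = [r for r in runs if r.instructed_to_use]
    compliance = sum(r.followed_instruction for r in instructed) / max(len(instructed), 1)
    # Effectiveness: overall task accuracy after tool integration.
    effectiveness = sum(r.correct for r in runs) / n
    # Overhead: mean token cost per run (a proxy for computational cost).
    overhead = sum(r.tokens_used for r in runs) / n
    return {"proactivity": proactivity, "compliance": compliance,
            "effectiveness": effectiveness, "overhead": overhead}
```

In practice each dimension would be measured under its own controlled prompt condition (e.g. proactivity only on prompts that never mention tools), which is why the sketch filters the run list per metric rather than scoring every run on every axis.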