Adaptive Tool Use in Large Language Models with Meta-Cognition Trigger

📅 2025-02-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM-based tool invocation approaches neglect necessity assessment, leading to frequent invalid invocations, increased latency, and error propagation. Method: We propose MeCo, a fine-tuning-free adaptive tool invocation strategy that models metacognition as the model’s self-assessment signal regarding its own capability limitations. MeCo quantifies metacognitive scores via higher-order cognitive features extracted from hidden-layer representation spaces and employs a lightweight triggering mechanism for fine-grained, real-time tool invocation decisions. Contribution/Results: Evaluated across multiple base LLMs and benchmarks, MeCo significantly improves tool invocation accuracy, reduces invalid invocations by 47%, decreases average inference latency by 31%, and supports zero-shot, plug-and-play deployment without any model fine-tuning.

Technology Category

Application Category

📝 Abstract
Large language models (LLMs) have shown remarkable emergent capabilities, transforming the execution of functional tasks by leveraging external tools for complex problems that require specialized processing or real-time data. While existing research expands LLMs access to diverse tools (e.g., program interpreters, search engines, weather/map apps), the necessity of using these tools is often overlooked, leading to indiscriminate tool invocation. This naive approach raises two key issues:(1) increased delays due to unnecessary tool calls, and (2) potential errors resulting from faulty interactions with external tools. In this paper, we introduce meta-cognition as a proxy for LLMs self-assessment of their capabilities, representing the model's awareness of its own limitations. Based on this, we propose MeCo, an adaptive decision-making strategy for external tool use. MeCo quantifies metacognitive scores by capturing high-level cognitive signals in the representation space, guiding when to invoke tools. Notably, MeCo is fine-tuning-free and incurs minimal cost. Our experiments show that MeCo accurately detects LLMs' internal cognitive signals and significantly improves tool-use decision-making across multiple base models and benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Optimizing tool use in LLMs
Reducing unnecessary tool invocation
Improving decision-making with meta-cognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-cognition for self-assessment
Adaptive tool-use decision-making
Fine-tuning-free cognitive signal detection
🔎 Similar Papers
No similar papers found.