ToolMem: Enhancing Multimodal Agents with Learnable Tool Capability Memory

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing tool-augmented agents rely largely on fixed, predefined tool sets and cannot dynamically select the best neural tool across diverse tasks. This work introduces ToolMem, a framework that gives agents a learnable memory of tool capabilities. ToolMem uses LLM- or VLM-driven modeling of historical interactions to support online summarization, long-term storage, and context-aware retrieval of tool performance, enabling adaptive tool selection in both textual and multimodal tasks. Its core idea is to represent tool capabilities as updatable, retrievable memory entries. Experiments show substantial gains: tool performance prediction accuracy improves by 14.8% (text) and 28.7% (multimodal), and optimal tool selection success rates rise by 21% and 24% absolute, respectively, markedly enhancing the generalization and adaptability of multimodal agents.

📝 Abstract
Agents utilizing tools powered by large language models (LLMs) or vision-language models (VLMs) have demonstrated remarkable progress in diverse tasks across text and visual modalities. Unlike traditional tools such as calculators, which give deterministic outputs, neural tools perform uncertainly across task scenarios. While different tools for a task may excel in varied scenarios, existing agents typically rely on fixed tools, thus limiting the flexibility in selecting the most suitable tool for specific tasks. In contrast, humans snowball their understanding of the capabilities of different tools by interacting with them, and apply this knowledge to select the optimal tool when solving a future task. To build agents that similarly benefit from this process, we propose ToolMem that enables agents to develop memories of tool capabilities from previous interactions, by summarizing their strengths and weaknesses and storing them in memory; at inference, the agent can retrieve relevant entries from ToolMem, and select the best tool to solve individual tasks more accurately. We evaluate ToolMem on learning varied text generation and text-to-image generation neural tools. Compared to no-memory, generic agents, we find ToolMem-augmented agents predict tool performance 14.8% and 28.7% more accurately across text and multimodal generation scenarios. Moreover, ToolMem facilitates optimal tool selection among multiple choices by 21% and 24% absolute increases in respective scenarios.
Problem

Research questions and friction points this paper is trying to address.

Agents lack flexible tool selection for specific tasks
Neural tools perform uncertainly across different task scenarios
Agents need memory of tool capabilities for optimal selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agents develop learnable memory of tool capabilities
ToolMem summarizes tool strengths and weaknesses for storage
Retrieves relevant entries to select optimal tools accurately
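The summarize-store-retrieve-select loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `ToolMem` class, keyword-overlap retrieval, and the stub summaries are all hypothetical stand-ins (the paper uses an LLM/VLM to write and retrieve the capability summaries).

```python
from collections import defaultdict

class ToolMem:
    """Hypothetical sketch of a tool-capability memory.

    Stores per-tool notes about observed strengths/weaknesses keyed by
    scenario keywords, then retrieves them to rank tools for a new task.
    """

    def __init__(self):
        # tool name -> list of (scenario keywords, note, success score)
        self.entries = defaultdict(list)

    def summarize_interaction(self, tool, scenario, succeeded):
        # In the paper an LLM/VLM writes this summary; here it's a stub note.
        note = f"{tool} {'handled' if succeeded else 'struggled with'} {scenario}"
        score = 1.0 if succeeded else 0.0
        self.entries[tool].append((set(scenario.lower().split()), note, score))

    def retrieve(self, tool, task):
        # Return stored notes whose scenario keywords overlap the task text.
        words = set(task.lower().split())
        return [(note, s) for kw, note, s in self.entries[tool] if kw & words]

    def select_tool(self, tools, task):
        # Pick the tool with the highest mean success score among retrieved
        # entries; default to 0.5 (unknown) when memory has nothing relevant.
        def expected(tool):
            hits = self.retrieve(tool, task)
            return sum(s for _, s in hits) / len(hits) if hits else 0.5
        return max(tools, key=expected)

mem = ToolMem()
mem.summarize_interaction("sdxl", "photorealistic portrait generation", succeeded=True)
mem.summarize_interaction("sdxl", "rendering text inside images", succeeded=False)
mem.summarize_interaction("dalle3", "rendering text inside images", succeeded=True)
print(mem.select_tool(["sdxl", "dalle3"], "poster with rendering text inside images"))
# → dalle3
```

The design choice to make retrieval context-aware (matching the current task against stored scenarios, rather than averaging a tool's global success rate) is what lets the agent prefer different tools in different scenarios, which is the behavior the reported 21%/24% selection gains measure.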