🤖 AI Summary
Existing AI agents exhibit limited generalization and rely heavily on domain-specific tools. Method: This paper proposes the “Minimal Universal Toolset” paradigm, introducing OpenHands-Versa—a general-purpose agent built exclusively upon four foundational tool categories: code execution, web search, multimodal web browsing, and file access. It employs an LLM-driven multi-step reasoning framework integrating sandboxed code execution, structured web retrieval, vision-language–capable multimodal browsing, and cross-task memory. Contribution/Results: The authors provide the first empirical validation that a streamlined universal toolset can outperform specialized multi-domain agents—challenging the prevailing assumptions of tool specialization and architecture customization. On SWE-Bench Multimodal, GAIA, and The Agent Company benchmarks, OpenHands-Versa achieves absolute success rate improvements of +9.1%, +1.3%, and +9.1%, respectively, surpassing all prior state-of-the-art domain-specific agents.
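The "minimal universal toolset" idea above can be pictured as a small registry of general-purpose tools that a single agent loop dispatches over. The sketch below is purely illustrative: the tool names, `Tool` class, and `dispatch` function are hypothetical and not the paper's actual API; in OpenHands-Versa an LLM selects the tool at each step, whereas here we dispatch directly.

```python
# Illustrative sketch of a minimal universal toolset (hypothetical names,
# not the paper's actual interface). Each Tool stub stands in for a real
# capability: sandboxed code execution, web search, multimodal browsing,
# and file access.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]  # real tools would take structured arguments


def make_minimal_toolset() -> Dict[str, Tool]:
    """The four general tool categories described in the summary."""
    tools = [
        Tool("execute_code", "Run code in a sandbox", lambda a: f"ran: {a}"),
        Tool("web_search", "Query a search engine", lambda a: f"results for: {a}"),
        Tool("browse", "Multimodal web browsing", lambda a: f"page content of: {a}"),
        Tool("file_access", "Read and edit files", lambda a: f"contents of: {a}"),
    ]
    return {t.name: t for t in tools}


def dispatch(toolset: Dict[str, Tool], tool_name: str, arg: str) -> str:
    # In the actual agent, an LLM-driven reasoning loop chooses which tool
    # to invoke at each step; here the caller names the tool directly.
    return toolset[tool_name].run(arg)


if __name__ == "__main__":
    toolset = make_minimal_toolset()
    print(sorted(toolset))
    print(dispatch(toolset, "web_search", "GAIA benchmark"))
```

The point of the paradigm is that this small, fixed registry stays the same across software engineering, web, and workflow-automation tasks, rather than swapping in domain-specific tools per benchmark.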
📝 Abstract
Modern human labor is characterized by specialization; we train for years and develop particular tools that allow us to perform well at specific tasks. Similarly, AI agents have been specialized for domains such as software engineering, web navigation, and workflow automation. However, this results in agents that excel at one thing but fail to generalize beyond their intended scope. One reason for this is that agent developers provide a highly specialized set of tools or make architectural decisions optimized for a specific use case or benchmark. In this work, we ask the question: what is the minimal set of general tools that can be used to achieve high performance across a diverse set of tasks? Our answer is OpenHands-Versa, a generalist agent built with a modest number of general tools: code editing and execution, web search, as well as multimodal web browsing and file access. Importantly, OpenHands-Versa demonstrates superior or competitive performance over leading specialized agents across three diverse and challenging benchmarks: SWE-Bench Multimodal, GAIA, and The Agent Company, outperforming the best previously published results with absolute improvements in success rate of 9.1, 1.3, and 9.1 points, respectively. Further, we show how existing state-of-the-art multi-agent systems fail to generalize beyond their target domains. These results demonstrate the feasibility of developing a generalist agent to solve diverse tasks and establish OpenHands-Versa as a strong baseline for future research.