🤖 AI Summary
This work addresses the limitations of large language model (LLM) agents in domain-specific tasks requiring long-tail expert knowledge, where they often fail due to knowledge gaps and struggle to leverage unstructured human feedback effectively. To overcome this, the authors propose the AHCE framework, which models human experts as interactive, callable reasoning modules and uses a reinforcement learning policy to decide when and how to request structured interventions. Evaluated in Minecraft environments, the approach substantially improves task success rates—by 32% on normal-difficulty tasks and nearly 70% on highly difficult ones—while requiring minimal human involvement, enabling efficient and scalable human–AI collaboration.
📝 Abstract
Large Language Model (LLM) based agents excel at general reasoning but often fail in specialized domains where success hinges on long-tail knowledge absent from their training data. While human experts can provide this missing knowledge, their guidance is often unstructured and unreliable, making its direct integration into an agent's plan problematic. To address this, we introduce AHCE (Active Human-Augmented Challenge Engagement), a framework for on-demand human–AI collaboration. At its core, the Human Feedback Module (HFM) employs a learned policy to treat the human expert as an interactive reasoning tool. Extensive experiments in Minecraft demonstrate the framework's effectiveness, increasing task success rates by 32% on normal-difficulty tasks and nearly 70% on highly difficult tasks, all with minimal human intervention. Our work shows that successfully augmenting agents requires learning how to request expert reasoning, moving beyond simple requests for help.