🤖 AI Summary
This work addresses the scarcity of dedicated training and evaluation resources for proactive agents in assistive and monitoring scenarios. To this end, the authors introduce ProAct-75, a benchmark comprising 75 tasks with 91,581 step-level annotations, along with the ProAct-Helper framework. The framework pioneers the use of an explicit task graph structure to guide multimodal large language models in state awareness and parallel action planning, moving beyond conventional approaches that merely imitate the human's next step. Combined with an entropy-driven heuristic search mechanism, it enables goal-oriented proactive decision-making. Experiments show that the proposed method improves trigger detection mF1 by 6.21%, reduces average steps per decision by 0.25, and increases the rate of parallel action execution by 15.58%, significantly outperforming several strong closed-source baselines.
📝 Abstract
While passive agents merely follow instructions, proactive agents align with higher-level objectives, such as assistance and safety, by continuously monitoring the environment to determine when and how to act. However, developing proactive agents is hindered by the lack of specialized resources. To address this, we introduce ProAct-75, a benchmark designed to train and evaluate proactive agents across diverse domains, including assistance, maintenance, and safety monitoring. Spanning 75 tasks, our dataset features 91,581 step-level annotations enriched with explicit task graphs. These graphs encode step dependencies and parallel execution possibilities, providing the structural grounding necessary for complex decision-making. Building on this benchmark, we propose ProAct-Helper, a reference baseline powered by a Multimodal Large Language Model (MLLM) that grounds decision-making in state detection and leverages task graphs for entropy-driven heuristic search over actions, allowing the agent to execute parallel threads independently rather than mirroring the human's next step. Extensive experiments demonstrate that ProAct-Helper outperforms strong closed-source models, improving trigger detection mF1 by 6.21%, saving 0.25 more steps per online one-step decision, and increasing the rate of parallel actions by 15.58%.
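To make the task-graph idea concrete, here is a minimal, hypothetical sketch of how a graph of step dependencies can expose parallel-ready actions and how an entropy score over predicted step states could rank them. The task names, belief distributions, and selection rule are all invented for illustration; the paper's actual graph format, MLLM state detector, and search procedure are not reproduced here.

```python
import math

# Hypothetical task graph: step -> prerequisite steps (names invented for illustration).
TASK_GRAPH = {
    "boil_water": [],
    "chop_vegetables": [],
    "cook_soup": ["boil_water", "chop_vegetables"],
    "set_table": [],
    "serve": ["cook_soup", "set_table"],
}

def entropy(probs):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def ready_actions(graph, done):
    """Steps whose prerequisites are all completed — these can run in parallel."""
    return [s for s, deps in graph.items()
            if s not in done and all(d in done for d in deps)]

def select_action(graph, done, state_beliefs):
    """Pick the ready step whose predicted state is least uncertain.

    state_beliefs maps step -> probability distribution over outcomes,
    a stand-in for an MLLM's state-detection output.
    """
    candidates = ready_actions(graph, done)
    if not candidates:
        return None
    return min(candidates, key=lambda s: entropy(state_beliefs[s]))

# Three steps are ready at the start and could proceed in parallel;
# the agent commits first to the one with the most confident state estimate.
beliefs = {
    "boil_water": [0.9, 0.1],
    "chop_vegetables": [0.5, 0.5],
    "set_table": [0.8, 0.2],
}
print(ready_actions(TASK_GRAPH, done=set()))
print(select_action(TASK_GRAPH, done=set(), state_beliefs=beliefs))
```

The key property this toy captures is that dependency edges, not a recorded human trajectory, determine which actions are admissible, so several steps can be pursued independently once their prerequisites are met.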