ProAct: A Benchmark and Multimodal Framework for Structure-Aware Proactive Response

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the scarcity of dedicated training and evaluation resources for proactive response agents in assistive and monitoring scenarios. To this end, the authors introduce ProAct-75, a benchmark comprising 75 tasks with 91,581 step-level annotations, along with the ProAct-Helper framework. This framework pioneers the use of an explicit task graph structure to guide multimodal large language models in state-awareness and parallel action planning, moving beyond conventional approaches that merely imitate human next-step behaviors. Integrated with an entropy-driven heuristic search mechanism, it enables goal-oriented proactive decision-making. Experimental results demonstrate that the proposed method improves trigger detection mF1 by 6.21%, reduces average steps per decision by 0.25, and increases the rate of parallel action execution by 15.58%, significantly outperforming multiple strong closed-source baselines.
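The summary names an "entropy-driven heuristic search mechanism" without detail. One plausible minimal reading (an assumption for illustration, not the paper's actual method) is to score each candidate action by the Shannon entropy of its predicted-outcome distribution and commit only when uncertainty is low; the function names, candidate actions, and threshold below are all hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def pick_action(candidates, threshold=0.5):
    """Commit to the lowest-entropy candidate if it is confident enough,
    otherwise return None (keep observing). `candidates` maps an action
    name to the model's predicted-outcome distribution for that action."""
    action, dist = min(candidates.items(), key=lambda kv: entropy(kv[1]))
    return action if entropy(dist) < threshold else None

# Hypothetical trigger decision: act only when uncertainty is low.
cands = {"hand_over_tool": [0.9, 0.1], "wait": [0.5, 0.5]}
print(pick_action(cands))  # → hand_over_tool
```

Under this sketch, a uniform distribution (maximal entropy) never triggers an action, which matches the paper's framing of proactive agents deciding *when* to act, not just *how*.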

📝 Abstract
While passive agents merely follow instructions, proactive agents align with higher-level objectives, such as assistance and safety, by continuously monitoring the environment to determine when and how to act. However, the development of proactive agents is hindered by a lack of specialized resources. To address this, we introduce ProAct-75, a benchmark designed to train and evaluate proactive agents across diverse domains, including assistance, maintenance, and safety monitoring. Spanning 75 tasks, our dataset features 91,581 step-level annotations enriched with explicit task graphs. These graphs encode step dependencies and parallel execution possibilities, providing the structural grounding necessary for complex decision-making. Building on this benchmark, we propose ProAct-Helper, a reference baseline powered by a Multimodal Large Language Model (MLLM) that grounds decision-making in state detection and leverages task graphs to enable entropy-driven heuristic search for action selection, allowing agents to execute parallel threads independently rather than mirroring the human's next step. Extensive experiments demonstrate that ProAct-Helper outperforms strong closed-source models, improving trigger detection mF1 by 6.21%, saving 0.25 steps on average in online one-step decisions, and increasing the rate of parallel actions by 15.58%.
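The task-graph idea in the abstract can be sketched minimally: encode step dependencies as a DAG, and any step whose prerequisites are all complete is executable, so two such steps can run in parallel. The `TaskGraph` class, `ready_steps` method, and the example cooking steps are illustrative assumptions, not the paper's actual data structures or annotations.

```python
class TaskGraph:
    """Toy DAG of task steps; `deps` maps each step to its prerequisites."""

    def __init__(self, deps):
        self.deps = {step: set(req) for step, req in deps.items()}

    def ready_steps(self, done):
        """Unfinished steps whose prerequisites are all finished.
        Any two steps returned together can, by construction, be
        executed in parallel threads."""
        done = set(done)
        return sorted(s for s, req in self.deps.items()
                      if s not in done and req <= done)

# Hypothetical task: chopping and boiling are independent once prep is done.
g = TaskGraph({
    "prep": set(),
    "chop": {"prep"},
    "boil": {"prep"},
    "combine": {"chop", "boil"},
})
print(g.ready_steps({"prep"}))  # → ['boil', 'chop'] (parallel threads)
```

This is the structural grounding the abstract refers to: a human following the recipe picks one next step, while an agent reading the graph can see that `boil` is free to start even if the human is busy with `chop`.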
Problem

Research questions and friction points this paper is trying to address.

proactive agents
benchmark
task graphs
multimodal framework
structure-aware
Innovation

Methods, ideas, or system contributions that make the work stand out.

proactive agents
task graphs
multimodal large language model
structure-aware decision-making
entropy-driven search
Xiaomeng Zhu
Department of Computer Science and Engineering, The Hong Kong University of Science and Technology (HKUST), Hong Kong SAR, China; Tencent, Shenzhen, China
Fengming Zhu
Department of Computer Science and Engineering, The Hong Kong University of Science and Technology (HKUST), Hong Kong SAR, China
Weijie Zhou
Tencent, Shenzhen, China
Ye Tian
Tencent Robotics X
Zhenlin Hu
Tencent, Shenzhen, China
Yufei Huang
Tencent, Shenzhen, China
Yuchun Guo
Research Scientist, CSAIL, MIT
Computational Biology, Machine Learning, Regulatory Genomics, Epigenomics, Transcriptional Regulation
Xinyu Wu
SIAT, CAS
Robot, Exoskeleton Robot
Zhengyou Zhang
Tencent AI Lab & Tencent Robotics X
Computer Vision, Multimedia, Speech, Robotics, AI
Fangzhen Lin
Unknown affiliation
Xuantang Xiong
Tencent, Shenzhen, China