AI Summary
Accurately predicting the behavior of multiple pedestrians in dense crowd environments remains a critical challenge for mobile robots, yet existing methods predominantly adopt an egocentric, single-agent perspective and lack the capacity to model third-person multi-agent behaviors and their interactions with the scene.
Method: We propose CAMP-VLM, the first context-aware multi-agent behavior prediction framework, which integrates vision-language models with scene-graph-based spatial reasoning. We further introduce the first synthetic data generation and evaluation paradigm designed specifically for third-person multi-agent behavior prediction.
Contribution/Results: Leveraging photorealistic simulation, supervised fine-tuning (SFT), and direct preference optimization (DPO), CAMP-VLM improves over state-of-the-art baselines by up to 66.9% on both synthetic and real-world sequences, demonstrating strong generalization and practical applicability.
Abstract
Accurately predicting human behaviors is crucial for mobile robots operating in human-populated environments. While prior research primarily focuses on predicting actions in single-human scenarios from an egocentric view, several robotic applications require understanding multiple human behaviors from a third-person perspective. To this end, we present CAMP-VLM (Context-Aware Multi-human behavior Prediction): a Vision Language Model (VLM)-based framework that incorporates contextual features from visual input and spatial awareness from scene graphs to enhance prediction of human-scene interactions. Due to the lack of suitable datasets for multi-human behavior prediction from an observer view, we fine-tune CAMP-VLM on synthetic human behavior data generated by a photorealistic simulator, and evaluate the resulting models on both synthetic and real-world sequences to assess their generalization capabilities. Leveraging Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO), CAMP-VLM outperforms the best-performing baseline by up to 66.9% in prediction accuracy.