🤖 AI Summary
To unify crash recognition, temporal localization, and high-level semantic understanding in traffic crash video analysis, this paper introduces CrashChat, a multimodal large language model (MLLM) tailored for multitask crash video analysis. Methodologically, it proposes a task-decoupled and grouped multi-task learning strategy that mitigates negative transfer, combined with driving-domain instruction tuning to enable cross-task collaborative optimization and domain-knowledge injection. Built upon the VideoLLaMA3 architecture, the model integrates video-language alignment, spatiotemporal feature modeling, and joint training. Experiments demonstrate near-perfect crash recognition accuracy (~100%), improvements of 176% and 40% in crash and pre-crash temporal localization, respectively, and gains of 0.18-0.41 in BLEU and 0.18-0.42 in ROUGE scores, achieving state-of-the-art performance across multiple benchmarks.
📝 Abstract
Automating crash video analysis is essential to leverage the growing availability of driving video data for traffic safety research and accountability attribution in autonomous driving. Crash video analysis is a challenging multitask problem due to the complex spatiotemporal dynamics of crash events in video data and the diverse analytical requirements involved. It requires capabilities spanning crash recognition, temporal grounding, and high-level video understanding. Existing models, however, cannot perform all these tasks within a unified framework, and effective training strategies for such models remain underexplored. To fill these gaps, this paper proposes CrashChat, a multimodal large language model (MLLM) for multitask traffic crash analysis, built upon VideoLLaMA3. CrashChat acquires domain-specific knowledge through instruction fine-tuning and employs a novel multitask learning strategy based on task decoupling and grouping, which maximizes the benefit of joint learning within and across task groups while mitigating negative transfer. Numerical experiments on consolidated public datasets demonstrate that CrashChat consistently outperforms existing MLLMs across model scales and traditional vision-based methods, achieving state-of-the-art performance. It reaches near-perfect accuracy in crash recognition, a 176% improvement in crash localization, and a 40% improvement in the more challenging pre-crash localization. Compared to general MLLMs, it substantially enhances textual accuracy and content coverage in crash description and reasoning tasks, with 0.18-0.41 increases in BLEU scores and 0.18-0.42 increases in ROUGE scores. Beyond its strong performance, CrashChat is a convenient, end-to-end analytical tool ready for practical implementation. The dataset and implementation code for CrashChat are available at https://github.com/Liangkd/CrashChat.
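The task-decoupled and grouped multitask strategy described above can be illustrated with a minimal sketch. The group names, task names, and batching scheme below are hypothetical (the paper's actual grouping and training loop are not specified in the abstract); the sketch only shows the core idea of training on homogeneous batches drawn from one task group at a time, so that jointly learned tasks share batches while unrelated tasks do not, limiting negative transfer.

```python
import random

# Hypothetical task groups: tasks within a group are trained jointly
# (shared batches), while batches never mix groups, so gradient
# interference across dissimilar tasks is reduced.
TASK_GROUPS = {
    "recognition": ["crash_recognition"],
    "grounding": ["crash_localization", "pre_crash_localization"],
    "understanding": ["crash_description", "crash_reasoning"],
}

def make_grouped_batches(samples, batch_size, seed=0):
    """Bucket instruction-tuning samples by task group, shuffle within
    each group, and emit homogeneous batches; finally interleave the
    batches so training alternates between groups across steps."""
    task_to_group = {t: g for g, ts in TASK_GROUPS.items() for t in ts}
    rng = random.Random(seed)
    by_group = {}
    for s in samples:
        by_group.setdefault(task_to_group[s["task"]], []).append(s)
    batches = []
    for group, items in by_group.items():
        rng.shuffle(items)
        for i in range(0, len(items), batch_size):
            batches.append((group, items[i:i + batch_size]))
    rng.shuffle(batches)  # interleave groups across training steps
    return batches

# Toy instruction-tuning set: two tasks from two different groups.
samples = (
    [{"task": "crash_recognition", "id": i} for i in range(4)]
    + [{"task": "crash_localization", "id": i} for i in range(4)]
)
batches = make_grouped_batches(samples, batch_size=2)
```

Each emitted batch contains samples from a single task group only; the final shuffle decides the order in which groups appear during training, which is one simple way to alternate optimization across groups.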