CrashChat: A Multimodal Large Language Model for Multitask Traffic Crash Video Analysis

📅 2025-12-21
🤖 AI Summary
To address the challenge of unified modeling for multiple tasks (crash recognition, temporal localization, and high-level semantic understanding) in traffic crash video analysis, this paper introduces CrashChat, a multimodal large language model (MLLM) for multitask crash analysis in driving scenarios. Methodologically, the authors propose a multi-task learning strategy based on task decoupling and grouping to mitigate negative transfer, combined with driving-domain instruction tuning for cross-task collaborative optimization and domain-knowledge injection. Built upon the VideoLLaMA3 architecture, the model integrates video-language alignment, spatiotemporal feature modeling, and joint training. Experiments show near-perfect crash recognition accuracy (~100%), a 176% improvement in crash localization and a 40% improvement in the more challenging pre-crash localization, and gains of 0.18-0.41 in BLEU and 0.18-0.42 in ROUGE scores. The approach achieves state-of-the-art performance across multiple benchmarks.

📝 Abstract
Automating crash video analysis is essential to leverage the growing availability of driving video data for traffic safety research and accountability attribution in autonomous driving. Crash video analysis is a challenging multitask problem due to the complex spatiotemporal dynamics of crash events in video data and the diverse analytical requirements involved. It requires capabilities spanning crash recognition, temporal grounding, and high-level video understanding. Existing models, however, cannot perform all these tasks within a unified framework, and effective training strategies for such models remain underexplored. To fill these gaps, this paper proposes CrashChat, a multimodal large language model (MLLM) for multitask traffic crash analysis, built upon VideoLLaMA3. CrashChat acquires domain-specific knowledge through instruction fine-tuning and employs a novel multitask learning strategy based on task decoupling and grouping, which maximizes the benefit of joint learning within and across task groups while mitigating negative transfer. Numerical experiments on consolidated public datasets demonstrate that CrashChat consistently outperforms existing MLLMs across model scales and traditional vision-based methods, achieving state-of-the-art performance. It reaches near-perfect accuracy in crash recognition, a 176% improvement in crash localization, and a 40% improvement in the more challenging pre-crash localization. Compared to general MLLMs, it substantially enhances textual accuracy and content coverage in crash description and reasoning tasks, with 0.18-0.41 increases in BLEU scores and 0.18-0.42 increases in ROUGE scores. Beyond its strong performance, CrashChat is a convenient, end-to-end analytical tool ready for practical implementation. The dataset and implementation code for CrashChat are available at https://github.com/Liangkd/CrashChat.
Problem

Research questions and friction points this paper is trying to address.

Develops a unified model for multitask traffic crash video analysis
Enhances crash recognition, localization, and description accuracy
Addresses negative transfer in joint learning through task decoupling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal large language model for crash video analysis
Instruction fine-tuning for domain-specific knowledge acquisition
Task decoupling and grouping for multitask learning
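The grouping idea above can be illustrated with a minimal training-schedule sketch. The group and task names below are hypothetical placeholders chosen to match the tasks described in the abstract, not the authors' actual implementation: tasks that benefit from joint learning share a group, and each training step draws a batch from a single group so that dissimilar objectives are not mixed, limiting negative transfer.

```python
import random

# Hypothetical task groups (placeholder names, not from the paper's code):
# tasks within a group are jointly optimized; groups are kept separate.
TASK_GROUPS = {
    "recognition": ["crash_recognition"],
    "grounding": ["crash_localization", "pre_crash_localization"],
    "understanding": ["crash_description", "crash_reasoning"],
}

def make_schedule(steps, seed=0):
    """Build a training schedule: each step samples one task group,
    then one task within it, so a batch mixes only compatible tasks."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(steps):
        group = rng.choice(sorted(TASK_GROUPS))
        task = rng.choice(TASK_GROUPS[group])
        schedule.append((group, task))
    return schedule

for group, task in make_schedule(6):
    print(f"{group:14s} -> {task}")
```

In an actual run, each scheduled `(group, task)` pair would select an instruction-tuning batch for that task; the fixed seed only makes the sketch reproducible.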
Kaidi Liang
Stony Brook University, Department of Civil Engineering, Stony Brook, NY 11794, USA
Ke Li
Stony Brook University, Department of Civil Engineering, Stony Brook, NY 11794, USA
Xianbiao Hu
The Pennsylvania State University, Department of Civil and Environmental Engineering, University Park, PA 16802-1408, USA
Ruwen Qin
Stony Brook University
Visual Perception and Cognition · Collective Intelligence