🤖 AI Summary
Autonomous driving systems exhibit low hazard detection accuracy and weak spatiotemporal localization capability when confronting unpredictable edge cases—such as pedestrian anomalies or sudden adverse weather—thereby compromising safety and robustness. To address this, we propose INSIGHT, the first semantic-visual co-driven hierarchical vision-language model (VLM). INSIGHT jointly optimizes attention-guided spatial hazard localization and coordinate regression by fusing scene-level semantic descriptions with multi-scale visual features. This design significantly enhances generalization to rare events. Evaluated on BDD100K, INSIGHT achieves superior hazard prediction accuracy and interpretability compared to state-of-the-art end-to-end models, while simultaneously improving real-time situational awareness and system robustness.
📝 Abstract
Autonomous driving systems face significant challenges in handling unpredictable edge-case scenarios, such as adversarial pedestrian movements, dangerous vehicle maneuvers, and sudden environmental changes. Current end-to-end driving models struggle with generalization to these rare events due to limitations in traditional detection and prediction approaches. To address this, we propose INSIGHT (Integration of Semantic and Visual Inputs for Generalized Hazard Tracking), a hierarchical vision-language model (VLM) framework designed to enhance hazard detection and edge-case evaluation. By using multimodal data fusion, our approach integrates semantic and visual representations, enabling precise interpretation of driving scenarios and accurate forecasting of potential dangers. Through supervised fine-tuning of VLMs, we optimize spatial hazard localization using attention-based mechanisms and coordinate regression techniques. Experimental results on the BDD100K dataset demonstrate a substantial improvement in hazard prediction straightforwardness and accuracy over existing models, achieving a notable increase in generalization performance. This advancement enhances the robustness and safety of autonomous driving systems, ensuring improved situational awareness and potential decision-making in complex real-world scenarios.