Scalable Object Detection in the Car Interior With Vision Foundation Models

📅 2025-08-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of deploying scalable in-vehicle object detection under stringent onboard computational constraints, this paper proposes ODAL—a distributed framework that synergistically integrates a lightweight onboard vision model with a large cloud-based foundation model for efficient scene understanding. Methodologically, ODAL combines a visual foundation model (LLaVA-1.5 7B), supervised fine-tuning, and a cross-device computation architecture. We further introduce ODALbench, a dedicated evaluation benchmark featuring two novel metrics: ODAL$_{score}$ (measuring task accuracy and robustness) and ODAL$_{SNR}$ (quantifying signal-to-noise ratio in generated outputs). Experimental results show that fine-tuned ODAL-LLaVA achieves an ODAL$_{score}$ of 89%, outperforming baseline methods by 71%; its ODAL$_{SNR}$ is three times that of GPT-4o, with significantly reduced hallucination rates. Overall, ODAL surpasses GPT-4o by approximately 20% in comprehensive performance.
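The paper does not spell out the ODAL$_{SNR}$ formula here; purely as an illustrative reading, an SNR-style metric could be the ratio of grounded ("signal") detections to hallucinated ("noise") ones. The function below is a hypothetical sketch under that assumption, not the authors' definition.

```python
def odal_snr(grounded: int, hallucinated: int) -> float:
    """Hypothetical SNR-style ratio: grounded detections over
    hallucinated ones. Illustrative only; the paper's exact
    ODAL_SNR definition is not reproduced here."""
    if hallucinated == 0:
        return float("inf")  # no noise at all
    return grounded / hallucinated

# Example: a model with 90 grounded and 10 hallucinated detections
# would score three times higher than one with 75 and 25.
print(odal_snr(90, 10))  # 9.0
print(odal_snr(75, 25))  # 3.0
```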

📝 Abstract
AI tasks in the car interior, such as identifying and localizing externally introduced objects, are crucial to the response quality of personal assistants. However, the computational resources of on-board systems remain highly constrained, restricting the deployment of such solutions directly within the vehicle. To address this limitation, we propose the novel Object Detection and Localization (ODAL) framework for interior scene understanding. Our approach leverages vision foundation models through a distributed architecture, splitting computational tasks between the on-board system and the cloud. This design overcomes the resource constraints of running foundation models directly in the car. To benchmark model performance, we introduce ODALbench, a new metric for comprehensive assessment of detection and localization. Our analysis demonstrates the framework's potential to establish new standards in this domain. We compare the state-of-the-art GPT-4o vision foundation model with the lightweight LLaVA 1.5 7B model and explore how fine-tuning enhances the lightweight model's performance. Remarkably, our fine-tuned ODAL-LLaVA model achieves an ODAL$_{score}$ of 89%, representing a 71% improvement over its baseline performance and outperforming GPT-4o by nearly 20%. Furthermore, the fine-tuned model maintains high detection accuracy while significantly reducing hallucinations, achieving an ODAL$_{SNR}$ three times higher than GPT-4o.
Problem

Research questions and friction points this paper is trying to address.

Detecting objects inside cars with limited onboard computing
Balancing computational tasks between vehicle and cloud systems
Improving lightweight model accuracy while reducing false detections
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed architecture splits on-board and cloud computation
Leverages vision foundation models for interior object detection
Fine-tuned lightweight model significantly outperforms baseline and GPT-4o
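The distributed on-board/cloud split described above can be sketched as a two-stage pipeline: a lightweight on-board stage proposes candidate regions, and a cloud-hosted foundation model labels and localizes them. Everything below is a stub under assumed interfaces (the function names `onboard_detect` and `cloud_refine` are hypothetical), not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    box: tuple  # (x, y, w, h) in image coordinates


def onboard_detect(frame) -> list[Detection]:
    """On-board stage: cheap candidate proposals under tight compute.
    Stubbed here; a real system would run a small vision model."""
    return [Detection("object", (120, 80, 60, 40))]


def cloud_refine(frame, candidates: list[Detection]) -> list[Detection]:
    """Cloud stage: a large vision foundation model assigns final
    labels to the proposed regions. Stubbed with a fixed answer."""
    return [Detection("backpack", d.box) for d in candidates]


def odal_pipeline(frame) -> list[Detection]:
    # Split work: coarse proposals on-board, heavy reasoning in the cloud.
    return cloud_refine(frame, onboard_detect(frame))


results = odal_pipeline(frame=None)
print(results[0].label, results[0].box)  # backpack (120, 80, 60, 40)
```

The design point is that only compact proposals cross the vehicle-to-cloud boundary, keeping on-board cost low while the foundation model handles open-vocabulary labeling.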
Bálint Mészáros
Technical University of Munich, School of Computation, Information and Technology
Ahmet Firintepe
BMW Group/TU Kaiserslautern
Augmented Reality, Computer Vision, Deep Learning
Sebastian Schmidt
Technical University of Munich, School of Computation, Information and Technology
Stephan Günnemann
Professor of Computer Science, Technical University of Munich
Machine Learning, Graphs, Graph Neural Networks, Robustness