🤖 AI Summary
To address the challenge of deploying scalable in-vehicle object detection under stringent onboard computational constraints, this paper proposes ODAL, a distributed framework that integrates a lightweight onboard vision model with a large cloud-based foundation model for efficient interior scene understanding. Methodologically, ODAL combines a vision foundation model (LLaVA-1.5 7B), supervised fine-tuning, and a cross-device computation architecture. The paper further introduces ODALbench, a dedicated evaluation benchmark featuring two novel metrics: ODAL$_{score}$ (measuring task accuracy and robustness) and ODAL$_{SNR}$ (quantifying the signal-to-noise ratio of generated outputs). Experimental results show that the fine-tuned ODAL-LLaVA model achieves an ODAL$_{score}$ of 89%, a 71% improvement over its baseline, and an ODAL$_{SNR}$ three times that of GPT-4o, with significantly reduced hallucination rates. Overall, ODAL surpasses GPT-4o by nearly 20% in comprehensive performance.
📝 Abstract
AI tasks in the car interior, such as identifying and localizing externally introduced objects, are crucial for the response quality of personal assistants. However, the computational resources of on-board systems remain highly constrained, restricting the deployment of such solutions directly within the vehicle. To address this limitation, we propose the novel Object Detection and Localization (ODAL) framework for interior scene understanding. Our approach leverages vision foundation models through a distributed architecture, splitting computational tasks between on-board and cloud systems. This design overcomes the resource constraints of running foundation models directly in the car. To benchmark model performance, we introduce ODALbench, a new benchmark for comprehensive assessment of detection and localization. Our analysis demonstrates the framework's potential to establish new standards in this domain. We compare the state-of-the-art GPT-4o vision foundation model with the lightweight LLaVA 1.5 7B model and explore how fine-tuning enhances the lightweight model's performance. Remarkably, our fine-tuned ODAL-LLaVA model achieves an ODAL$_{score}$ of 89%, representing a 71% improvement over its baseline performance and outperforming GPT-4o by nearly 20%. Furthermore, the fine-tuned model maintains high detection accuracy while significantly reducing hallucinations, achieving an ODAL$_{SNR}$ three times higher than GPT-4o.