MiniDrive: More Efficient Vision-Language Models with Multi-Level 2D Features as Text Tokens for Autonomous Driving

📅 2024-09-11
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
Existing vision-language models (VLMs) for autonomous driving suffer from high computational overhead, poor real-time deployability, and lack of native support for multi-image inputs. To address these limitations, this work proposes a lightweight, end-to-end VLM architecture. Methodologically, it (1) pioneers direct injection of multi-level 2D visual features as tokens into the language model—bypassing redundant visual encoding; (2) introduces FE-MoE, a Feature Engineering Mixture-of-Experts module that adaptively fuses cross-level visual representations; and (3) incorporates DI-Adapter, a Dynamic Instruction Adapter enabling instruction-driven vision–language alignment. The smallest variant contains only 83M parameters, yet achieves state-of-the-art performance while substantially reducing FLOPs and inference latency. Crucially, the architecture natively supports multi-camera image inputs, ensuring strong real-time capability and practical deployability in autonomous driving systems.

📝 Abstract
Vision-language models (VLMs) serve as general-purpose end-to-end models in autonomous driving, performing subtasks such as prediction, planning, and perception through question-and-answer interactions. However, most existing methods rely on computationally expensive visual encoders and large language models (LLMs), making them difficult to deploy in real-world scenarios and real-time applications. Meanwhile, most existing VLMs lack the ability to process multiple images, making it difficult to adapt to multi-camera perception in autonomous driving. To address these issues, we propose a novel framework called MiniDrive, which incorporates our proposed Feature Engineering Mixture of Experts (FE-MoE) module and Dynamic Instruction Adapter (DI-Adapter). The FE-MoE effectively maps 2D features into visual token embeddings before being input into the language model. The DI-Adapter enables the visual token embeddings to dynamically change with the instruction text embeddings, resolving the issue of static visual token embeddings for the same image in previous approaches. Compared to previous works, MiniDrive achieves state-of-the-art performance in terms of parameter size, floating point operations, and response efficiency, with the smallest version containing only 83M parameters.
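The FE-MoE described in the abstract can be pictured as a per-token gate over level-wise linear experts. The sketch below is a minimal NumPy illustration; the gating form, shapes, and all names (`FEMoE`, `feat_dim`, `embed_dim`) are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class FEMoE:
    """Hypothetical sketch of a feature-engineering mixture of experts:
    one linear expert per feature level projects 2D backbone features to
    the language-model embedding width, and a softmax gate mixes the
    expert outputs per token."""

    def __init__(self, num_levels, feat_dim, embed_dim):
        scale = 0.02
        self.experts = [rng.standard_normal((feat_dim, embed_dim)) * scale
                        for _ in range(num_levels)]
        self.gate = rng.standard_normal((feat_dim * num_levels, num_levels)) * scale

    def __call__(self, level_feats):
        # level_feats: list of (num_tokens, feat_dim) arrays, one per level
        concat = np.concatenate(level_feats, axis=-1)            # (T, L*F)
        gates = softmax(concat @ self.gate)                      # (T, L)
        expert_out = np.stack([f @ w for f, w in
                               zip(level_feats, self.experts)])  # (L, T, E)
        # Gate-weighted sum over levels yields LM-ready token embeddings
        return np.einsum("tl,lte->te", gates, expert_out)        # (T, E)
```

In this reading, the returned embeddings are concatenated with the instruction text embeddings and fed to the language model, replacing a heavyweight visual encoder.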
Problem

Research questions and friction points this paper is trying to address.

High computational cost in vision-language models for autonomous driving
Inability to process multiple images in existing VLMs
Static visual token embeddings in previous approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature Engineering Mixture of Experts (FE-MoE) module
Dynamic Instruction Adapter (DI-Adapter)
Maps 2D features into visual token embeddings
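One way the DI-Adapter's instruction-driven behavior could look is single-head cross-attention in which visual tokens query the instruction text embeddings, so the same image yields different visual token embeddings under different instructions. This is a hedged sketch under that assumption; the class name `DIAdapter` and the attention form are illustrative, and the paper's actual mechanism may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class DIAdapter:
    """Hypothetical sketch of instruction-conditioned visual tokens via
    single-head cross-attention from visual tokens to text embeddings."""

    def __init__(self, dim):
        scale = 0.02
        self.wq = rng.standard_normal((dim, dim)) * scale
        self.wk = rng.standard_normal((dim, dim)) * scale
        self.wv = rng.standard_normal((dim, dim)) * scale
        self.dim = dim

    def __call__(self, visual_tokens, instr_tokens):
        # visual_tokens: (Tv, dim); instr_tokens: (Ti, dim)
        q = visual_tokens @ self.wq
        k = instr_tokens @ self.wk
        v = instr_tokens @ self.wv
        attn = softmax(q @ k.T / np.sqrt(self.dim))  # (Tv, Ti)
        # Residual update: instruction content modulates each visual token
        return visual_tokens + attn @ v
```

Under this sketch, swapping the instruction changes the attended values, resolving the static-visual-token issue the summary attributes to prior approaches.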
Enming Zhang
School of Artificial Intelligence, University of Chinese Academy of Sciences; State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences
Xingyuan Dai
Institute of Automation, Chinese Academy of Sciences
Artificial Intelligence · Parallel Intelligence · Reinforcement Learning · ITS
Yisheng Lv
University of Chinese Academy of Sciences; Chinese Academy of Sciences
Parallel Intelligence · AI for Transportation · Autonomous Vehicles · Parallel Transportation Systems
Qinghai Miao
School of Artificial Intelligence, University of Chinese Academy of Sciences