BeLLA: End-to-End Birds Eye View Large Language Assistant for Autonomous Driving

📅 2025-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current autonomous driving vision-language models struggle to effectively model the spatial structure of multi-camera setups: single-view encoders neglect inter-camera geometric relationships, while existing multi-view fusion approaches lack a unified spatial representation, hindering ego-centric directional understanding, relative object localization, and behavioral reasoning. This work introduces the first end-to-end autonomous driving vision-language assistant, which innovatively maps multi-camera images directly onto a unified 360° bird’s-eye view (BEV) representation and jointly trains it with a large language model. Leveraging a BEV feature encoder, vision-language alignment pretraining, and instruction tuning, the framework achieves deep integration of spatial perception and natural language question answering. On NuScenes-QA and DriveLM benchmarks, our method achieves up to a 9.3% improvement on spatial reasoning tasks, significantly outperforming state-of-the-art approaches.

📝 Abstract
The rapid development of Vision-Language Models (VLMs) and Multimodal Large Language Models (MLLMs) in autonomous driving research has significantly reshaped the landscape by enabling richer scene understanding, context-aware reasoning, and more interpretable decision-making. However, much existing work either relies on single-view encoders that fail to exploit the spatial structure of multi-camera systems, or operates on aggregated multi-view features that lack a unified spatial representation, making it harder to reason about ego-centric directions, object relations, and the wider context. We thus present BeLLA, an end-to-end architecture that connects unified 360° BEV representations with a large language model for question answering in autonomous driving. We primarily evaluate our work on two benchmarks, NuScenes-QA and DriveLM, where BeLLA consistently outperforms existing approaches on questions that require greater spatial reasoning, such as those involving relative object positioning and behavioral understanding of nearby objects, achieving up to +9.3% absolute improvement on certain tasks. In other categories, BeLLA performs competitively, demonstrating the capability to handle a diverse range of questions.
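The pipeline the abstract describes, a BEV encoder whose unified 360° features are aligned with a language model and jointly tuned for QA, can be sketched as follows. All shapes, the single linear projector, and the random weights are illustrative assumptions for clarity, not BeLLA's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, C_bev = 50, 50, 256   # assumed BEV grid from the 360° encoder
d_llm = 1024                # assumed LLM hidden size

# Stand-in for the BEV feature encoder's output over the fused camera views.
bev_features = rng.standard_normal((H, W, C_bev))

# Flatten the spatial grid into a sequence of H*W visual tokens.
visual_tokens = bev_features.reshape(H * W, C_bev)

# A learned linear projector (random weights here) maps BEV channels
# into the LLM's embedding dimension, as in alignment pretraining.
W_proj = rng.standard_normal((C_bev, d_llm)) * 0.02
b_proj = np.zeros(d_llm)
projected = visual_tokens @ W_proj + b_proj     # (H*W, d_llm)

# Prepend the projected BEV tokens to the embedded question so the LLM
# attends over a unified spatial representation while answering.
n_text = 32                                     # assumed question length
text_embeddings = rng.standard_normal((n_text, d_llm))
llm_input = np.concatenate([projected, text_embeddings], axis=0)

print(llm_input.shape)  # (2532, 1024), i.e. (H*W + n_text, d_llm)
```

The key design point this illustrates is that every camera contributes to one shared ego-centric token grid, so questions about relative position ("the car to my left") resolve against a single coordinate frame rather than per-camera features.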
Problem

Research questions and friction points this paper is trying to address.

Single-view encoders neglect the geometric relationships between cameras in multi-camera setups
Aggregated multi-view features lack a unified spatial representation
Ego-centric directional understanding, relative object localization, and behavioral reasoning remain difficult for existing driving VLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Maps multi-camera images onto a unified 360° BEV representation trained end-to-end with an LLM
Combines a BEV feature encoder, vision-language alignment pretraining, and instruction tuning
Achieves up to +9.3% absolute improvement on spatial reasoning tasks in NuScenes-QA and DriveLM
Karthik Mohan
University of Toronto
Machine Learning · Artificial Intelligence · Computer Science
Sonam Singh
Robert Bosch Corporate Research India
Amit Arvind Kale
Robert Bosch Corporate Research India