🤖 AI Summary
Current vision-language models for autonomous driving struggle to model the spatial structure of multi-camera setups: single-view encoders neglect inter-camera geometric relationships, while existing multi-view fusion approaches lack a unified spatial representation, hindering ego-centric directional understanding, relative object localization, and behavioral reasoning. This work introduces BeLLA, an end-to-end autonomous driving vision-language assistant that maps multi-camera images onto a unified 360° bird's-eye-view (BEV) representation and trains it jointly with a large language model. Through a BEV feature encoder, vision-language alignment pretraining, and instruction tuning, the framework integrates spatial perception with natural-language question answering. On the NuScenes-QA and DriveLM benchmarks, BeLLA improves by up to 9.3% absolute on spatial reasoning tasks, outperforming state-of-the-art approaches.
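The pipeline described above (multi-camera images → unified BEV grid → LLM) can be sketched in a few lines. All shapes, the single linear projection, and the stand-in random weights below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: assume a BEV encoder has already fused the
# multi-camera views into a C x H x W feature grid over the ego frame.
C, H, W = 256, 50, 50   # BEV channels and spatial grid
D_LLM = 1024            # LLM token embedding width

bev_features = rng.standard_normal((C, H, W)).astype(np.float32)

# Flatten the BEV grid into a sequence of H*W "visual tokens", one per cell.
tokens = bev_features.reshape(C, H * W).T        # (H*W, C)

# A learned projection aligns visual tokens with the LLM embedding space
# (random stand-in weights here; in practice these come from the
# vision-language alignment pretraining stage).
W_proj = rng.standard_normal((C, D_LLM)).astype(np.float32) * 0.02
visual_tokens = tokens @ W_proj                  # (H*W, D_LLM)

# Prepend the visual tokens to the question's text-token embeddings so the
# LLM attends to the full 360° scene while answering.
text_tokens = rng.standard_normal((12, D_LLM)).astype(np.float32)
llm_input = np.concatenate([visual_tokens, text_tokens], axis=0)
print(llm_input.shape)  # (2512, 1024)
```

Because every cell of the BEV grid becomes a token in a shared ego-centric frame, questions about relative direction ("the car to the front-left") reduce to attention over spatially indexed tokens rather than cross-camera reasoning.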
📝 Abstract
The rapid development of Vision-Language Models (VLMs) and Multimodal Large Language Models (MLLMs) in autonomous driving research has significantly reshaped the landscape by enabling richer scene understanding, context-aware reasoning, and more interpretable decision-making. However, much existing work relies either on single-view encoders that fail to exploit the spatial structure of multi-camera systems, or on aggregated multi-view features that lack a unified spatial representation, making it harder to reason about ego-centric directions, object relations, and the wider context. We therefore present BeLLA, an end-to-end architecture that connects a unified 360° BEV representation with a large language model for question answering in autonomous driving. We evaluate our work on two benchmarks, NuScenes-QA and DriveLM, where BeLLA consistently outperforms existing approaches on questions that require stronger spatial reasoning, such as those involving relative object positioning and behavioral understanding of nearby objects, achieving up to +9.3% absolute improvement on certain tasks. In the remaining categories, BeLLA performs competitively, demonstrating that it can handle a diverse range of questions.