Bandwidth-adaptive Cloud-Assisted 360-Degree 3D Perception for Autonomous Vehicles

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving real-time 360-degree 3D perception under stringent latency constraints in resource-constrained autonomous vehicles. The authors propose a V2X-enabled vehicle-cloud collaborative perception framework that leverages a Transformer to fuse multi-camera inputs into a bird’s-eye-view (BEV) representation. A bandwidth-adaptive dynamic computation partitioning strategy is introduced, which jointly optimizes the number of locally processed layers, feature quantization levels, and offloading decisions based on real-time network conditions. By integrating feature pruning, compression, and hierarchical offloading, the method maximizes detection accuracy while adhering to end-to-end latency requirements. Experimental results demonstrate a 72% reduction in end-to-end latency compared to a fully onboard baseline, and under identical latency constraints, the proposed method achieves up to a 20% improvement in detection accuracy over static configurations.

📝 Abstract
A key challenge for autonomous driving lies in maintaining real-time situational awareness regarding surrounding obstacles under strict latency constraints. The high processing requirements coupled with limited onboard computational resources can cause delay issues, particularly in complex urban settings. To address this, we propose leveraging Vehicle-to-Everything (V2X) communication to partially offload processing to the cloud, where compute resources are abundant, thus reducing overall latency. Our approach utilizes transformer-based models to fuse multi-camera sensor data into a comprehensive Bird's-Eye View (BEV) representation, enabling accurate 360-degree 3D object detection. The computation is dynamically split between the vehicle and the cloud based on the number of layers processed locally and the quantization level of the features. To further reduce network load, we apply feature vector clipping and compression prior to transmission. In a real-world experimental evaluation, our hybrid strategy achieved a 72% reduction in end-to-end latency compared to a traditional onboard solution. To adapt to fluctuating network conditions, we introduce a dynamic optimization algorithm that selects the split point and quantization level to maximize detection accuracy while satisfying real-time latency constraints. Trace-based evaluation under realistic bandwidth variability shows that this adaptive approach improves accuracy by up to 20% over static parameterization with the same latency performance.
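The adaptive strategy the abstract describes, selecting a split layer and quantization level that maximize accuracy while keeping end-to-end latency within budget, can be sketched as an exhaustive search over profiled configurations. All tables, numbers, and function names below are illustrative assumptions for the sketch, not the paper's measured profiles:

```python
# Hypothetical sketch of bandwidth-adaptive computation partitioning:
# pick the (split layer, quantization bits) pair that maximizes a profiled
# accuracy estimate subject to an end-to-end latency budget.
# All profile values below are made-up placeholders, not the paper's data.
from itertools import product

# Assumed offline profiles, indexed by split layer:
VEHICLE_MS = {0: 0.0, 2: 18.0, 4: 35.0, 6: 60.0}   # onboard compute time (ms)
CLOUD_MS   = {0: 12.0, 2: 9.0, 4: 6.0, 6: 3.0}     # remaining cloud compute (ms)
FEATURE_KB = {0: 4096, 2: 2048, 4: 1024, 6: 512}   # feature size at split (32-bit)
ACCURACY   = {8: 0.95, 16: 0.99, 32: 1.00}         # relative accuracy per bit width

def best_config(bandwidth_mbps: float, budget_ms: float):
    """Return the (split_layer, quant_bits) pair maximizing accuracy
    whose total latency fits within budget_ms, or None if none fits."""
    best, best_acc = None, -1.0
    for split, bits in product(VEHICLE_MS, ACCURACY):
        payload_kb = FEATURE_KB[split] * bits / 32       # quantization shrinks payload
        tx_ms = payload_kb * 8 / bandwidth_mbps          # kbit / (Mbit/s) -> ms
        total = VEHICLE_MS[split] + tx_ms + CLOUD_MS[split]
        if total <= budget_ms and ACCURACY[bits] > best_acc:
            best, best_acc = (split, bits), ACCURACY[bits]
    return best

# On a fast link the early, full-precision split fits the budget;
# as bandwidth drops, the search falls back to deeper splits and coarser bits.
```

In the paper's setting, the search would rerun as bandwidth estimates change, which is what distinguishes the adaptive policy from the static parameterizations it is compared against.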
Problem

Research questions and friction points this paper is trying to address.

autonomous driving
360-degree 3D perception
latency constraints
onboard computational resources
real-time situational awareness
Innovation

Methods, ideas, or system contributions that make the work stand out.

bandwidth-adaptive
cloud-assisted perception
BEV representation
dynamic computation offloading
transformer-based fusion
Faisal Hawlader
Interdisciplinary Centre for Security, Reliability, and Trust (SnT), University of Luxembourg, L-1855, Luxembourg
Rui Meireles
Computer Science Department, Vassar College, Poughkeepsie, NY 12604, USA
Gamal Elghazaly
University of Luxembourg
AI, autonomous driving, robotics, computer vision
Ana Aguiar
Assistant Professor, Electrical and Computer Engineering, University of Porto
wireless systems, wireless networks, performance evaluation, applied data science, digital mobility
Raphaël Frank
Interdisciplinary Centre for Security, Reliability, and Trust (SnT), University of Luxembourg, L-1855, Luxembourg