Fusion4CA: Boosting 3D Object Detection via Comprehensive Image Exploitation

📅 2026-03-05
🤖 AI Summary
This work addresses the over-reliance on LiDAR and under-utilization of visual information in existing bird's-eye-view (BEV) fusion methods for 3D object detection. Building on the BEVFusion framework, the authors propose a lightweight and efficient multimodal fusion strategy that calibrates image features against 3D geometric structure via a contrastive alignment module, strengthens RGB exploitation during training through a camera auxiliary branch, and refines BEV-space fusion by integrating a cognitive adapter with coordinate attention. The method adds only 3.48% more parameters and reaches 69.7% mAP on the nuScenes benchmark after just six training epochs, a 1.2% improvement over the baseline trained for 20 epochs. Its generalization is further validated in a simulated lunar environment.

📝 Abstract
An increasing number of works fuse LiDAR and RGB data in the bird's-eye view (BEV) space for 3D object detection in autonomous driving systems. However, existing methods suffer from over-reliance on the LiDAR branch, with insufficient exploration of RGB information. To tackle this issue, we propose Fusion4CA, which is built upon the classic BEVFusion framework and dedicated to fully exploiting visual input with plug-and-play components. Specifically, a contrastive alignment module is designed to calibrate image features with 3D geometry, and a camera auxiliary branch is introduced to mine RGB information sufficiently during training. For further performance enhancement, we leverage an off-the-shelf cognitive adapter to make the most of pretrained image weights, and integrate a standard coordinate attention module into the fusion stage as a supplementary boost. Experiments on the nuScenes dataset demonstrate that our method achieves 69.7% mAP with only 6 training epochs and a mere 3.48% increase in inference parameters, yielding a 1.2% improvement over the baseline, which is fully trained for 20 epochs. Extensive experiments in a simulated lunar environment further validate the effectiveness and generalization of our method. Our code will be released through Fusion4CA.
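The abstract describes the contrastive alignment module only at a high level. One plausible formulation of such a module is a symmetric InfoNCE loss that pulls each image BEV feature toward its paired LiDAR geometry feature and pushes mismatched pairs apart. The sketch below illustrates that idea; the function names, pairing convention, and temperature value are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def log_softmax(z, axis):
    """Numerically stable log-softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))

def contrastive_alignment_loss(img_feats, geo_feats, tau=0.07):
    """Symmetric InfoNCE over N paired (image, geometry) feature vectors.

    img_feats, geo_feats: (N, D) arrays; row i of each array is a
    matched pair, so matched pairs sit on the diagonal of the
    similarity matrix. tau is an assumed temperature.
    """
    a = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    b = geo_feats / np.linalg.norm(geo_feats, axis=1, keepdims=True)
    logits = a @ b.T / tau                                 # (N, N) cosine similarities
    i2g = -np.mean(np.diag(log_softmax(logits, axis=1)))   # image -> geometry direction
    g2i = -np.mean(np.diag(log_softmax(logits, axis=0)))   # geometry -> image direction
    return 0.5 * (i2g + g2i)
```

Minimizing a loss of this shape drives paired image and LiDAR features together in the shared space, which is one way an image branch can be calibrated against 3D geometry without changing inference-time architecture.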
Problem

Research questions and friction points this paper is trying to address.

3D object detection
LiDAR-RGB fusion
bird's-eye view
image exploitation
autonomous driving
Innovation

Methods, ideas, or system contributions that make the work stand out.

BEV fusion
contrastive alignment
camera auxiliary branch
cognitive adapter
coordinate attention
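Of the components listed above, coordinate attention is the one the abstract explicitly calls a standard off-the-shelf module. A minimal NumPy sketch of the standard mechanism (pool along each spatial axis, mix channels through a shared reduction, then gate rows and columns separately) is given below; the weight shapes and reduction ratio are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x, w_reduce, w_h, w_w):
    """Coordinate attention over a single (C, H, W) feature map.

    Pools along each spatial axis separately, mixes channels with a
    shared 1x1 projection (w_reduce, shape (C//r, C)), then produces
    per-row and per-column attention maps that rescale the input.
    """
    c, h, w = x.shape
    pool_h = x.mean(axis=2)                        # (C, H): average over width
    pool_w = x.mean(axis=1)                        # (C, W): average over height
    y = np.concatenate([pool_h, pool_w], axis=1)   # (C, H + W) joint encoding
    y = np.maximum(w_reduce @ y, 0.0)              # 1x1 conv + ReLU -> (C//r, H + W)
    a_h = sigmoid(w_h @ y[:, :h])                  # (C, H) row attention in (0, 1)
    a_w = sigmoid(w_w @ y[:, h:])                  # (C, W) column attention in (0, 1)
    return x * a_h[:, :, None] * a_w[:, None, :]   # gate each position by row x column
```

Because the two gates factor over rows and columns, the module keeps positional information along both BEV axes at negligible parameter cost, which fits the paper's emphasis on a lightweight, plug-and-play fusion stage.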
Kang Luo
IRMV Lab, the Department of Automation, Shanghai Jiao Tong University
Xin Chen
Yangyi Xiao
IRMV Lab, the Department of Automation, Shanghai Jiao Tong University
Hesheng Wang
IRMV Lab, the Department of Automation, Shanghai Jiao Tong University