Methodology for a Statistical Analysis of Influencing Factors on 3D Object Detection Performance

📅 2024-11-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
The underlying mechanisms affecting 3D object detection performance in autonomous driving remain unclear, and attributing detection errors to specific factors is challenging. Method: This paper proposes a multi-granularity, interpretable statistical analysis framework for LiDAR- and camera-based 3D object detectors. It integrates univariate statistical analysis with explainable machine learning, specifically a random forest augmented by SHAP (SHapley Additive exPlanations) values, to explicitly model feature dependencies and overcome the limitations of conventional black-box analyses. Contribution/Results: On benchmarks including nuScenes, the framework systematically quantifies the contributions of object attributes (e.g., occlusion, distance) and environmental factors (e.g., illumination, weather) to detection errors, identifying the most influential factors. It reduces error prediction MAE by 23%, strengthening the quantitative support for pinpointing detection weaknesses and for safety verification.
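The univariate part of the analysis relates one meta-information factor at a time to the detection error. A minimal sketch of such a test, using a rank correlation on synthetic data (the factor name, error metric, and choice of statistic here are illustrative assumptions, not the paper's exact setup):

```python
# Sketch of a univariate factor-vs-error analysis on synthetic data.
# "distance" and the linear error model below are hypothetical placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical per-object meta-information: distance to the ego vehicle (m)
distance = rng.uniform(5, 80, size=500)
# Synthetic detection error that grows with distance, plus noise
error = 0.02 * distance + rng.normal(0, 0.3, size=500)

# Rank correlation quantifies the (monotonic) strength of influence
rho, p_value = spearmanr(distance, error)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.2e}")
```

Comparing such statistics across factors gives the per-factor "strength of influence" ranking described in the abstract, though each test ignores dependencies between factors.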

📝 Abstract
In automated driving, object detection is an essential task to perceive the environment by localizing and classifying objects. Most object detection algorithms are based on deep learning for superior performance. However, their black-box nature makes it challenging to ensure safety. In this paper, we propose a first-of-its-kind methodology for analyzing the influence of various factors related to the objects or the environment on the detection performance of both LiDAR- and camera-based 3D object detectors. We conduct a statistical univariate analysis between each factor and the detection error on pedestrians to compare their strength of influence. In addition to univariate analysis, we employ a Random Forest (RF) model to predict the errors of specific detectors based on the provided meta-information. To interpret the predictions of the RF and assess the importance of individual features, we compute Shapley Values. By considering feature dependencies, the RF captures more complex relationships between meta-information and detection errors, allowing a more nuanced analysis of the factors contributing to the observed errors. Recognizing the factors that influence detection performance helps identify performance insufficiencies in the trained object detector and supports the safe development of object detection systems.
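The multivariate step described above, fitting a Random Forest on meta-information to predict detection errors and then attributing the predictions to individual features, can be sketched as follows. This is a toy reconstruction on synthetic data: the feature names and error model are placeholders, and sklearn's permutation importance stands in for the SHAP values the paper actually computes.

```python
# Sketch: RF predicts detection error from meta-information, then feature
# importance is assessed. The paper uses Shapley values; permutation
# importance is swapped in here to keep the example self-contained.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-object meta-information (names are placeholders)
X = np.column_stack([
    rng.uniform(5, 80, n),   # distance to object (m)
    rng.integers(0, 4, n),   # occlusion level (0-3)
    rng.integers(0, 2, n),   # night-time flag
])
# Synthetic detection error dominated by distance and occlusion
y = 0.02 * X[:, 0] + 0.15 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Held-out importance: how much shuffling each feature degrades the error model
result = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["distance", "occlusion", "night"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Because the RF models all factors jointly, this step can capture interactions between meta-information features that the univariate tests cannot, which is the nuance the abstract attributes to the RF-plus-Shapley analysis.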
Problem

Research questions and friction points this paper is trying to address.

LiDAR-Camera Fusion
3D Object Detection
Autonomous Vehicles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Statistical Method
LiDAR and Camera-based 3D Object Detection
Random Forest and Shapley Value Analysis
Anton Kuznietsov — Institute of Automotive Engineering, Technical University of Darmstadt, Darmstadt, Germany
Dirk Schweickard — Institute of Automotive Engineering, Technical University of Darmstadt, Darmstadt, Germany
Steven Peters — Automotive Engineering, TU Darmstadt