Modeling and Measuring Redundancy in Multisource Multimodal Data for Autonomous Driving

πŸ“… 2026-03-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the long-overlooked redundancy in multi-source, multi-modal data for autonomous driving, which adversely impacts perception performance. For the first time, redundancy is formalized as a quantifiable data quality factor. The study systematically analyzes redundancy among multiple cameras and between image–LiDAR modalities in the nuScenes and Argoverse 2 datasets, proposing a method to identify redundant labels based on field-of-view overlap and cross-modal consistency. Experiments within the YOLOv8 object detection framework demonstrate that removing 4.1–8.6% of redundant labels maintains stable performance while improving mAP50 by up to 0.04 in specific regions of nuScenes. These findings validate the efficacy of redundancy-aware data curation and advance a data-centric paradigm in autonomous driving research.
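The summary above describes identifying redundant labels via field-of-view overlap between cameras. As a minimal sketch of that idea (not the paper's actual implementation), labels from two cameras with a shared field of view can be compared in a common frame, and a label flagged as redundant when it overlaps an existing label beyond an IoU threshold. Box format, threshold, and function names here are illustrative assumptions:

```python
# Hypothetical sketch: flag labels as redundant when boxes from two
# cameras with overlapping fields of view cover the same object.
# Boxes are assumed to be (x_min, y_min, x_max, y_max) in a shared frame.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def redundant_labels(boxes_cam1, boxes_cam2, iou_thresh=0.5):
    """Indices of cam2 labels already covered by some cam1 label."""
    return [j for j, b in enumerate(boxes_cam2)
            if any(iou(a, b) >= iou_thresh for a in boxes_cam1)]
```

Pruning the returned indices from one camera's label set is one way the reported 4.1–8.6% label removal could be realized in practice.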

πŸ“ Abstract
Next-generation autonomous vehicles (AVs) rely on large volumes of multisource and multimodal ($M^2$) data to support real-time decision-making. In practice, data quality (DQ) varies across sources and modalities due to environmental conditions and sensor limitations, yet AV research has largely prioritized algorithm design over DQ analysis. This work focuses on redundancy as a fundamental but underexplored DQ issue in AV datasets. Using the nuScenes and Argoverse 2 (AV2) datasets, we model and measure redundancy in multisource camera data and multimodal image-LiDAR data, and evaluate how removing redundant labels affects the YOLOv8 object detection task. Experimental results show that selectively removing redundant multisource image object labels from cameras with shared fields of view improves detection. In nuScenes, mAP$_{50}$ improves from $0.66$ to $0.70$, from $0.64$ to $0.67$, and from $0.53$ to $0.55$ on three representative overlap regions, while detection on other overlapping camera pairs remains at the baseline even under stronger pruning. In AV2, $4.1$-$8.6\%$ of labels are removed, and mAP$_{50}$ stays near the $0.64$ baseline. Multimodal analysis also reveals substantial redundancy between image and LiDAR data. These findings demonstrate that redundancy is a measurable and actionable DQ factor with direct implications for AV performance. This work highlights the role of redundancy as a data quality factor in AV perception and motivates a data-centric perspective for evaluating and improving AV datasets. Code, data, and implementation details are publicly available at: https://github.com/yhZHOU515/RedundancyAD
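The abstract also reports measuring redundancy between image and LiDAR data. One common way to check such cross-modal consistency (a sketch under assumed calibration, not the authors' exact method) is to project LiDAR points into the image plane and count how many fall inside a 2D label box; a label with ample LiDAR support carries information the point cloud already provides. Here `K` (3x3 intrinsics), `T` (4x4 LiDAR-to-camera extrinsics), and the `min_points` threshold are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch of cross-modal consistency: project LiDAR points
# into the image and count how many land inside a 2D label box.

def project_points(points_lidar, K, T):
    """Project Nx3 LiDAR points to Mx2 pixel coordinates (M <= N;
    points behind the camera are dropped)."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T @ pts_h.T).T[:, :3]           # into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]       # keep points ahead of camera
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]              # perspective divide

def lidar_support(box, pixels, min_points=5):
    """True if enough projected LiDAR points fall inside the 2D box."""
    x0, y0, x1, y1 = box
    inside = ((pixels[:, 0] >= x0) & (pixels[:, 0] <= x1) &
              (pixels[:, 1] >= y0) & (pixels[:, 1] <= y1))
    return int(inside.sum()) >= min_points
```

In a redundancy analysis, the fraction of image labels passing `lidar_support` gives one simple measure of how much the two modalities cover the same objects.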
Problem

Research questions and friction points this paper is trying to address.

redundancy
data quality
multisource multimodal data
autonomous driving
object detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

redundancy
multisource multimodal data
data quality
autonomous driving
object detection
πŸ”Ž Similar Papers
No similar papers found.
Yuhan Zhou
Ph.D. student, University of North Texas
Data Quality, Health Informatics, Data Science
Mehri Sattari
Dept. of Information Science, University of North Texas, Denton, Texas, USA
Haihua Chen
Dept. of Data Science, University of North Texas, Denton, Texas, USA
Kewei Sha
Associate Professor, University of North Texas
Security and Privacy, Edge Computing, Blockchain, Data Quality and Analytics