A Data-Driven Novelty Score for Diverse In-Vehicle Data Recording

📅 2025-07-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
In autonomous driving, data collection bias induces a “rarity curse”: common scenes are over-sampled while critical novel events (e.g., rare objects or anomalous traffic conditions) are severely under-represented, degrading model generalization and safety. Method: We propose an object-level, real-time novelty detection framework for in-vehicle imagery. A data-driven, dynamic Mean Shift algorithm models the normal visual distribution online from the mean and covariance of intra-frame object features and computes frame-level novelty scores in a streaming fashion; an adaptive thresholding mechanism suppresses redundant frames and actively captures rare samples. Contribution/Results: Running at 32 FPS, our method reduces training dataset size by up to 76% while improving downstream perception model mAP by 3.2%, demonstrating its efficacy in constructing balanced, robust datasets from highly redundant driving scenarios.
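The paper does not publish code, but the summary's core idea (score each frame's objects against a running "normal" model built from feature means and covariances) can be sketched. The sketch below is a plausible reading, not the authors' implementation: it assumes per-object embedding vectors and uses a squared Mahalanobis distance as the novelty score, with the frame score taken as the maximum over its objects.

```python
import numpy as np

def novelty_scores(frame_features, mean, cov, eps=1e-6):
    """Score each object's feature vector against the running 'normal' model.

    frame_features: (n_objects, d) array of per-object embeddings (assumed).
    mean, cov: running statistics describing normal content.
    Returns squared Mahalanobis distances: large values indicate novel objects.
    """
    cov_reg = cov + eps * np.eye(cov.shape[0])  # regularize for invertibility
    inv_cov = np.linalg.inv(cov_reg)
    diff = frame_features - mean
    # sum_jk diff[i,j] * inv_cov[j,k] * diff[i,k] for each object i
    return np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

def frame_novelty(frame_features, mean, cov):
    # Frame-level score: let the most novel object dominate (one plausible choice).
    return float(novelty_scores(frame_features, mean, cov).max())
```

With a zero mean and identity covariance this reduces to the squared Euclidean norm, so an object far from the normal cluster yields a large frame score.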

📝 Abstract
High-quality datasets are essential for training robust perception systems in autonomous driving. However, real-world data collection is often biased toward common scenes and objects, leaving novel cases underrepresented. This imbalance hinders model generalization and compromises safety. The core issue is the curse of rarity: novel events occur infrequently over time, and standard logging methods fail to capture them effectively. As a result, large volumes of redundant data are stored while critical novel cases are diluted, leading to biased datasets. This work presents a real-time data selection method focused on object-level novelty detection to build more balanced and diverse datasets. The method assigns a data-driven novelty score to image frames using a novel dynamic Mean Shift algorithm. It models normal content based on mean and covariance statistics to identify frames containing novel objects, discarding those with only redundant elements. The main findings show that reducing the training dataset size with this method can improve model performance, whereas higher redundancy tends to degrade it. Moreover, as data redundancy increases, more aggressive filtering becomes both possible and beneficial. While random sampling can offer some gains, it often leads to overfitting and unpredictable outcomes. The proposed method supports real-time deployment at 32 frames per second with a runtime that remains constant over time. By continuously updating the definition of normal content, it enables efficient detection of novelties in a continuous data stream.
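The abstract describes keeping frames with novel objects and discarding redundant ones. One plausible way to make that decision adaptive, as a sketch only (the paper's exact thresholding rule is not given here), is to keep a frame when its novelty score exceeds a running quantile of recently seen scores:

```python
import numpy as np
from collections import deque

class AdaptiveKeepFilter:
    """Keep a frame only if its novelty score exceeds a running quantile of
    recent scores. This is a hypothetical stand-in for the paper's adaptive
    thresholding mechanism, not the published rule."""

    def __init__(self, window=500, quantile=0.9):
        self.quantile = quantile
        self.history = deque(maxlen=window)  # sliding window of recent scores

    def keep(self, score):
        self.history.append(score)
        threshold = np.quantile(list(self.history), self.quantile)
        return bool(score >= threshold)
```

Because the threshold tracks the recent score distribution, the filter becomes more aggressive as the stream grows more redundant, which matches the abstract's observation that higher redundancy permits stronger filtering.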
Problem

Research questions and friction points this paper is trying to address.

Identifying novel objects in imbalanced driving datasets
Reducing data redundancy to improve model performance
Enabling real-time novelty detection in continuous data streams
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-time data selection for novelty detection
Dynamic Mean Shift algorithm for novelty scoring
Continuous update of normal content definition
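The "continuous update of normal content definition" above could be realized in several ways; an exponential moving average over per-frame feature statistics is one simple, constant-time option (an assumption on our part, since the paper's dynamic Mean Shift update rule is not reproduced here):

```python
import numpy as np

class RunningNormalModel:
    """EMA-style update of the 'normal content' mean and covariance.
    A hypothetical sketch; the paper's exact dynamic Mean Shift update
    is not specified in this summary."""

    def __init__(self, dim, alpha=0.05):
        self.alpha = alpha          # update rate: higher adapts faster
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)

    def update(self, features):
        # features: (n_objects, dim) embeddings from the current frame
        batch_mean = features.mean(axis=0)
        diff = features - batch_mean
        batch_cov = diff.T @ diff / max(len(features) - 1, 1)
        self.mean = (1 - self.alpha) * self.mean + self.alpha * batch_mean
        self.cov = (1 - self.alpha) * self.cov + self.alpha * batch_cov
```

Each update costs a fixed amount of work regardless of how many frames have been seen, which is consistent with the abstract's claim of constant runtime over a continuous data stream.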