🤖 AI Summary
Existing V2X datasets are limited in scale, sensor heterogeneity, and annotation quality, hindering progress in cooperative perception research. To address this, the authors introduce Mixed Signals, a large-scale, high-precision V2X cooperative perception dataset. It comprises 45.1k temporally aligned point clouds and 240.6k high-quality 3D bounding box annotations across 10 object classes, collected from three connected autonomous vehicles equipped with two different types of LiDAR sensors, plus a roadside unit with dual LiDARs. Precise multi-sensor calibration and cross-device registration ensure reliable alignment across heterogeneous platforms, and the authors provide detailed statistical analysis of dataset quality along with extensive benchmarks of existing V2X methods. Mixed Signals is among the largest and highest-quality publicly available datasets for V2X perception research.
📝 Abstract
Vehicle-to-everything (V2X) collaborative perception has emerged as a promising solution to address the limitations of single-vehicle perception systems. However, existing V2X datasets are limited in scope, diversity, and quality. To address these gaps, we present Mixed Signals, a comprehensive V2X dataset featuring 45.1k point clouds and 240.6k bounding boxes collected from three connected autonomous vehicles (CAVs) equipped with two different types of LiDAR sensors, plus a roadside unit with dual LiDARs. Our dataset provides precisely aligned point clouds and bounding box annotations across 10 classes, ensuring reliable data for perception training. We provide detailed statistical analysis of the quality of our dataset and extensively benchmark existing V2X methods on it. The Mixed Signals V2X dataset is one of the highest-quality, large-scale datasets publicly available for V2X perception research. Details are available at https://mixedsignalsdataset.cs.cornell.edu/.
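The "precisely aligned point clouds" described above rest on two generic operations that can be sketched in plain Python: matching frames across sensor streams by timestamp, and mapping each sensor's points into a shared coordinate frame via a 4x4 extrinsic matrix. This is an illustrative sketch only, not the dataset's actual tooling; the function names and the 50 ms tolerance are assumptions.

```python
def find_sync_pairs(ts_a, ts_b, tol=0.05):
    """Greedy nearest-timestamp matching between two sensor streams.

    ts_a, ts_b: sorted lists of frame timestamps in seconds.
    Returns (i, j) index pairs whose timestamps differ by less than tol.
    """
    pairs = []
    j = 0
    for i, t in enumerate(ts_a):
        # Advance j while the next timestamp in ts_b is at least as close to t.
        while j + 1 < len(ts_b) and abs(ts_b[j + 1] - t) <= abs(ts_b[j] - t):
            j += 1
        if abs(ts_b[j] - t) < tol:
            pairs.append((i, j))
    return pairs


def apply_extrinsic(points, T):
    """Transform (x, y, z) points into a shared frame with a 4x4 extrinsic T.

    points: iterable of (x, y, z) tuples; T: 4x4 nested-list homogeneous
    transform (rotation in the upper-left 3x3, translation in column 3).
    """
    out = []
    for x, y, z in points:
        out.append((
            T[0][0] * x + T[0][1] * y + T[0][2] * z + T[0][3],
            T[1][0] * x + T[1][1] * y + T[1][2] * z + T[1][3],
            T[2][0] * x + T[2][1] * y + T[2][2] * z + T[2][3],
        ))
    return out
```

In practice, each agent's LiDAR frames would first be matched temporally (e.g. CAV stream against roadside stream), then transformed through calibrated extrinsics into a common reference frame before joint annotation.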