🤖 AI Summary
This work addresses the scarcity and inconsistent annotation quality of cone detection data in Formula Student autonomous driving competitions. To this end, we introduce FSOCO—the first collaborative visual perception benchmark dataset specifically designed for Formula Student autonomous racing. Methodologically, we propose a "contribute-then-use" crowdsourcing framework involving student teams, complemented by standardized annotation guidelines, an automated image filtering tool, and a multi-tier quality control pipeline for both bounding-box and instance segmentation annotations. Our key contribution is a scalable, highly consistent, and continuously evolving paradigm for autonomous driving perception data curation. Experimental results demonstrate that detectors trained on FSOCO significantly outperform those trained on its unregulated, small-scale predecessor. The dataset is publicly available and has been adopted by over ten international Formula Student autonomous teams.
📝 Abstract
This paper presents the FSOCO dataset, a collaborative dataset for vision-based cone detection systems in Formula Student Driverless competitions. It contains human-annotated ground truth labels for both bounding boxes and instance-wise segmentation masks. The data buy-in philosophy of FSOCO asks student teams to contribute to the database before being granted access, ensuring continuous growth. Clear labeling guidelines and tools for sophisticated raw image selection guarantee that new annotations meet the desired quality. The effectiveness of the approach is shown by comparing prediction results of a network trained on FSOCO with one trained on its unregulated predecessor. The FSOCO dataset can be found at this http URL.