๐ค AI Summary
To address the scarcity of high-quality, multi-view, annotated track datasets for autonomous racing perception research, this paper introduces RoRaTrackโthe first publicly available track detection dataset tailored to high-speed real-world vehicle scenarios. It is built from on-track imagery captured by a Dallara AV-21 race car at the Indianapolis Motor Speedway, explicitly encompassing challenging visual degradations such as motion blur, camera color bias, and absence of lane markings. We further propose RaceGAN, the first generative adversarial network specifically designed to model and mitigate high-speed racing visual degradations, incorporating synchronized multi-camera acquisition, fine-grained real-world annotation, and degradation-aware data augmentation. Experiments demonstrate that RaceGAN significantly outperforms existing state-of-the-art methods on track detection. The dataset, annotations, and source code are fully open-sourced to foster standardization and reproducibility in autonomous racing perception research.
๐ Abstract
A significant challenge in racing-related research is the lack of publicly available datasets containing raw images with corresponding annotations for the downstream task. In this paper, we introduce RoRaTrack, a novel dataset that contains annotated multi-camera image data from racing scenarios for track detection. The data is collected on a Dallara AV-21 at a racing circuit in Indiana, in collaboration with the Indy Autonomous Challenge (IAC). RoRaTrack addresses common problems such as blurriness due to high speed, color inversion from the camera, and absence of lane markings on the track. Consequently, we propose RaceGAN, a baseline model based on a Generative Adversarial Network (GAN) that effectively addresses these challenges. The proposed model demonstrates superior performance compared to current state-of-the-art machine learning models in track detection. The dataset and code for this work are available at github.com/RaceGAN.