🤖 AI Summary
This work addresses two gaps: the significant performance degradation of existing visual perception systems under challenging conditions such as low illumination, and the absence of a large-scale autonomous driving dataset that supports integrating event cameras with deep learning. To this end, the authors introduce eAP, the largest event-based autonomous driving dataset to date, and propose a geometry-aware representation learning framework that, for the first time, effectively incorporates event data into a mainstream 3D vehicle detection network. This integration substantially improves robustness under complex lighting conditions. The proposed approach also enables real-time, high-performance event-driven time-to-contact (TTC) estimation at 200 frames per second.
📝 Abstract
Recent visual autonomous perception systems achieve remarkable performance with deep representation learning. However, they fail in scenarios with challenging illumination. While event cameras can mitigate this problem, there is a lack of a large-scale dataset for developing event-enhanced deep visual perception models in autonomous driving scenes. To address this gap, we present the eAP (event-enhanced Autonomous Perception) dataset, the largest dataset with event cameras for autonomous perception. We demonstrate how eAP can facilitate the study of different autonomous perception tasks, including 3D vehicle detection and object time-to-contact (TTC) estimation, through deep representation learning. Based on eAP, we demonstrate the first successful use of events to improve a popular 3D vehicle detection network in challenging illumination scenarios. eAP also enables a dedicated study of the representation learning problem of object TTC estimation. We show how a geometry-aware representation learning framework leads to the best event-based object TTC estimation network, operating at 200 FPS. The dataset, code, and pre-trained models will be made publicly available for future research.
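For readers unfamiliar with the task, time-to-contact has a simple geometric definition: for an object approaching at roughly constant velocity, TTC is τ = Z / (−dZ/dt), which equals the object's image scale divided by its scale-change rate, so it can be estimated from image measurements alone, without metric depth. The sketch below is a minimal illustration of this classical relation in plain Python; the function `ttc_from_scale` and its inputs are illustrative assumptions, not the paper's geometry-aware network or API.

```python
import math

def ttc_from_scale(s_prev: float, s_curr: float, dt: float) -> float:
    """Estimate time-to-contact (seconds) from the change in an object's image scale.

    Classical relation: image size s is proportional to 1/Z, so for constant
    approach velocity, tau = s / (ds/dt). This is a generic textbook sketch,
    not the eAP paper's method; `s_prev`/`s_curr` could be, e.g., bounding-box
    heights measured dt seconds apart.
    """
    ds_dt = (s_curr - s_prev) / dt  # rate of change of image scale
    if ds_dt <= 0:
        return math.inf  # object receding or static: no contact expected
    return s_curr / ds_dt

# Example: a box growing from 100 px to 105 px over 5 ms
# implies contact in 105 / (5 / 0.005) = 0.105 s.
print(ttc_from_scale(100.0, 105.0, 0.005))
```

Event cameras are attractive here because their microsecond-latency, asynchronous output lets such scale-change (or flow-divergence) cues be updated far faster than frame-based pipelines, which is what makes the 200 FPS operating rate reported in the abstract plausible.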