🤖 AI Summary
Neuromorphic event sensors capture visual information as asynchronous, sparse pixel-level event streams. Their frameless nature renders conventional lossy compression and rate-adaptive transmission techniques ineffective, and existing work largely overlooks efficient event data transmission. To address this, we propose the first scalable, low-latency event video streaming system tailored for machine vision. Our approach introduces a novel event stream format and pioneers the adoption of Media over QUIC (MoQ) for event streaming, enabling end-to-end reliability, ultra-low latency, and dynamic rate adaptation. Combined with lightweight event encoding and adaptive flow control, the system reduces measured end-to-end transmission latency by 42% and sustains real-time throughput exceeding 10,000 events per second. This work bridges a critical gap in neuromorphic vision systems by co-optimizing perception and transmission.
📝 Abstract
Lossy compression and rate-adaptive streaming are mainstays of traditional video streams. However, a new class of neuromorphic "event" sensors records video as asynchronous pixel samples rather than image frames. These sensors are designed for computer vision applications rather than human video consumption. Until now, researchers have focused primarily on application development, leaving the crucial problem of data transmission largely unaddressed. We survey the landscape of event-based video systems, discuss the technical issues encountered in our recent scalable event streaming work, and propose a new low-latency event streaming format based on the latest additions to the Media over QUIC protocol draft.
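To make the frameless representation concrete: each event from such a sensor is typically a tuple of a timestamp, pixel coordinates, and a polarity bit (brightness increase or decrease). The sketch below packs events into a fixed-size binary record; the 13-byte layout and field widths are illustrative assumptions, not the stream format proposed in this work.

```python
import struct

# Hypothetical per-event record layout (NOT the paper's proposed format):
#   t (u64, microseconds), x (u16), y (u16), polarity (u8)
EVENT_FMT = "<QHHB"
EVENT_SIZE = struct.calcsize(EVENT_FMT)  # 13 bytes, no padding

def encode_events(events):
    """Pack an iterable of (t, x, y, polarity) tuples into a byte buffer."""
    return b"".join(struct.pack(EVENT_FMT, t, x, y, p) for t, x, y, p in events)

def decode_events(buf):
    """Unpack a byte buffer back into a list of (t, x, y, polarity) tuples."""
    return [struct.unpack_from(EVENT_FMT, buf, off)
            for off in range(0, len(buf), EVENT_SIZE)]
```

Because events arrive asynchronously and sparsely, a transport can batch whatever records accumulate in a short window into one object, rather than waiting to assemble a full image frame.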