🤖 AI Summary
This work addresses the challenge of transmitting high-speed visual data in real time over bandwidth-constrained channels, particularly in robotic control scenarios where dynamics can quickly destabilize. The authors propose a novel paradigm termed optically passive vision compression (OPVC), which integrates an optically generated cosine transform with an event-based camera so that compressed-domain signals are produced directly in the optical hardware, enabling analog video compression without digital computation. Through rate-distortion analysis and comparative simulations, OPVC demonstrates superior reconstruction quality over a conventional standalone event camera system, with its performance advantage becoming more pronounced as the spatial resolution of the event camera increases. This approach offers a promising pathway toward low-power, bandwidth-efficient visual perception for resource-limited applications.
📝 Abstract
The use of remote vision sensors for autonomous decision-making poses the challenge of transmitting high-volume visual data over resource-constrained channels in real time. In robotics and control applications, many systems can destabilize quickly, exacerbating the issue by necessitating higher sampling frequencies. This work proposes a novel sensing paradigm in which an event camera observes the optically generated cosine transform of a visual scene, enabling high-speed, computation-free video compression inspired by modern video codecs. In this study, we simulate this optically passive vision compression (OPVC) scheme and compare its rate-distortion performance to that of a standalone event camera (SAEC). We find that the rate-distortion performance of the OPVC scheme surpasses that of the SAEC and that this performance gap increases as the spatial resolution of the event camera increases.
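The idea in the abstract can be caricatured in a few lines: an event sensor fires when the log intensity at a pixel changes by more than a threshold, and in OPVC the sensor watches the cosine transform of the scene rather than the scene itself. The sketch below is a toy simulation under stated assumptions only; the log-intensity event model, the orthonormal DCT-II standing in for the optical transform, the threshold, and the moving-blob scene are all illustrative choices, not the authors' experimental setup.

```python
import numpy as np

def dct2(x):
    """Orthonormal 2-D DCT-II of a square array (stand-in for the
    optically generated cosine transform assumed in this sketch)."""
    n = x.shape[0]
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    c[0, :] /= np.sqrt(2.0)  # orthonormal scaling of the DC row
    return c @ x @ c.T

def count_events(frames, threshold=0.2, eps=1e-6):
    """Simple event-camera model: an event fires wherever the log
    intensity changes by more than `threshold` since the last event
    at that location; the reference level then updates there."""
    ref = np.log(frames[0] + eps)
    events = 0
    for frame in frames[1:]:
        log_i = np.log(frame + eps)
        fired = np.abs(log_i - ref) > threshold
        events += int(fired.sum())
        ref = np.where(fired, log_i, ref)
    return events

def scene(t, n=64):
    """Toy video: a bright Gaussian blob drifting across the frame."""
    y, x = np.mgrid[0:n, 0:n]
    return 1.0 + np.exp(-((x - n / 2 - 5 * t) ** 2 + (y - n / 2) ** 2) / 50.0)

frames = np.array([scene(t) for t in range(10)])

# SAEC: events generated directly from pixel-domain intensities.
saec_events = count_events(frames)

# OPVC: events generated from the cosine transform of each frame;
# abs() stands in for the non-negative optical intensity at the sensor.
dct_frames = np.array([np.abs(dct2(f)) for f in frames])
opvc_events = count_events(dct_frames)

print(f"SAEC events: {saec_events}, OPVC events: {opvc_events}")
```

The event counts serve as a crude proxy for transmitted rate; the paper's actual comparison is a rate-distortion analysis of reconstruction quality, which this sketch does not attempt to reproduce.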