🤖 AI Summary
This paper addresses the challenge of evaluating control latency in the remote operation of connected and automated vehicles (CAVs). We propose the first end-to-end, architecture-agnostic Motion-to-Motion (M2M) latency measurement framework, which formally defines and quantifies the delay between a remote operator’s steering input and the vehicle’s corresponding steering execution. Our method employs Hall-effect sensors and two synchronized Raspberry Pi 5 devices, using interrupt-driven timestamping to capture high-precision events at both the human and vehicle ends, with a measurement accuracy of 10–15 ms. Moving beyond conventional video-link-centric latency analysis, our real-world tests reveal that actuator response dominates M2M latency, with median delays above 750 ms. This work establishes a reproducible, standardized benchmark tool for rigorously assessing real-time performance in remote driving systems.
📝 Abstract
Latency is a key performance factor for the teleoperation of Connected and Autonomous Vehicles (CAVs). It affects how quickly an operator can perceive changes in the driving environment and apply corrective actions. Most existing work focuses on Glass-to-Glass (G2G) latency, which captures delays only in the video pipeline. However, there is no standard method for measuring Motion-to-Motion (M2M) latency, defined as the delay between the physical steering movement of the remote operator and the corresponding steering motion in the vehicle. This paper presents an M2M latency measurement framework that uses Hall-effect sensors and two synchronized Raspberry Pi 5 devices. The system records interrupt-based timestamps on both sides to estimate M2M latency, independently of the underlying teleoperation architecture. Precision tests show an accuracy of 10–15 ms, while field results indicate that actuator delays dominate M2M latency, with median values above 750 ms.
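The core of the measurement idea, pairing operator-side steering event timestamps with the next vehicle-side steering event recorded on a clock-synchronized device, can be sketched in Python. This is a minimal simulation under stated assumptions, not the authors' implementation: the hardware interrupt callbacks (e.g. Hall-sensor GPIO edges on the Raspberry Pi 5s) are replaced here by pre-recorded timestamp lists, and the function name `m2m_latencies` is illustrative.

```python
from statistics import median

def m2m_latencies(operator_ts, vehicle_ts):
    """Pair each operator-side steering event timestamp with the first
    vehicle-side steering event at or after it, and return the per-event
    delays in seconds. Assumes both timestamp streams come from clocks
    that are already synchronized, as in the paper's dual-Pi setup."""
    latencies = []
    i = 0
    for op_t in operator_ts:
        # Advance to the first vehicle event not earlier than this operator event.
        while i < len(vehicle_ts) and vehicle_ts[i] < op_t:
            i += 1
        if i == len(vehicle_ts):
            break  # no matching vehicle response recorded
        latencies.append(vehicle_ts[i] - op_t)
        i += 1  # each vehicle event is matched at most once
    return latencies

# Synthetic trace: three operator steering events and their delayed
# vehicle-side responses (timestamps in seconds, shared clock).
op = [0.00, 1.00, 2.00]
veh = [0.75, 1.80, 2.70]
lats = m2m_latencies(op, veh)
print(median(lats))  # → 0.75
```

With real hardware, the two timestamp lists would be filled by interrupt handlers on each Pi, and the accuracy of the result is bounded by the quality of the clock synchronization between the devices, which is why the paper reports a 10–15 ms measurement accuracy.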