🤖 AI Summary
For latency-sensitive autonomous applications such as autonomous driving, existing multi-XPU (CPU/GPU/FPGA) co-scheduling approaches suffer from coarse-grained, stage-agnostic orchestration. To address this, the paper proposes XAUTO, a runtime system built around the XNODE programming abstraction, which refines task granularity from the conventional module level to the algorithm-stage level, enabling unified modeling of heterogeneous XPUs and cross-device cooperative scheduling. XAUTO introduces a global resource-allocation mechanism and a real-time-aware, stage-level scheduling algorithm. Evaluation on a representative perception pipeline shows that XAUTO achieves a 1.61× end-to-end latency reduction over module-level frameworks such as ROS2, while improving multi-XPU resource utilization and latency determinism.
📝 Abstract
Modern autonomous applications are increasingly utilizing multiple heterogeneous processors (XPUs) to accelerate different stages of algorithm modules. However, existing runtime systems for these applications, such as ROS, can only perform module-level task management, lacking awareness of the fine-grained usage of multiple XPUs. This paper presents XAUTO, a runtime system designed to cooperatively manage XPUs for latency-sensitive autonomous applications. The key idea is a fine-grained, multi-XPU programming abstraction, XNODE, which aligns with the stage-level task granularity and can accommodate multiple XPU implementations. XAUTO holistically assigns XPUs to XNODEs and schedules their execution to minimize end-to-end latency. Experimental results show that XAUTO can reduce the end-to-end latency of a perception pipeline for autonomous driving by 1.61x compared to a state-of-the-art module-level scheduling system (ROS2).
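To make the XNODE idea concrete, the sketch below models the core notions described in the abstract: a stage-level task that carries several alternative XPU implementations, and a runtime that picks a device for each stage to keep end-to-end latency low. The class and function names (`XNode`, `assign_devices`), the device names, and the greedy load-balancing policy are illustrative assumptions, not XAUTO's actual API or scheduling algorithm.

```python
from dataclasses import dataclass


@dataclass
class XNode:
    """One algorithm stage with alternative XPU implementations.

    `impls` maps a device kind ("cpu", "gpu", "fpga") to the
    estimated latency (ms) of this stage's implementation on it.
    (Hypothetical stand-in for the XNODE abstraction.)
    """
    name: str
    impls: dict


def assign_devices(pipeline, devices):
    """Toy stage-level assignment (not XAUTO's algorithm).

    Greedily places each XNode on the device where its finish time
    (accumulated device load + stage latency) is smallest, and
    returns the plan plus the busiest device's load as a rough
    makespan estimate.
    """
    load = {d: 0.0 for d in devices}
    plan = {}
    for node in pipeline:
        best = min((d for d in devices if d in node.impls),
                   key=lambda d: load[d] + node.impls[d])
        plan[node.name] = best
        load[best] += node.impls[best]
    return plan, max(load.values())


# Example: a three-stage perception pipeline with per-device latencies.
pipeline = [
    XNode("detect", {"gpu": 8.0, "cpu": 30.0}),
    XNode("track", {"cpu": 5.0, "fpga": 4.0}),
    XNode("fuse", {"cpu": 6.0, "gpu": 7.0}),
]
plan, makespan = assign_devices(pipeline, ["cpu", "gpu", "fpga"])
```

In this example the three stages spread across all three XPUs (detect on GPU, track on FPGA, fuse on CPU), illustrating the cross-device collaboration that module-level schedulers, which would run a whole module on one device, cannot express.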