🤖 AI Summary
To address SLO violations in Edge Video Analytics (EVA) caused by resource constraints, network instability, and highly volatile workloads on edge devices, this paper proposes a co-optimization framework for real-time DNN inference. Our approach integrates: (1) fine-grained GPU resource allocation with spatio-temporal joint task scheduling to maximize resource co-location efficiency under strict SLO guarantees; and (2) adaptive dynamic batching coupled with multi-level, load-aware edge-cloud collaborative load balancing. Evaluated on a real-world edge platform, the framework achieves up to 10× higher throughput than the baselines, is markedly more robust to workload surges and weak-network conditions, and adapts to diverse EVA tasks in a plug-and-play manner, without model retraining or infrastructure modification.
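The spatial side of the scheduling problem above can be pictured as a bin-packing question: which GPU should host a new inference task so that it fits and no co-located task misses its SLO? The sketch below is purely illustrative (it is not OCTOPINF's algorithm); the best-fit heuristic, the linear contention model, and all names (`place_task`, `slowdown_per_util`) are assumptions for exposition:

```python
def place_task(demand, slo_ms, base_latency_ms, gpus, slowdown_per_util=1.0):
    """Best-fit GPU placement under an SLO check (illustrative sketch).

    demand            -- fraction of one GPU the task needs (0..1)
    slo_ms            -- the task's latency SLO
    base_latency_ms   -- latency when the task runs alone on a GPU
    gpus              -- list of dicts: {"id": ..., "util": current load 0..1}
    slowdown_per_util -- assumed linear latency inflation per unit of
                         co-located utilization (a crude contention model)
    """
    candidates = []
    for g in gpus:
        new_util = g["util"] + demand
        if new_util > 1.0:
            continue  # spatially infeasible on this GPU
        # Estimate latency under contention from already-placed work.
        est = base_latency_ms * (1.0 + slowdown_per_util * g["util"])
        if est <= slo_ms:
            # Best fit: prefer the GPU with the least leftover capacity,
            # keeping larger contiguous capacity free for future tasks.
            candidates.append((1.0 - new_util, g["id"]))
    if not candidates:
        return None  # no edge GPU fits; a real system might offload to the server
    candidates.sort()
    return candidates[0][1]
```

For example, with `gpus = [{"id": 0, "util": 0.5}, {"id": 1, "util": 0.2}]`, a task with `demand=0.4`, `slo_ms=100`, `base_latency_ms=50` lands on GPU 0 (the tighter fit), while a `demand=0.6` task only fits on GPU 1.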
📝 Abstract
Edge Video Analytics (EVA) has gained significant attention as a major application of pervasive computing, enabling real-time visual processing. EVA pipelines, composed of deep neural networks (DNNs), demand efficient inference serving under stringent latency requirements, which is challenging in dynamic edge environments (e.g., workload variability and network instability). Moreover, EVA pipelines face significant contention for constrained edge resources (e.g., GPUs). In this paper, we introduce OCTOPINF, a novel resource-efficient and workload-aware inference serving system designed for real-time EVA. OCTOPINF tackles the unique challenges of dynamic edge environments through fine-grained resource allocation, adaptive batching, and workload balancing between edge devices and servers. Furthermore, we propose a spatiotemporal scheduling algorithm that optimizes the co-location of inference tasks on GPUs, improving performance while ensuring service-level objective (SLO) compliance. Extensive evaluations on a real-world testbed demonstrate the effectiveness of our approach: it achieves an effective throughput increase of up to 10× over the baselines and shows better robustness in challenging scenarios. OCTOPINF can be applied to any DNN-based EVA inference task with minimal adaptation and is available at https://github.com/tungngreen/PipelineScheduler.
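The adaptive batching idea mentioned in the abstract can be sketched as a simple trade-off: larger batches raise GPU throughput but add latency, so a serving system picks the largest batch whose estimated latency still fits inside the remaining SLO budget. The sketch below is a hypothetical illustration, not OCTOPINF's actual batching policy; the linear latency model and all parameter names are assumptions:

```python
def choose_batch_size(slo_ms, wait_ms, per_item_ms, fixed_ms, max_batch=32):
    """Pick the largest batch whose estimated latency fits the SLO budget.

    slo_ms      -- end-to-end latency target for a request
    wait_ms     -- time the oldest queued request has already waited
    per_item_ms -- assumed marginal inference cost per batched item
    fixed_ms    -- assumed fixed kernel-launch / preprocessing overhead
    """
    budget = slo_ms - wait_ms  # latency headroom left for inference
    best = 1  # always serve at least the oldest request
    for b in range(1, max_batch + 1):
        est = fixed_ms + per_item_ms * b  # crude linear latency model
        if est <= budget:
            best = b
        else:
            break
    return best
```

For instance, with a 100 ms SLO, 20 ms already spent queueing, 5 ms per item, and 10 ms fixed overhead, the remaining 80 ms budget admits a batch of 14; under heavy queueing the policy degrades gracefully toward batch size 1.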