OCTOPINF: Workload-Aware Inference Serving for Edge Video Analytics

📅 2025-02-03
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address SLO violations in Edge Video Analytics (EVA) caused by resource constraints, network instability, and highly volatile workloads on edge devices, this paper proposes a co-optimization framework for real-time DNN inference. The approach integrates: (1) fine-grained GPU resource allocation with spatio-temporal joint task scheduling to maximize resource co-location efficiency under strict SLO guarantees; and (2) adaptive dynamic batching coupled with multi-level, load-aware edge-cloud collaborative load balancing. Evaluated on a real-world edge platform, the framework achieves up to 10× higher throughput than the baselines, significantly improves robustness against workload surges and weak-network conditions, and enables plug-and-play adaptation across diverse EVA tasks, without requiring model retraining or infrastructure modification.

๐Ÿ“ Abstract
Edge Video Analytics (EVA) has gained significant attention as a major application of pervasive computing, enabling real-time visual processing. EVA pipelines, composed of deep neural networks (DNNs), typically demand efficient inference serving under stringent latency requirements, which is challenging due to dynamic Edge environments (e.g., workload variability and network instability). Moreover, EVA pipelines also face significant resource contention caused by resource (e.g., GPU) constraints at the Edge. In this paper, we introduce OCTOPINF, a novel resource-efficient and workload-aware inference serving system designed for real-time EVA. OCTOPINF tackles the unique challenges of dynamic edge environments through fine-grained resource allocation, adaptive batching, and workload balancing between edge devices and servers. Furthermore, we propose a spatiotemporal scheduling algorithm that optimizes the co-location of inference tasks on GPUs, improving performance and ensuring service-level objective (SLO) compliance. Extensive evaluations on a real-world testbed demonstrate the effectiveness of our approach. It achieves an effective throughput increase of up to 10x compared to the baselines and shows better robustness in challenging scenarios. OCTOPINF can be used for any DNN-based EVA inference task with minimal adaptation and is available at https://github.com/tungngreen/PipelineScheduler.
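The adaptive batching mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the latency profile and SLO slack values below are hypothetical. The idea: given profiled per-batch inference latencies, pick the largest batch size whose latency still fits within the tightest remaining SLO budget among the queued requests.

```python
def pick_batch_size(queue_slack_ms, latency_profile_ms, max_batch=8):
    """Choose the largest batch size whose profiled latency fits the
    tightest remaining SLO slack among queued requests.

    queue_slack_ms: remaining SLO budget (ms) for each queued request.
    latency_profile_ms: dict mapping batch size -> measured latency (ms).
    """
    if not queue_slack_ms:
        return 0  # nothing queued, nothing to batch
    tightest = min(queue_slack_ms)
    best = 1  # always serve at least one request
    for b in range(2, min(max_batch, len(queue_slack_ms)) + 1):
        # Unprofiled batch sizes are treated as infeasible.
        if latency_profile_ms.get(b, float("inf")) <= tightest:
            best = b
    return best

# Hypothetical profile: larger batches cost more per pass but
# amortize better per frame.
profile = {1: 10.0, 2: 14.0, 4: 22.0, 8: 40.0}
print(pick_batch_size([50.0, 80.0, 30.0, 60.0], profile, max_batch=4))  # → 4
```

Here the tightest slack is 30 ms, so a batch of 4 (22 ms) is admitted while a batch of 8 (40 ms) would not be. A system like the one described would additionally re-profile latencies online and coordinate this choice with GPU co-location and edge-cloud load balancing.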
Problem

Research questions and friction points this paper is trying to address.

Real-time Video Analysis
Resource Allocation
Deep Neural Networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

OCTOPINF
GPU resource optimization
Edge Video Analytics (EVA)
Thanh-Tung Nguyen, School of Computing, KAIST, Republic of Korea
Lucas Liebe, School of Computing, KAIST, Republic of Korea
Nhat-Quang Tau, School of Computing, KAIST, Republic of Korea
Yuheng Wu, KAIST (Efficient AI, Embodied Intelligent System, Autonomous Driving)
Jinghan Cheng, School of Computing, KAIST, Republic of Korea
Dongman Lee, KAIST (Computer Networks, Ubiquitous Computing, Mobile Computing, Pervasive Computing)