SparOA: Sparse and Operator-aware Hybrid Scheduling for Edge DNN Inference

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deploying DNN inference on resource-constrained edge devices faces a three-way dilemma: model compression degrades accuracy, custom hardware incurs high cost and poor flexibility, and existing CPU–GPU hybrid inference approaches neglect operator-level computational characteristics. Method: the paper proposes a CPU–GPU hybrid scheduling framework that jointly exploits sparsity awareness and operator computational intensity. It integrates sparsity analysis with computational-intensity modeling, and designs a threshold predictor and a reinforcement learning–based dynamic scheduler, augmented by asynchronous execution and batch-size adaptation for hardware-state-aware real-time resource allocation. Contribution/Results: experiments demonstrate average speedups of 1.22×–1.31× over baseline methods, up to 50.7× speedup over CPU-only execution, and a 7%–16% reduction in energy per inference, significantly improving both energy efficiency and throughput.
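The core scheduling idea, routing each operator to CPU or GPU from its sparsity and computational intensity against learned thresholds, can be sketched as a simple rule. Everything below (the `Operator` fields, the `place` rule, and the threshold values) is an illustrative assumption, not SparOA's actual predictor:

```python
# Hypothetical sketch of operator-level CPU/GPU placement driven by
# sparsity and arithmetic intensity, in the spirit of SparOA.
# Names and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    nonzeros: int      # non-zero weights
    total: int         # total weights
    flops: int         # floating-point operations
    bytes_moved: int   # memory traffic in bytes

    @property
    def sparsity(self) -> float:
        return 1.0 - self.nonzeros / self.total

    @property
    def intensity(self) -> float:
        # Arithmetic intensity: FLOPs per byte of memory traffic.
        return self.flops / self.bytes_moved

def place(op: Operator, sparsity_thr: float = 0.7,
          intensity_thr: float = 10.0) -> str:
    """Route highly sparse or memory-bound operators to the CPU,
    dense compute-bound operators to the GPU."""
    if op.sparsity >= sparsity_thr or op.intensity < intensity_thr:
        return "CPU"
    return "GPU"

conv = Operator("conv1", nonzeros=9_000_000, total=10_000_000,
                flops=2_000_000_000, bytes_moved=40_000_000)
fc_pruned = Operator("fc_pruned", nonzeros=1_000_000, total=10_000_000,
                     flops=20_000_000, bytes_moved=40_000_000)
print(place(conv), place(fc_pruned))   # dense conv → GPU, sparse fc → CPU
```

In the paper the thresholds are not fixed constants as here but are produced by a learned threshold predictor; this sketch only shows how sparsity and intensity jointly drive the placement decision.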

📝 Abstract
The resource demands of deep neural network (DNN) models introduce significant performance challenges, especially when deployed on resource-constrained edge devices. Existing solutions like model compression often sacrifice accuracy, while specialized hardware remains costly and inflexible. Hybrid inference methods, meanwhile, typically overlook how operator characteristics impact performance. In this work, we present SparOA, a CPU-GPU hybrid inference framework that leverages both sparsity and computational intensity to optimize operator scheduling. SparOA addresses the aforementioned challenges through three key components: (1) a threshold predictor that accurately determines optimal sparsity and computational intensity thresholds; (2) a reinforcement learning-based scheduler that dynamically optimizes resource allocation based on real-time hardware states; and (3) a hybrid inference engine that enhances efficiency through asynchronous execution and batch size optimization. Extensive results show that SparOA achieves an average speedup of 1.22-1.31x over all baselines and outperforms CPU-only execution by up to 50.7x. SparOA also achieves the best energy per inference, consuming 7%-16% less energy than the state-of-the-art co-execution baseline.
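The second component, a scheduler that learns device assignments from real-time hardware states, can be illustrated with a tiny tabular Q-learning loop. The state discretization (GPU-utilization buckets), the reward (negative simulated latency), and all hyperparameters below are assumptions made for illustration, not SparOA's actual design:

```python
# Minimal tabular Q-learning sketch of a dynamic CPU/GPU scheduler.
# State = coarse GPU-load bucket; action = device; reward = -latency.
# All modeling choices here are illustrative assumptions.
import random

ACTIONS = ["CPU", "GPU"]

def bucket(gpu_util: float) -> int:
    """Discretize GPU utilization in [0, 1) into 4 load buckets."""
    return min(int(gpu_util * 4), 3)

q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}

def choose(state: int, eps: float = 0.1) -> str:
    # Epsilon-greedy action selection.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(state: int, action: str, reward: float, alpha: float = 0.5):
    best_next = max(q[(state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + 0.9 * best_next - q[(state, action)])

random.seed(0)
# Simulated feedback: the GPU is fast when lightly loaded, slow when saturated.
for _ in range(500):
    util = random.random()
    s = bucket(util)
    a = choose(s)
    latency = 1.0 if a == "CPU" else (0.2 if util < 0.75 else 2.0)
    update(s, a, -latency)

# Learned greedy policy per load bucket (typically GPU under light load,
# CPU when the GPU is saturated).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)})
```

The point of the sketch is the feedback loop: the scheduler observes hardware state, tries a placement, and reinforces whichever device actually delivered lower latency, rather than relying on a fixed offline rule.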
Problem

Research questions and friction points this paper is trying to address.

Optimizing DNN inference on resource-constrained edge devices
Addressing accuracy loss from model compression methods
Improving hybrid inference by leveraging operator characteristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses sparsity and computational intensity for scheduling
Employs reinforcement learning for dynamic resource allocation
Implements asynchronous execution with batch optimization
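The last two points, asynchronous execution and batch-size adaptation, can be sketched with per-device worker threads draining their own operator queues. The queue layout and the `adapt_batch` heuristic are hypothetical simplifications, not SparOA's inference engine:

```python
# Illustrative sketch (not SparOA's implementation) of asynchronous
# CPU/GPU co-execution: each device drains its own queue in a worker
# thread, while a heuristic adapts batch size to queue backlog.
import queue
import threading
import time

cpu_q, gpu_q = queue.Queue(), queue.Queue()
done = []

def worker(name, q, per_op_ms):
    while True:
        op = q.get()
        if op is None:          # sentinel: shut down this worker
            break
        time.sleep(per_op_ms / 1000)   # stand-in for kernel execution
        done.append((name, op))

threads = [
    threading.Thread(target=worker, args=("CPU", cpu_q, 2)),
    threading.Thread(target=worker, args=("GPU", gpu_q, 1)),
]
for t in threads:
    t.start()

def dispatch(op_id: int, sparse: bool):
    """Sparse operators go to the CPU queue, dense ones to the GPU queue."""
    (cpu_q if sparse else gpu_q).put(op_id)

def adapt_batch(base: int, backlog: int, cap: int = 32) -> int:
    """Grow batch size when the queue is drained, shrink under backlog."""
    return max(1, min(cap, base * 2 if backlog == 0 else base // 2))

for i in range(8):
    dispatch(i, sparse=(i % 2 == 0))

for q_ in (cpu_q, gpu_q):
    q_.put(None)
for t in threads:
    t.join()
print(len(done))   # all 8 operators executed across both devices
```

Because the two queues are consumed concurrently, a slow sparse operator on the CPU no longer blocks dense operators waiting for the GPU, which is the basic win that asynchronous co-execution provides.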
Ziyang Zhang
Politecnico di Milano, Milan, Italy
Jie Liu
Harbin Institute of Technology, Shenzhen, China
Luca Mottola
Professor, Politecnico di Milano
Battery-less IoT · Intermittent Computing · Nano Satellites · Mobile Robotics · Smart Cities