🤖 AI Summary
Deploying DNN inference on resource-constrained edge devices faces a performance bottleneck: model compression degrades accuracy, custom hardware incurs high cost and poor flexibility, and existing CPU–GPU hybrid inference approaches neglect operator-level computational characteristics. Method: This paper proposes a CPU–GPU hybrid scheduling framework that jointly exploits sparsity awareness and operator computational intensity. It integrates sparsity analysis with computational-intensity modeling, and designs a threshold predictor and a reinforcement learning–based dynamic scheduler, augmented by asynchronous execution and batch-size adaptation for hardware-state–aware real-time resource allocation. Contribution/Results: Experiments demonstrate average speedups of 1.22×–1.31× over baseline methods, up to 50.7× over CPU-only execution, and a 7%–16% reduction in energy per inference, significantly improving both energy efficiency and throughput.
📝 Abstract
The resource demands of deep neural network (DNN) models introduce significant performance challenges, especially when deployed on resource-constrained edge devices. Existing solutions like model compression often sacrifice accuracy, while specialized hardware remains costly and inflexible. Hybrid inference methods, meanwhile, typically overlook how operator characteristics impact performance. In this work, we present SparOA, a CPU-GPU hybrid inference framework that leverages both sparsity and computational intensity to optimize operator scheduling. SparOA addresses these challenges through three key components: (1) a threshold predictor that accurately determines optimal sparsity and computational intensity thresholds; (2) a reinforcement learning-based scheduler that dynamically optimizes resource allocation based on real-time hardware states; and (3) a hybrid inference engine that enhances efficiency through asynchronous execution and batch size optimization. Extensive results show that SparOA achieves an average speedup of 1.22-1.31x over all baselines, and outperforms CPU-only execution by up to 50.7x. SparOA also achieves the lowest energy per inference, consuming 7%-16% less energy than the SOTA co-execution baseline.
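To make the scheduling idea concrete, the core decision can be pictured as routing each operator to the CPU or GPU based on its sparsity and arithmetic intensity. The sketch below is a minimal illustration of that threshold-based placement, not SparOA's actual implementation; the `Operator` class, the threshold values, and the `place` function are all hypothetical, and the paper's learned threshold predictor and RL scheduler would replace the fixed constants used here.

```python
# Hypothetical sketch of sparsity/intensity-based operator placement.
# All names and threshold values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    sparsity: float      # fraction of zero weights/activations, in [0, 1]
    flops: float         # floating-point operations for this operator
    bytes_moved: float   # memory traffic in bytes

    @property
    def intensity(self) -> float:
        # Arithmetic intensity: FLOPs per byte of memory traffic.
        return self.flops / self.bytes_moved

def place(op: Operator,
          sparsity_thresh: float = 0.7,
          intensity_thresh: float = 10.0) -> str:
    """Route highly sparse, low-intensity operators to the CPU,
    where sparse formats pay off; keep dense, compute-heavy
    operators on the GPU. Thresholds are placeholders for the
    values a learned predictor would supply."""
    if op.sparsity >= sparsity_thresh and op.intensity < intensity_thresh:
        return "CPU"
    return "GPU"

ops = [
    Operator("sparse_fc", sparsity=0.9, flops=2e6, bytes_moved=1e6),   # intensity 2.0
    Operator("dense_conv", sparsity=0.1, flops=4e9, bytes_moved=2e7),  # intensity 200.0
]
for op in ops:
    print(f"{op.name} -> {place(op)}")
```

In a real system, a dynamic scheduler would also fold in runtime hardware state (utilization, memory pressure) rather than relying on static thresholds, which is precisely the gap the RL-based scheduler in the abstract targets.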