🤖 AI Summary
Existing DNN compression methods lack robustness to the performance heterogeneity that arises among homogeneous edge devices (same SKU) from manufacturing variations, environmental shifts, and hardware aging. To address this, we propose HDAP, a hardware-aware pruning framework that (1) models the performance distribution across homogeneous devices for the first time, combining device clustering with surrogate-based evaluation to cut hardware profiling overhead while keeping the latency-accuracy trade-off consistent across all target devices, and (2) introduces hardware-aware structured pruning with multi-device joint optimization training. Evaluated on Jetson Xavier NX and Jetson Nano platforms, HDAP achieves an average 2.86× latency reduction for models including ResNet50, with under 0.5% top-1 accuracy degradation, significantly outperforming state-of-the-art methods and enabling scalable deployment across millions of homogeneous edge devices.
📝 Abstract
Deploying deep neural networks (DNNs) across homogeneous edge devices (devices with the same manufacturer-assigned SKU) often assumes identical performance among them. In practice, however, once a device model is widely deployed, individual devices drift apart in performance over time, owing to differences in user configurations, environmental conditions, manufacturing variances, battery degradation, and so on. Existing DNN compression methods do not account for this scenario and cannot guarantee good compression results on all homogeneous edge devices. To address this, we propose Homogeneous-Device Aware Pruning (HDAP), a hardware-aware DNN compression framework explicitly designed for homogeneous edge devices, which aims to optimize the average performance of the compressed model across all devices. Because hardware-aware evaluation on thousands or millions of homogeneous edge devices is prohibitively time-consuming, HDAP partitions the devices into a small number of device clusters, dramatically reducing the number of devices to evaluate, and replaces real-time hardware evaluation with surrogate-based evaluation. Extensive experiments on multiple device types (Jetson Xavier NX and Jetson Nano) and task types (image classification with ResNet50, MobileNetV1, ResNet56, and VGG16; object detection with YOLOv8n) demonstrate that HDAP consistently achieves lower average latency and competitive accuracy compared with state-of-the-art methods, with significant speedups (e.g., $2.86\times$ on ResNet50 at 1.0 G FLOPs). HDAP thus offers an effective, scalable solution for high-performance DNN deployment on homogeneous edge devices.
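The two ideas in HDAP's evaluation pipeline can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes each device is summarized by a scalar latency profile, uses 1-D k-means for device clustering, and stands in a simple least-squares line (latency as a function of FLOPs) for the surrogate. All function names and numbers are illustrative.

```python
import random

def kmeans_1d(latencies, k=2, iters=20, seed=0):
    """Cluster devices by a scalar latency profile (toy 1-D k-means)."""
    rng = random.Random(seed)
    centers = rng.sample(latencies, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in latencies:
            # assign each device to its nearest cluster center
            nearest = min(range(k), key=lambda c: abs(x - centers[c]))
            groups[nearest].append(x)
        # recompute centers; keep the old center if a cluster went empty
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

def fit_linear_surrogate(flops, latencies):
    """Least-squares fit latency ~ a * flops + b (stand-in surrogate model)."""
    n = len(flops)
    mx, my = sum(flops) / n, sum(latencies) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(flops, latencies))
         / sum((x - mx) ** 2 for x in flops))
    return a, my - a * mx

def avg_predicted_latency(candidate_flops, surrogates):
    """Score a pruned candidate by averaging surrogate predictions
    over cluster representatives instead of profiling every device."""
    return (sum(a * candidate_flops + b for a, b in surrogates)
            / len(surrogates))
```

During search, a pruned candidate is scored by `avg_predicted_latency` alone, so no per-device hardware measurement is needed in the inner loop; only the cluster surrogates require occasional real profiling to fit.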