🤖 AI Summary
To address the high inference cost of deep neural networks, particularly CNNs, and the difficulty of deploying them in real time on edge devices, this paper proposes an efficient classification framework that integrates early-exit mechanisms with entropy-regularized knowledge distillation. The core contribution is a novel regularization term in the distillation loss, derived from the entropy of samples misclassified by the teacher model; this term explicitly guides the student model to learn more discriminative feature representations across all exit branches, improving both the reliability and generalization of early-exit decisions. The method enables dynamic computational path selection and cross-layer knowledge transfer. Evaluated on CIFAR-10, CIFAR-100, and SVHN, it achieves an average 42.3% reduction in FLOPs while surpassing state-of-the-art early-exit baselines in classification accuracy by 1.2%–2.7%. The approach thus strikes an effective balance between efficiency and accuracy, making it well suited for resource-constrained edge environments.
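For illustration, the PyTorch-style sketch below shows one plausible way such a loss could be structured: a per-exit cross-entropy plus a temperature-scaled distillation term, with an entropy-based penalty applied only to samples the teacher misclassifies. The function name, the hyperparameters `T`, `alpha`, and `beta`, and the exact form of the entropy term are assumptions made for illustration, not the paper's definition.

```python
# Hypothetical sketch of an entropy-regularized distillation loss for an
# early-exit student. The hyperparameters and the exact entropy term are
# illustrative assumptions, not the formulation used in the paper.
import torch
import torch.nn.functional as F

def entropy_regularized_kd_loss(student_exit_logits, teacher_logits, labels,
                                T=4.0, alpha=0.7, beta=0.1):
    """student_exit_logits: list of [B, C] logit tensors, one per exit branch.
    teacher_logits: [B, C] logits from the (frozen) teacher.
    labels: [B] ground-truth class indices."""
    teacher_wrong = teacher_logits.argmax(dim=1) != labels        # [B] bool mask
    teacher_prob = F.softmax(teacher_logits / T, dim=1)           # softened targets

    total = 0.0
    for logits in student_exit_logits:
        # Standard cross-entropy against the ground-truth labels.
        ce = F.cross_entropy(logits, labels)

        # Per-sample KD term: KL(teacher || student) with temperature scaling.
        log_student = F.log_softmax(logits / T, dim=1)
        kd = F.kl_div(log_student, teacher_prob, reduction="none").sum(dim=1) * T * T

        # Distil from the teacher only where it is correct.
        kd_term = kd[~teacher_wrong].mean() if (~teacher_wrong).any() else logits.new_zeros(())

        # Entropy of the student prediction on teacher-misclassified samples:
        # penalising it pushes the student toward confident, discriminative
        # outputs exactly where the teacher's guidance is unreliable
        # (one plausible reading of the paper's entropy-based loss).
        student_prob = F.softmax(logits, dim=1)
        entropy = -(student_prob * student_prob.clamp_min(1e-8).log()).sum(dim=1)
        ent_term = entropy[teacher_wrong].mean() if teacher_wrong.any() else logits.new_zeros(())

        total = total + ce + alpha * kd_term + beta * ent_term

    # Average the loss over all exit branches.
    return total / len(student_exit_logits)
```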
📝 Abstract
Although deep neural networks, and in particular Convolutional Neural Networks, have demonstrated state-of-the-art performance in image classification with relatively high efficiency, they still exhibit high computational costs, often rendering them impractical for real-time and edge applications. A multitude of compression techniques have therefore been developed to reduce these costs while maintaining accuracy. In addition, dynamic architectures have been introduced to modulate the level of compression at execution time, a desirable property in many resource-limited application scenarios. The proposed method integrates two well-established optimization techniques, early exits and knowledge distillation, in which a reduced student early-exit model is trained from a more complex teacher early-exit model. The primary contribution of this research lies in how the student early-exit model is trained: in addition to the conventional Knowledge Distillation loss, our approach incorporates a new entropy-based loss for images that the teacher misclassifies. The proposed method optimizes the trade-off between accuracy and efficiency, achieving significant reductions in computational complexity without compromising classification performance. Experimental results on the image classification datasets CIFAR-10, CIFAR-100 and SVHN substantiate this approach and open new research perspectives for Knowledge Distillation in other contexts.
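To illustrate the early-exit side of the method, the sketch below applies a common entropy-thresholding rule at inference time: the first branch whose prediction entropy falls below a threshold produces the output. The exit criterion, the threshold value, and the `predict_with_early_exit` helper are illustrative assumptions; the abstract does not specify the paper's actual exit policy.

```python
# Minimal sketch of entropy-thresholded early exiting at inference time.
# The exit criterion and threshold are common choices for early-exit
# networks, assumed here for illustration rather than taken from the paper.
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_with_early_exit(model, x, threshold=0.5):
    """model(x) is assumed to return a list of logits, one per exit branch,
    ordered from the shallowest branch to the final classifier.
    Handles a single input sample (batch size 1)."""
    exit_logits = model(x)                       # list of [1, C] tensors
    for i, logits in enumerate(exit_logits):
        prob = F.softmax(logits, dim=1)
        entropy = -(prob * prob.clamp_min(1e-8).log()).sum(dim=1)
        # Stop at the first branch whose prediction is confident enough,
        # skipping the remaining (more expensive) layers of the network.
        if entropy.item() < threshold:
            return prob.argmax(dim=1), i
    # Fall back to the final classifier if no branch was confident.
    return exit_logits[-1].argmax(dim=1), len(exit_logits) - 1
```

A lower threshold trades computation for accuracy: fewer samples exit early, so more of the network is executed on average.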