🤖 AI Summary
To address weak semantic preservation and degraded reasoning performance in large language model (LLM) knowledge distillation—caused by teacher-student representation mismatch—this paper proposes a novel distillation framework integrating feature alignment with hierarchical representation transfer. Methodologically, it couples fine-grained hidden-layer feature-space alignment via contrastive learning, gradient-aware dynamic scheduling of representation-transfer weights, and a modular decoupled distillation mechanism—thereby overcoming the locality limitations of conventional logit- or attention-based distillation and enabling cross-depth semantic consistency modeling. Evaluated on LLaMA-2 → TinyLLaMA distillation, the student model retains 92.3% of the teacher's accuracy despite a 78% reduction in parameter count, while achieving a 3.1× inference speedup. These results significantly outperform existing state-of-the-art methods.
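The core training objective described above—softened-logit distillation combined with a weighted hidden-feature alignment term—can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the contrastive alignment is approximated here by a simple cosine-distance penalty, and `align_weight` stands in for the paper's gradient-aware dynamic schedule (a hypothetical fixed value in this sketch).

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with temperature scaling."""
    m = max(z / temperature for z in logits)
    exps = [math.exp(z / temperature - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def cosine_distance(u, v, eps=1e-12):
    """1 - cosine similarity; a simple stand-in for the feature-alignment loss."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u)) + eps
    norm_v = math.sqrt(sum(b * b for b in v)) + eps
    return 1.0 - dot / (norm_u * norm_v)

def distillation_loss(teacher_logits, student_logits,
                      teacher_feat, student_feat,
                      temperature=2.0, align_weight=0.5):
    """Softened-logit KL plus a weighted hidden-feature alignment term.

    The T^2 factor is the standard rescaling so that gradient magnitudes
    are comparable across temperatures."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    logit_term = (temperature ** 2) * kl_divergence(p_teacher, p_student)
    feat_term = cosine_distance(teacher_feat, student_feat)
    return logit_term + align_weight * feat_term
```

When the student matches the teacher exactly, both terms vanish; divergence in either logits or hidden features raises the loss, so the weight controls how much hidden-layer agreement is traded against output agreement.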
📝 Abstract
Knowledge distillation (KD) is a technique for transferring knowledge from complex teacher models to simpler student models, improving model efficiency while largely preserving accuracy. It has driven substantial advances in applications including image classification, object detection, language modeling, text classification, and sentiment analysis. Recent innovations in KD methods, such as attention-based approaches, block-wise logit distillation, and decoupled distillation, have notably improved student model performance. These techniques exploit stimulus complexity, attention mechanisms, and global information capture to optimize knowledge transfer. In addition, KD has proven effective in compressing large language models while preserving accuracy, reducing computational overhead, and improving inference speed. This survey synthesizes the latest literature, highlighting key findings, contributions, and future directions in knowledge distillation, to provide researchers and practitioners with insights into its evolving role in artificial intelligence and machine learning.