AI Summary
Current white-box knowledge distillation for large language model compression faces two key bottlenecks: (1) misalignment between teacher and student output spaces, limiting effective knowledge transfer, and (2) poor adaptability to heterogeneous vocabularies. To address these, we propose a dual-space distillation framework featuring a unified prediction head, dual projector initialization, mutual hidden-state mapping, and, crucially, the novel Exact Token Alignment (ETA) algorithm. This enables vocabulary-agnostic, strategy-independent white-box distillation across disparate tokenizers. Notably, our method is the first to support fine-grained knowledge transfer under arbitrary tokenizer pairings. Extensive experiments on instruction-following, mathematical reasoning, and code generation tasks demonstrate that our approach consistently outperforms existing white-box and cross-tokenizer distillation methods, achieving superior student performance while preserving architectural flexibility.
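The core idea behind exact token alignment can be illustrated with a small, simplified sketch: given two tokenizations of the same text, match only those token pairs that cover identical character spans. This is a hypothetical toy implementation for intuition, not the paper's actual ETA algorithm, and the example tokenizations are invented.

```python
def exact_token_alignment(teacher_tokens, student_tokens):
    """Align tokens that cover identical character spans in the same
    underlying text (a simplified sketch of exact token alignment)."""
    def offsets(tokens):
        # Compute (start, end) character offsets for each token.
        spans, pos = [], 0
        for t in tokens:
            spans.append((pos, pos + len(t)))
            pos += len(t)
        return spans

    teacher_spans = {span: i for i, span in enumerate(offsets(teacher_tokens))}
    pairs = []
    for j, span in enumerate(offsets(student_tokens)):
        if span in teacher_spans:                 # exactly the same characters
            pairs.append((teacher_spans[span], j))  # (teacher_idx, student_idx)
    return pairs

# Two tokenizers split "unhappiness" differently; only "un" aligns exactly.
teacher = ["un", "happi", "ness"]
student = ["un", "happiness"]
print(exact_token_alignment(teacher, student))  # [(0, 0)]
```

Only exactly-matching tokens receive fine-grained distillation signal under such an alignment; the remaining positions would be handled by the broader framework.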
Abstract
Knowledge distillation (KD) is a promising approach for compressing large language models (LLMs) by transferring their knowledge to smaller models. During this process, white-box KD methods usually minimize the distance between the output distributions of the teacher model and the student model to transfer more information. However, we reveal that the current white-box KD framework exhibits two limitations: a) bridging probability distributions from different output spaces limits the similarity between the teacher model and the student model; b) the framework cannot be applied to LLMs with different vocabularies. A root cause of both limitations is that the teacher and student distributions used for KD are produced by different prediction heads, which yield distributions in different output spaces and with different dimensions. Therefore, in this paper, we propose a dual-space knowledge distillation (DSKD) framework that unifies the prediction heads of the teacher and the student models for KD. Specifically, we first introduce two projectors with ideal initialization to project the teacher/student hidden states into the student/teacher representation spaces. The projected hidden states from the two models can then share the same prediction head, unifying the output spaces of their distributions. Furthermore, we develop an exact token alignment (ETA) algorithm to align the same tokens in two differently-tokenized sequences. With these components, DSKD is a general KD framework that supports both off-policy and on-policy KD, as well as KD between any two LLMs regardless of their vocabularies. Extensive experiments on instruction-following, mathematical reasoning, and code generation benchmarks show that DSKD significantly outperforms existing methods based on the current white-box KD framework and surpasses other cross-tokenizer KD methods for LLMs with different vocabularies.
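The dual-space idea, a projector mapping one model's hidden state into the other's representation space so both distributions come from the *same* prediction head, can be sketched in a few lines. All dimensions, weights, and values below are hypothetical toy numbers chosen for illustration; the sketch uses plain Python instead of a deep-learning framework.

```python
import math

def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Hypothetical tiny dimensions: teacher hidden = 3, student hidden = 2, vocab = 4.
teacher_hidden = [0.5, -1.0, 2.0]
proj_t2s = [[0.1, 0.0, 0.2],     # projector: teacher space -> student space
            [0.0, 0.3, -0.1]]
student_head = [[1.0, 0.0],      # the student's prediction head, shared by both
                [0.0, 1.0],
                [0.5, 0.5],
                [-1.0, 1.0]]

# Project the teacher's hidden state into the student's space, then apply the
# student's head: the teacher's distribution now lives in the student's output
# space, so it can be compared to the student's distribution dimension-by-dimension.
projected = matvec(proj_t2s, teacher_hidden)
teacher_probs = softmax(matvec(student_head, projected))
print(len(teacher_probs), round(sum(teacher_probs), 6))  # 4 1.0
```

In the full framework this mapping is done in both directions (teacher-to-student and student-to-teacher), and the projectors are trained rather than fixed as here.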