GUIDE: Guided Initialization and Distillation of Embeddings

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In knowledge distillation, student models typically only mimic teacher outputs (e.g., logits), failing to inherit the teacher’s internal representational capacity. Method: This paper proposes GUIDE—the first distillation method extended into parameter space—leveraging embedding-layer-guided initialization and parameter-space alignment to enable direct inheritance of the teacher’s internal representation structure, rather than output-level matching alone. GUIDE incurs no additional training or inference overhead and integrates seamlessly with standard knowledge distillation. Results: Evaluated on 400M–1B parameter language models, GUIDE reduces the teacher–student quality gap by 25%–26% using only ~20B training tokens. When applied standalone, it significantly outperforms conventional knowledge distillation, demonstrating its effectiveness and generality as a novel distillation paradigm.

📝 Abstract
Algorithmic efficiency techniques such as distillation (Hinton et al., 2015) are useful for improving model quality without increasing serving costs, provided a larger teacher model is available for a smaller student model to learn from during training. Standard distillation methods are limited to forcing the student to match the teacher's outputs. Given the costs associated with training a large model, we believe we should extract more useful information from a teacher model than output matching alone provides. In this paper, we introduce GUIDE (Guided Initialization and Distillation of Embeddings). GUIDE can be considered a distillation technique that forces the student to match the teacher in the parameter space. Using GUIDE we show a 25–26% reduction in the teacher-student quality gap for large student models (400M–1B parameters) trained on ~20B tokens. We also present a thorough analysis demonstrating that GUIDE can be combined with knowledge distillation with near-additive improvements. Furthermore, we show that applying GUIDE alone leads to substantially better model quality than applying knowledge distillation by itself. Most importantly, GUIDE introduces no training or inference overhead, so any model quality gains from our method are virtually free.
Problem

Research questions and friction points this paper is trying to address.

Enhancing student model quality beyond standard distillation limitations
Reducing teacher-student performance gap through parameter space alignment
Achieving quality improvements without training or inference overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Guided initialization and distillation of embeddings
Forces student to match teacher in parameter space
Introduces no training or inference overhead
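The paper's exact initialization procedure is not spelled out in this summary, but the core idea of embedding-layer-guided initialization can be sketched. The snippet below is a plausible illustration, not the authors' implementation: it initializes a narrower student embedding matrix from a teacher's by projecting the teacher embeddings onto their top singular directions, so the student starts from the dominant structure of the teacher's representation space. The helper name `guided_embedding_init` is hypothetical.

```python
import numpy as np

def guided_embedding_init(teacher_emb: np.ndarray, student_dim: int) -> np.ndarray:
    """Initialize a (vocab_size x student_dim) student embedding matrix
    from a (vocab_size x teacher_dim) teacher embedding matrix.

    Hypothetical sketch: project the teacher embeddings onto their top
    `student_dim` singular directions, preserving the dominant structure
    of the teacher's representation space.
    """
    # Center the rows so the SVD captures variance rather than the mean offset.
    mean = teacher_emb.mean(axis=0, keepdims=True)
    u, s, _ = np.linalg.svd(teacher_emb - mean, full_matrices=False)
    # Keep the top student_dim left-singular vectors, rescaled by their
    # singular values, as the student's initial embedding table.
    return u[:, :student_dim] * s[:student_dim]

# Toy example: 1000-word vocab, teacher dim 64, student dim 16.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(1000, 64))
student = guided_embedding_init(teacher, 16)
print(student.shape)  # (1000, 16)
```

Because this runs once before training, it adds no training or inference overhead, consistent with the paper's claim; whatever alignment the method actually uses, the cost model is the same.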
Khoa Trinh
Google Research
Theoretical Computer Science · Approximation Algorithms · Randomized Algorithms · Machine Learning
Gaurav Menghani
Google Research, Mountain View, CA
Erik Vee
Google Research, Mountain View, CA