SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models

📅 2024-10-25
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from high inference costs and substantial memory overhead; moreover, conventional knowledge distillation (KD) often leads to teacher misguidance due to noisy and biased student-generated outputs (SGOs), especially in long-sequence generation. Method: This paper proposes SWITCH, the first framework enabling dynamic, selective teacher intervention during sequence generation. It identifies divergence points via token-level probability discrepancies and employs a sequence-length-aware confidence gating mechanism to precisely correct low-confidence segments, integrated within a multi-stage distillation paradigm. Contribution/Results: Extensive experiments across three model families and five instruction-following benchmarks demonstrate that SWITCH significantly enhances long-text generation quality, achieving average improvements of 3.2–5.7 points in BLEU and ROUGE scores—outperforming all existing KD methods.

📝 Abstract
Despite the success of Large Language Models (LLMs), they still face challenges related to high inference costs and memory requirements. To address these issues, Knowledge Distillation (KD) has emerged as a popular method for model compression, with student-generated outputs (SGOs) being particularly notable for reducing the mismatch between training and inference. However, SGOs often produce noisy and biased sequences, which can lead to misguidance from the teacher model, especially in long sequences. To mitigate these challenges, we propose SWITCH (Studying WIth TeaCHer for Knowledge Distillation), a novel approach that strategically incorporates the teacher model during the student's sequence generation. SWITCH identifies discrepancies between the token probabilities of the teacher and student models, allowing the teacher to intervene selectively, particularly in long sequences that are more prone to teacher misguidance. Extensive experimental results across three model families and five instruction-following datasets show that SWITCH surpasses traditional KD methods, particularly excelling in the generation of long sequential data.
Problem

Research questions and friction points this paper is trying to address.

Reducing high inference costs and memory requirements of LLMs
Mitigating noisy and biased sequences in student-generated outputs
Improving teacher guidance in long sequence generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Teacher intervenes in student sequence generation
Identifies token probability discrepancies
Selective intervention for long sequences
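The core mechanism above, letting the teacher take over a decoding step only when its next-token distribution diverges sharply from the student's, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the total-variation distance and the `threshold` value are assumptions chosen for clarity, and the paper's actual intervention criterion may differ.

```python
import numpy as np

def switch_generate(student_probs, teacher_probs, threshold=0.3):
    """Sketch of SWITCH-style selective teacher intervention (greedy decoding).

    student_probs / teacher_probs: per-step next-token distributions.
    At each step, if the total-variation distance between the two
    distributions exceeds `threshold`, the teacher's top token is used;
    otherwise the student decodes on its own. Both the distance metric
    and the threshold are illustrative assumptions.
    """
    tokens, interventions = [], []
    for s, t in zip(student_probs, teacher_probs):
        s, t = np.asarray(s, dtype=float), np.asarray(t, dtype=float)
        tv = 0.5 * np.abs(s - t).sum()   # total-variation distance
        if tv > threshold:               # large discrepancy: teacher intervenes
            tokens.append(int(t.argmax()))
            interventions.append(True)
        else:                            # distributions agree: trust the student
            tokens.append(int(s.argmax()))
            interventions.append(False)
    return tokens, interventions
```

In this toy setup, steps where the student's distribution drifts (as tends to happen late in long sequences) trigger intervention, while agreeing steps are left to the student, which is the intuition behind confining the teacher's corrections to discrepancy points.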
Jahyun Koo
IPAI, Seoul National University
Yerin Hwang
IPAI, Seoul National University
Yongi-Mi Kim
LG AI Research
Taegwan Kang
LG AI Research
Hyunkyung Bae
NYU Courant
Kyomin Jung
Professor, Department of Electrical and Computer Engineering, Seoul National University
Machine Learning · Natural Language Processing · Social Network Analytics