🤖 AI Summary
To address the tension between the high deployment cost of large language models (LLMs) and the scarcity of labeled data for small models, this paper proposes LLKD, a pseudo-labeling-based knowledge distillation framework. LLKD uses an LLM as a teacher to generate pseudo-labels for unlabeled data and introduces an adaptive sample selection mechanism that jointly models the teacher's confidence and the student's information need, dynamically identifying high-quality, high-value training instances and mitigating noise from erroneous pseudo-labels. By combining teacher-side reliability signals with student-side uncertainty, LLKD improves both student model performance and data efficiency. Extensive experiments across multiple NLP benchmarks show that LLKD consistently outperforms knowledge distillation and pseudo-labeling baselines under the same labeling budget, improving accuracy and robustness while reducing reliance on costly human annotation.
📝 Abstract
In real-world NLP applications, Large Language Models (LLMs) offer promising solutions due to their extensive training on vast datasets. However, the large size and high computational demands of LLMs limit their practicality in many applications, especially when further fine-tuning is required. To address these limitations, smaller models are typically preferred for deployment, but their training is hindered by the scarcity of labeled data. In contrast, unlabeled data is often readily available and can be leveraged by using LLMs to generate pseudo-labels for training smaller models. This enables the smaller model (student) to acquire knowledge from the LLM (teacher) while reducing computational costs. The process introduces challenges, however, such as potentially noisy pseudo-labels. Selecting high-quality and informative data is therefore critical to enhance model performance while improving the efficiency of data utilization. To address this, we propose LLKD, which enables Learning with Less computational resources and less data for Knowledge Distillation from LLMs. LLKD is an adaptive sample selection method that incorporates signals from both the teacher and the student. Specifically, it prioritizes samples where the teacher demonstrates high confidence in its labeling, indicating reliable labels, and where the student exhibits a high information need, identifying challenging samples that require further learning. Our comprehensive experiments show that LLKD achieves superior performance across various datasets with higher data efficiency.
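The selection criterion described above, keeping samples where the teacher is confident and the student is uncertain, can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the function name, the use of max softmax probability as teacher confidence, normalized predictive entropy as student information need, and the fixed thresholds are all assumptions for the sake of the example (the paper's selection is adaptive rather than threshold-based).

```python
import numpy as np

def select_samples(teacher_probs, student_probs,
                   conf_thresh=0.9, info_thresh=0.5):
    """Hypothetical sketch of LLKD-style sample selection.

    teacher_probs, student_probs: (N, C) arrays of softmax
    distributions over C classes for N unlabeled samples.
    Returns indices of samples to keep for distillation.
    """
    # Teacher confidence: probability of the pseudo-label
    # (the teacher's top class).
    teacher_conf = teacher_probs.max(axis=1)

    # Student information need: predictive entropy,
    # normalized by log(C) so it lies in [0, 1].
    ent = -(student_probs * np.log(student_probs + 1e-12)).sum(axis=1)
    info_need = ent / np.log(student_probs.shape[1])

    # Keep samples that are reliably labeled (confident teacher)
    # AND still challenging for the student (high entropy).
    mask = (teacher_conf >= conf_thresh) & (info_need >= info_thresh)
    return np.flatnonzero(mask)
```

For instance, a sample the teacher labels with probability 0.95 but on which the student is split 50/50 would be selected, while a sample the student already classifies confidently would be filtered out even if the teacher agrees.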