Boomerang Distillation Enables Zero-Shot Model Size Interpolation

📅 2025-10-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Deploying large language models (LLMs) across diverse hardware constraints requires multiple model sizes, but conventional approaches, which train or distill each size independently, incur high computational cost and yield coarse-grained, inflexible model families. Method: the paper proposes a zero-shot model-size interpolation framework grounded in knowledge distillation and layer-block alignment. It leverages the "boomerang distillation" phenomenon: after knowledge is distilled from a large teacher into a small student, intermediate-sized models can be reconstructed by interpolating back toward the teacher, with no additional training. Layer-block reorganization and structured pruning are employed jointly to achieve fine-grained, smooth performance scaling. Contribution/Results: at matched parameter counts, the interpolated intermediate models match or surpass dedicated training and distillation baselines across diverse benchmarks. Experiments show strong effectiveness, generalization across architectures and tasks, and improved deployment flexibility, enabling on-the-fly adaptation to heterogeneous hardware without retraining.

📝 Abstract
Large language models (LLMs) are typically deployed under diverse memory and compute constraints. Existing approaches build model families by training each size independently, which is prohibitively expensive and provides only coarse-grained size options. In this work, we identify a novel phenomenon that we call boomerang distillation: starting from a large base model (the teacher), one first distills down to a small student and then progressively reconstructs intermediate-sized models by re-incorporating blocks of teacher layers into the student without any additional training. This process produces zero-shot interpolated models of many intermediate sizes whose performance scales smoothly between the student and teacher, often matching or surpassing pretrained or distilled models of the same size. We further analyze when this type of interpolation succeeds, showing that alignment between teacher and student through pruning and distillation is essential. Boomerang distillation thus provides a simple and efficient way to generate fine-grained model families, dramatically reducing training cost while enabling flexible adaptation across deployment environments. The code and models are available at https://github.com/dcml-lab/boomerang-distillation.
Problem

Research questions and friction points this paper is trying to address.

Enables zero-shot model size interpolation without additional training
Reduces training costs for creating fine-grained model families
Allows flexible adaptation across diverse deployment constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distills large teacher to small student
Reconstructs models by re-incorporating teacher blocks
Enables zero-shot interpolation without additional training
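The three steps above can be sketched schematically. In the sketch below, the layer names, block size, and the assumption that each student layer corresponds to one contiguous block of teacher layers are illustrative placeholders, not the authors' actual implementation; the point is only to show how swapping aligned blocks yields models of every intermediate depth with no training.

```python
# Schematic sketch of boomerang-style interpolation (hypothetical alignment,
# not the paper's code). Layers are represented by name strings only.

TEACHER_DEPTH = 12
BLOCK_SIZE = 3  # assumed: each student layer stands in for 3 teacher layers

teacher = [f"T{i}" for i in range(TEACHER_DEPTH)]
# Assumed alignment from pruning + distillation: student layer j <-> block j.
blocks = [teacher[i:i + BLOCK_SIZE] for i in range(0, TEACHER_DEPTH, BLOCK_SIZE)]
student = [f"S{j}" for j in range(len(blocks))]  # 4 distilled student layers

def interpolate(num_blocks_restored: int) -> list[str]:
    """Build a zero-shot intermediate model by re-incorporating the first
    `num_blocks_restored` teacher blocks in place of their student layers."""
    layers: list[str] = []
    for j in range(len(student)):
        if j < num_blocks_restored:
            layers.extend(blocks[j])   # restore the aligned teacher block
        else:
            layers.append(student[j])  # keep the distilled student layer
    return layers

# k = 0 recovers the student (4 layers); k = 4 recovers the teacher
# (12 layers); k = 1..3 give intermediate depths of 6, 8, and 10 layers.
```

Sweeping `num_blocks_restored` (and, in the paper, choosing which blocks to restore) is what produces the fine-grained family of intermediate sizes whose performance scales smoothly between student and teacher.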