Skrull: Towards Efficient Long Context Fine-tuning through Dynamic Data Scheduling

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In long-context supervised fine-tuning (Long-SFT), training on mixed-length sequences induces severe computational load imbalance, preventing existing systems from achieving high efficiency for long and short sequences simultaneously. To address this, the paper formulates Long-SFT data scheduling as a joint optimization problem and proposes a lightweight online dynamic scheduling algorithm that enables near-zero-overhead, length-adaptive batching. The method, built atop DeepSpeed, integrates real-time sequence-length awareness, compute-memory trade-off analysis, and dynamic batch construction. Evaluated on realistic Long-SFT workloads, it achieves an average end-to-end training speedup of 3.76×, peaking at 7.54×, substantially outperforming native DeepSpeed. This work delivers a scalable, system-level solution for efficient long-context fine-tuning.
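The core idea of length-adaptive batching can be illustrated with a toy sketch. The code below is NOT Skrull's actual algorithm (the paper's scheduler solves a joint optimization with compute-memory trade-offs); it is a minimal greedy load-balancing heuristic, with a hypothetical `balance_batches` helper, showing why scheduling mixed-length sequences by token load rather than by sequence count reduces worker imbalance:

```python
# Hypothetical illustration of length-aware scheduling (not Skrull's
# actual algorithm): spread mixed-length sequences across workers so
# per-worker token loads stay balanced, using a greedy longest-first
# (LPT) heuristic.

def balance_batches(seq_lens, num_workers):
    """Assign each sequence, longest first, to the currently
    lightest-loaded worker; return per-worker batches and token loads."""
    batches = [[] for _ in range(num_workers)]
    loads = [0] * num_workers
    for length in sorted(seq_lens, reverse=True):
        w = loads.index(min(loads))  # lightest worker so far
        batches[w].append(length)
        loads[w] += length
    return batches, loads

if __name__ == "__main__":
    # A Long-SFT-style mix: a few long sequences, many short ones.
    lens = [32768, 16384, 2048, 1024, 1024, 512, 512, 256]
    batches, loads = balance_batches(lens, num_workers=2)
    print(loads)  # one worker takes the longest sequence alone;
                  # the other absorbs the remaining shorter ones
```

A naive round-robin split of the same workload would pair the two longest sequences against mostly short ones, leaving one worker idle while the other finishes; balancing by token load narrows that gap, which is the intuition behind scheduling long and short sequences jointly.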

📝 Abstract
Long-context supervised fine-tuning (Long-SFT) plays a vital role in enhancing the performance of large language models (LLMs) on long-context tasks. To smoothly adapt LLMs to long-context scenarios, this process typically entails training on mixed datasets containing both long and short sequences. However, this heterogeneous sequence-length distribution poses significant challenges for existing training systems, which fail to achieve high training efficiency for both long and short sequences simultaneously, resulting in sub-optimal end-to-end system performance in Long-SFT. In this paper, we present a novel perspective on data scheduling to address the challenges posed by heterogeneous data distributions in Long-SFT. We propose Skrull, a dynamic data scheduler specifically designed for efficient Long-SFT. Through dynamic data scheduling, Skrull balances the computation requirements of long and short sequences, improving overall training efficiency. Furthermore, we formulate the scheduling process as a joint optimization problem and thoroughly analyze the trade-offs involved. Based on this analysis, Skrull employs a lightweight scheduling algorithm to achieve near-zero-cost online scheduling in Long-SFT. Finally, we implement Skrull upon DeepSpeed, a state-of-the-art distributed training system for LLMs. Experimental results demonstrate that Skrull outperforms DeepSpeed by 3.76x on average (up to 7.54x) in real-world Long-SFT scenarios.
Problem

Research questions and friction points this paper is trying to address.

Optimizing long-context fine-tuning efficiency with dynamic data scheduling
Balancing computation for mixed long and short sequence training
Reducing scheduling overhead in large language model fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic data scheduling for efficient Long-SFT
Lightweight algorithm for near-zero-cost online scheduling
Balanced computation across long and short sequences