You Don't Need All Attentions: Distributed Dynamic Fine-Tuning for Foundation Models

📅 2025-04-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Fine-tuning foundation models on commercial devices with limited memory bandwidth incurs prohibitive computational and communication overhead. Method: This paper proposes D2FT, a Distributed Dynamic Fine-Tuning framework. Its core observation is that not all attention modules are needed for forward and backward propagation when fine-tuning foundation models, so some can be safely skipped; this motivates three dynamic, importance-aware selection strategies. D2FT further casts cross-device scheduling as a multiple knapsack optimization to balance workloads, and it extends naturally to parameter-efficient methods such as LoRA. Results: On CIFAR-10, CIFAR-100, and Stanford Cars, D2FT reduces training computation by 40% and communication by 50% with only a 1–2% accuracy drop. Combined with LoRA, the top-1 accuracy drop on Stanford Cars stays within 4–6%. Overall, D2FT substantially improves fine-tuning efficiency and scalability under resource constraints.
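
To make the skipping idea concrete, here is a minimal PyTorch-style sketch (not the authors' implementation) of a transformer block whose attention sub-module can be bypassed during a fine-tuning step. The importance scores, keep ratio, and tensor shapes below are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch: conditionally skipping attention modules during fine-tuning.
# The importance scoring and keep_ratio are illustrative assumptions.
import torch
import torch.nn as nn


class SkippableBlock(nn.Module):
    """Transformer encoder block whose attention sub-module can be bypassed."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor, run_attention: bool = True) -> torch.Tensor:
        if run_attention:
            h = self.norm1(x)
            attn_out, _ = self.attn(h, h, h, need_weights=False)
            x = x + attn_out  # usual residual path through attention
        # when run_attention is False, the residual stream alone carries x forward
        return x + self.mlp(self.norm2(x))


def select_attention_modules(importance: torch.Tensor, keep_ratio: float = 0.6) -> torch.Tensor:
    """Keep the highest-scoring attention modules; mark the rest to be skipped."""
    k = max(1, int(keep_ratio * importance.numel()))
    keep = torch.zeros_like(importance, dtype=torch.bool)
    keep[importance.topk(k).indices] = True
    return keep


# Usage: decide per step which blocks run their attention sub-module.
blocks = nn.ModuleList([SkippableBlock(dim=192, num_heads=3) for _ in range(12)])
importance = torch.rand(len(blocks))  # stand-in for a measured importance score
keep_mask = select_attention_modules(importance)

x = torch.randn(8, 197, 192)  # (batch, tokens, dim), ViT-Tiny-like shapes
for block, keep in zip(blocks, keep_mask.tolist()):
    x = block(x, run_attention=keep)
```

Because the residual connection still carries activations forward, skipping a block's attention only removes that attention computation (and its gradients) rather than breaking the forward or backward pass.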

📝 Abstract
Fine-tuning plays a crucial role in adapting models to downstream tasks with minimal training effort. However, the rapidly increasing size of foundation models poses a daunting challenge for accommodating fine-tuning on most commercial devices, which often have limited memory bandwidth. Techniques like model sharding and tensor parallelism address this issue by distributing computation across multiple devices to meet memory requirements. Nevertheless, these methods do not fully exploit the nature of foundation models to facilitate the fine-tuning process, resulting in high computational costs and imbalanced workloads. We introduce a novel Distributed Dynamic Fine-Tuning (D2FT) framework that strategically orchestrates operations across attention modules based on our observation that not all attention modules are necessary for forward and backward propagation in fine-tuning foundation models. Through three innovative selection strategies, D2FT significantly reduces the computational workload required for fine-tuning foundation models. Furthermore, D2FT addresses workload imbalances in distributed computing environments by optimizing these selection strategies via multiple knapsack optimization. Our experimental results demonstrate that the proposed D2FT framework reduces the training computational costs by 40% and training communication costs by 50% with only 1% to 2% accuracy drops on the CIFAR-10, CIFAR-100, and Stanford Cars datasets. Moreover, the results show that D2FT can be effectively extended to LoRA, a recent state-of-the-art parameter-efficient fine-tuning technique. When reducing computational cost by 40% or communication cost by 50%, D2FT with LoRA loses only 4% to 6% top-1 accuracy on the Stanford Cars dataset.
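
For reference, the sketch below shows a generic LoRA-style adapter wrapping a frozen linear projection, the kind of module D2FT's selection strategies would be combined with. It is a standard low-rank adapter written for illustration, not the paper's exact D2FT + LoRA integration, and the rank and scaling values are assumptions.

```python
# Minimal sketch of a LoRA-style adapter on a frozen projection (illustrative only,
# not the paper's implementation). Only the low-rank matrices receive gradients.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base projection plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # base weights stay frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))


# Usage: wrap a projection inside an attention module and fine-tune only the adapter.
proj = LoRALinear(nn.Linear(192, 192))
trainable = [p for p in proj.parameters() if p.requires_grad]  # lora_a and lora_b only
out = proj(torch.randn(8, 197, 192))
```

If an attention module is skipped in a given step, no gradient reaches adapters attached to it in that step, so attention selection and LoRA reduce cost along complementary axes: fewer operations executed, and fewer parameters updated.
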
Problem

Research questions and friction points this paper is trying to address.

Reducing computational costs in fine-tuning large foundation models
Addressing workload imbalances in distributed computing environments
Optimizing attention module usage for efficient fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed Dynamic Fine-Tuning (D2FT) framework
Selective attention module optimization
Multiple knapsack workload balancing (see the scheduling sketch after this list)
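
For a rough sense of the workload-balancing step: the paper formulates the placement of selected operations across devices as a multiple knapsack optimization. The snippet below is only a greedy approximation of that idea; the per-operation costs, device budgets, and names are made up for illustration.

```python
# Illustrative greedy stand-in for multiple-knapsack scheduling: each device is a
# knapsack with a compute budget, and selected operations are placed on devices.
# Costs and budgets are fabricated numbers, not measurements from the paper.
from dataclasses import dataclass, field


@dataclass
class Device:
    name: str
    budget: float                      # remaining compute budget (e.g., GFLOPs per step)
    assigned: list = field(default_factory=list)


def schedule(ops: dict, devices: list) -> list:
    """Place the costliest operations first on the device with the most headroom."""
    for op_name, cost in sorted(ops.items(), key=lambda kv: kv[1], reverse=True):
        target = max(devices, key=lambda d: d.budget)
        if cost > target.budget:
            raise RuntimeError(f"no device has room for {op_name}")
        target.assigned.append(op_name)
        target.budget -= cost
    return devices


ops = {"attn_0": 1.2, "mlp_0": 2.4, "attn_3": 1.2, "mlp_3": 2.4, "attn_7": 1.2}
devices = [Device("edge-A", 4.0), Device("edge-B", 4.0), Device("edge-C", 4.0)]
for d in schedule(ops, devices):
    print(d.name, d.assigned, round(d.budget, 2))
```

An exact multiple knapsack formulation (e.g., solved as an integer program) would replace the greedy loop; the data structures and constraint shape stay the same.
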
Shiwei Ding
PhD in Computer Science, Michigan Technological University
Lan Zhang
Clemson University
Zhenlin Wang
Michigan Technological University
Giuseppe Ateniese
George Mason University
Cloud Security · Cybersecurity · Applied Cryptography
Xiaoyong Yuan
Clemson University