HBO: Hierarchical Balancing Optimization for Fine-Tuning Large Language Models

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual imbalance problem—both cross-dataset (global) and intra-dataset (local)—that arises when fine-tuning large language models (LLMs) on heterogeneous multi-source data, this paper proposes a hierarchical balancing optimization framework. A Global Actor orchestrates the distribution over data sources, while multiple Local Actors dynamically model intra-dataset sample difficulty and heterogeneity. A two-level reinforcement learning mechanism enables training-state-driven adaptive sampling and difficulty-aware sample allocation. Crucially, this work is the first to jointly model global and local imbalances without handcrafted sampling heuristics. Extensive experiments across three mainstream LLM backbones and nine multilingual, multitask benchmarks show consistent, significant average accuracy gains over state-of-the-art methods, validating both the effectiveness and the generalization capability of the proposed hierarchical regulation mechanism.

📝 Abstract
Fine-tuning large language models (LLMs) on a mixture of diverse datasets poses challenges due to data imbalance and heterogeneity. Existing methods often address these issues across datasets (globally) but overlook the imbalance and heterogeneity within individual datasets (locally), which limits their effectiveness. We introduce Hierarchical Balancing Optimization (HBO), a novel method that enables LLMs to autonomously adjust data allocation during fine-tuning both across datasets (globally) and within each individual dataset (locally). HBO employs a bilevel optimization strategy with two types of actors: a Global Actor, which balances data sampling across different subsets of the training mixture, and several Local Actors, which optimize data usage within each subset based on difficulty levels. These actors are guided by reward functions derived from the LLM's training state, which measure learning progress and relative performance improvement. We evaluate HBO on three LLM backbones across nine diverse tasks in multilingual and multitask setups. Results show that HBO consistently outperforms existing baselines, achieving significant accuracy gains. Our in-depth analysis further demonstrates that both the Global Actor and Local Actors of HBO effectively adjust data usage during fine-tuning. HBO provides a comprehensive solution to the challenges of data imbalance and heterogeneity in LLM fine-tuning, enabling more effective training across diverse datasets.
Problem

Research questions and friction points this paper is trying to address.

Addresses data imbalance and heterogeneity in LLM fine-tuning
Balances data allocation globally and locally during training
Improves accuracy across multilingual and multitask setups
Innovation

Methods, ideas, or system contributions that make the work stand out.

HBO hierarchically balances data allocation globally and locally
Uses bilevel optimization with Global and Local Actors
Actors adjust data usage based on training state rewards
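The two-level mechanism described above can be illustrated with a minimal bandit-style sketch. This is not the paper's actual algorithm: the `Actor` class, the softmax policy, the learning rate, and the synthetic reward below are illustrative assumptions standing in for HBO's training-state-derived rewards.

```python
import math
import random


class Actor:
    """Bandit-style actor: a softmax policy over arms, nudged toward
    arms that yield higher reward (illustrative, not HBO's exact update)."""

    def __init__(self, n_arms, lr=0.1):
        self.logits = [0.0] * n_arms
        self.lr = lr

    def probs(self):
        m = max(self.logits)
        exps = [math.exp(l - m) for l in self.logits]
        total = sum(exps)
        return [e / total for e in exps]

    def sample(self, rng):
        r, acc = rng.random(), 0.0
        p = self.probs()
        for i, pi in enumerate(p):
            acc += pi
            if r <= acc:
                return i
        return len(p) - 1

    def update(self, arm, reward):
        # Policy-gradient-style step: raise the chosen arm's logit
        # in proportion to the observed reward, lower the others.
        p = self.probs()
        for i in range(len(self.logits)):
            grad = (1.0 if i == arm else 0.0) - p[i]
            self.logits[i] += self.lr * reward * grad


def hierarchical_sample(global_actor, local_actors, rng):
    """Global Actor picks a dataset; that dataset's Local Actor
    picks a difficulty bucket within it."""
    d = global_actor.sample(rng)
    b = local_actors[d].sample(rng)
    return d, b
```

In a real run, the reward for each actor would be derived from the model's training state (e.g. learning progress on held-out probes); here a synthetic reward suffices to show the policies shifting toward the more useful dataset.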