DiBS-MTL: Transformation-Invariant Multitask Learning with Direction Oracles

📅 2025-09-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multitask learning (MTL), task losses can be arbitrarily, nonaffinely scaled relative to one another, causing some tasks to dominate training and degrade overall performance. The Direction-based Bargaining Solution (DiBS), a recent advance in cooperative bargaining theory, yields Pareto stationary solutions immune to task domination because it is invariant to monotonic nonaffine loss transformations; however, its convergence behavior in nonconvex MTL settings was previously not understood. This paper proves that, under standard assumptions, a subsequence of DiBS iterates converges to a Pareto stationary point even when task losses are nonconvex, and proposes DiBS-MTL, a computationally efficient adaptation of DiBS to the MTL setting. Experiments on standard multi-task benchmarks show that DiBS-MTL matches state-of-the-art performance under nominal settings and significantly outperforms existing approaches, including prior bargaining-inspired methods, when task losses undergo nonaffine monotonic transformations.

📝 Abstract
Multitask learning (MTL) algorithms typically rely on schemes that combine different task losses or their gradients through weighted averaging. These methods aim to find Pareto stationary points by using heuristics that require access to task loss values, gradients, or both. In doing so, a central challenge arises because task losses can be arbitrarily, nonaffinely scaled relative to one another, causing certain tasks to dominate training and degrade overall performance. A recent advance in cooperative bargaining theory, the Direction-based Bargaining Solution (DiBS), yields Pareto stationary solutions immune to task domination because of its invariance to monotonic nonaffine task loss transformations. However, the convergence behavior of DiBS in nonconvex MTL settings is currently not understood. To this end, we prove that under standard assumptions, a subsequence of DiBS iterates converges to a Pareto stationary point when task losses are possibly nonconvex, and propose DiBS-MTL, a computationally efficient adaptation of DiBS to the MTL setting. Finally, we validate DiBS-MTL empirically on standard MTL benchmarks, showing that it achieves competitive performance with state-of-the-art methods while maintaining robustness to nonaffine monotonic transformations that significantly degrade the performance of existing approaches, including prior bargaining-inspired MTL methods. Code available at https://github.com/suryakmurthy/dibs-mtl.
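The abstract's core idea — combining tasks through gradient *directions* rather than raw gradients, so that monotonic rescaling of a loss (which multiplies its gradient by a positive factor) cannot change the update — can be sketched minimally. This is a hypothetical illustration under assumed simplifications, not the paper's exact DiBS-MTL algorithm; the function name and the equal-weight averaging of directions are assumptions:

```python
import numpy as np

def direction_based_update(params, task_grads, lr=0.01):
    """Combine per-task gradients using only their unit directions.

    For a monotonic transform h, the chain rule gives
    grad(h(L)) = h'(L) * grad(L) with h'(L) > 0, so each task's
    gradient *direction* is unchanged. An update built purely from
    normalized directions is therefore invariant to such transforms.
    Hypothetical sketch, not the paper's DiBS-MTL procedure.
    """
    dirs = [g / (np.linalg.norm(g) + 1e-12) for g in task_grads]
    combined = np.mean(dirs, axis=0)  # equal-weight direction combination
    return params - lr * combined
```

A quick check of the invariance: rescaling any task gradient by a positive constant, as a monotonic loss transformation would, leaves the resulting update identical.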
Problem

Research questions and friction points this paper is trying to address.

Addresses task domination in multitask learning caused by arbitrary, nonaffine scaling of task losses
Proves subsequence convergence of DiBS to Pareto stationary points in nonconvex multitask optimization
Develops DiBS-MTL, a transformation-invariant MTL method robust to monotonic loss rescaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts the Direction-based Bargaining Solution (DiBS) into DiBS-MTL, a computationally efficient multitask learner
Builds updates from gradient directions (direction oracles) to reach Pareto stationary solutions
Remains robust to nonaffine monotonic loss transformations that degrade existing MTL methods
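The robustness claim above rests on a simple fact: applying a monotonic transform such as h(x) = exp(x) to a loss rescales its gradient by a positive factor but leaves the direction intact. The toy quadratic loss below is an assumed example used only to demonstrate this property numerically:

```python
import numpy as np

def loss_grad(theta):
    # gradient of a toy loss L(theta) = ||theta||^2 / 2
    return theta

def transformed_grad(theta):
    # gradient of h(L(theta)) with h(x) = exp(x):
    # by the chain rule, exp(L) * grad(L) -- same direction, new scale
    L = 0.5 * np.dot(theta, theta)
    return np.exp(L) * loss_grad(theta)

theta = np.array([0.6, -0.8])
d_original = loss_grad(theta) / np.linalg.norm(loss_grad(theta))
d_transformed = transformed_grad(theta) / np.linalg.norm(transformed_grad(theta))
# the two unit directions coincide, so a direction-based method
# behaves identically before and after the transformation
```

Loss- or gradient-magnitude-based weighting schemes, by contrast, see a very different problem after such a transform, which is the failure mode the paper's experiments probe.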