🤖 AI Summary
This work addresses the information loss and performance degradation in gradual domain adaptation (GDA) caused by reliance on sample-based log-likelihood estimation. To overcome this limitation, the authors propose an Entropy-Regularized Semi-dual Unbalanced Optimal Transport framework (E-SUOT). By constructing intermediate domains directly from samples, E-SUOT reformulates the flow-model-driven adaptation process as a Lagrangian dual problem and derives an equivalent semi-dual objective that circumvents explicit likelihood estimation. This formulation transforms the unstable min-max training paradigm into a stable alternating optimization procedure, for which the authors provide theoretical guarantees on stability and generalization. Experimental results demonstrate that the proposed framework significantly improves GDA performance across multiple benchmarks.
📝 Abstract
Gradual domain adaptation (GDA) aims to mitigate domain shift by progressively adapting models from the source domain to the target domain via intermediate domains. However, real intermediate domains are often unavailable or ineffective, necessitating the synthesis of intermediate samples. Flow-based models have recently been used for this purpose by interpolating between source and target distributions; however, their training typically relies on sample-based log-likelihood estimation, which can discard useful information and thus degrade GDA performance. The key to addressing this limitation is to construct the intermediate domains directly from samples. To this end, we propose an Entropy-regularized Semi-dual Unbalanced Optimal Transport (E-SUOT) framework to construct intermediate domains. Specifically, we reformulate flow-based GDA as a Lagrangian dual problem and derive an equivalent semi-dual objective that circumvents the need for likelihood estimation. However, the dual problem leads to an unstable min-max training procedure. To alleviate this issue, we further introduce entropy regularization to convert it into a more stable alternating optimization procedure. Based on this, we propose a novel GDA training framework and provide theoretical analysis in terms of stability and generalization. Finally, extensive experiments are conducted to demonstrate the efficacy of the E-SUOT framework.
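To give a concrete sense of the ingredients named in the abstract, the sketch below implements a generic entropy-regularized *unbalanced* optimal transport solver using Sinkhorn-style scaling iterations. This is not the authors' semi-dual E-SUOT objective (which is not reproduced here); it only illustrates, under standard assumptions, how entropy regularization turns an OT problem with relaxed (KL-penalized) marginal constraints into a stable alternating update of two scaling vectors, the same stabilization idea the abstract invokes. All names (`unbalanced_sinkhorn`, `eps`, `rho`) are illustrative.

```python
import numpy as np

def unbalanced_sinkhorn(a, b, C, eps=0.1, rho=1.0, n_iter=500):
    """Entropy-regularized unbalanced OT via alternating Sinkhorn scaling.

    a, b   : source / target mass vectors (need not sum to 1 in the
             unbalanced setting; marginals are only softly enforced)
    C      : cost matrix of shape (len(a), len(b))
    eps    : entropy regularization strength
    rho    : strength of the KL penalty on the marginal constraints
    Returns the (approximate) transport plan P.
    """
    K = np.exp(-C / eps)                 # Gibbs kernel from the cost
    u = np.ones_like(a, dtype=float)
    v = np.ones_like(b, dtype=float)
    exponent = rho / (rho + eps)         # < 1: softens marginal matching
    for _ in range(n_iter):
        # Each half-step is a closed-form minimization in one dual
        # variable with the other fixed -- alternating, not min-max.
        u = (a / (K @ v)) ** exponent
        v = (b / (K.T @ u)) ** exponent
    return u[:, None] * K * v[None, :]   # plan P = diag(u) K diag(v)
```

As `rho` grows the exponent approaches 1 and the iteration recovers the balanced Sinkhorn algorithm, whose plan marginals match `a` and `b` exactly; smaller `rho` allows mass creation/destruction, which is the "unbalanced" relaxation.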