🤖 AI Summary
Existing IRM-TV methods for out-of-distribution (OOD) generalization suffer from limited performance and rely on strong distributional assumptions. Method: This paper proposes OOD-TV-IRM, the first approach to explicitly model the total variation (TV) regularization coefficient as a Lagrange multiplier, yielding a primal-dual optimization framework that seeks a semi-Nash equilibrium between the training loss and the OOD generalization objective. The method integrates invariant risk minimization, TV regularization, and adversarial learning without requiring additional distributional assumptions. Contribution/Results: OOD-TV-IRM is theoretically interpretable and its primal-dual algorithm comes with a convergence guarantee. Experiments on multiple benchmark datasets show that it outperforms the original IRM-TV in most settings, with better OOD generalization and more stable optimization.
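One way to read this formulation is as a saddle-point problem. The sketch below uses generic symbols (Φ for the invariant predictor, R_e for the risk in training environment e, TV(·) for the total variation penalty, λ for the multiplier); these are illustrative placeholders, since the paper's exact penalty and notation are not reproduced here:

```latex
\min_{\Phi}\; \max_{\lambda \ge 0}\;
\mathcal{L}(\Phi, \lambda)
\;=\; \sum_{e \in \mathcal{E}_{\mathrm{tr}}} R_e(\Phi)
\;+\; \lambda \, \mathrm{TV}(\Phi)
```

Under this reading, the primal player lowers the penalized invariant risk while the dual player raises λ whenever the TV penalty is large, so the stationary point of the game balances training loss against OOD generalization, which is the semi-Nash equilibrium the summary refers to.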
📝 Abstract
Invariant risk minimization is an important general machine learning framework that has recently been interpreted as a total variation model (IRM-TV). However, how to improve out-of-distribution (OOD) generalization in the IRM-TV setting remains unsolved. In this paper, we extend IRM-TV to a Lagrangian multiplier model named OOD-TV-IRM. We find that the autonomous TV penalty hyperparameter is exactly the Lagrangian multiplier. Thus, OOD-TV-IRM is essentially a primal-dual optimization model, where the primal optimization minimizes the entire invariant risk and the dual optimization strengthens the TV penalty. The objective is to reach a semi-Nash equilibrium that balances the training loss against OOD generalization. We also develop a convergent primal-dual algorithm that facilitates an adversarial learning scheme. Experimental results show that OOD-TV-IRM outperforms IRM-TV in most situations.
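As a rough illustration of the adversarial primal-dual scheme described in the abstract, the hypothetical PyTorch sketch below alternates a descent step on the model parameters with an ascent step on the multiplier. The `tv_penalty` function and the squared-error environment risks are stand-ins invented for this sketch, assuming a simple regression setup; the paper's actual TV term, losses, and update rules may differ.

```python
import torch
import torch.nn as nn

def tv_penalty(model: nn.Module) -> torch.Tensor:
    """Illustrative TV-style penalty: sum of absolute first differences
    of each flattened parameter vector (a placeholder, not the paper's term)."""
    pen = torch.zeros(())
    for p in model.parameters():
        flat = p.flatten()
        pen = pen + (flat[1:] - flat[:-1]).abs().sum()
    return pen

def primal_dual_step(model, envs, lam, primal_opt, dual_lr=1e-3):
    """One gradient descent-ascent step of the adversarial scheme."""
    # Primal: descend on the Lagrangian w.r.t. the model parameters.
    risk = sum(nn.functional.mse_loss(model(x), y) for x, y in envs)
    loss = risk + lam * tv_penalty(model)
    primal_opt.zero_grad()
    loss.backward()
    primal_opt.step()
    # Dual: ascend on the multiplier, which strengthens the TV penalty.
    with torch.no_grad():
        lam = (lam + dual_lr * tv_penalty(model)).clamp_(min=0.0)
    return lam

# Toy usage: two synthetic "environments" of (input, target) batches.
model = nn.Linear(5, 1)
envs = [(torch.randn(32, 5), torch.randn(32, 1)) for _ in range(2)]
lam = torch.tensor(0.1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(100):
    lam = primal_dual_step(model, envs, lam, opt)
```

The alternating structure is the point of the sketch: the model and the multiplier play against each other, so the multiplier adapts during training instead of being fixed in advance, matching the abstract's claim that the TV penalty hyperparameter acts autonomously as a Lagrangian multiplier.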