🤖 AI Summary
Existing offline meta-learning approaches for loss function optimization only tune the meta-objective during the initial few inner-loop iterations, leading to suboptimal final model performance. To address this limitation, we propose an online loss function learning framework that dynamically adapts the loss function after every model parameter update, enabling, for the first time, fully online, end-to-end adaptive loss optimization. Methodologically, our approach introduces gradient-coupled bilevel optimization, an online meta-update mechanism, and differentiable parametrization of loss functions, thereby overcoming the temporal constraints inherent in offline meta-learning. Extensive experiments across diverse neural architectures and benchmark datasets demonstrate consistent superiority over cross-entropy and state-of-the-art offline loss learning methods, yielding average final-test accuracy improvements of 1.2–2.8%.
📝 Abstract
Loss function learning is a new meta-learning paradigm that aims to automate the essential task of designing a loss function for a machine learning model. Existing techniques for loss function learning have shown promising results, often improving a model's training dynamics and final inference performance. However, a significant limitation of these techniques is that the loss functions are meta-learned offline: the meta-objective considers only the very first few steps of training, a far shorter time horizon than the one typically used to train deep neural networks. This biases the search towards loss functions that perform well at the very start of training but poorly at the end. To address this issue, we propose a new loss function learning technique that adaptively updates the loss function online after each update to the base model parameters. Experimental results show that our proposed method consistently outperforms the cross-entropy loss and offline loss function learning techniques across a diverse range of neural network architectures and datasets.
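To make the online scheme described above concrete, the following is a minimal, illustrative sketch, not the paper's actual method: a toy regression model stands in for the base network, a single meta-parameter `phi` blends squared-error and absolute-error terms to form a parametrized training loss, and after every inner SGD step `phi` is nudged to reduce a held-out validation loss. The meta-gradient through the one-step inner update is approximated by central finite differences (the paper's approach would instead differentiate through the update analytically); all names and the specific loss parametrization here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (illustrative stand-in for a deep network).
X_train = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y_train = X_train @ w_true + 0.1 * rng.normal(size=64)
X_val = rng.normal(size=(32, 3))
y_val = X_val @ w_true

def learned_loss_grad_w(w, phi, X, y):
    """Gradient (w.r.t. w) of a parametrized training loss:
    mean((1-phi)*r^2 + phi*|r|), where r is the residual and
    phi is the meta-learned blending weight (a toy parametrization)."""
    r = X @ w - y
    return X.T @ ((1 - phi) * 2 * r + phi * np.sign(r)) / len(y)

def val_loss(w):
    """Meta-objective: plain squared error on held-out data."""
    return np.mean((X_val @ w - y_val) ** 2)

alpha, eta, eps = 0.05, 0.01, 1e-4  # base step, meta step, FD epsilon
w, phi = np.zeros(3), 0.5
for step in range(500):
    # Inner update: one SGD step on the learned (parametrized) loss.
    w_new = w - alpha * learned_loss_grad_w(w, phi, X_train, y_train)
    # Online meta update: d(val loss)/d(phi) through this single inner
    # step, approximated by central finite differences.
    w_plus = w - alpha * learned_loss_grad_w(w, phi + eps, X_train, y_train)
    w_minus = w - alpha * learned_loss_grad_w(w, phi - eps, X_train, y_train)
    dphi = (val_loss(w_plus) - val_loss(w_minus)) / (2 * eps)
    phi = float(np.clip(phi - eta * dphi, 0.0, 1.0))
    w = w_new

print(f"final val loss: {val_loss(w):.4f}, phi: {phi:.3f}")
```

The key structural point the sketch mirrors is that the loss parameters are updated after *every* base-model step rather than only during a short offline meta-training phase, so the loss can keep adapting over the full training horizon.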