🤖 AI Summary
This work addresses the challenge in hybrid modeling where the high flexibility of machine learning components often marginalizes scientific models, thereby compromising interpretability and physical consistency. To mitigate this issue, the study introduces Sharpness-Aware Minimization (SAM) into the hybrid modeling framework for the first time. By optimizing the flatness of minima in the loss landscape, SAM implicitly enforces a simplicity bias without requiring explicit regularization terms tailored to specific model architectures or domain knowledge. Experimental results across diverse model architectures and datasets demonstrate that this approach significantly enhances the robustness and interpretability of hybrid models while promoting more effective integration of scientific models into the learning process.
📝 Abstract
Hybrid modeling, the combination of machine learning models and scientific mathematical models, enables flexible and robust data-driven prediction with partial interpretability. However, the scientific model may effectively be ignored in prediction because of the flexibility of the machine learning component, defeating the purpose of hybrid modeling. Typically, some regularization is applied during hybrid model learning to avoid this failure mode, but the formulation of the regularizer depends strongly on the model architecture and on domain knowledge. In this paper, we propose to focus on the flatness of loss minima when learning hybrid models, aiming to make the model as simple as possible. We employ the idea of sharpness-aware minimization (SAM) and adapt it to the hybrid modeling setting. Numerical experiments show that the SAM-based method works well across different choices of models and datasets.
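The abstract's core ingredient, sharpness-aware minimization, seeks flat minima by first ascending to the worst-case point within a small neighborhood of the current weights and then descending using the gradient evaluated there. The following is a minimal sketch of that two-step update on a toy least-squares problem, not the paper's hybrid-model formulation; the radius `rho`, step size `lr`, and the synthetic data are illustrative choices.

```python
import numpy as np

# Toy regression problem: recover true_w from noisy linear observations.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

def loss(w):
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def grad(w):
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(3)
rho, lr = 0.05, 0.1   # illustrative SAM radius and learning rate
for _ in range(200):
    g = grad(w)
    # Step 1 (ascent): move to the worst-case point within an L2 ball
    # of radius rho, using the first-order approximation of the loss.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Step 2 (descent): update the original weights with the gradient
    # taken at the perturbed point, which penalizes sharp minima.
    w = w - lr * grad(w + eps)
```

In a hybrid model, `w` would cover the parameters of both the scientific and the machine learning components, so the flatness bias acts on the combined model without an architecture-specific regularizer.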