AI Summary
This paper addresses three key challenges in physics-informed modeling: limited model interpretability, inconsistency with fundamental conservation laws (e.g., energy and momentum), and difficulty in transferring knowledge across domains. To this end, we propose Derivative Learning (DERL), a framework that directly learns the partial derivatives of system states via supervised learning to construct interpretable, physically consistent dynamical models. Our contributions include: (i) the first multi-stage, incremental physics-modeling paradigm; (ii) a derivative distillation protocol for knowledge transfer, with theoretical guarantees that the transferred model strictly adheres to the true physical laws; and (iii) support for parameter generalization across ODEs/PDEs, together with rigorous verification that physical constraints are satisfied. Experiments demonstrate that DERL significantly outperforms state-of-the-art methods on unseen initial conditions and unseen PDE parameters, while successfully generalizing to novel physical regimes and extended parameter ranges.
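To make the idea of derivative supervision concrete, here is a minimal toy sketch (our own illustration, not the paper's code or model): a model is fit to empirical finite-difference derivatives of one observed trajectory of du/dt = -k·u, and the learned derivative is then integrated from an unseen initial condition.

```python
import numpy as np

# Toy illustration of derivative learning (hypothetical, not the paper's code):
# the true system is du/dt = -k*u; we fit a linear derivative model f(u) = w*u
# to empirical finite-difference derivatives, then roll it out on a new IC.

k, dt = 0.7, 0.01
t = np.arange(0.0, 5.0, dt)
u = 2.0 * np.exp(-k * t)            # observed trajectory, initial condition u0 = 2

du_emp = np.diff(u) / dt            # empirical derivatives (supervision targets)
x = u[:-1]                          # states paired with those targets

w = (x @ du_emp) / (x @ x)          # least-squares fit of f(u) = w*u; w ~ -k

# Integrate the learned derivative model from an unseen initial condition u0 = 5
u_hat = [5.0]
for _ in range(len(t) - 1):
    u_hat.append(u_hat[-1] + dt * w * u_hat[-1])   # explicit Euler step
```

The point of the sketch is that supervision targets the derivative field itself, so the fitted model transfers to initial conditions never seen during training.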
Abstract
We propose Derivative Learning (DERL), a supervised approach that models physical systems by learning their partial derivatives. We also leverage DERL to build physical models incrementally, by designing a distillation protocol that effectively transfers knowledge from a pre-trained model to a student model. We provide theoretical guarantees that our approach learns the true physical system and remains consistent with the underlying physical laws, even when trained on empirical derivatives. DERL outperforms state-of-the-art methods in generalizing an ODE to unseen initial conditions and a parametric PDE to unseen parameters. Finally, we propose a DERL-based method for transferring physical knowledge across models by extending them to new portions of the physical domain and new ranges of PDE parameters. To our knowledge, this is the first attempt at building physical models incrementally in multiple stages.
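The distillation protocol described above can be sketched as follows (a hypothetical toy, with illustrative names only: `teacher`, `u_old`, `u_new` are not from the paper): the student is supervised on the teacher's derivative predictions over the original domain, mixed with empirical derivatives from a newly observed portion of the domain.

```python
import numpy as np

# Hypothetical sketch of derivative distillation (not the paper's code):
# the true system is du/dt = -k*u with k = 0.7.

k = 0.7
teacher = lambda u: -k * u              # pretrained derivative model

u_old = np.linspace(0.0, 2.0, 100)      # domain the teacher was trained on
u_new = np.linspace(2.0, 4.0, 100)      # newly observed portion of the domain

targets_old = teacher(u_old)            # distilled targets from the teacher
targets_new = -k * u_new                # empirical derivatives on the new data

# Student f(u) = w*u fit jointly on distilled and empirical derivative targets
x = np.concatenate([u_old, u_new])
y = np.concatenate([targets_old, targets_new])
w = (x @ y) / (x @ x)
```

Because both supervision sources are derivative targets, the student ends up consistent with the teacher on the old domain while extending coverage to the new one, which is the sense in which the model is built incrementally.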