🤖 AI Summary
This work addresses a key limitation of conventional physics-informed neural networks (PINNs): because they reuse static, shared weights across all times, they struggle to capture the pronounced dynamic features of time-dependent partial differential equations (PDEs), compromising both accuracy and training stability. To overcome this, the authors propose Time-Induced Neural Networks (TINNs), which model network weights as learnable functions of time, enabling the spatial representational capacity to evolve adaptively while preserving architectural weight sharing. By parameterizing weights as explicit time-dependent functions, TINNs effectively decouple spatiotemporal modeling and better adapt to evolving dynamics. The framework is combined with the Levenberg–Marquardt algorithm for efficient solution of the resulting nonlinear least-squares problem. Experimental results demonstrate that TINNs achieve up to a fourfold improvement in accuracy and a tenfold acceleration in convergence over PINNs and strong baseline methods across multiple time-dependent PDEs.
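The core idea — weights as explicit functions of time rather than static tensors — can be illustrated with a minimal sketch. The polynomial parameterization, layer name, and dimensions below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

class TimeInducedLayer:
    """Sketch of a layer whose weight matrix is a learnable function of time.

    Here W(t) = sum_k C_k * t**k, a low-order polynomial with learnable
    coefficient matrices C_k (an assumed parameterization for illustration;
    the paper's actual time-dependent functions may differ).
    """

    def __init__(self, d_in, d_out, degree=2):
        # coeffs[k] holds the k-th polynomial coefficient of the weight matrix
        self.coeffs = [0.1 * rng.standard_normal((d_in, d_out))
                       for _ in range(degree + 1)]
        self.bias = np.zeros(d_out)

    def weights_at(self, t):
        # Evaluate the effective weight matrix W(t) at scalar time t
        return sum(c * (t ** k) for k, c in enumerate(self.coeffs))

    def __call__(self, x, t):
        # x: (batch, d_in) spatial inputs; t: scalar time
        return np.tanh(x @ self.weights_at(t) + self.bias)

layer = TimeInducedLayer(d_in=2, d_out=8)
x = rng.standard_normal((4, 2))            # 4 spatial collocation points
y0, y1 = layer(x, t=0.0), layer(x, t=1.0)
print(np.allclose(y0, y1))                 # → False: same inputs, different
                                           #   representation at different t
```

The coefficient matrices are shared across all spatial inputs, so the architecture keeps weight sharing while the effective spatial map W(t) evolves continuously with time.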
📝 Abstract
Physics-informed neural networks (PINNs) solve time-dependent partial differential equations (PDEs) by learning a mesh-free, differentiable solution that can be evaluated anywhere in space and time. However, standard space--time PINNs take time as an input but reuse a single network with shared weights across all times, forcing the same features to represent markedly different dynamics. This coupling degrades accuracy and can destabilize training when enforcing PDE, boundary, and initial constraints jointly. We propose Time-Induced Neural Networks (TINNs), a novel architecture that parameterizes the network weights as a learned function of time, allowing the effective spatial representation to evolve over time while maintaining shared structure. The resulting formulation naturally yields a nonlinear least-squares problem, which we optimize efficiently using a Levenberg--Marquardt method. Experiments on various time-dependent PDEs show up to $4\times$ improved accuracy and $10\times$ faster convergence compared to PINNs and strong baselines.
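The training objective is a nonlinear least-squares problem over the PDE, boundary, and initial residuals, which the authors optimize with a Levenberg–Marquardt method. The sketch below shows a generic damped LM loop on a toy residual (the toy problem, damping schedule, and function names are assumptions for illustration; the actual residuals would be the stacked PDE/boundary/initial errors):

```python
import numpy as np

def lm_solve(residual, jacobian, theta, lam=1e-2, iters=50):
    """Basic Levenberg--Marquardt loop for min_theta ||r(theta)||^2."""
    for _ in range(iters):
        r = residual(theta)
        J = jacobian(theta)
        # Damped normal equations: (J^T J + lam * I) delta = -J^T r
        A = J.T @ J + lam * np.eye(theta.size)
        delta = np.linalg.solve(A, -J.T @ r)
        new_theta = theta + delta
        if np.sum(residual(new_theta) ** 2) < np.sum(r ** 2):
            theta, lam = new_theta, lam * 0.5   # accept step, reduce damping
        else:
            lam *= 2.0                          # reject step, increase damping
    return theta

# Toy residual: fit y = exp(a * t) to data generated with a = 1.3
t = np.linspace(0.0, 1.0, 20)
y = np.exp(1.3 * t)
residual = lambda th: np.exp(th[0] * t) - y
jacobian = lambda th: (t * np.exp(th[0] * t)).reshape(-1, 1)

theta = lm_solve(residual, jacobian, np.array([0.0]))
print(theta[0])  # converges close to a = 1.3
```

The adaptive damping interpolates between gradient descent (large `lam`) and Gauss–Newton (small `lam`), which is what makes LM well suited to the stiff, ill-conditioned residual systems that arise when PDE, boundary, and initial constraints are enforced jointly.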