TINNs: Time-Induced Neural Networks for Solving Time-Dependent PDEs

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of conventional physics-informed neural networks (PINNs) in accurately capturing the pronounced dynamic features of time-dependent partial differential equations (PDEs), which stem from their use of static, shared weights and result in compromised accuracy and training stability. To overcome this, the authors propose Time-Induced Neural Networks (TINNs), which model network weights as learnable functions of time, thereby enabling adaptive evolution of spatial representational capacity while preserving architectural weight sharing. By parameterizing weights as explicit time-dependent functions, TINNs effectively decouple spatiotemporal modeling and better adapt to evolving dynamics. The framework is combined with the Levenberg–Marquardt algorithm for efficient solution of the resulting nonlinear least-squares problem. Experimental results demonstrate that TINNs achieve up to a fourfold improvement in accuracy and a tenfold acceleration in convergence over PINNs and strong baseline methods across multiple time-dependent PDEs.

📝 Abstract
Physics-informed neural networks (PINNs) solve time-dependent partial differential equations (PDEs) by learning a mesh-free, differentiable solution that can be evaluated anywhere in space and time. However, standard space--time PINNs take time as an input but reuse a single network with shared weights across all times, forcing the same features to represent markedly different dynamics. This coupling degrades accuracy and can destabilize training when enforcing PDE, boundary, and initial constraints jointly. We propose Time-Induced Neural Networks (TINNs), a novel architecture that parameterizes the network weights as a learned function of time, allowing the effective spatial representation to evolve over time while maintaining shared structure. The resulting formulation naturally yields a nonlinear least-squares problem, which we optimize efficiently using a Levenberg--Marquardt method. Experiments on various time-dependent PDEs show up to $4\times$ improved accuracy and $10\times$ faster convergence compared to PINNs and strong baselines.
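The core idea of the abstract, network weights that are themselves learnable functions of time, can be illustrated with a minimal sketch. This is an assumed parameterization for illustration only (here a simple polynomial in t); the paper's actual functional form of W(t), layer sizes, and training loop are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

class TimeInducedLayer:
    """Sketch of a TINN-style layer: the weight matrix W(t) is a
    learnable function of time (assumed polynomial form), so the
    spatial features can evolve with t while the coefficient
    matrices remain shared across all times."""

    def __init__(self, d_in, d_out, degree=1):
        # one learnable coefficient matrix per polynomial degree of t
        self.coeffs = [rng.standard_normal((d_out, d_in)) * 0.1
                       for _ in range(degree + 1)]
        self.bias = np.zeros(d_out)

    def weight(self, t):
        # W(t) = sum_k t^k * W_k  -- time-induced weights
        return sum((t ** k) * Wk for k, Wk in enumerate(self.coeffs))

    def __call__(self, x, t):
        return np.tanh(self.weight(t) @ x + self.bias)

# Toy forward pass u(x, t) for a 1-D spatial input:
layer1 = TimeInducedLayer(1, 16)
layer2 = TimeInducedLayer(16, 1)
x = np.array([0.5])
u0 = layer2(layer1(x, 0.0), 0.0)
u1 = layer2(layer1(x, 1.0), 1.0)
# The effective network differs at t=0 and t=1 even for the same x,
# which is the decoupling of spatial and temporal modeling the
# abstract describes.
```

In a full solver, the coefficient matrices would be fit by minimizing PDE, boundary, and initial-condition residuals; since those residuals form a nonlinear least-squares objective, a Levenberg-Marquardt method (as the authors use) is a natural optimizer.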
Problem

Research questions and friction points this paper is trying to address.

time-dependent PDEs
physics-informed neural networks
shared weights
training instability
solution accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Time-Induced Neural Networks
Physics-Informed Neural Networks
Time-Dependent PDEs
Parameterized Weights
Levenberg-Marquardt Optimization
Chen-Yang Dai
Department of Applied Mathematics, National Yang Ming Chiao Tung University, Taiwan
Che-Chia Chang
Institute of Artificial Intelligence Innovation, National Yang Ming Chiao Tung University, Taiwan
Te-Sheng Lin
National Yang Ming Chiao Tung University
Mathematical modeling, scientific computation, fluid mechanics.
Ming-Chih Lai
Department of Applied Mathematics, National Yang Ming Chiao Tung University, Taiwan
Chieh-Hsin Lai
Department of Applied Mathematics, National Yang Ming Chiao Tung University, Taiwan