🤖 AI Summary
When solving complex partial differential equations (PDEs), traditional physics-informed neural networks (PINNs) suffer from low accuracy and slow convergence, and they typically rely on deep architectures and dense collocation-point sampling.
Method: We propose compleX-PINN, the first PINN framework to incorporate complex analysis into the activation mechanism. Inspired by the Cauchy integral theorem, we design a learnable complex-valued activation function whose parameters are optimized jointly with the network, end-to-end. Crucially, compleX-PINN achieves high-fidelity solutions with only a single hidden layer and requires no hard enforcement of physical constraints.
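The summary describes a learnable activation rooted in complex analysis but gives no formula, so the following is a minimal, hypothetical PyTorch sketch: a rational "Cauchy-like" activation sigma(x) = (lambda1*x + lambda2) / (x^2 + d^2) with learnable lambda1, lambda2, d per hidden unit, inside a single-hidden-layer network. The class names and the exact parameterization are illustrative assumptions and may differ from compleX-PINN's actual construction.

```python
import torch
import torch.nn as nn

class CauchyActivation(nn.Module):
    """Learnable rational ("Cauchy-like") activation.

    Illustrative form: sigma(x) = (lambda1 * x + lambda2) / (x^2 + d^2),
    with lambda1, lambda2, d learned per hidden unit. The parameterization
    actually used by compleX-PINN may differ.
    """

    def __init__(self, num_features: int):
        super().__init__()
        self.lambda1 = nn.Parameter(torch.ones(num_features))
        self.lambda2 = nn.Parameter(torch.zeros(num_features))
        self.d = nn.Parameter(torch.ones(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Small epsilon keeps the denominator away from zero while d is learned.
        return (self.lambda1 * x + self.lambda2) / (x * x + self.d * self.d + 1e-8)


class SingleLayerPINN(nn.Module):
    """A single hidden layer with the learnable activation, then a linear head."""

    def __init__(self, in_dim: int = 2, hidden: int = 128, out_dim: int = 1):
        super().__init__()
        self.fc_in = nn.Linear(in_dim, hidden)
        self.act = CauchyActivation(hidden)
        self.fc_out = nn.Linear(hidden, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc_out(self.act(self.fc_in(x)))
```

Because the activation parameters are registered as `nn.Parameter`s, they are returned by `model.parameters()` and trained end-to-end alongside the weights, matching the joint optimization described above.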
Results: On multiple strongly nonlinear PDEs, including the Burgers', KdV, and Navier–Stokes equations, compleX-PINN attains accuracy roughly one order of magnitude higher than standard PINNs, converges significantly faster during training, and uses substantially simpler network topologies. This work establishes a paradigm for lightweight, efficient, physics-driven modeling grounded in complex function theory.
📝 Abstract
We propose compleX-PINN, a novel physics-informed neural network (PINN) architecture that incorporates a learnable activation function inspired by the Cauchy integral theorem. By learning the parameters of the activation function, compleX-PINN achieves high accuracy with just a single hidden layer. Empirical results show that compleX-PINN effectively solves problems where traditional PINNs struggle and consistently delivers significantly higher precision, often by an order of magnitude.
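To illustrate how such a single-hidden-layer network would be trained as a PINN, here is a hedged residual-loss sketch for the viscous Burgers' equation mentioned in the summary, reusing the hypothetical `SingleLayerPINN` class from the block above. The collocation sampling, viscosity value, and omission of initial/boundary losses are simplifications for illustration, not the paper's actual experimental setup.

```python
import torch

# Hypothetical training sketch: viscous Burgers' equation u_t + u*u_x - nu*u_xx = 0,
# using the SingleLayerPINN defined in the previous sketch. Only the PDE residual
# loss is shown; initial/boundary losses and the sampling scheme are omitted.
torch.manual_seed(0)
model = SingleLayerPINN(in_dim=2, hidden=128, out_dim=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # activation params train too
nu = 0.01 / torch.pi  # illustrative viscosity, not necessarily the paper's value

xt = torch.rand(2000, 2, requires_grad=True)  # toy collocation points, columns (x, t)

for step in range(5000):
    u = model(xt)
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    residual = u_t + u * u_x - nu * u_xx
    loss = residual.pow(2).mean()  # PDE residual loss only (illustrative)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 1000 == 0:
        print(f"step {step}: residual loss {loss.item():.3e}")
```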