ACCELERATING CONVERGENCE ALGORITHM FOR PHYSICS-INFORMED NEURAL NETWORKS BASED ON NTK THEORY AND MODIFIED CAUSALITY
Abstract
Physics-informed neural networks (PINNs) are a class of neural networks that embed prior physical knowledge into the network, and they have emerged as a focal approach for solving partial differential equations. Despite showing significant potential in numerical simulation, PINNs still suffer from slow convergence. Through the lens of neural tangent kernel (NTK) theory, this paper analyzes single-hidden-layer network models, derives the explicit form of the NTK matrix for PINNs, and further examines the factors affecting the convergence rate of PINNs, proposing two necessary conditions for a high convergence rate. Applying NTK theory to three existing algorithms in the PINNs domain, namely causality, Fourier feature embedding, and learning rate annealing, shows that none of them satisfies all the necessary conditions for a high convergence rate. This paper therefore proposes the dynamic Fourier feature embedding causality (DFFEC) method, which accounts for both the balance of the NTK matrix eigenvalues and chronological convergence in determining the convergence speed. Numerical experiments on four benchmark problems, Allen-Cahn, Reaction, Burgers, and Advection, illustrate that the proposed DFFEC method remarkably improves the convergence rate of PINNs. In particular, on the Allen-Cahn case, DFFEC achieves at least a 50-fold acceleration compared to the causality algorithm.