Indexed by EI and Scopus
Chinese Core Journal
Jiang Zichao, Jiang Junyang, Yao Qinghe, Yang Gengchao. A fast solver based on deep neural network for difference equation. Chinese Journal of Theoretical and Applied Mechanics, 2021, 53(7): 1912-1921. DOI: 10.6052/0459-1879-21-040

A FAST SOLVER BASED ON DEEP NEURAL NETWORK FOR DIFFERENCE EQUATION


     

    Abstract: In recent years, artificial neural networks (ANNs), especially deep neural networks (DNNs), have become a promising new approach in the field of numerical computation due to their high computational efficiency on heterogeneous platforms and their ability to fit high-dimensional complex systems. In the numerical solution of partial differential equations, solving the resulting large-scale linear systems is usually one of the most time-consuming steps; using neural networks to solve these linear systems has therefore become a promising new idea. However, direct prediction by deep neural networks still falls clearly short in numerical accuracy, which has become one of the bottlenecks for its application in numerical computation. To break this limitation, this paper proposes a solving algorithm that combines a residual network architecture with a correction-iteration method. Specifically, the residual network architecture resolves the network degradation and gradient vanishing problems of deep network models, reducing the loss of the network to 1/5000 of that of a classical network model; the correction-iteration method repeatedly corrects the predicted solution with the same network model, decreasing the residual of the predicted solution to 10⁻⁵ times its value before iteration. To verify the effectiveness and generality of the proposed method, we combine it with the finite difference method to solve the heat conduction equation and the Burgers equation. Numerical results demonstrate that the proposed algorithm achieves a speedup of more than 10 times for systems larger than 1000 unknowns, while the numerical error remains below the discretization error of the second-order difference scheme.
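The correction-iteration idea summarized above — reapplying one approximate solver to the current residual of a linear system — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the trained DNN predictor is replaced by a hypothetical `approx_solve` stand-in (a few Jacobi sweeps), and the system `A x = b` comes from an implicit finite-difference discretization of the 1D heat equation.

```python
import numpy as np

def build_heat_system(n, r=0.5):
    """Implicit Euler for u_t = u_xx on n interior points:
    A = I + r * tridiag(-1, 2, -1), with a random right-hand side."""
    A = np.eye(n) + r * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    rng = np.random.default_rng(0)
    return A, rng.random(n)

def approx_solve(A, r):
    """Hypothetical stand-in for the network's prediction of A^{-1} r:
    a few Jacobi sweeps starting from zero (cheap, low accuracy)."""
    D = np.diag(A)
    x = np.zeros_like(r)
    for _ in range(5):
        x = (r - (A @ x - D * x)) / D  # Jacobi update
    return x

def correction_iteration(A, b, n_iter=20):
    """x_{k+1} = x_k + approx_solve(A, b - A x_k): the same approximate
    solver is reused each round to correct the current solution, so the
    residual shrinks geometrically even though each call is inaccurate."""
    x = approx_solve(A, b)
    for _ in range(n_iter):
        x = x + approx_solve(A, b - A @ x)
    return x

A, b = build_heat_system(64)
x = correction_iteration(A, b)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # small relative residual
```

Each outer correction multiplies the residual by roughly the contraction factor of the approximate solver, which is why reusing one cheap (or learned) solver inside the loop can push an initially inaccurate prediction down to near machine-precision accuracy.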

     
