Journal of Xinjiang University (Natural Science Edition in Chinese and English), Vol. 40, No. 2, Mar. 2023

Multi-Scale Neural Networks Based on Runge-Kutta Method for Solving Unsteady Partial Differential Equations

CHEN Zebin, FENG Xinlong
(School of Mathematics and System Sciences, Xinjiang University, Urumqi Xinjiang 830017, China)

Abstract: This paper proposes a multi-scale neural network method based on the Runge-Kutta method for solving unsteady partial differential equations. The method uses the q-order Runge-Kutta method to construct a time iteration scheme and then builds a total loss function over multiple time steps, which realizes parameter sharing of the neural network across time steps and allows the function value to be predicted at any moment in the time domain. In addition, an m-scaling factor is adopted to speed up the convergence of the loss function and improve the accuracy of the numerical solution. Finally, several numerical experiments are presented to demonstrate the effectiveness of the proposed method.

Key words: unsteady partial differential equations; q-order Runge-Kutta method; multi-scale neural networks; m-scaling factor; high accuracy

DOI: 10.13568/ki.651094.651316.2022.06.25.0001    CLC number: O175    Document code: A    Article ID: 2096-7675(2023)02-0142-08

Citation: CHEN Zebin, FENG Xinlong. Multi-scale neural networks based on Runge-Kutta method for solving unsteady partial differential equations[J]. Journal of Xinjiang University (Natural Science Edition in Chinese and English), 2023, 40(2): 142-149.

0 Introduction

Deep learning has achieved satisfactory results in search technology, natural language processing, image processing, recommendation systems, personalization technology, etc. In recent years, deep learning has also been successfully applied to solving partial differential equations and has been widely promoted. Compared with the finite element method[1] and the finite difference method[2], deep learning, as a meshless method, can mitigate the curse of dimensionality when solving high-dimensional partial differential equations, so it is more convenient for establishing a solution framework for high-dimensional problems. E et al[3] proposed the Deep-Ritz method based on deep neural networks for numerically solving variational problems, which is insensitive to the dimension of the problem and can be used to solve high-dimensional problems.

(Received date: 2022-06-25. Foundation item: This work was supported by the Open Project of the Key Laboratory of Xinjiang, "Machine learning for incompressible magnetohydrodynamics models" (2020D04002). Biography: CHEN Zebin (1995-), male, master student; research field: deep learning for solving partial differential equations. Corresponding author: FENG Xinlong (1976-), male, professor; research field: numerical solutions of partial differential equations; E-mail: fxl-.)

Physics-Informed Neural Networks (PINNs)[4] used the automatic differentiation technique[5] for the first time to embed the residual of the equation into the loss function of the neural network, and obtained the numerical solution of the equation by minimizing the loss function. PINNs are a new numerical method for solving partial differential equations that makes full use of the physical information contained in the PDEs. PINNs have attracted the attention of many scholars, and the literature[6-7] establishes the theoretical convergence of PINNs for certain classes of PDEs. Multi-scale DNN[8] proposed the idea of radial scaling in the frequency domain, which has the ability to approximate high-frequency and high-dimensional functions and can accelerate the convergence of the loss function.

Recently, research on deep learning algorithms for nonlinear unsteady partial differential equations has attracted the attention of many scholars. The time-discrete model of PINNs can still guarantee the stability and high precision of numerical solutions when large time steps are used, but it requires high computational costs. DeLISA added the physical information of the governing equations into the time iteration scheme and introduced time-dependent inputs to achieve continuous-time prediction without a large number of interior points[9]. On this basis, this paper proposes a multi-scale neural network algorithm integrating the Runge-Kutta method. On the one hand, the algorithm constructs a time iteration scheme to build the total loss function of multiple time steps, thus realizing the sharing of neural network parameters across multiple time steps to save computational costs; after training, it can also predict the function value at any time in the time domain. On the other hand, the algorithm not only speeds up the convergence of the loss function but also improves the accuracy of the solution. In the numerical examples, we compare the stability of the time stepping scheme and the time iteration scheme in multi-time-step solutions. Then, through a sensitivity analysis of boundary points and initial points, we find that the solution accuracy does not increase with the number of points; therefore, the same accuracy can be achieved with an appropriately small number of points, reducing the amount of computation.

The rest of this paper is organized as follows. In Section 1, we describe the neural network combined with the Runge-Kutta method and automatic differentiation in detail. Section 2 presents the time iteration scheme based on Runge-Kutta multi-scale neural networks. We then present several numerical experiments, including the Convection-Diffusion equation and the Burgers equation, in Section 3. Finally, we conclude with the key ideas raised in this work.

1 Preliminaries

In this section, we introduce in detail the neural network that integrates the Runge-Kutta[10] method to solve unsteady partial differential equations. Automatic differentiation is briefly introduced, and its computational steps are illustrated by calculating the derivatives of the output of a simple structured neural network with respect to the input.

1.1 Fusion of Neural Networks and Runge-Kutta Method

To present the key idea clearly, we consider a class of unsteady partial differential equations defined on a bounded set $\Omega \subset \mathbb{R}^n$:

$$
\begin{cases}
u_t = \mathcal{N}[u(x,t)], & (x,t)\in\Omega\times(0,T]\\
u(x,0) = u_0(x), & x\in\Omega\\
u(x,t) = h(x,t), & (x,t)\in\partial\Omega\times[0,T]
\end{cases}
\qquad(1)
$$

where $x$ is the space vector and $\mathcal{N}$ is the differential operator acting on the function $u$. We discretize equation (1) in space and then integrate it over the interval $[t_n, t_{n+1}]$ to get

$$u^{n+1}-u^{n} = \int_{t_n}^{t_{n+1}} \mathcal{N}[u(x_k,t)]\,\mathrm{d}t,\qquad k=1,2,\ldots,N_0,$$

where $t_n = n\Delta t$. We use the q-order implicit Runge-Kutta method to approximate the integral on the right-hand side:

$$
\begin{aligned}
u^{n+1} &= u^{n} + \Delta t\sum_{j=1}^{q} b_j\,\mathcal{N}[u^{n+c_j}],\\
u^{n+c_i} &= u^{n} + \Delta t\sum_{j=1}^{q} a_{ij}\,\mathcal{N}[u^{n+c_j}],\qquad i=1,2,\ldots,q,
\end{aligned}
\qquad(2)
$$

where $u^{n+c_i}(x) = u(t_n + c_i\Delta t, x)$ and $0 \le a_{ij}, b_j, c_j \le 1$. As shown in Fig 1, we give the fusion frame diagram of the neural network and the Runge-Kutta method for $x = (x_1, x_2)\in\mathbb{R}^2$ and $q = 5$.

Fig 1 Fusion frame diagram of neural networks and Runge-Kutta method

From equation (2), we can obtain the indirect labels of the output layer of the neural network:

$$
u^{n}_{q+1} := u^{n+1} - \Delta t\sum_{j=1}^{q} b_j\,\mathcal{N}[u^{n+c_j}],\qquad
u^{n}_{i} := u^{n+c_i} - \Delta t\sum_{j=1}^{q} a_{ij}\,\mathcal{N}[u^{n+c_j}],\quad i=1,2,\ldots,q.
$$

In the following, we establish the loss function for the n-th time layer:

$$SSE_{0,n} = \sum_{k=1}^{N_0}\sum_{i=1}^{q+1}\left|u^{n}_{i}(x_k)-u^{n}(x_k)\right|^2,$$

where $u^{n}(x_k)$ is the known function value at the n-th time step. Similarly, we set the loss function on the boundary:

$$SSE_{b,n} = \sum_{k=1}^{N_0}\sum_{i=1}^{q+1}\left|u^{n+c_i}(x_k)-h(x_k,\,t_n+c_i\Delta t)\right|^2,$$

where $u^{n+c_{q+1}} := u^{n+1}$. Then the total loss function for equation (1) consists of the above two parts:

$$Loss_n = SSE_{0,n} + SSE_{b,n}.$$
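As a concrete illustration of how the indirect labels and $SSE_{0,n}$ above can be assembled from the network outputs, here is a minimal NumPy sketch. The array layout, the function name, and the assumption that the operator values $\mathcal{N}[u^{n+c_j}]$ are already available (e.g., obtained by automatic differentiation) are our illustrative choices, not the paper's implementation.

```python
import numpy as np

def sse_0n(U, NU, A, b, dt, u_n):
    """Per-time-step data loss SSE_{0,n} built from the indirect labels of equation (2).

    U   : (N0, q+1) network outputs [u^{n+c_1}, ..., u^{n+c_q}, u^{n+1}] at the points x_k
    NU  : (N0, q)   operator values N[u^{n+c_j}](x_k)
    A, b: implicit Runge-Kutta coefficients a_ij with shape (q, q) and b_j with shape (q,)
    dt  : time step size
    u_n : (N0,) known function values u^n(x_k)
    """
    q = NU.shape[1]
    # indirect labels u^n_i = u^{n+c_i} - dt * sum_j a_ij N[u^{n+c_j}],  i = 1..q
    U_stage = U[:, :q] - dt * NU @ A.T
    # indirect label u^n_{q+1} = u^{n+1} - dt * sum_j b_j N[u^{n+c_j}]
    U_last = U[:, q] - dt * NU @ b
    labels = np.concatenate([U_stage, U_last[:, None]], axis=1)   # (N0, q+1)
    return np.sum((labels - u_n[:, None]) ** 2)
```

With this layout, every column of `labels` should collapse onto the known values $u^n(x_k)$ once the network satisfies the Runge-Kutta relations, which is exactly what minimizing $SSE_{0,n}$ enforces.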

1.2 Automatic Differentiation

PINNs embed the residual of the equation into the loss function of the neural network for the first time, which provides an effective means to solve partial differential equations numerically. By calculating the derivative of the network output with respect to the input, we can obtain the residual of the equation. The numerical solution $u_{ANN}$ constructed by the neural network has a specific functional expression, so its derivative can be calculated by the finite difference method, symbolic differentiation, or automatic differentiation. However, the finite difference method is limited by truncation error and method error, and symbolic differentiation is computationally expensive and time-consuming. Automatic differentiation can overcome the limitations of both, so this paper uses the automatic differentiation technique to calculate the derivative.

Automatic differentiation (AD) computes derivatives using the chain rule. The AD calculation process can be divided into two steps: computing the function value in the forward mode and computing the derivative value in the reverse mode. We illustrate AD by computing the derivative of the output of a multi-layer feedforward neural network with respect to its input. The network consists of an input layer with two neurons $x_1$ and $x_2$, a hidden layer with one neuron, and an output layer with one neuron $y$:

$$y = 2\tanh(0.1x_1 - 0.3x_2 + 0.6) - 0.3.$$

The calculation of the derivative of $y$ with respect to $(x_1, x_2)$ at the point $(2, 3)$ is shown in Table 1.

Table 1 Automatic differentiation derivation process (with $z = 0.1x_1 - 0.3x_2 + 0.6$ and $\hat{z} = \tanh(z)$)

Forward pass:
    $x_1 = 2$
    $x_2 = 3$
    $z = 0.1x_1 - 0.3x_2 + 0.6 = -0.1$
    $\hat{z} = \tanh(z) \approx -0.0997$
    $y = 2\hat{z} - 0.3 \approx -0.4993$

Backward pass:
    $\partial y/\partial y = 1$
    $\partial y/\partial \hat{z} = \partial(2\hat{z}-0.3)/\partial \hat{z} = 2$
    $\partial y/\partial z = (\partial y/\partial \hat{z})\,\mathrm{sech}^2(z) \approx 1.980$
    $\partial y/\partial x_1 = (\partial y/\partial z)\,(\partial z/\partial x_1) = (\partial y/\partial z)\cdot 0.1 \approx 0.198$
    $\partial y/\partial x_2 = (\partial y/\partial z)\,(\partial z/\partial x_2) = (\partial y/\partial z)\cdot(-0.3) \approx -0.594$
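To make the forward and backward passes of Table 1 concrete, the following small Python sketch repeats the same computation by hand with the chain rule. The variable names are ours, and the printed values reproduce Table 1 up to rounding.

```python
import math

# forward pass: evaluate y = 2*tanh(0.1*x1 - 0.3*x2 + 0.6) - 0.3 at (x1, x2) = (2, 3)
x1, x2 = 2.0, 3.0
z = 0.1 * x1 - 0.3 * x2 + 0.6        # -0.1
z_hat = math.tanh(z)                  # ~ -0.0997
y = 2.0 * z_hat - 0.3                 # ~ -0.4993

# backward pass: propagate dy/d(.) from the output back to the inputs via the chain rule
dy_dy = 1.0
dy_dzhat = 2.0 * dy_dy                # since y = 2*z_hat - 0.3
dy_dz = dy_dzhat * (1.0 - z_hat**2)   # d tanh(z)/dz = sech^2(z) = 1 - tanh(z)^2, ~ 1.980
dy_dx1 = dy_dz * 0.1                  # ~ 0.198
dy_dx2 = dy_dz * (-0.3)               # ~ -0.594

print(y, dy_dx1, dy_dx2)
```

In a deep learning framework the same two passes are performed automatically over the computational graph of the network, which is how PINN-type methods obtain the derivatives needed for the residual of the equation.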

Iri et al[11] give a proof of the computational complexity of AD, which shows that the cost of computing the gradient is at most 5 times the cost of evaluating the function, regardless of the dimension of the independent variable. For an optimization problem with a 100-dimensional smooth objective function, if symbolic or numerical differentiation is used, computing the gradient costs at least 100 times as much as evaluating the function value, not to mention the use of second-order derivative information or higher-dimensional functions. If AD is used, regardless of the dimension of the independent variable, the cost of computing the gradient is at most about 4 times the cost of evaluating the function, with machine precision.

2 Multi-Scale Neural Networks Integrating Runge-Kutta Method

According to the universal approximation theorem[12], as long as a hidden layer contains enough neurons, a neural network can approximate a continuous function defined on a compact set with arbitrary accuracy. It should be noted that neural networks obey the frequency principle[13] when fitting a function: they tend to fit the low-frequency part of the objective function first. Based on this understanding of the frequency principle, and in order to overcome slow learning of the high-frequency components, we can make some special designs for the neural network. From the point of view of function space, the basis functions of a neural network are composed of activation functions, and basis functions of different scales span a feasible function space that can approximate the objective function faster. The same basis function can be scaled to generate basis functions of different scales. To describe the multi-scale neural network conveniently, we only divide the neurons of the first hidden layer into $h$ parts, as shown in Fig 2. The input of the i-th part is $ix$, and the corresponding output is $\sigma(iwx + b)$. The multi-scale neural network with $L-1$ hidden layers is then defined as

$$u_{ANN} = W^{L}\,\sigma\!\left(W^{L-1}\cdots\sigma\!\left(W^{1}\,\sigma\!\left(M\odot W^{0}x + b^{0}\right)+b^{1}\right)\cdots+b^{L-1}\right),$$

where $\sigma$ is the activation function, $\odot$ is the Hadamard product, and the m-scaling factor is

$$M = m\,(1,\ldots,1,\;2,\ldots,2,\;\ldots,\;i,\ldots,i,\;\ldots,\;h,\ldots,h)^{\top}.$$

Fig 2 Multi-scale neural network example with h = 3
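For concreteness, a minimal PyTorch sketch of such a multi-scale first layer might look as follows. The widths, depth, values of h and m, the output size, and the choice of tanh as $\sigma$ are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class MultiScaleNet(nn.Module):
    """First hidden layer split into h scale groups; its pre-activation is multiplied
    elementwise by the m-scaling factor M = m * (1,...,1, 2,...,2, ..., h,...,h)."""

    def __init__(self, in_dim=3, width=60, h=3, m=1.0, depth=3, out_dim=6):
        super().__init__()
        assert width % h == 0
        scales = torch.arange(1, h + 1, dtype=torch.float32).repeat_interleave(width // h)
        self.register_buffer("M", m * scales)              # the m-scaling factor
        self.W0 = nn.Linear(in_dim, width, bias=False)     # W^0
        self.b0 = nn.Parameter(torch.zeros(width))         # b^0
        self.hidden = nn.ModuleList(nn.Linear(width, width) for _ in range(depth - 1))
        self.out = nn.Linear(width, out_dim)               # e.g. out_dim = q + 1 stage values

    def forward(self, x):
        z = torch.tanh(self.M * self.W0(x) + self.b0)      # sigma(M ⊙ W^0 x + b^0)
        for layer in self.hidden:
            z = torch.tanh(layer(z))
        return self.out(z)
```

Calling `MultiScaleNet()(torch.rand(10, 3))` then returns one set of stage outputs per input point, so a single set of parameters serves all scale groups.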

The m-scaling factor of the multi-scale neural network not only speeds up the convergence of the loss function but also improves the accuracy of the solution. In Fig 3, we present a schematic diagram of the framework of the time iteration scheme based on multi-scale neural networks. The input-output mapping of the multi-scale neural network is

$$(x, t_n)\ \mapsto\ (u^{n+c_1},\,u^{n+c_2},\,\ldots,\,u^{n+c_q},\,u^{n+1}).$$

Fig 3 Time iteration scheme of neural networks combined with Runge-Kutta method

In Fig 4, we give a schematic diagram of the time stepping scheme that fuses a neural network with the implicit Runge-Kutta (IRK) method. It is worth noting that in that framework the iterative solution of each time step requires its own neural network, so the iterative solution over multiple time steps needs the parameters of multiple neural networks to be optimized. The time iteration scheme instead makes full use of the approximation ability of the neural network by adjusting the input-output mapping of the multi-scale neural network, so that the iterative solutions of multiple time steps share the same parameters. At the n-th time step, we set the loss function as

$$loss_n = \sum_{k=1}^{N_0}\sum_{i=1}^{q+1}\left|u^{n}_{i}(x_k)-u^{n,i}_{exact}(x_k)\right|^2.$$

We set the loss function for the boundary conditions as

$$loss_b = \sum_{n=1}^{N}\sum_{k=1}^{N_b}\sum_{i=1}^{q+1}\left|u^{n}_{i}(x_k)-h^{n,i}(x_k)\right|^2.$$

Similarly, we set the loss function for the initial condition as

$$loss_0 = \sum_{k=1}^{N_b}\sum_{i=1}^{q+1}\left|u^{0}_{i}(x_k)-u^{0,i}(x_k)\right|^2.$$

So the total loss function can be written as

$$loss = \sum_{n=1}^{N} loss_n + loss_b + loss_0.$$

The multi-scale neural network algorithm incorporating the Runge-Kutta method requires only a small number of initial value data pairs; it constructs the time iteration scheme by the IRK method and then builds the total loss function for multiple time steps, so that the function value at any time in the time domain is obtained with a single optimization.

Fig 4 Time stepping scheme of neural networks combined with implicit Runge-Kutta method

3 Numerical Experiments

In this section, in order to verify the effectiveness of the time iteration scheme based on multi-scale neural networks, we present numerical experiments on the Convection-Diffusion equation and the Burgers equation, and define the relative L2 error as

$$Error(L^2)=\frac{\left(\sum_{i=1}^{N}\left(u_{exact}(x_i)-u_{pred}(x_i)\right)^2\right)^{1/2}}{\left(\sum_{i=1}^{N}\left(u_{exact}(x_i)\right)^2\right)^{1/2}}.$$
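A small NumPy helper of the following form can evaluate this relative L2 error on a set of test points; the function name is our illustrative choice.

```python
import numpy as np

def relative_l2_error(u_exact, u_pred):
    """Relative L2 error between exact and predicted values at the test points."""
    return np.linalg.norm(u_exact - u_pred) / np.linalg.norm(u_exact)
```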

3.1 Convection-Diffusion Equation

The Convection-Diffusion equation[14] is widely used to describe fluid models and many physical phenomena. The unknown function $u$ can express the concentration of a contaminant transported through a fluid; it can also represent the temperature of a fluid moving along a hot wall, or the electron concentration in a semiconductor device model. To verify the stability of the time iteration scheme, we consider a two-dimensional Convection-Diffusion equation

$$u_t = \varepsilon\Delta u - \beta\cdot\nabla u,\qquad (x,t)\in\Omega\times(0,T],\qquad(3)$$

where the solution domain of the equation is $\Omega=[1,3]^2$, $T=1$, and $\beta=(a,b)$. The initial and boundary value conditions for this problem are given by the following true solution.
