Journal of Xinjiang University (Natural Science Edition in Chinese and English), Vol. 40, No. 2, Mar. 2023

Multi-Scale Neural Networks Based on Runge-Kutta Method for Solving Unsteady Partial Differential Equations

CHEN Zebin, FENG Xinlong
(School of Mathematics and System Sciences, Xinjiang University, Urumqi Xinjiang 830017, China)

Abstract: This paper proposes a multi-scale neural networks method based on the Runge-Kutta method for solving unsteady partial differential equations. The method uses the q-order Runge-Kutta method to construct the time iteration scheme and further establishes the total loss function over multiple time steps, which realizes parameter sharing of the neural networks across multiple time steps and makes it possible to predict the function value at any moment in the time domain. Besides, the m-scaling factor is adopted to speed up the convergence of the loss function and to improve the accuracy of the numerical solution. Finally, several numerical experiments are presented to demonstrate the effectiveness of the proposed method.

Key words: unsteady partial differential equations; q-order Runge-Kutta method; multi-scale neural networks; m-scaling factor; high accuracy

DOI: 10.13568/j.cnki.651094.651316.2022.06.25.0001    CLC number: O175    Document code: A    Article ID: 2096-7675(2023)02-0142-08

Citation format: CHEN Zebin, FENG Xinlong. Multi-scale neural networks based on Runge-Kutta method for solving unsteady partial differential equations[J]. Journal of Xinjiang University (Natural Science Edition in Chinese and English), 2023, 40(2): 142-149.

Received Date: 2022-06-25
Foundation Item: This work was supported by the Open Project of the Key Laboratory of Xinjiang "Machine Learning for Incompressible Magnetohydrodynamics Models" (2020D04002).
Biography: CHEN Zebin (1995-), male, master student, research field: deep learning for solving partial differential equations.
Corresponding author: FENG Xinlong (1976-), male, professor, research field: numerical solutions of partial differential equations, E-mail: fxl-.

0 Introduction

Deep learning has achieved satisfactory results in search technology, natural language processing, image processing, recommendation systems, personalization technology, etc. In recent years, deep learning has been successfully applied to solving partial differential equations and has been promoted extensively. Compared with the finite element method[1] and the finite difference method[2], deep learning, as a meshless method, can mitigate the curse of dimensionality when solving high-dimensional partial differential equations, so it is more convenient for establishing a solution framework for high-dimensional partial differential equations. E et al.[3] proposed the Deep-Ritz method based on deep neural networks for numerically solving variational problems, which is insensitive to the dimension of the problem and can be used to solve high-dimensional problems. Physics-Informed Neural Networks (PINNs)[4] used the automatic differentiation technique[5] for the first time to embed the residual of the equation into the loss function of the neural networks, and obtained the numerical solution of the equation by minimizing the loss function. PINNs are a new numerical method for solving partial differential equations that makes full use of the physical information contained in the PDEs. PINNs have attracted the attention of many scholars, and the literature[6-7] shows theoretical convergence of PINNs for certain classes of PDEs. Multi-scale DNN[8] proposed the idea of radial scaling in the frequency domain, which gives the network the ability to approximate high-frequency and high-dimensional functions and accelerates the convergence of the loss function.

Recently, research on deep learning algorithms for nonlinear unsteady partial differential equations has attracted the attention of many scholars. The time-discrete model of PINNs can still guarantee the stability and high precision of numerical solutions when using large time steps, but it incurs high computational costs. DeLISA added the physical information of the governing equations into the time iteration scheme and introduced time-dependent input to achieve continuous-time prediction without a large number of interior points[9]. On this basis, this paper proposes a multi-scale neural networks algorithm integrating the Runge-Kutta method. On the one hand, the algorithm constructs a time iteration scheme to build the total loss function over multiple time steps, thereby sharing the neural networks parameters across multiple time steps to save computational costs; after training, it can predict the function value at any time in the time domain. On the other hand, the algorithm can not only speed up the convergence of the loss function but also improve the accuracy of the solution. In the numerical examples, we compare the stability of the time stepping scheme and the time iteration scheme in multi-time-step solutions. Then, through a sensitivity analysis of boundary points and initial points, we find that the solution accuracy does not increase with the number of points. Therefore, the same accuracy can still be achieved by selecting an appropriately small number of points, thereby reducing the amount of computation.

The rest of this paper is organized as follows. In section 1, we describe the neural networks combined with the Runge-Kutta method and automatic differentiation in detail. Section 2 presents the time iteration scheme based on Runge-Kutta multi-scale neural networks. We then present several numerical experiments, including the Convection-Diffusion equation and the Burgers equation, in section 3. Finally, we conclude with the key ideas raised in this work.

1 Preliminaries

In this section, we introduce in detail the neural networks that integrate the Runge-Kutta[10] method to solve unsteady partial differential equations. Automatic differentiation is briefly introduced, and its computational steps are illustrated by calculating the derivatives of the output of a simply structured neural network with respect to the input.

1.1 Fusion of Neural Networks and Runge-Kutta Method

To understand the key idea clearly, we consider a class of unsteady partial differential equations defined on a bounded set $\Omega \subset \mathbb{R}^n$:

$$
\begin{cases}
u_t = \mathcal{N}(u(x,t)), & (x,t) \in \Omega \times (0,T], \\
u(x,0) = u_0(x), & x \in \Omega, \\
u(x,t) = h(x,t), & (x,t) \in \partial\Omega \times [0,T],
\end{cases}
\tag{1}
$$

where $x$ represents the space vector and $\mathcal{N}$ represents the differential operator with respect to the function $u$. We discretize equation (1) in space and then integrate it over the interval $[t_n, t_{n+1}]$ to get

$$
u^{n+1} - u^n = \int_{t_n}^{t_{n+1}} \mathcal{N}[u(x_k,t)]\,\mathrm{d}t, \quad k = 1,2,\ldots,N_0,
$$

where $t_n = n\Delta t$. We use the $q$-order implicit Runge-Kutta method to approximate the integral term on the right-hand side of the above equation:

$$
\begin{aligned}
u^{n+1} &= u^n + \Delta t \sum_{j=1}^{q} b_j \mathcal{N}[u^{n+c_j}], \\
u^{n+c_i} &= u^n + \Delta t \sum_{j=1}^{q} a_{ij} \mathcal{N}[u^{n+c_j}], \quad i = 1,2,\ldots,q,
\end{aligned}
\tag{2}
$$

where $u^{n+c_i}(x) = u(t_n + c_i \Delta t, x)$ and $0 \le a_{ij}, b_j, c_j \le 1$. As shown in Fig 1, we give the fusion frame diagram of the neural networks and the Runge-Kutta method when $x = (x_1, x_2) \in \mathbb{R}^2$ and $q = 5$.

Fig 1 Fusion frame diagram of neural networks and Runge-Kutta method

From equation (2), we can derive the indirect labels of the output layer of the neural networks:

$$
u^n_{q+1} := u^{n+1} - \Delta t \sum_{j=1}^{q} b_j \mathcal{N}[u^{n+c_j}], \qquad
u^n_i := u^{n+c_i} - \Delta t \sum_{j=1}^{q} a_{ij} \mathcal{N}[u^{n+c_j}], \quad i = 1,2,\ldots,q.
$$

In the following, we establish the loss function for the $n$-th time layer:

$$
SSE_{0,n} = \sum_{k=1}^{N_0} \sum_{i=1}^{q+1} \left| u^n_i - u^n(x_k) \right|^2,
$$

where $u^n(x_k)$ is the known function value at the $n$-th time step. Similarly, we set the loss function on the boundary:

$$
SSE_{b,n} = \sum_{k=1}^{N_0} \sum_{i=1}^{q+1} \left| u^{n+c_i} - h(x_k, t_n + c_i \Delta t) \right|^2,
$$

where $u^{n+c_{q+1}} = u^{n+1}$. Then the total loss function for equation (1) consists of the above two parts:

$$
Loss_n = SSE_{0,n} + SSE_{b,n}.
$$
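To make the construction concrete, the following is a minimal sketch (not the paper's code) of how the indirect labels of equation (2) and the loss $SSE_{0,n}$ could be assembled. It assumes PyTorch, a 2-stage Gauss-Legendre tableau rather than the $q = 5$ of Fig 1, and a toy model operator $\mathcal{N}[u] = \nu u_{xx}$ in one space dimension; the names `net`, `operator`, and `sse_0n`, and all sizes, are illustrative choices.

```python
import torch

q, dt, nu = 2, 0.1, 0.01
# Butcher tableau of the 2-stage Gauss-Legendre IRK method (order 4).
A = torch.tensor([[0.25, 0.25 - 3**0.5 / 6],
                  [0.25 + 3**0.5 / 6, 0.25]])
b = torch.tensor([0.5, 0.5])

# x -> (u^{n+c_1}, ..., u^{n+c_q}, u^{n+1}); widths are illustrative.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, q + 1))

def operator(u_stage, x):
    # Toy operator N[u] = nu * u_xx, computed by automatic differentiation.
    u_x = torch.autograd.grad(u_stage.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return nu * u_xx.squeeze(-1)

def sse_0n(x, u_n):
    # SSE_{0,n}: all q+1 indirect labels must match the known values u^n(x_k).
    x = x.requires_grad_(True)
    u = net(x)                                            # shape (N0, q+1)
    N_u = torch.stack([operator(u[:, j], x) for j in range(q)], dim=1)
    labels = torch.cat([u[:, :q] - dt * N_u @ A.T,        # u^n_i, i = 1..q
                        (u[:, q] - dt * N_u @ b).unsqueeze(1)], dim=1)
    return ((labels - u_n) ** 2).sum()

x_k = torch.rand(100, 1)                  # N0 interior points
u_n = torch.sin(torch.pi * x_k)           # known values at t_n (toy data)
loss = sse_0n(x_k, u_n)
loss.backward()
```

Because every indirect label is an algebraic combination of the network's stage outputs and $\mathcal{N}$ applied to them, one loss evaluation couples all $q+1$ outputs, which is what lets the implicit scheme be trained without solving a nonlinear system at each step.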

1.2 Automatic Differentiation

PINNs embedded the residual of the equation into the loss function of the neural networks for the first time, which provides an effective means of numerically solving partial differential equations. By calculating the derivative of the networks' output with respect to the input, we can obtain the residuals of the equation. The numerical solution $u_{ANN}$ constructed by the neural networks has a specific functional expression, so the derivative can be calculated using the finite difference method, symbolic differentiation, or automatic differentiation. However, the limitations of the finite difference method lie in truncation error and method error, while symbolic differentiation is computationally expensive and time-consuming. Automatic differentiation can overcome the limitations of the above two methods, so this paper uses the automatic differentiation technique to calculate the derivatives.

Automatic differentiation (AD) computes derivatives using the chain rule. The AD calculation process can be divided into two steps: calculating the function value in the forward mode and calculating the derivative value in the reverse mode. We illustrate AD by computing the derivative of the output of a multi-layer feedforward neural network with respect to its input. The network consists of an input layer with two neurons $x_1$ and $x_2$, a hidden layer with one neuron, and an output layer with one neuron $y$:

$$
y = 2\tanh(0.1x_1 - 0.3x_2 + 0.6) - 0.3.
$$

The calculation of the derivative of $y$ with respect to $(x_1, x_2)$ at $(2, 3)$ is shown in Table 1.

Table 1 Automatic differentiation derivation process

Forward pass:
$x_1 = 2$, $x_2 = 3$
$\bar z = 0.1x_1 - 0.3x_2 + 0.6 = -0.1$
$z = \tanh(\bar z) \approx -0.0997$
$y = 2z - 0.3 \approx -0.4993$

Backward pass:
$\partial y/\partial y = 1$
$\partial y/\partial z = \partial(2z - 0.3)/\partial z = 2$
$\partial y/\partial \bar z = (\partial y/\partial z)\,\mathrm{sech}^2(\bar z) \approx 1.9803$
$\partial y/\partial x_1 = (\partial y/\partial \bar z) \cdot 0.1 \approx 0.1980$
$\partial y/\partial x_2 = (\partial y/\partial \bar z) \cdot (-0.3) \approx -0.5941$

Iri et al.[11] gave a proof of the computational complexity of AD, which shows that the cost of computing the gradient is at most 5 times the cost of evaluating the function, regardless of the dimension of the independent variable. For an optimization problem with a 100-dimensional smooth objective function, if we use symbolic differentiation or numerical differentiation to compute the gradient, then the gradient costs at least 100 times as much to compute as the function value, to say nothing of using second-order derivative information or higher-dimensional functions. If AD is used, then regardless of the dimension of the independent variable, the cost of computing the gradient is at most 4 times the cost of evaluating the function, at machine precision.
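As a quick illustration (not from the paper), the hand computation in Table 1 can be reproduced by any reverse-mode AD framework; the sketch below assumes PyTorch.

```python
import torch

# Table 1's example: y = 2*tanh(0.1*x1 - 0.3*x2 + 0.6) - 0.3 at (x1, x2) = (2, 3).
x1 = torch.tensor(2.0, requires_grad=True)
x2 = torch.tensor(3.0, requires_grad=True)
y = 2 * torch.tanh(0.1 * x1 - 0.3 * x2 + 0.6) - 0.3

y.backward()  # reverse mode: one pass yields both partial derivatives
print(y.item(), x1.grad.item(), x2.grad.item())
# approximately -0.4993, 0.1980, -0.5941, matching Table 1
```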

2 Multi-Scale Neural Networks Integrating Runge-Kutta Method

According to the universal approximation theorem[12], as long as a hidden layer contains enough neurons, a neural network can approximate a continuous function defined on a compact set with arbitrary accuracy. It should be noted that neural networks obey the frequency principle[13] when fitting a function: they tend to fit the low-frequency part of the objective function preferentially. Based on this understanding of the frequency principle, and in order to address the slow learning of high-frequency components, we can make some special designs for the neural networks. From the point of view of function space, the basis functions of a neural network are composed of activation functions. Basis functions of different scales construct a feasible function space, which can approximate the objective function faster, and the same basis function can be scaled to generate basis functions of different scales. To describe the multi-scale neural network conveniently, we divide the neurons into $h$ parts only in the first hidden layer, as shown in Fig 2. The input of the $i$-th part is $ix$, and the corresponding output is $\sigma(iwx + b)$. Then the multi-scale neural network with $L-1$ hidden layers is defined as

$$
u_{ANN} = W^L\,\sigma\!\big(W^{L-1}\,\sigma(\cdots\,\sigma(W^1\,\sigma(M \odot (W^0 x) + b^0) + b^1)\cdots) + b^{L-1}\big),
$$

where $\odot$ is the Hadamard product and the m-scaling factor is $M = m\,(1,\ldots,1,\,2,\ldots,2,\,\ldots,\,i,\ldots,i,\,\ldots,\,h,\ldots,h)^{\mathsf T}$.

Fig 2 Multi-scale neural network example of h = 3
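The following is a minimal sketch of this first-layer scaling, assuming PyTorch; the class name `MscaleNet`, the layer widths, and the depth (the $L = 2$ case of the formula above) are illustrative choices, not the paper's configuration.

```python
import torch

class MscaleNet(torch.nn.Module):
    def __init__(self, d_in=2, width=60, h=3, m=1.0):
        super().__init__()
        assert width % h == 0
        # M = m*(1,...,1, 2,...,2, ..., h,...,h): the i-th part of the first
        # hidden layer sees its pre-activation scaled by the factor i.
        scales = torch.arange(1, h + 1).repeat_interleave(width // h).float()
        self.register_buffer("M", m * scales)
        self.W0 = torch.nn.Linear(d_in, width, bias=False)   # W^0
        self.b0 = torch.nn.Parameter(torch.zeros(width))     # b^0
        self.W1 = torch.nn.Linear(width, width)              # W^1, b^1
        self.out = torch.nn.Linear(width, 1)                 # W^2 (output layer)

    def forward(self, x):
        z = torch.tanh(self.M * self.W0(x) + self.b0)  # sigma(M ⊙ (W^0 x) + b^0)
        z = torch.tanh(self.W1(z))
        return self.out(z)

net = MscaleNet()
u = net(torch.rand(10, 2))   # 10 sample points in R^2 (shapes illustrative)
```

The design choice is that only the first hidden layer is partitioned: scaling the pre-activations there is enough to populate the function space with basis functions of $h$ different frequencies, while the remaining layers stay standard.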

The m-scaling factor of the multi-scale neural networks can not only speed up the convergence of the loss function but also improve the accuracy of the solution. In Fig 3, we present a schematic diagram of the framework of the time iteration scheme based on multi-scale neural networks. The input-output mapping of the multi-scale neural network is

$$
(x, t_n) \mapsto (u^{n+c_1}, u^{n+c_2}, \ldots, u^{n+c_q}, u^{n+1}).
$$

Fig 3 Time iteration scheme of neural networks combined with Runge-Kutta method

In Fig 4, we give a schematic diagram of the time stepping scheme fusing the neural networks with the implicit Runge-Kutta (IRK) method. It is worth noting that in that framework the iterative solution of each time step requires its own neural network to fit it, so the iterative solution over multiple time steps needs to optimize the parameters of multiple neural networks. The time iteration scheme, by contrast, makes full use of the approximation ability of the neural networks by adjusting the input-output mapping relationship of the multi-scale neural networks, so that the iterative solutions of multiple time steps share the parameters. At the $n$-th time step, we set the loss function as follows:

$$
loss_n = \sum_{k=1}^{N_0} \sum_{i=1}^{q+1} \left| u^n_i(x_k) - u^{n,i}_{exact}(x_k) \right|^2.
$$

We set the loss function for the boundary conditions as follows:

$$
loss_b = \sum_{n=1}^{N} \sum_{k=1}^{N_b} \sum_{i=1}^{q+1} \left| u^n_i(x_k) - h^{n,i}(x_k) \right|^2.
$$

Similarly, we set the loss function for the initial condition as follows:

$$
loss_0 = \sum_{k=1}^{N_b} \sum_{i=1}^{q+1} \left| u^0_i(x_k) - u^{0,i}(x_k) \right|^2.
$$

So the total loss function can be written as

$$
loss = \sum_{n=1}^{N} loss_n + loss_b + loss_0.
$$

The multi-scale neural networks algorithm incorporating the Runge-Kutta method requires only a small number of initial-value data pairs, constructs its time iteration scheme by the IRK method, and then builds the total loss function over multiple time steps, so that the function value at any time in the time domain is obtained with only one optimization.

Fig 4 Time stepping scheme of neural networks combined with implicit Runge-Kutta method
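To make the shared-parameter idea concrete, here is a minimal sketch (assuming PyTorch; the architecture and sizes are illustrative) of the single network that serves every time step by taking $t_n$ as an extra input. This is what allows one optimization of the total loss above to yield stage values at any time level.

```python
import torch

q, d_space = 5, 2
# One network for all time steps: (x, t_n) -> (u^{n+c_1}, ..., u^{n+c_q}, u^{n+1}).
net = torch.nn.Sequential(
    torch.nn.Linear(d_space + 1, 60), torch.nn.Tanh(),
    torch.nn.Linear(60, 60), torch.nn.Tanh(),
    torch.nn.Linear(60, q + 1))

x = torch.rand(100, d_space)               # spatial points x_k
t_n = torch.full((100, 1), 0.3)            # any time level t_n in [0, T]
stages = net(torch.cat([x, t_n], dim=1))   # (100, q+1) stage values at t_n
```

In the time stepping scheme of Fig 4, by contrast, each step would train its own copy of `net`; conditioning one network on $t_n$ removes that per-step optimization cost.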

3 Numerical Experiments

In this section, in order to verify the effectiveness of the time iteration scheme algorithm based on multi-scale neural networks, we give numerical experiments solving the Convection-Diffusion equation and the Burgers equation respectively, and define the relative $L_2$ error as

$$
Error(L_2) = \frac{\left(\sum_{i=1}^{N} \big(u_{exact}(x_i) - u_{pred}(x_i)\big)^2\right)^{1/2}}{\left(\sum_{i=1}^{N} \big(u_{exact}(x_i)\big)^2\right)^{1/2}}.
$$
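For reference, this error measure is a direct ratio of two Euclidean norms; a one-line transcription, assuming NumPy and hypothetical array names `u_exact` and `u_pred`:

```python
import numpy as np

def relative_l2(u_exact: np.ndarray, u_pred: np.ndarray) -> float:
    """Relative L2 error as defined above."""
    return float(np.linalg.norm(u_exact - u_pred) / np.linalg.norm(u_exact))
```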

3.1 Convection-Diffusion Equation

The Convection-Diffusion equation[14] is widely used to describe fluid models and many physical phenomena. The unknown function $u$ can express the contaminant concentration transported through a fluid; it can also express the temperature of a fluid moving along a hot wall, or the electron concentration in a semiconductor device model. To verify the stability of the time iteration scheme, let us consider a two-dimensional Convection-Diffusion equation:

$$
u_t = \varepsilon \Delta u - \boldsymbol{\beta} \cdot \nabla u, \quad (x,t) \in \Omega \times (0,T],
\tag{3}
$$

where the solution region of the equation is defined as $\Omega = [1,3]^2$, $T = 1$, and $\boldsymbol{\beta} = (a,b)$. The initial and boundary value conditions for this problem are given by the following true solution
