Format: PPTX, 43 pages, 3.25 MB
并行程序设计导论.pptx (An Introduction to Parallel Programming)

An Introduction to Parallel Programming, Peter Pacheco
Chapter 1: Why Parallel Computing?
Copyright 2010, Elsevier Inc. All rights reserved.

Roadmap
- Why we need ever-increasing performance.
- Why we're building parallel systems.
- Why we need to write parallel programs.
- How do we write parallel programs?
- What we'll be doing.
- Concurrent, parallel, distributed!

Changing times
- From 1986 to 2002, microprocessors were speeding along like a rocket, increasing in performance an average of 50% per year.
- Since then, it's dropped to about a 20% increase per year.

An intelligent solution
- Instead of designing and building faster microprocessors, put multiple processors on a single integrated circuit.

Now it's up to the programmers
- Adding more processors doesn't help much if programmers aren't aware of them, or don't know how to use them.
- Serial programs don't benefit from this approach (in most cases).

Why we need ever-increasing performance
- Computational power is increasing, but so are our computation problems and needs.
- Problems we never dreamed of have been solved because of past increases, such as decoding the human genome.
- More complex problems are still waiting to be solved: climate modeling, protein folding, drug discovery, energy research, data analysis.

Why we're building parallel systems
- Up to now, performance increases have been attributable to the increasing density of transistors.
- But there are inherent problems.

A little physics lesson
- Smaller transistors = faster processors.
- Faster processors = increased power consumption.
- Increased power consumption = increased heat.
- Increased heat = unreliable processors.

Solution
- Move away from single-core systems to multicore processors ("core" = central processing unit, CPU).
- Introducing parallelism!

Why we need to write parallel programs
- Running multiple instances of a serial program often isn't very useful: think of running multiple instances of your favorite game.
- What you really want is for it to run faster.

Approaches to the serial problem
- Rewrite serial programs so that they're parallel.
- Write translation programs that automatically convert serial programs into parallel programs. This is very difficult to do, and success has been limited.

More problems
- Some coding constructs can be recognized by an automatic program generator and converted to a parallel construct.
- However, it's likely that the result will be a very inefficient program.
- Sometimes the best parallel solution is to step back and devise an entirely new algorithm.

Example
- Compute n values and add them together.
- Serial solution: (code from the slide not included in this extract)
Example (cont.)
- We have p cores, with p much smaller than n.
- Each core performs a partial sum of approximately n/p values.
- Each core uses its own private variables and executes this block of code independently of the other cores.
Example (cont.)
- After each core completes execution of the code, its private variable my_sum contains the sum of the values computed by its calls to Compute_next_value.
- E.g., with 8 cores and n = 24, suppose the calls to Compute_next_value return:
  1, 4, 3, 9,  2, 8, 5, 1,  1, 6, 2, 7,  2, 5, 0, 4,  1, 8, 6, 5,  1, 2, 3, 9

Example (cont.)
- Once all the cores are done computing their private my_sum, they form a global sum by sending their results to a designated "master" core, which adds them to produce the final result.

Example (cont.)
(code from the slide not included in this extract)
Example (cont.)

  Core:    0   1   2   3   4   5   6   7
  my_sum:  8  19   7  15   7  13  12  14

Global sum: 8 + 19 + 7 + 15 + 7 + 13 + 12 + 14 = 95

  Core:    0   1   2   3   4   5   6   7
  my_sum: 95  19   7  15   7  13  12  14

But wait! There's a much better way to compute the global sum.

Better parallel algorithm
- Don't make the master core do all the work; share it among the other cores.
- Pair the cores so that core 0 adds its result with core 1's result, core 2 adds its result with core 3's result, etc.
- Work with odd and even numbered pairs of cores.

Better parallel algorithm (cont.)
- Repeat the process now with only the evenly ranked cores: core 0 adds the result from core 2, core 4 adds the result from core 6, etc.
- Now cores divisible by 4 repeat the process, and so forth, until core 0 has the final result.

Multiple cores forming a global sum
(figure from the slide not included in this extract)

Analysis
- In the first example, the master core performs 7 receives and 7 additions.
- In the second example, the master core performs 3 receives and 3 additions.
- The improvement is more than a factor of 2!

Analysis (cont.)
- The difference is more dramatic with a larger number of cores.
- If we have 1000 cores, the first example would require the master to perform 999 receives and 999 additions, while the second would require only 10 receives and 10 additions.
- That's an improvement of almost a factor of 100!

How do we write parallel programs?
- Task parallelism: partition the various tasks carried out in solving the problem among the cores.
- Data parallelism: partition the data used in solving the problem among the cores; each core carries out similar operations on its part of the data.

Professor P
- 15 questions, 300 exams.

Professor P's grading assistants
- TA#1, TA#2, TA#3.

Division of work: data parallelism
- TA#1: 100 exams; TA#2: 100 exams; TA#3: 100 exams.

Division of work: task parallelism
- TA#1: questions 1-5; TA#2: questions 6-10; TA#3: questions 11-15.

Division of work: data parallelism
(figure from the slide not included in this extract)

Division of work: task parallelism
- In the global sum example, the tasks are: 1) receiving and 2) addition.

Coordination
- Cores usually need to coordinate their work.
- Communication: one or more cores send their current partial sums to another core.
- Load balancing: share the work evenly among the cores so that no single core is heavily loaded.
- Synchronization: because each core works at its own pace, make sure cores do not get too far ahead of the rest.

What we'll be doing
- Learning to write programs that are explicitly parallel, using the C language.
- Using three different extensions to C:
  - Message-Passing Interface (MPI)
  - POSIX Threads (Pthreads)
  - OpenMP

Types of parallel systems
- Shared-memory: the cores can share access to the computer's memory; coordinate the cores by having them examine and update shared memory locations.
- Distributed-memory: each core has its own, private memory; the cores must communicate explicitly by sending messages across a network.
(figure from the slide comparing the two, not included in this extract)

Terminology
- Concurrent computing: a program is one in which multiple tasks can be in progress at any instant.
- Parallel computing: a program is one in which multiple tasks cooperate closely to solve a problem.
- Distributed computing: a program may need to cooperate with other programs to solve a problem.

Concluding Remarks (1)
- The laws of physics have brought us to the doorstep of multicore technology.
- Serial programs typically don't benefit from multiple cores.
- Automatic parallel program generation from serial program code isn't the most efficient approach to getting high performance from multicore computers.

Concluding Remarks (2)
- Learning to write parallel programs involves learning how to coordinate the cores.
- Parallel programs are usually very complex, and therefore require sound programming techniques and development.
