Understanding Deep Learning in One Day (一天搞懂深度学习), National Taiwan University, Hung-yi Lee

Deep Learning Tutorial
Hung-yi Lee (李宏毅)

Deep learning attracts lots of attention. I believe you have seen lots of exciting results before. This talk focuses on the basic techniques. (Deep learning trends at Google; source: SIGMOD / Jeff Dean.)

Outline
- Lecture I: Introduction of Deep Learning
- Lecture II: Variants of Neural Network
- Lecture III: Beyond Supervised Learning

Lecture I: Introduction of Deep Learning
- Introduction of Deep Learning
- "Hello World" for Deep Learning
- Tips for Deep Learning

Machine Learning ≈ Looking for a Function
- Speech Recognition: audio → "How are you"
- Image Recognition: image → "cat"
- Playing Go: board position → "5-5" (next move)
- Dialogue System: "Hello" (what the user said) → "Hi" (system response)

Framework (Image Recognition as the example)
- A set of functions, i.e. the model: candidate functions map an image to "cat", "dog", "monkey", "snake", ...
- Training data (function inputs with labelled outputs such as "monkey", "cat", "dog") defines the goodness of a function f; a better function fits the training data better. This is supervised learning.
- Pick the "best" function f*, then use it at testing time: f*(new image) = "cat".

Three Steps for Deep Learning
- Step 1: define a set of functions
- Step 2: goodness of function
- Step 3: pick the best function

Neural Network
- A neuron is a simple function: a weighted sum of its inputs plus a bias, passed through an activation function such as the sigmoid.
- The slide's worked example: inputs 1 and -1 with weights 1 and -2 and bias 1 give z = 4, and sigmoid(4) ≈ 0.98.
- Different connections lead to different network structures; the neurons have different values of weights and biases.

Fully Connected Feedforward Network
- The slides propagate an input vector through two layers of sigmoid neurons to an output vector (the numbers in the slides are the intermediate activations).
- This is a function: input vector in, output vector out. Given a network structure, we define a function set.
- Input Layer → Hidden Layers (Layer 1, Layer 2, ..., Layer L) → Output Layer (y1, y2, ..., yM). "Deep" means many hidden layers.
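
To make the "network = function" view concrete, here is a minimal NumPy sketch of a fully connected feedforward pass with sigmoid activations; the 2-2-2 layer sizes and the weights are my own illustrative assumptions, not numbers taken from the deck.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def feedforward(x, weights, biases):
        # Propagate the input through each layer: a = sigmoid(W a + b).
        a = x
        for W, b in zip(weights, biases):
            a = sigmoid(W @ a + b)
        return a

    # Hypothetical 2-2-2 network; the numbers are made up for illustration.
    weights = [np.array([[1.0, -2.0], [-1.0, 1.0]]),
               np.array([[2.0, -1.0], [-2.0, -1.0]])]
    biases = [np.array([1.0, 0.0]), np.array([0.0, 0.0])]
    print(feedforward(np.array([1.0, -1.0]), weights, biases))  # output vector y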

Why Deep? Universality Theorem
- Two layers of logic gates can represent any Boolean function, yet building some functions with multiple layers of gates is much simpler and needs fewer gates. Likewise, a network with one hidden layer can represent any continuous function, but representing some functions with multiple layers of neurons is much simpler: fewer parameters, and possibly less data.

Why Deep? An analogy from ImageNet results (http://cs231n.stanford.edu/slides/winter1516_lecture8.pdf)
- AlexNet (2012): 8 layers, 16.4% error
- VGG (2014): 19 layers, 7.3% error
- GoogleNet (2014): 22 layers, 6.7% error
- Residual Net (2015): 152 layers, 3.57% error (the slide compares its depth to the 101 floors of Taipei 101)
- Deep = many hidden layers; very deep networks also need special structure (see Highway/Residual Networks below).

Output Layer
- An ordinary output layer can output any values, which may not be easy to interpret.
- A softmax layer as the output layer maps the outputs to values between 0 and 1 that sum to 1. In the slide's example, the inputs 3, 1, -3 are exponentiated to about 20, 2.7 and 0.05 and then normalized, giving roughly 0.88, 0.12 and 0.

Example Application: Handwriting Digit Recognition
- Input: a 16x16 image, i.e. 256 dimensions (ink = 1, no ink = 0).
- Output: y1, y2, ..., y10, where each dimension represents the confidence of a digit ("is 1", "is 2", ..., "is 0"). If y2 = 0.7 is the maximum, the image is recognized as "2".
- What is needed is a function whose input is a 256-dim vector and whose output is a 10-dim vector; the network (Input Layer → Hidden Layers Layer 1 ... Layer L → Output Layer) is a function set containing the candidates.
- You need to decide the network structure so that a good function is in your function set.
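
A minimal softmax in NumPy, reproducing the numbers just mentioned; the sketch is my own, not code from the deck.

    import numpy as np

    def softmax(z):
        # Subtract the max for numerical stability; the result sums to 1.
        e = np.exp(z - np.max(z))
        return e / e.sum()

    print(softmax(np.array([3.0, 1.0, -3.0])))  # roughly [0.88, 0.12, 0.00]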

FAQ
- Q: How many layers? How many neurons for each layer? Trial and error plus intuition.
- Q: Can we design the network structure? Yes, e.g. the Convolutional Neural Network (CNN) in the next lecture.
- Q: Can the structure be automatically determined? Yes, but it is not widely studied yet; Highway Networks and Residual Networks are examples.

Highway Network / Residual Network
- Deep Residual Learning for Image Recognition: http://arxiv.org/abs/1512.03385
- Training Very Deep Networks: https://arxiv.org/pdf/1507.06228v2.pdf
- A gate controller decides how much of a layer's input is simply copied to its output, so a Highway Network automatically determines the layers it needs.

Three Steps for Deep Learning: Step 2, goodness of function.

Training Data
- Prepare training data: images and their labels ("5", "0", "4", "1", "3", "1", "2", "9", ...). The learning target is defined on the training data.
- Learning target: for an image of "1", y1 should have the maximum value; for an image of "2", y2 should have the maximum value, and so on.

Loss
- The loss can be the square error or the cross entropy between the network output (after softmax) and the target, e.g. the one-hot target (1, 0, ..., 0) for the digit "1"; the output should be as close as possible to the target.
- A good function should make the loss of all examples as small as possible.

Total Loss
- Given a set of parameters, for all R training examples x1, x2, ..., xR the total loss is L = sum over r of the per-example loss l_r, and we want it as small as possible.
- Step 3 is then to find the function in the function set (the network parameters) that minimizes the total loss L.
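
As a small illustration of the total loss, a NumPy sketch of summed cross-entropy over a toy batch; the arrays are made-up stand-ins for network outputs and one-hot targets, not data from the deck.

    import numpy as np

    def cross_entropy(y_hat, y):
        # y_hat: softmax outputs, y: one-hot targets; the epsilon avoids log(0).
        return -np.sum(y * np.log(y_hat + 1e-12), axis=1)

    y_hat = np.array([[0.7, 0.2, 0.1],   # outputs for two examples
                      [0.1, 0.8, 0.1]])
    y = np.array([[1.0, 0.0, 0.0],       # one-hot targets
                  [0.0, 1.0, 0.0]])
    print(cross_entropy(y_hat, y).sum())  # total loss L = sum of per-example losses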

Three Steps for Deep Learning: Step 3, pick the best function.

How to pick the best function
- Enumerating all possible parameter values is hopeless: e.g. a speech-recognition network with 8 layers and 1000 neurons per layer has about 10^6 weights between two adjacent layers, so millions of parameters in total. We use Gradient Descent instead.

Gradient Descent
- Pick an initial value for a parameter w (random initialization or RBM pre-training; random is usually good enough).
- Compute the derivative of the total loss with respect to w. If the slope is negative, increase w; if it is positive, decrease w.
- Repeat the update w ← w − η ∂L/∂w, where η is called the "learning rate", until the update is little.
- On the 2-D loss surface (color = value of total loss L), we randomly pick a starting point and hopefully reach a minimum.

Local Minima
- Gradient descent never guarantees the global minimum: training can be very slow at a plateau (∂L/∂w ≈ 0), stuck at a saddle point (∂L/∂w = 0), or stuck at a local minimum (∂L/∂w = 0).
- Different initial points can reach different minima and so give different results.
- This is the "learning" of machines in deep learning; even AlphaGo uses this approach. People imagine something fancier, but actually it is gradient descent. I hope you are not too disappointed :p
- The derivatives are computed efficiently by Backpropagation (e.g. the libdnn toolkit, developed by NTU student 周伯威).
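
A toy one-parameter gradient-descent loop using the update rule above; the loss function and learning rate are my own illustrative choices.

    # Minimize a toy loss L(w) = (w - 3)^2 with the rule w <- w - eta * dL/dw.
    def dL_dw(w):
        return 2.0 * (w - 3.0)

    w = 0.0    # initial value, picked arbitrarily
    eta = 0.1  # learning rate
    for _ in range(100):
        w = w - eta * dL_dw(w)
    print(w)   # close to the minimum at w = 3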

Three Steps for Deep Learning: deep learning is so simple. If you want to find a function, and you have lots of function input/output pairs as training data, you can use deep learning.

For example, you can do
- Image Recognition: a network maps images to "monkey", "cat", "dog", ...
- Spam filtering
- Other text classification tasks

Keras
- Notes on Keras (thanks to NTU student 沈昇勳 for providing the figures).
- Example Application: Handwriting Digit Recognition, the "Hello world" of deep learning; the machine reads an image and outputs "1". The data set is MNIST.
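
The deck walks through this "hello world" in Keras. Below is a minimal sketch along those lines; the 500-unit hidden layers, batch size 100 and 20 epochs mirror numbers that appear later in the deck, but the exact code is my reconstruction against the current tf.keras API rather than the Keras 1.x API of the slides, and note that MNIST images are 28x28 = 784 pixels rather than the 16x16 grid used in the slides' illustration.

    from tensorflow import keras
    from tensorflow.keras import layers

    # Load MNIST and flatten each 28x28 image into a 784-dim vector.
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
    x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
    y_train = keras.utils.to_categorical(y_train, 10)  # one-hot targets
    y_test = keras.utils.to_categorical(y_test, 10)

    model = keras.Sequential([
        layers.Dense(500, activation="sigmoid", input_shape=(784,)),
        layers.Dense(500, activation="sigmoid"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, batch_size=100, epochs=20)
    print(model.evaluate(x_test, y_test))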

Recipe of Deep Learning
- After the three steps, check two things in order: good results on training data? good results on testing data? Good training results but bad testing results means overfitting.
- Do not always blame overfitting: in the figure from the Residual Network paper (http://arxiv.org/abs/1512.03385), the deeper network is worse on the testing data but also worse on the training data, so it is simply not well trained rather than overfitting.
- Different approaches target different problems; e.g. dropout is for good results on testing data.
- For good results on training data: choosing a proper loss, mini-batch, a new activation function, an adaptive learning rate, and momentum.

Choosing Proper Loss
- With a softmax output layer and a one-hot target such as (1, 0, ..., 0), should the loss be square error or cross entropy? (Several alternatives: https://keras.io/objectives/)
- On the total-loss surface over the parameters (w1, w2), square error is flat when the output is far from the target, so gradient descent barely moves; cross entropy keeps a clear slope (see http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf).
- When using a softmax output layer, choose cross entropy.
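
In Keras the loss is just an argument to compile; a short sketch (my wording, reusing the model defined in the earlier sketch) contrasting the two choices.

    # Cross entropy: the recommended loss with a softmax output layer.
    model.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])

    # Square error: also possible, but it trains poorly with softmax outputs.
    model.compile(loss="mean_squared_error", optimizer="adam",
                  metrics=["accuracy"])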

Mini-batch
- Randomly initialize the network parameters. Pick the 1st mini-batch (e.g. x1, x31, ...) and update the parameters once; pick the 2nd mini-batch (e.g. x2, x16, ...) and update once; continue until all mini-batches have been picked. That is one epoch. Repeat the whole process, e.g. 100 examples per mini-batch and 20 epochs.
- With mini-batches we do not really minimize the total loss at each update, so the trajectory is less stable than with the original gradient descent (the colors in the slide represent the total loss).
- Mini-batch is faster: in one epoch, the original gradient descent updates only once after seeing all examples, while with 20 batches we update 20 times. The apparent speed advantage of very small batches is not always true with parallel computing: processing a batch can take the same time as a single example (as long as the dataset is not super large). In practice mini-batch also has better performance.
- Shuffle the training examples for each epoch, so the mini-batches of epoch 1 and epoch 2 are different. Don't worry, this is the default of Keras.
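
In Keras the mini-batch size is the batch_size argument of fit, and shuffling between epochs is on by default; a one-line sketch reusing the earlier model.

    # 100 examples per mini-batch, 20 epochs; shuffle=True is the default.
    model.fit(x_train, y_train, batch_size=100, epochs=20, shuffle=True)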

New Activation Function

Hard to get the power of Deep
- Deeper usually does not imply better: on the training data, networks with more layers can perform worse.
- Vanishing Gradient Problem: with sigmoid activations, the layers near the input have smaller gradients and learn very slowly, so they stay almost random, while the layers near the output have larger gradients, learn very fast, and converge based on those almost-random lower layers.
- Intuition for the derivatives: a sigmoid turns a large change of its input into a small change of its output, so the influence of a weight shrinks as it passes through many sigmoid layers.
- In 2006, people used RBM pre-training to cope with this. In 2015, people use ReLU.

ReLU
- Rectified Linear Unit: output a = z when z > 0 and a = 0 when z ≤ 0.
- Reasons: 1. fast to compute; 2. biological reason; 3. it behaves like an infinite number of sigmoids with different biases; 4. it mitigates the vanishing gradient problem. (Xavier Glorot, AISTATS'11; Andrew L. Maas, ICML'13; Kaiming He, arXiv'15)
- Neurons whose output is 0 can be removed, leaving a thinner linear network that does not have smaller gradients in the lower layers.
- ReLU variants: Leaky ReLU uses a = 0.01 z for z < 0; Parametric ReLU uses a = αz for z < 0, where α is also learned by gradient descent.

Maxout: a learnable activation function (Ian J. Goodfellow, ICML'13)
- Group the pre-activation values (e.g. 2 elements per group) and output the max of each group; in the slide's example, max(5, 7) = 7 and max(-1, 1) = 1.
- ReLU is a special case of Maxout, and you can have more than 2 elements in a group.
- The activation function in a maxout network can be any piecewise linear convex function; how many pieces it has depends on how many elements are in a group.
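
A small NumPy sketch of these activations; it is my own illustration (in particular, the maxout function below only shows the grouped max over given pre-activations, not the full learned layer).

    import numpy as np

    def relu(z):
        return np.maximum(0.0, z)

    def leaky_relu(z, alpha=0.01):
        return np.where(z > 0, z, alpha * z)

    def maxout(z, group_size=2):
        # Group the pre-activations and keep the max of each group.
        z = np.asarray(z, dtype=float).reshape(-1, group_size)
        return z.max(axis=1)

    print(relu(np.array([-2.0, 3.0])))        # [0. 3.]
    print(leaky_relu(np.array([-2.0, 3.0])))  # [-0.02  3. ]
    print(maxout([5.0, 7.0, -1.0, 1.0]))      # [7. 1.], as in the slide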

Adaptive Learning Rate

Learning Rates
- If the learning rate is too large, the total loss may not decrease after each update; if it is too small, training is too slow. Set the learning rate carefully.

Adagrad
- A parameter-dependent learning rate. The original update is w ← w − η g; Adagrad uses w ← w − (η / sqrt(Σ_i g_i²)) g, dividing the learning rate by the root of the summation of the squares of the previous derivatives (plus a small constant).
- Slide example: a parameter whose past derivatives are 0.1 and 0.2 ends up with a larger effective learning rate than one whose past derivatives are 20.0 and 10.0.
- Observations: 1. the learning rate becomes smaller and smaller for all parameters; 2. smaller derivatives give a larger learning rate, and vice versa. Why? Dividing by the accumulated gradient magnitude evens out the step sizes across parameters: parameters with consistently small derivatives get relatively larger learning rates, and those with large derivatives get smaller ones.
- This is not the whole story: Adagrad (John Duchi, JMLR'11) has successors, described next.
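
A one-parameter Adagrad sketch, illustrating the update rule above with made-up gradients.

    import numpy as np

    def adagrad_updates(w, grads, eta=0.1, eps=1e-8):
        # Divide the learning rate by the root of the accumulated squared gradients.
        accum = 0.0
        for g in grads:
            accum += g ** 2
            w = w - eta / (np.sqrt(accum) + eps) * g
        return w

    print(adagrad_updates(1.0, [0.1, 0.2]))    # small derivatives, larger steps
    print(adagrad_updates(1.0, [20.0, 10.0]))  # large derivatives, smaller steps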

- RMSprop is a popular refinement of Adagrad.

Momentum
- It is hard to find the optimal network parameters: on the total-loss surface, training can be very slow at a plateau, stuck at a saddle point (gradient = 0), or stuck at a local minimum (gradient = 0).
- In the physical world, a rolling ball has momentum. How about putting this phenomenon into gradient descent? Movement = negative of the gradient + momentum (the accumulated previous movement).
- Momentum still does not guarantee reaching the global minimum, but it gives some hope: the accumulated movement can carry the parameters past points where the gradient is 0.
- Adam = RMSprop (advanced Adagrad) + Momentum.
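
In Keras these adaptive-rate and momentum ideas are packaged as optimizers passed to compile; a sketch reusing the earlier model, with illustrative settings.

    from tensorflow.keras import optimizers

    # SGD with momentum, Adagrad, RMSprop, or Adam (roughly RMSprop + momentum).
    opt = optimizers.SGD(learning_rate=0.1, momentum=0.9)
    # opt = optimizers.Adagrad(learning_rate=0.01)
    # opt = optimizers.RMSprop(learning_rate=0.001)
    # opt = optimizers.Adam(learning_rate=0.001)
    model.compile(loss="categorical_crossentropy", optimizer=opt,
                  metrics=["accuracy"])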

For good results on testing data: Early Stopping, Regularization, Dropout, Network Structure.

Panacea for Overfitting
- Have more training data, or create more training data: in handwriting recognition, for example, shifting or rotating the original training images by 15 degrees creates new training examples.

Dropout
- Training: each time before updating the parameters, each neuron has a p% chance to drop out. The structure of the network is changed (it becomes thinner), and the new thinner network is used for training. For each mini-batch, we resample the dropout neurons.
- Testing: no dropout. If the dropout rate at training is p%, all the weights are multiplied by (1 − p%).
- Intuitive reason: training with dropout is like practicing with weights tied to your legs (腳上綁重物); taking them off at testing time makes you much stronger (拿下重物後就變很強).
- Why multiply the weights by (1 − p)% when testing? With a dropout rate of 50%, only about half of the inputs to a neuron are present at any training update, so the pre-activation seen during training is roughly half of the no-dropout value; multiplying the trained weights by (1 − p%) keeps the testing-time output on the same scale.
- Dropout is a kind of ensemble. An ensemble trains a bunch of networks with different structures on different training sets (Set 1 ... Set 4) and averages their outputs y1, y2, y3, y4 on the testing data. With dropout, each mini-batch trains one of the 2^M possible thinned networks (M = number of neurons), and these networks share parameters; multiplying all the weights by (1 − p%) at testing time approximates averaging over all of them.

More about dropout
- More references: Nitish Srivastava, JMLR'14; Pierre Baldi, NIPS'13; Geoffrey E. Hinton, arXiv'12.
- Dropout works better with Maxout (Ian J. Goodfellow, ICML'13).
- Dropconnect (Li Wan, ICML'13): dropout deletes neurons, dropconnect deletes the connections between neurons.
- Annealed dropout (S.J. Rennie, SLT'14): the dropout rate decreases over epochs.
- Standout (J. Ba, NIPS'13): each neuron has a different dropout rate.
- Demo: the deck's network has two 500-unit hidden layers before the softmax output, with model.add(Dropout(0.8)) after each hidden layer.
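
A Keras sketch of that dropout demo. The 500-500-softmax structure and the 0.8 rate come from the slide; the rest (including the ReLU activations) is my reconstruction against the current tf.keras API, where the Dropout argument is the fraction of units dropped and the training/testing weight scaling is handled automatically.

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Dense(500, activation="relu", input_shape=(784,)),
        layers.Dropout(0.8),   # drop 80% of the units at training time
        layers.Dense(500, activation="relu"),
        layers.Dropout(0.8),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])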

Network Structure
- CNN is a very good example (next lecture).

Concluding Remarks of Lecture I
- Recipe of deep learning: the three steps (define a set of functions, goodness of function, pick the best function), then check the results on training data and on testing data and apply the corresponding tips.

Lecture II: Variants of Neural Networks
- Convolutional Neural Network (CNN): widely used in image processing.
- Recurrent Neural Network (RNN).

Why CNN for Image?
- Can the network be simplified by considering the properties of images? The most basic classifiers look at pixels; higher layers use the 1st layer as modules to build classifiers, then the 2nd layer as modules, and so on (Zeiler, M.D., ECCV 2014).
- Property 1: some patterns are much smaller than the whole image. A neuron does not have to see the whole image to discover the pattern (a "beak" detector, for instance); connecting to a small region needs fewer parameters.
- Property 2: the same patterns appear in different regions. An "upper-left beak" detector and a "middle beak" detector do almost the same thing, so they can use the same set of parameters.
- Property 3: subsampling the pixels will not change the object (a subsampled bird is still a bird). We can subsample the pixels to make the image smaller, so the network has fewer parameters to process the image.

The Three Steps for Deep Learning apply here as well: define a set of functions, goodness of function, pick the best function. Deep learning is so simple.

Convolutional Neural Network: the whole CNN
- Convolution → Max Pooling → Convolution → Max Pooling → ... (can repeat many times) → Flatten → Fully Connected Feedforward network → "cat" / "dog".
- Convolution exploits Property 1 and Property 2; Max Pooling exploits Property 3.
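
A Keras sketch of this Convolution → Max Pooling → Flatten → fully connected pipeline; the filter counts, kernel sizes and input shape are my own illustrative choices, not numbers from the deck.

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Conv2D(25, (3, 3), activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(50, (3, 3), activation="relu"),  # can repeat many times
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),          # fully connected part
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])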

CNN: Convolution
- Example 6x6 image (1 = ink, 0 = no ink):
  1 0 0 0 0 1
  0 1 0 0 1 0
  0 0 1 1 0 0
  1 0 0 0 1 0
  0 1 0 0 1 0
  0 0 1 0 1 0
- Filter 1 (a 3x3 matrix):
   1 -1 -1
  -1  1 -1
  -1 -1  1
- Filter 2 (a 3x3 matrix):
  -1  1 -1
  -1  1 -1
  -1  1 -1
- These filter matrices are the network parameters to be learned. Each filter detects a small pattern (3x3), which is how convolution exploits Property 1.
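
To show what sliding a filter over the image means numerically, here is a NumPy sketch (stride 1, no padding) applying Filter 1 to the 6x6 image above; the helper function is my own illustration, not code from the deck.

    import numpy as np

    image = np.array([[1, 0, 0, 0, 0, 1],
                      [0, 1, 0, 0, 1, 0],
                      [0, 0, 1, 1, 0, 0],
                      [1, 0, 0, 0, 1, 0],
                      [0, 1, 0, 0, 1, 0],
                      [0, 0, 1, 0, 1, 0]], dtype=float)

    filter1 = np.array([[ 1, -1, -1],
                        [-1,  1, -1],
                        [-1, -1,  1]], dtype=float)

    def convolve2d(img, flt, stride=1):
        # Slide the filter over the image; each output is an elementwise product sum.
        h = (img.shape[0] - flt.shape[0]) // stride + 1
        w = (img.shape[1] - flt.shape[1]) // stride + 1
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                patch = img[i*stride:i*stride+flt.shape[0],
                            j*stride:j*stride+flt.shape[1]]
                out[i, j] = np.sum(patch * flt)
        return out

    print(convolve2d(image, filter1))  # large values where the diagonal pattern appears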
