一天搞懂深度学习 (Deep Learning in One Day) — Hung-yi Lee, National Taiwan University

Deep Learning Tutorial — Hung-yi Lee (李宏毅)

Deep learning attracts lots of attention. I believe you have seen lots of exciting results before. This talk focuses on the basic techniques. (Deep learning trends at Google; source: SIGMOD / Jeff Dean.)

Outline
Lecture I: Introduction of Deep Learning
Lecture II: Variants of Neural Network
Lecture III: Beyond Supervised Learning

Lecture I: Introduction of Deep Learning
Outline: Introduction of Deep Learning; "Hello World" for Deep Learning; Tips for Deep Learning.

Machine Learning — Looking for a Function
Speech recognition: f(audio) = "How are you" (what the user said)
Image recognition: f(image) = "Cat"
Playing Go: f(board) = "5-5" (next move)
Dialogue system: f("Hello") = "Hi" (system response)

Framework
A set of functions is a model. Image recognition: f(image) = "cat", "dog", "monkey", "snake", ...
The training data tells us the goodness of a function f: the function input is an image, the function output is its label ("monkey", "cat", "dog"). This is supervised learning.
Pick the "best" function f*, then use f* on new images ("cat") — training vs. testing.

Three Steps for Deep Learning
Step 1: define a set of functions
Step 2: goodness of function
Step 3: pick the best function

Neural Network
A neuron is a simple function: a weighted sum of the inputs plus a bias, passed through an activation function.

Neuron example: with inputs (1, -1), weights (1, -2), bias 1 and a sigmoid activation function, z = 1·1 + (-1)·(-2) + 1 = 4 and sigmoid(4) ≈ 0.98.

Neural Network
Different connections lead to different network structures. The neurons have different values of weights and biases.

Fully Connected Feedforward Network
For the same input (1, -1), a second neuron with weights (-1, 1) and bias 0 gives sigmoid(-2) ≈ 0.12. The first-layer outputs (0.98, 0.12) are then fed to the next layer of neurons, and so on until the output layer. (With input (0, 0) the same network gives (0.73, 0.50) at the first layer and different values downstream; the slides trace these numbers through three layers.)

This is a function: an input vector goes in, an output vector comes out. Given a network structure, we define a function set.

Input layer, hidden layers, output layer: x → Layer 1 → Layer 2 → ... → Layer L → y1, y2, ..., yM. "Deep" means many hidden layers.
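As a minimal sketch (not part of the slides), the same computation in numpy reproduces the 0.98 and 0.12 above; the weights and biases are the ones quoted from the slide's example:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, -1.0])                 # the input (1, -1) from the example
W1 = np.array([[1.0, -2.0],               # neuron 1: weights (1, -2), bias 1
               [-1.0, 1.0]])              # neuron 2: weights (-1, 1), bias 0
b1 = np.array([1.0, 0.0])
a1 = sigmoid(W1 @ x + b1)                 # z = (4, -2) -> a ≈ (0.98, 0.12)
print(a1)
# Stacking more layers just repeats a = sigmoid(W a + b) until the output layer.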

Why Deep? Universality Theorem
A network with a single hidden layer can represent any continuous function (reference for the reason: http:/ [link truncated]). So why deep?

Analogy: logic circuits consist of gates. Two layers of logic gates can represent any Boolean function, but using multiple layers of gates to build some functions is much simpler — fewer gates are needed. Likewise, a neural network consists of neurons: one hidden layer is enough in principle, but using multiple layers of neurons to represent some functions is much simpler — fewer parameters, and perhaps less data. (More reasons: https:/ [link truncated].)

Why Deep? — Results
AlexNet (2012): 8 layers, 16.4% error
VGG (2014): 19 layers, 7.3% error
GoogleNet (2014): 22 layers, 6.7% error
Residual Net (2015): 152 layers, 3.57% error — a special structure (for comparison, Taipei 101 has 101 floors)
(http://cs231n.stanford.edu/slides/winter1516_lecture8.pdf)
Deep = many hidden layers.

Output Layer
Use a softmax layer as the output layer. With an ordinary layer, the output of the network can be any value, which may not be easy to interpret.

Softmax layer: exponentiate each output and normalize. For inputs (3, 1, -3): e^3 ≈ 20, e^1 ≈ 2.7, e^-3 ≈ 0.05, so the outputs are approximately 0.88, 0.12 and 0.

Example Application: Handwriting Digit Recognition
Input: a 16×16 image = 256 values (ink = 1, no ink = 0).
Output: y1, y2, ..., y10 — each dimension represents the confidence of a digit (is "1", is "2", ..., is "0").
If the output is, say, 0.1 for "1", 0.7 for "2" and 0.2 for "0", then y2 is the largest and the image is recognized as "2".
What is needed is a function with a 256-dim vector as input and a 10-dim vector as output — a neural network (Input → Layer 1 → Layer 2 → ... → Layer L → y1 ... y10).
A given structure defines a function set containing the candidates for handwriting digit recognition. You need to decide the network structure so that a good function is in your function set.
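A small numpy sketch of the softmax computation above, using the slide's (3, 1, -3) example:

import numpy as np

def softmax(z):
    e = np.exp(z)          # exponentiate each value
    return e / e.sum()     # normalize so the outputs sum to 1

z = np.array([3.0, 1.0, -3.0])
print(np.exp(z).round(2))       # ≈ [20.09, 2.72, 0.05]
print(softmax(z).round(2))      # ≈ [0.88, 0.12, 0.00]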

FAQ
Q: How many layers? How many neurons for each layer? — Trial and error, plus intuition.
Q: Can we design the network structure? — Yes: the Convolutional Neural Network (CNN), covered in the next lecture.
Q: Can the structure be automatically determined? — Yes, but it is not widely studied yet. Examples are the Highway Network and the Residual Network:
Deep Residual Learning for Image Recognition, http://arxiv.org/abs/1512.03385
Training Very Deep Networks, https://arxiv.org/pdf/1507.06228v2.pdf
A highway layer adds a gate controller that decides how much of the input layer is simply copied to the output layer, so a Highway Network automatically determines the layers it needs.

Three Steps for Deep Learning — Step 2: goodness of function.

Training Data
Prepare training data: images and their labels ("5", "0", "4", "1", "3", "1", "2", "9"). The learning target is defined on the training data.

Learning Target
Input a 16×16 image (256 values, ink = 1, no ink = 0). If the image is "1", y1 should have the maximum value; if it is "2", y2 should have the maximum value; and so on.

Loss
The loss can be the square error or the cross entropy between the network output (after softmax) and the target; for the digit "1" the target is (1, 0, 0, ..., 0). A good function should make the loss of all examples as small as possible.

Total Loss
For all R training examples, given a set of parameters, the total loss is L = Σ_{r=1}^{R} l_r, where l_r is the loss on the r-th example. Find a function in the function set that minimizes the total loss L.

Three Steps for Deep Learning — Step 3: pick the best function.

How to pick the best function? Enumerate all possible parameter values? Impossible: in speech recognition, for example, a network has 8 layers with 1000 neurons each, so between two consecutive layers there are already 1000 × 1000 = 10^6 weights — millions of parameters in total.

Gradient Descent
Pick an initial value for w (random, or RBM pre-training; random is usually good enough). Compute the derivative ∂L/∂w: if it is negative, increase w; if it is positive, decrease w. Update w ← w − η ∂L/∂w, where η is called the "learning rate". Repeat until the update is little.

With two parameters w1, w2 (color: value of the total loss L): randomly pick a starting point, compute the gradient, move against it, and repeat. Hopefully we reach a minimum.

Local Minima
Plotting the total loss against the value of a network parameter w: learning is very slow at a plateau, and it can get stuck at a saddle point or at a local minimum, where ∂L/∂w = 0. Gradient descent never guarantees the global minimum; different initial points can reach different minima, so different results.
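A minimal sketch of the update rule on a one-parameter toy loss (the quadratic loss here is an assumption for illustration, not from the lecture):

eta = 0.1                      # learning rate
w = -4.0                       # randomly picked initial value
for step in range(1000):
    grad = 2.0 * (w - 3.0)     # dL/dw for the toy loss L(w) = (w - 3)^2
    w = w - eta * grad         # negative gradient -> increase w, positive -> decrease w
    if abs(eta * grad) < 1e-6: # stop when the update is little
        break
print(w)                       # ends near the minimum at w = 3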

This is the "learning" of machines in deep learning — even AlphaGo uses this approach. I hope you are not too disappointed :p (what people imagine vs. what it actually is). Backpropagation is the efficient way to compute these derivatives in a neural network; libdnn (developed by 周伯威 at NTU) is one toolkit. Ref: https:/ [link truncated].

Three Steps for Deep Learning — deep learning is so simple.

Now, if you want to find a function and you have lots of function input/output pairs (?) as training data, you can use deep learning. For example, you can do image recognition (Network → "monkey", "cat", "dog"), spam filtering (http:/spam-filter- [link truncated]), and more (http:/top-breaking- [link truncated]).

Keras — notes from experience (figures courtesy of 沈昇勳).

Example Application: handwriting digit recognition (Machine → "1") — the "Hello world" of deep learning. MNIST data: http:/ [link truncated].
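A hedged sketch of what this "hello world" might look like in Keras (older, 1.x-style API; the 500-500-10 layer sizes follow the demo later in these slides, while the 28×28 MNIST input size, batch size and epoch count are assumptions):

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(500, input_dim=28 * 28))   # MNIST images flattened to 784 values
model.add(Activation('sigmoid'))
model.add(Dense(500))
model.add(Activation('sigmoid'))
model.add(Dense(10))
model.add(Activation('softmax'))           # 10 outputs: confidence of each digit

model.compile(loss='categorical_crossentropy',   # cross entropy, as recommended below
              optimizer='adam', metrics=['accuracy'])
# model.fit(x_train, y_train, batch_size=100, nb_epoch=20)  # 100 per mini-batch, 20 epochs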

Three Steps for Deep Learning — deep learning is so simple.

Outline: Introduction of Deep Learning; "Hello World" for Deep Learning; Tips for Deep Learning.

Recipe of Deep Learning
After the three steps, ask: good results on training data? If NO, go back and adjust the three steps. If YES, ask: good results on testing data? If NO, that is overfitting; if YES, you are done.

Do not always blame overfitting: in Deep Residual Learning for Image Recognition (http://arxiv.org/abs/1512.03385), the deeper plain network is worse on the testing data — but it is also worse on the training data, so it is not overfitting; it is simply not well trained.

Different approaches for different problems — dropout, for example, is for good results on testing data. For good results on training data: choosing proper loss, mini-batch, new activation function, adaptive learning rate, momentum.

Choosing Proper Loss
For the target "1" = (1, 0, ..., 0): square error Σ_i (y_i − ŷ_i)², cross entropy −Σ_i ŷ_i ln y_i (both are 0 when the output equals the target). Which one is better? Demo. Several alternatives: https://keras.io/objectives/.
Looking at the total loss surface over two weights, cross entropy gives a much steeper surface far from the target, while square error is flat there. When using a softmax output layer, choose cross entropy (http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf).
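A quick numpy sketch of the two losses for one example (y is the softmax output, y_hat the one-hot target; the 0.7 output value is made up for illustration):

import numpy as np

y = np.array([0.7, 0.2, 0.1])       # network output after softmax
y_hat = np.array([1.0, 0.0, 0.0])   # one-hot target for "1"

square_error  = np.sum((y - y_hat) ** 2)       # 0 only when y equals y_hat
cross_entropy = -np.sum(y_hat * np.log(y))     # only the target class contributes
print(square_error, cross_entropy)
# The total loss L sums the chosen per-example loss over all R training examples.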

Mini-batch
Randomly initialize the network parameters. Pick the 1st mini-batch and update the parameters once; pick the 2nd mini-batch and update the parameters once; ... until all mini-batches have been picked — that is one epoch. Then repeat the whole process. Note that we do not really minimize the total loss: each update only looks at the loss of the current mini-batch (for example, 100 examples per mini-batch, repeated for 20 epochs).

Original gradient descent vs. mini-batch: the mini-batch trajectory is unstable (the colors represent the total loss), but mini-batch is faster. In one epoch, original gradient descent sees all examples and updates once; with mini-batch, if there are 20 batches, we update 20 times in one epoch while seeing only one batch per update. (Not always true with parallel computing: for a not-too-large dataset a full-batch update can run at the same speed.) In practice mini-batch also gives better performance. Demo.

Shuffle the training examples for each epoch, so that epoch 1 and epoch 2 group the examples into different mini-batches. Don't worry — this is the default in Keras.
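A runnable sketch of the mini-batch loop on a toy linear model (the data and the model are assumptions; only the loop structure mirrors the procedure above):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                                # toy inputs
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=1000)   # toy targets

w = np.zeros(2)
eta, batch_size, epochs = 0.1, 100, 20             # 100 examples per mini-batch, 20 epochs

for epoch in range(epochs):
    order = rng.permutation(len(X))                # shuffle the examples each epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]      # pick the next mini-batch
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)  # gradient on this batch only
        w -= eta * grad                            # update the parameters once per batch
print(w)                                           # close to the true (2, -1)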

Recipe of Deep Learning — new activation function.

Hard to Get the Power of Deep
Deeper usually does not imply better — even on the training data. Demo.

Vanishing Gradient Problem
With sigmoid activations, the gradients are smaller in the layers closer to the input and larger near the outputs y1, ..., yM. The front layers learn very slowly and stay almost random, while the last layers learn very fast and converge — based on what are still almost random features!? Intuitively, the derivative of the loss with respect to an early weight can be seen by perturbing it slightly: the change has to pass through one sigmoid per layer, and a sigmoid turns a large change of its input into a small change of its output, so the effect (and the gradient) shrinks layer by layer.

In 2006, people used RBM pre-training. In 2015, people use ReLU.

ReLU
Rectified Linear Unit: a = z if z > 0, a = 0 otherwise. Reasons: 1. fast to compute; 2. biological reason; 3. it behaves like an infinite number of sigmoids with different biases; 4. it alleviates the vanishing gradient problem. [Xavier Glorot, AISTATS'11] [Andrew L. Maas, ICML'13] [Kaiming He, arXiv'15]
With ReLU, the neurons whose output is 0 can be removed, leaving a thinner linear network — the remaining paths do not have smaller and smaller gradients. Demo.

ReLU variants: Leaky ReLU (a = 0.01 z when z < 0) and Parametric ReLU (a = αz when z < 0, with α also learned by gradient descent).

Maxout
A learnable activation function [Ian J. Goodfellow, ICML'13]: group the linear outputs of a layer and output the max of each group. Example: linear outputs (5, 7, -1, 1) grouped in pairs give max(5, 7) = 7 and max(-1, 1) = 1; at the next layer, (1, 2, 4, 3) gives 2 and 4. ReLU is a special case of Maxout, and you can have more than 2 elements in a group: the activation function of a maxout network can be any piecewise linear convex function, where the number of pieces depends on how many elements are in a group (2 elements vs. 3 elements).
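A minimal numpy sketch of ReLU and of a maxout unit with two elements per group, using the numbers from the slides:

import numpy as np

def relu(z):
    return np.maximum(0.0, z)                       # a = z if z > 0, else 0

def maxout(z, group_size=2):
    return z.reshape(-1, group_size).max(axis=1)    # keep the max of each group

print(relu(np.array([4.0, -2.0])))                  # [4. 0.]
print(maxout(np.array([5.0, 7.0, -1.0, 1.0])))      # [7. 1.] -- the slide's example
print(maxout(np.array([1.0, 2.0, 4.0, 3.0])))       # [2. 4.]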

Recipe of Deep Learning — adaptive learning rate.

Learning Rates
If the learning rate is too large, the total loss may not decrease after each update. If the learning rate is too small, training is too slow. Set the learning rate carefully.

Adagrad
Give each parameter its own learning rate. Original: w ← w − η g^t. Adagrad: w ← w − (η / √(Σ_{i=0}^{t} (g^i)²)) g^t, i.e. the learning rate of each parameter is divided by the root of the summed squares of its previous derivatives. Example with two parameters: w1 has seen derivatives g⁰ = 0.1, g¹ = 0.2, so its learning rate is η / √(0.1² + 0.2²); w2 has seen g⁰ = 20.0, g¹ = 10.0, so its learning rate is η / √(20² + 10²).

Observations: 1. the learning rate is smaller and smaller for all parameters; 2. smaller derivatives give a larger learning rate, and vice versa. Why? A parameter whose derivatives are consistently small sits on a gently sloped direction of the loss surface and can afford larger steps; a parameter with large derivatives sits on a steep direction and needs smaller steps.
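A sketch of the Adagrad update for a single parameter, following the formula above (η = 0.1 and the starting value are assumptions; the derivatives are the slide's w1 example):

import numpy as np

eta = 0.1
w, sum_g2 = 0.0, 0.0
for g in [0.1, 0.2]:                    # the derivatives seen so far
    sum_g2 += g ** 2                    # accumulate the squared derivatives
    w -= eta / np.sqrt(sum_g2) * g      # per-parameter learning rate eta / sqrt(sum)
    print(w)
# A parameter with large derivatives (e.g. 20.0, 10.0) would get a much smaller learning rate.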

Not the whole story: Adagrad [John Duchi, JMLR'11], RMSprop (https:/ [link truncated]), and other adaptive-learning-rate methods.

Recipe of Deep Learning — momentum.

Hard to Find Optimal Network Parameters
Plotting the total loss against a parameter w: very slow at the plateau, stuck at a saddle point (∂L/∂w = 0), stuck at a local minimum (∂L/∂w = 0).

Momentum
In the physical world a rolling ball has momentum and does not stop the moment the slope vanishes — how about putting this phenomenon into gradient descent? Movement = negative of ∂L/∂w plus momentum (a fraction of the previous movement), so the real movement accumulates the history of the gradients. It still does not guarantee reaching the global minimum, but it gives some hope. Adam = RMSProp (an advanced Adagrad) + Momentum. Demo.
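A sketch of momentum on the same toy loss as the earlier gradient-descent example (the momentum coefficient 0.9 is an assumption):

eta, lam = 0.1, 0.9        # learning rate and momentum coefficient
w, v = -4.0, 0.0           # parameter and previous movement
for step in range(200):
    grad = 2.0 * (w - 3.0)       # dL/dw for the toy loss L(w) = (w - 3)^2
    v = lam * v - eta * grad     # movement = momentum * previous movement - eta * gradient
    w = w + v                    # momentum can carry w across plateaus and small bumps
print(w)                         # ends near w = 3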

Recipe of Deep Learning — good results on testing data: early stopping, regularization, dropout, network structure.

Panacea for Overfitting
Have more training data, or create more training data (?). Handwriting recognition example: take the original training images and shift them by 15° to create new training data.

Dropout
Training: each time before updating the parameters, each neuron has a p% chance to be dropped out. The structure of the network is changed — we train the resulting thinner network. For each mini-batch, we resample the dropped neurons.
Testing: no dropout. If the dropout rate at training is p%, multiply all the weights by (1 − p%).

Intuitive reason: training with dropout is like practicing with a weight tied to your leg (腳上綁重物); at testing the weight is removed and you become much stronger (拿下重物後就變很強).
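A plain-numpy sketch of this train/test asymmetry (the layer sizes and the 50% rate are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(0)
p = 0.5                                   # dropout rate: each neuron has p chance to drop

def forward_train(a, W):
    mask = rng.random(a.shape) >= p       # resampled for every mini-batch
    return W @ (a * mask)                 # dropped neurons contribute nothing

def forward_test(a, W):
    return (W * (1 - p)) @ a              # no dropout; all the weights times (1 - p)

a = np.ones(4)                            # toy activations from the previous layer
W = np.ones((1, 4))                       # toy weights
print(forward_train(a, W))                # about 2 on average, depending on the mask
print(forward_test(a, W))                 # exactly 2, matching the training-time average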

Why should the weights be multiplied by (1 − p%) when testing? Assume the dropout rate is 50%: during training, on average half of the inputs to a neuron are dropped; at testing no neuron is dropped, so with the raw trained weights the neuron would receive an input about 2× larger. Multiplying the weights by (1 − p%) = 0.5 keeps the expected activations matched between training and testing.

Dropout is a Kind of Ensemble
Ensemble: train a bunch of networks with different structures on different sets drawn from the training data (set 1 → network 1, ..., set 4 → network 4); at testing, feed x to every network and average the outputs y1, ..., y4.
Training with dropout: each mini-batch trains one of the "thinned" networks, and the parameters are shared among them. With M neurons there are 2^M possible thinned networks. At testing, averaging the outputs of all of them is infeasible — but multiplying all the weights by (1 − p%) and using the full network approximates that average.

More about dropout: [Nitish Srivastava, JMLR'14] [Pierre Baldi, NIPS'13] [Geoffrey E. Hinton, arXiv'12]. Dropout works better with Maxout [Ian J. Goodfellow, ICML'13]. Dropconnect [Li Wan, ICML'13]: dropout deletes neurons, dropconnect deletes the connections between neurons. Annealed dropout [S.J. Rennie, SLT'14]: the dropout rate decreases over epochs. Standout [J. Ba, NIPS'13]: each neuron has a different dropout rate.

Demo: the handwriting-digit network with two 500-neuron hidden layers and a softmax output, adding model.add(Dropout(0.8)) after each hidden layer. Demo.

Recipe of Deep Learning — network structure: CNN is a very good example! (next lecture)

Concluding Remarks
The recipe of deep learning: the three steps (define a set of functions, goodness of function, pick the best function), then check the results on the training data, then on the testing data.

Lecture II: Variants of Neural Networks
Convolutional Neural Network (CNN) — widely used in image processing — and Recurrent Neural Network (RNN).

Why CNN for Image?

Can the network be simplified by considering the properties of images? In a deep network the first layer learns the most basic classifiers, the second layer uses the first layer as a module to build larger classifiers, the third uses the second, and so on [Zeiler, M.D., ECCV 2014]. The input is represented as pixels.

Why CNN for Image
Property 1: some patterns are much smaller than the whole image. A neuron does not have to see the whole image to discover the pattern — a "beak" detector only needs to look at a small region, and connecting to a small region needs fewer parameters.
Property 2: the same patterns appear in different regions. An "upper-left beak" detector and a "middle beak" detector do almost the same thing, so they can use the same set of parameters.
Property 3: subsampling the pixels will not change the object. A subsampled bird is still a bird, so we can subsample the pixels to make the image smaller — fewer parameters for the network to process the image.

Three Steps for Deep Learning — deep learning is so simple; for images, Step 1 (define a set of functions) becomes the Convolutional Neural Network.

The Whole CNN
Image → Convolution → Max Pooling → Convolution → Max Pooling (these two can repeat many times) → Flatten → fully connected feedforward network → "cat" / "dog".
Property 1 (small patterns) and Property 2 (the same pattern in different regions) are handled by convolution; Property 3 (subsampling does not change the object) is handled by max pooling.

CNN — Convolution
A 6×6 binary image:
1 0 0 0 0 1
0 1 0 0 1 0
0 0 1 1 0 0
1 0 0 0 1 0
0 1 0 0 1 0
0 0 1 0 1 0
Filter 1 (3×3 matrix):
 1 -1 -1
-1  1 -1
-1 -1  1
Filter 2 (3×3 matrix):
-1  1 -1
-1  1 -1
-1  1 -1
The filter values are the network parameters to be learned. Each filter detects a small 3×3 pattern (Property 1).
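A numpy sketch of the convolution step with the 6×6 image and Filter 1 above (stride 1, no padding; an illustration, not the lecture's code):

import numpy as np

image = np.array([[1,0,0,0,0,1],
                  [0,1,0,0,1,0],
                  [0,0,1,1,0,0],
                  [1,0,0,0,1,0],
                  [0,1,0,0,1,0],
                  [0,0,1,0,1,0]])

filter1 = np.array([[ 1,-1,-1],
                    [-1, 1,-1],
                    [-1,-1, 1]])           # responds most strongly to a diagonal of 1s

out = np.zeros((4, 4))                     # 6x6 image, 3x3 filter, stride 1 -> 4x4 map
for i in range(4):
    for j in range(4):
        out[i, j] = np.sum(image[i:i+3, j:j+3] * filter1)   # slide the filter over the image
print(out)                                 # large values mark where the pattern appears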
