Deep Learning Tutorial
Hung-yi Lee (李宏毅)

Deep learning attracts lots of attention, and you have probably seen many exciting results already. This talk focuses on the basic techniques. (Deep learning trends at Google; source: SIGMOD keynote, Jeff Dean.)

Outline
Lecture I: Introduction of Deep Learning
Lecture II: Variants of Neural Network
Lecture III: Beyond Supervised Learning

Lecture I: Introduction of Deep Learning
This lecture has three parts: introduction of deep learning, "Hello World" for deep learning, and tips for deep learning.

Machine Learning = Looking for a Function
- Speech recognition: f(audio) = "How are you"
- Image recognition: f(image) = "cat"
- Playing Go: f(board position) = "5-5" (next move)
- Dialogue system: f("Hello", what the user said) = "Hi" (system response)

Framework
A model is a set of functions. For image recognition we want a function f such that f(a cat image) = "cat". The training data, function inputs paired with the desired outputs (images labeled "monkey", "cat", "dog", ...), defines the goodness of a function: a function that fits the training data better is the better one. This is supervised learning. Training means picking the "best" function f* from the set; testing means applying f* to new inputs, e.g. f*(a new image) = "cat".

Three Steps for Deep Learning
Step 1: define a set of functions (a neural network)
Step 2: goodness of function
Step 3: pick the best function

Neural Network
A neuron is a simple function: it multiplies its inputs by weights, adds a bias, and passes the result through an activation function. With inputs (1, -1), weights (1, -2), bias 1, and the sigmoid activation, z = 1*1 + (-1)*(-2) + 1 = 4 and sigmoid(4) ≈ 0.98. Different connections lead to different network structures, and the neurons have different values of weights and biases.

Fully Connected Feedforward Network
Neurons are arranged in layers, and every neuron in one layer is connected to every neuron in the next. In the running example with sigmoid activations, the input (1, -1) produces (0.98, 0.12) at the first layer, and the values propagate layer by layer to the final output; a different input such as (0, 0) produces a different output. A network is therefore a function that maps an input vector to an output vector, and a given network structure defines a function set: one function for each choice of weights and biases. The first layer is the input layer, the last layer is the output layer, and the layers in between are hidden layers. "Deep" means many hidden layers.
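To make the worked example above concrete, here is a small numpy sketch of one fully connected sigmoid layer. It is an illustration added here, not code from the slides; the weight matrix, bias vector, and input are the values of the example.

```python
# A numpy sketch of one fully connected layer with sigmoid activation,
# reproducing the worked example above: input (1, -1) -> output (0.98, 0.12).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, -1.0])                 # input vector
W = np.array([[ 1.0, -2.0],               # row i holds the weights of neuron i
              [-1.0,  1.0]])
b = np.array([1.0, 0.0])                  # biases
a = sigmoid(W @ x + b)                    # layer output
print(a)                                  # approximately [0.98, 0.12]
```

Stacking several such layers, with each layer's output fed to the next, gives the feedforward network; the structure (how many layers, how many neurons per layer) fixes the function set, and the entries of the weight matrices and bias vectors are the parameters to be learned.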
Why Deep?
The universality theorem says a network with a single hidden layer can represent any continuous function, just as any Boolean function can be represented with only two layers of logic gates. But in logic circuits, building some functions with multiple layers of gates is much simpler and needs fewer gates; analogously, representing some functions with multiple layers of neurons is much simpler and needs fewer parameters, and possibly less data.

Why Deep? ImageNet results
- AlexNet (2012): 8 layers, 16.4% error
- VGG (2014): 19 layers, 7.3% error
- GoogleNet (2014): 22 layers, 6.7% error
- Residual Net (2015): 152 layers, 3.57% error (for comparison, Taipei 101 has 101 floors)
(Source: http://cs231n.stanford.edu/slides/winter1516_lecture8.pdf)
Deep means many hidden layers, and very deep networks need special structure.

Output Layer
In an ordinary output layer the outputs can be any value, which may not be easy to interpret. A softmax layer is therefore used as the output layer: each output is exponentiated and then normalized so the outputs sum to one. For example, softmax of (3, 1, -3) exponentiates to about (20, 2.7, 0.05) and normalizes to about (0.88, 0.12, 0).

Example Application: Handwriting Digit Recognition
The input is a 16x16 = 256-dimensional vector (a pixel with ink is 1, without ink is 0). The output is a 10-dimensional vector y1, ..., y10, where each dimension is the confidence that the image is a particular digit ("is 1", "is 2", ..., "is 0"). If the outputs start with 0.1, 0.7, 0.2, ..., the image is recognized as "2". What is needed is a function with a 256-dimensional input and a 10-dimensional output, and a neural network with these input and output sizes is a function set containing candidates for handwriting digit recognition. You need to decide the network structure to let a good function be in your function set.

FAQ
Q: How many layers? How many neurons for each layer? A: Trial and error plus intuition.
Q: Can the structure be automatically determined? A: Yes, but this is not widely studied yet.
Q: Can we design the network structure? A: Yes; the Convolutional Neural Network (CNN) in the next lecture is one example.

Highway Network / Residual Network
Two special structures for very deep networks: residual networks copy a block's input and add it to the block's output (see "Deep Residual Learning for Image Recognition", http://arxiv.org/abs/1512.03385), while highway networks add a gate controller that learns how much of the input to copy through (see "Training Very Deep Networks", https://arxiv.org/pdf/1507.06228v2.pdf). Because a gate can learn to copy a layer's input straight to its output, a highway network automatically determines the layers needed.

Step 2: Goodness of Function

Training Data
Prepare training data: images and their labels, for example handwritten digits labeled "5", "0", "4", "1", "3", "1", "2", "9". The learning target is defined on the training data.

Learning Target
For an image of "1", the target is that y1 has the maximum value among the softmax outputs; for an image of "2", y2 has the maximum value, and so on.

Loss
For a training example labeled "1", the target vector is (1, 0, ..., 0). The loss measures how close the network output is to the target; it can be the square error or the cross entropy between the network output and the target. A good function (a good set of parameters) should make the loss of all examples as small as possible.

Total Loss
For training examples x1, x2, ..., xR with per-example losses l1, l2, ..., lR, the total loss is L = l1 + l2 + ... + lR. We want to find the function in the function set that minimizes the total loss L.

Step 3: Pick the Best Function

How do we pick the best function? Enumerating all possible parameter values is hopeless. In a speech recognition network with 8 layers and 1000 neurons per layer, two adjacent layers alone already have 1000 x 1000 = 10^6 weights, so the whole network has millions of parameters.

Gradient Descent
Pick an initial value for each parameter w (randomly, or with RBM pre-training; random initialization is usually good enough). Compute the derivative of the total loss with respect to w at the current value: if it is negative, increase w; if it is positive, decrease w. The update is w ← w - η (∂L/∂w), where η is called the "learning rate". Repeat until the update is little. With two parameters w1 and w2, picture the total loss L as a colored surface over the (w1, w2) plane: randomly pick a starting point, move against the gradient, and hopefully we reach a minimum.

Local Minima
Plotting the total loss against a single parameter w shows the difficulties: gradient descent is very slow at plateaus, where the gradient is close to 0, and it gets stuck at saddle points and local minima, where the gradient equals 0. Gradient descent never guarantees reaching the global minimum, and different initial points can reach different minima and therefore different results. This is the "learning" of machines in deep learning; even AlphaGo uses this approach. I hope you are not too disappointed :p. What people imagine deep learning to be is fancier than what actually happens.

Backpropagation
Backpropagation is an efficient way to compute the derivatives ∂L/∂w; see for example libdnn, developed by NTU student 周伯威.

Deep Learning Is So Simple
That is all three steps. If you want to find a function, and you have lots of function input/output pairs as training data, you can use deep learning. For example, you can do image recognition (a network that maps an image to "monkey", "cat", or "dog"), spam filtering, or classifying news by topic.

Keras
The "Hello world" example below uses Keras. (Thanks to 沈昇勳 for providing the figures.)

"Hello World" for Deep Learning
Handwriting digit recognition is the "Hello world" of deep learning: the machine reads an image of a digit and outputs, say, "1". The standard data set is MNIST.
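The following is a minimal sketch of this "Hello world" in Keras. It is not code from the slides; the 500-500 layer sizes, sigmoid activations, sgd optimizer, batch size of 100, and 20 epochs are illustrative choices, and the Keras 2 API is assumed. Real MNIST images are 28x28 = 784 pixels, so the input dimension here is 784 rather than the 16x16 = 256 of the toy description above.

```python
# A minimal MNIST "Hello world" sketch in Keras; layer sizes, optimizer,
# batch size, and epoch count are illustrative assumptions.
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255   # flatten, scale to [0, 1]
x_test = x_test.reshape(-1, 784).astype('float32') / 255
y_train = to_categorical(y_train, 10)                         # one-hot targets
y_test = to_categorical(y_test, 10)

# Step 1: define a set of functions by choosing the network structure.
model = Sequential([Dense(500, activation='sigmoid', input_dim=784),
                    Dense(500, activation='sigmoid'),
                    Dense(10, activation='softmax')])

model.compile(loss='categorical_crossentropy',   # Step 2: goodness of function
              optimizer='sgd',                   # Step 3: gradient descent
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=100, epochs=20)        # mini-batch updates
print(model.evaluate(x_test, y_test))                         # loss and accuracy on test data
```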
Tips for Deep Learning: Recipe of Deep Learning
After the three steps, ask first: do you get good results on the training data? If not, go back and modify the three steps. If yes, ask: do you get good results on the testing data? If not, that is overfitting. Do not always blame overfitting, though: in "Deep Residual Learning for Image Recognition" (http://arxiv.org/abs/1512.03385), a deeper network performs worse than a shallower one on the testing data, which looks like overfitting, but it also performs worse on the training data, so the real problem is that it is not well trained. Different approaches address different problems; for example, dropout is for good results on testing data. For good results on training data, the recipe is: choosing proper loss, mini-batch, new activation function, adaptive learning rate, and momentum.

Choosing Proper Loss
With a softmax output layer and a one-hot target such as (1, 0, ..., 0) for the label "1", the loss of an example can be the square error Σ(yi - ŷi)^2 or the cross entropy -Σ ŷi ln yi (several alternatives are listed at https://keras.io/objectives/). Which one is better? Comparing the total loss surfaces over two weights w1 and w2, the square error surface is very flat far from the target while the cross entropy surface keeps a usable slope, so when using a softmax output layer, choose cross entropy (see http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf).

Mini-batch
Randomly initialize the network parameters and split the training examples into mini-batches, for example 100 examples per mini-batch. Pick the first mini-batch, compute its loss, and update the parameters once; pick the second mini-batch and update again; continue until all mini-batches have been picked, which is one epoch, and then repeat the whole process (for example 20 epochs). With mini-batches we do not really minimize the total loss: each update looks only at one batch, so the trajectory is less stable than full gradient descent. Mini-batch is faster, though: in one epoch, full gradient descent updates the parameters once after seeing all examples, while with 20 mini-batches there are 20 updates per epoch. This is not always true with parallel computing, where full batch can have the same speed if the data set is not super large, but in practice mini-batch training gives better performance. Shuffle the training examples into different mini-batches for each epoch; don't worry, this is the default of Keras.

Hard to Get the Power of Deep
Deeper usually does not imply better, even on the training data.

Vanishing Gradient Problem
With sigmoid activations, the layers near the output have larger gradients and learn very fast, while the layers near the input have smaller gradients and learn very slowly; the later layers can already converge while the earlier layers are still almost random, so the network converges based on essentially random features. Intuitively, think of the derivative as the effect of a small change of a weight on the output: a change in an early weight has to pass through many sigmoids, and each sigmoid squashes a large input change into a small output change, so its influence on the final output, and hence its gradient, shrinks toward the input.

In 2006, people dealt with this using RBM pre-training; since 2015, people use ReLU.

ReLU
The Rectified Linear Unit outputs a = z when z > 0 and a = 0 when z ≤ 0. Reasons for using it: 1. fast to compute; 2. biological reason; 3. it behaves like an infinite number of sigmoids with different biases; 4. it alleviates the vanishing gradient problem (Xavier Glorot, AISTATS'11; Andrew L. Maas, ICML'13; Kaiming He, arXiv'15). Neurons whose output is 0 can be removed from the computation, leaving a thinner, effectively linear network in which the remaining paths do not have smaller and smaller gradients.

ReLU Variants
Leaky ReLU uses a = 0.01z for z < 0; Parametric ReLU uses a = αz for z < 0, where α is also learned by gradient descent.

Maxout
Maxout is a learnable activation function (Ian J. Goodfellow, ICML'13). The units in a layer are grouped (for example 2 elements in a group), and each group outputs the maximum of its elements; a group with values (5, 7) outputs 7, a group with (1, 2) outputs 2. ReLU is a special case of Maxout, and you can have more than 2 elements in a group. The activation function of a maxout network can be any piecewise linear convex function, and the number of pieces depends on how many elements are in a group.

Adaptive Learning Rate
If the learning rate is too large, the total loss may not decrease after each update; if it is too small, training is too slow; so set the learning rate carefully. Adagrad gives each parameter its own learning rate: the original update w ← w - η g becomes w ← w - (η / sqrt(Σ_t (g^t)^2)) g, where the denominator is the root of the summation of the squares of the previous derivatives. Two observations: the learning rate becomes smaller and smaller for all parameters, and parameters with smaller derivatives get larger effective learning rates, and vice versa (in the slide example, a parameter whose past gradients are around 0.1 and 0.2 ends up with a much larger step than one whose gradients are around 20 and 10). Adagrad (John Duchi, JMLR'11) is not the whole story; RMSprop is a popular refinement.
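As an illustration of the update rule above, here is a small numpy sketch of Adagrad on a toy two-parameter loss. The toy loss, the learning rate of 0.1, and the small eps added for numerical stability are assumptions of the sketch, not values from the slides.

```python
# A numpy sketch of the Adagrad update: each parameter gets its own step size,
# scaled by the root of the sum of its squared past gradients.
import numpy as np

def adagrad_update(w, grad, accum, eta=0.1, eps=1e-8):
    """One Adagrad step for parameter vector w given the current gradient."""
    accum += grad ** 2                        # accumulate squared gradients
    w -= eta / (np.sqrt(accum) + eps) * grad  # parameter-dependent learning rate
    return w, accum

# Toy usage: minimize L(w) = w1^2 + 10 * w2^2 from an arbitrary start.
w = np.array([3.0, -2.0])
accum = np.zeros_like(w)
for _ in range(100):
    grad = np.array([2 * w[0], 20 * w[1]])    # gradient of the toy loss
    w, accum = adagrad_update(w, grad, accum)
print(w)                                      # both coordinates move toward 0
```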
Momentum
The last item in the recipe for good results on training data is momentum. It is hard to find the optimal network parameters: gradient descent is very slow at plateaus and gets stuck where the gradient is 0, at saddle points and local minima. In the physical world, a ball rolling down the loss surface has momentum and does not stop immediately at such points. Putting this phenomenon into gradient descent, the movement at each step is the negative gradient plus the momentum, a fraction of the previous movement; even where the gradient is 0, the accumulated momentum can keep the parameters moving. This still does not guarantee reaching the global minimum, but it gives some hope. Adam is RMSProp (an advanced Adagrad) plus momentum.

Good Results on Testing Data
If the results on training data are good but the results on testing data are not, the recipe is: early stopping, regularization, dropout, and the network structure.

Panacea for Overfitting
Have more training data, or create more training data. In handwriting recognition, for example, shifting the original training images by 15 degrees creates additional training data.

Dropout
Training: each time before updating the parameters, each neuron has p% chance to drop out. The dropped neurons are removed, so the structure of the network changes and becomes thinner, and the thinner network is used for that update. For each mini-batch, we resample which neurons drop out.
Testing: no dropout. If the dropout rate at training is p%, all the weights are multiplied by (1-p)%.

Dropout: Intuitive Reason
Training with dropout is like practicing with weights strapped to your legs; at testing time the weights are removed and you become much stronger. Why should the weights be multiplied by (1-p)% at testing? Assume the dropout rate is 50%: during training, on average only half of the inputs to a neuron are present, so if testing used the trained weights unchanged with no dropout, the weighted sum z would be roughly twice as large as during training. Multiplying each weight by (1-p)% keeps the test-time z consistent with what the network saw during training.

Dropout Is a Kind of Ensemble
In an ensemble, you train a bunch of networks with different structures on different subsets of the training set, and at testing you average their outputs y1, y2, y3, y4 on the same input x. Dropout does something similar: each mini-batch trains one thinned network, and with M neurons there are 2^M possible thinned networks, whose parameters are shared. At testing, instead of averaging the outputs of all these networks, we use the full network with all the weights multiplied by (1-p)%, which approximates that average.

More About Dropout
More references for dropout: Nitish Srivastava, JMLR'14; Pierre Baldi, NIPS'13; Geoffrey E. Hinton, arXiv'12. Dropout works better with Maxout (Ian J. Goodfellow, ICML'13). Dropconnect (Li Wan, ICML'13) deletes the connections between neurons instead of the neurons themselves. Annealed dropout (S. J. Rennie, SLT'14) decreases the dropout rate over epochs. Standout (J. Ba, NIPS'13) gives each neuron a different dropout rate.

Demo
The demo network has two hidden layers of 500 neurons and a softmax output y1, ..., y10, with dropout added after each hidden layer: model.add(Dropout(0.8)) following the first hidden layer and model.add(Dropout(0.8)) following the second.
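The following numpy sketch, not from the slides, illustrates why the (1-p) scaling works: averaging a neuron's output over many randomly thinned networks comes out close to the output of the full network with its weights multiplied by (1-p). The activations and weights used here are arbitrary illustrative numbers.

```python
# A numpy sketch of the (1-p) weight scaling at test time described above.
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                                    # dropout rate (50% as in the slide example)
a = np.array([1.0, 2.0, 3.0, 4.0])         # activations feeding into one neuron
w = np.array([0.5, -1.0, 0.8, 0.3])        # weights into that neuron

# Training-style outputs: drop each input with probability p, many times.
masks = rng.random((100000, a.size)) >= p  # keep each unit with probability 1-p
train_outputs = (masks * a) @ w            # one output per random thinned network

# Test-style output: no dropout, weights multiplied by (1-p).
test_output = a @ (w * (1 - p))

print(train_outputs.mean(), test_output)   # the two values are close
```

Note that Keras's Dropout layer takes care of this consistency internally, so you do not rescale the weights yourself at test time.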
Network Structure
The last item in the recipe for good results on testing data is the network structure itself; CNN is a very good example (next lecture).

Concluding Remarks
The recipe of deep learning: define a set of functions (a neural network), measure the goodness of functions, and pick the best function; then check for good results on the training data, and only after that for good results on the testing data, applying the corresponding fixes whenever the answer is no.

Lecture II: Variants of Neural Networks
The two main variants are the Convolutional Neural Network (CNN), widely used in image processing, and the Recurrent Neural Network (RNN).

Why CNN for Image?
Can the network be simplified by considering the properties of images? Visualizations of trained networks (Zeiler, M.D., ECCV 2014) show that, starting from raw pixels, the first layer learns the most basic classifiers, which the second layer uses as modules to build more complex classifiers, and so on. Three properties of images motivate CNN:
- Property 1: some patterns are much smaller than the whole image. A neuron does not have to see the whole image to discover a pattern such as a "beak" detector; connecting it to a small region needs fewer parameters.
- Property 2: the same patterns appear in different regions. An "upper-left beak" detector and a "middle beak" detector do almost the same thing, so they can use the same set of parameters.
- Property 3: subsampling the pixels will not change the object. A subsampled bird image is still a bird, so we can subsample the pixels to make the image smaller, leaving fewer parameters for the network to process.

The three steps for deep learning stay the same; deep learning is so simple. Only Step 1's function set becomes a convolutional neural network.

The Whole CNN
Convolution and max pooling are applied to the input image, and this pair can repeat many times; the result is then flattened and fed into a fully connected feedforward network that produces the final answer, for example "cat" versus "dog". Convolution exploits Properties 1 and 2, and max pooling exploits Property 3.

CNN: Convolution
Consider a 6x6 binary image and two 3x3 filters:

image:
1 0 0 0 0 1
0 1 0 0 1 0
0 0 1 1 0 0
1 0 0 0 1 0
0 1 0 0 1 0
0 0 1 0 1 0

Filter 1:          Filter 2:
 1 -1 -1           -1  1 -1
-1  1 -1           -1  1 -1
-1 -1  1           -1  1 -1

The filter values are the network parameters to be learned. Each filter is a small matrix that detects a small pattern (3x3 here), which is exactly Property 1.
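To show the mechanics, here is a numpy sketch, not from the slides, that slides Filter 1 over the 6x6 image; a stride of 1 is assumed, which yields a 4x4 result.

```python
# A numpy sketch of the convolution step above: slide the 3x3 filter over the
# 6x6 image with stride 1 and record the dot product at each position.
import numpy as np

image = np.array([[1, 0, 0, 0, 0, 1],
                  [0, 1, 0, 0, 1, 0],
                  [0, 0, 1, 1, 0, 0],
                  [1, 0, 0, 0, 1, 0],
                  [0, 1, 0, 0, 1, 0],
                  [0, 0, 1, 0, 1, 0]])

filter1 = np.array([[ 1, -1, -1],     # detects a diagonal (upper-left to lower-right) pattern
                    [-1,  1, -1],
                    [-1, -1,  1]])

out = np.zeros((4, 4))                # (6 - 3 + 1) x (6 - 3 + 1) feature map, stride 1
for i in range(4):
    for j in range(4):
        out[i, j] = np.sum(image[i:i+3, j:j+3] * filter1)
print(out)                            # large values (3) where the diagonal pattern appears
```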
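Putting the pipeline above together, a minimal Keras sketch of the whole CNN might look like the following. It is not code from the slides; the filter counts, kernel and pooling sizes, relu activations, and the 28x28 grayscale input shape are illustrative assumptions.

```python
# A minimal Keras sketch of the whole CNN pipeline: convolution and max pooling
# repeated, then flatten, then a fully connected network.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(25, (3, 3), activation='relu', input_shape=(28, 28, 1)))  # convolution
model.add(MaxPooling2D((2, 2)))                                            # max pooling
model.add(Conv2D(50, (3, 3), activation='relu'))                           # convolution again
model.add(MaxPooling2D((2, 2)))                                            # max pooling again
model.add(Flatten())                                                       # flatten
model.add(Dense(100, activation='relu'))                                   # fully connected
model.add(Dense(10, activation='softmax'))                                 # e.g. 10 classes
```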