Convolutional Neural Networks (CNN): Training Slides
CNN Extensions in the Deep Learning Era
Institute of Computing Technology, Chinese Academy of Sciences

References:
- A. Krizhevsky, I. Sutskever, G. E. Hinton. ImageNet classification with deep convolutional neural networks. NIPS 2012.
- Y. Jia et al. Caffe: Convolutional Architecture for Fast Feature Embedding. ACM MM 2014.
- K. Simonyan, A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
- C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich. Going deeper with convolutions. CVPR 2015 (arXiv:1409.4842, 2014).

Convolution: example

Convolution: formalization

Convolution: why?
1. Sparse interactions
- Limited (sparse) connectivity: the kernel is smaller than the input, so each unit connects only to a local patch.
- Far fewer connections means easier learning and lower computational cost: fully connecting m input nodes to n output nodes costs O(mn); restricting each output to k (k << m) inputs costs O(kn).
- Hierarchical receptive fields (biologically inspired): neurons in higher layers have larger receptive fields.
2. Parameter sharing
- Tied weights: the same kernel is reused at every position, which further shrinks the parameter count dramatically.
3. Equivariant representations
- Convolution is equivariant to translation; combined with pooling, this yields (approximate) translation invariance.
- The property does not hold for scale or rotation.

Basic structure of a CNN: three steps
- Convolution (pre-synaptic activation, "net")
- Nonlinear activation (detector)
- Pooling
Two ways to define a "layer": the complex definition (all three steps form one layer) and the simple definition (each step is its own layer); under the latter, some layers have no parameters.

Pooling: definition (no learnable parameters)
- "Replaces the output of the net at a certain location with a summary statistic of the nearby outputs."
- Kinds: max pooling; (weighted) average pooling.

Why pooling?
- It buys invariance. Small translation invariance: what matters is that a feature is present, not exactly where. This is a strong prior: the function the layer learns must be invariant to small translations.
- Rotation invariance? Use 9 kernels (templates) at different orientations and pool over their responses. (Figure: response values of the 9 oriented kernels; max pooling picks the strongest response regardless of orientation.)

Pooling combined with downsampling:
- Better translation invariance.
- Higher computational efficiency (fewer neurons).

From full connectivity to limited connectivity:
- Some connection weights are forced to 0, typically those between non-adjacent neurons; only connections between neighboring neurons are kept.
- A CNN is thus a special case of a fully connected network in which many weights are fixed at 0.

Why convolution and pooling? They encode "a prior probability distribution over the parameters of a model that encodes our beliefs about what models are reasonable, before we have seen any data" (cf. no free lunch): before seeing any data, our experience tells us which model parameters are sensible.
- Local connections; invariance to translation; tied weights.
- Inspired by biological nervous systems.
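The convolution and pooling operations described above can be sketched in a few lines of plain Python (a minimal illustration written for these notes, not code from the deck):

```python
def conv2d(x, k):
    """Valid 2D cross-correlation on nested lists: each output unit connects
    only to a kernel-sized patch of the input (sparse interactions), and the
    same kernel is reused at every position (parameter sharing)."""
    H, W, kh, kw = len(x), len(x[0]), len(k), len(k[0])
    return [[sum(x[i + a][j + b] * k[a][b] for a in range(kh) for b in range(kw))
             for j in range(W - kw + 1)]
            for i in range(H - kh + 1)]

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling: replaces each neighborhood with a
    summary statistic of the nearby outputs (here, the maximum)."""
    H, W = len(x), len(x[0])
    return [[max(x[i * s + a][j * s + b] for a in range(s) for b in range(s))
             for j in range(W // s)]
            for i in range(H // s)]

img = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
ones3 = [[1] * 3] * 3
print(conv2d(img, ones3))   # [[45, 54], [81, 90]]
print(max_pool(img))        # [[5, 7], [13, 15]]
```

Note that a shifted input shifts the conv output by the same amount (equivariance), while the pooled maxima barely change for small shifts (the invariance the slides describe).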
Origins: Neocognitron (1980)
- Simple cells and complex cells; lower-order features feed higher-order ones.
- Local connections.
- K. Fukushima, "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics, vol. 36, pp. 193-202, 1980.

Neocognitron: training
- Layer-wise, self-organizing, competitive learning (unsupervised).
- The output layer is trained separately, with supervision.

LeCun's CNN (1989), for character recognition
- Simplified the Neocognitron architecture.
- Training: supervised, via the backpropagation algorithm with SGD; tanh activations, which converge faster than sigmoid ones.
- Applied to ZIP code recognition; widely deployed.

Architecture of the 1989 network:
- Input: 16x16 image.
- H1: 12 kernels of 5x5, 8x8 neurons per map.
- H2: 12 kernels of 5x5x8, 4x4 neurons per map.
- H3: 30 neurons.
- Output layer: 10 neurons.
- Total connections: 5*5*12*64 + 5*5*8*12*16 + 192*30, about 66,000.
- Tied weights: within one feature map, the kernel is identical at every position.

1998: LeNet-5, for digit/character recognition
- Feature map: "a set of units whose weights are constrained to be identical."
- Example: the C3 layer has (3*6 + 4*9 + 6*1)*25 + 16 = 1516 parameters.

Follow-up: CNNs for object detection and recognition.

AlexNet for ImageNet (2012)
- A large-scale CNN: 650K neurons, 60M parameters.
- Used a battery of techniques: Dropout, data augmentation, ReLU, Local Response Normalization, contrast normalization, etc.
- ReLU activation function.
- Implementation: split across 2 GPUs. Layer sizes: input 150,528; then 253,440, 186,624, 64,896, 64,896, 43,264, 4096, 4096, 1000.

ImageNet classification task (1000 classes, 1,431,167 images), top-5 error rates:

Rank  Team         Error (top-5)  Description
1     U. Toronto   0.153          Deep learning
2     U. Tokyo     0.261          Hand-crafted features and learning models (a bottleneck)
3     U. Oxford    0.270
4     Xerox/INRIA  0.271
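The parameter arithmetic quoted on the LeNet-5 slide can be reproduced directly from its connection table (a quick check written for these notes, not code from the deck):

```python
# LeNet-5's C3 layer connects its 16 output maps to S2's 6 input maps
# sparsely: 6 output maps each see 3 input maps, 9 see 4, and 1 sees all 6.
# Each (output map, input map) pair costs one 5x5 kernel of 25 weights;
# each of the 16 output maps adds one bias term.
kernels = 3 * 6 + 4 * 9 + 6 * 1   # 60 five-by-five kernels in total
params = kernels * 5 * 5 + 16     # 25 weights per kernel + 16 biases
print(params)                     # 1516, matching the slide
```

The sparse connection table is itself an instance of the "limited connectivity" prior from the earlier slides: forcing most map-to-map kernels to be absent cuts parameters and breaks symmetry between the maps.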
AlexNet: the importance of depth

Depth             8    7     6     6     4
Parameters        60M  44M   10M   59M   10M
Performance loss  0%   1.1%  5.7%  3.0%  33.5%

VGG Net (2014)
- Multiple stages; each stage contains several convolutional layers.
- Convolution stride 1x1; kernel size 3x3; one 2x2 pooling layer per stage.
- 16-19 layers in total; multi-scale fusion.
- Configuration notation, e.g. "conv3-64": a 3x3 receptive field with 64 channels.

GoogLeNet (2014)
- A very large network: 22 layers, with roughly 4x the computational cost of AlexNet.
- The Inception module extracts features at several scales in parallel and concatenates them: 1x1, 3x3, and 5x5 convolutions, plus 3x3 max pooling, all feeding a filter concatenation on top of the previous layer.
- Added 1x1 convolutions shrink the number of response maps before the expensive 3x3 and 5x5 convolutions (and after the pooling branch).
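The effect of those 1x1 reductions can be seen with a back-of-the-envelope cost count; the channel and grid sizes below are illustrative assumptions, not the actual GoogLeNet configuration:

```python
def conv_cost(hw, k, c_in, c_out):
    """Multiply-adds for a k x k convolution producing an hw x hw output map
    from c_in input maps (stride 1, 'same' output size assumed)."""
    return hw * hw * k * k * c_in * c_out

# Hypothetical Inception branch: 192 input maps, 32 output maps, 28x28 grid.
direct = conv_cost(28, 5, 192, 32)                  # 5x5 applied directly
reduced = (conv_cost(28, 1, 192, 16)                # 1x1 bottleneck to 16 maps
           + conv_cost(28, 5, 16, 32))              # then the 5x5 on 16 maps
print(direct, reduced, round(direct / reduced, 1))  # the bottleneck is ~10x cheaper
```

The 1x1 layer adds a little cost of its own but slashes c_in for the 5x5 that follows, which is why the module stays affordable despite its parallel branches.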
GoogLeNet: performance on the 1000-class ImageNet classification task.

Thank you!