Chapter 2: Neural Networks (2)

Contents: the perceptron, multilayer feedforward neural networks, the back-propagation (BP) algorithm, and neural network applications.
Key and difficult points: the perceptron and the back-propagation (BP) algorithm.

1. Introductory Example

In 1981 the biologists W. Grogan and W. Wirth identified two species of midges (Apf and Af). They measured the wing length and antenna length of every individual; the data are as follows:

  Wing length  Antenna length  Class
  1.78         1.14            Apf
  1.96         1.18            Apf
  1.86         1.20            Apf
  1.72         1.24            Af
  2.00         1.26            Apf
  2.00         1.28            Apf
  1.96         1.30            Apf
  1.74         1.36            Af

Question: three new midges are caught, with (antenna length, wing length) of (1.24, 1.80), (1.28, 1.84), and (1.40, 2.04). To which species does each belong?

Solution:
· Plot wing length on the vertical axis and antenna length on the horizontal axis, so that each midge's measurements determine a point in the plane. Six midges belong to the Apf class, marked with dots "·"; nine belong to the Af class, marked with small circles "∘".
· The result is shown in Figure 1.

Idea: draw a straight line that separates the two classes of midges.
· For example, take A = (1.44, 2.10) and B = (1.10, 1.16), and draw the line through A and B:
    y = 1.47x − 0.017
  where x is the antenna length and y is the wing length.
· Classification rule: for a midge with data (x, y),
  if y ≥ 1.47x − 0.017, classify the midge as Apf;
  if y < 1.47x − 0.017, classify the midge as Af.
· Classification result: (1.24, 1.80) and (1.28, 1.84) belong to Af; (1.40, 2.04) belongs to Apf.

(Figure: the classification line.)

In situations like the one shown next, however, a single dividing line no longer works.

New idea: treat the problem as a system. The midge measurements are the input, the midge species is the output, and we study the relationship between input and output.

2. Perceptron Model
· The model is an assembly of interconnected nodes and weighted links.
· The output node sums its input values according to the weights of its links.
· The weighted sum is compared against some threshold t.
· The perceptron was the first neural network with the ability to learn.
· It is made up of only input neurons and output neurons.
· Input neurons typically have two states: ON and OFF.
· Output neurons use a simple threshold activation function.
· In its basic form it can only solve linear problems, which limits its applications (it can only express linear decision surfaces).

How Do Perceptrons Learn?
· Perceptrons use supervised training (training means learning the weights of the neurons).
· If the output is not correct, the weights are adjusted according to the perceptron rule:
    w_i = w_i + Δw_i,  Δw_i = η (t − o) x_i
  where t is the target output, o is the actual output, x_i is the i-th input, and η is the learning rate.

3. Back Propagation Networks

The following diagram shows a Back Propagation NN. This NN consists of three layers:
1. an input layer with three neurons;
2. a hidden layer with two neurons;
3. an output layer with two neurons.
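The perceptron training procedure can be sketched in Python on the eight midge samples listed in the introductory example. This is a minimal illustration, not part of the original notes: the learning rate, the epoch cap, and the zero initialization are all choices made here for the sketch.

```python
# Single perceptron trained on the midge data from Section 1.
# Features: (antenna length, wing length); class Apf -> 1, Af -> 0.
# Learning rate, epoch cap, and zero initialization are illustrative choices.
data = [
    (1.14, 1.78, 1), (1.18, 1.96, 1), (1.20, 1.86, 1),
    (1.24, 1.72, 0), (1.26, 2.00, 1), (1.28, 2.00, 1),
    (1.30, 1.96, 1), (1.36, 1.74, 0),
]

def train(samples, eta=0.1, max_epochs=10000):
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x1, x2, t in samples:
            o = 1 if w1 * x1 + w2 * x2 + b >= 0 else 0  # threshold activation
            if o != t:
                # perceptron rule: w_i <- w_i + eta * (t - o) * x_i
                w1 += eta * (t - o) * x1
                w2 += eta * (t - o) * x2
                b += eta * (t - o)
                mistakes += 1
        if mistakes == 0:  # every training point classified correctly
            break
    return w1, w2, b

w1, w2, b = train(data)
preds = [1 if w1 * x1 + w2 * x2 + b >= 0 else 0 for x1, x2, _ in data]
```

Because the eight listed points are linearly separable (the line y = 1.47x − 0.017 separates them), the perceptron convergence theorem guarantees that training eventually stops with every training point classified correctly.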
Generally, a Back Propagation NN:
· is the most common neural network;
· is an extension of the perceptron:
  (1) Multiple layers: one or more "hidden" layers are added between the input and output layers.
  (2) The activation function is not simply a threshold; it is usually a sigmoid function.
  (3) It is a general function approximator, not limited to linear problems. For example, a typical multilayer network and its decision surface are depicted in the figure.
· Information flows in one direction: the outputs of one layer act as inputs to the next layer.

Note that:
1. The output of a neuron in a layer goes to all neurons in the following layer.
2. Each neuron has its own input weights.
3. The weights for the input layer are assumed to be 1 for each input; in other words, input values are not changed.
4. The output of the NN is obtained by applying input values to the input layer and passing the output of each neuron to the following layer as its input.
5. A Back Propagation NN must have at least an input layer and an output layer; it can have zero or more hidden layers.

The number of neurons in the input layer depends on the number of possible inputs, while the number of neurons in the output layer depends on the number of desired outputs. The number of hidden layers, and the number of neurons in each hidden layer, cannot be well defined in advance and may change with the network configuration and the type of data. In general, adding a hidden layer allows the network to learn more complex patterns, but at the same time decreases its performance. You can start with a single hidden layer and add more hidden layers if you notice that the network is not learning as well as you would like.

For example, suppose we have a bank credit application with ten questions whose answers determine the credit amount and the interest rate. A Back Propagation NN for this task would have ten neurons in the input layer and two neurons in the output layer.
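To show how values flow through such a network, the following Python sketch computes one forward pass for the three-layer layout described above (three inputs, two hidden neurons, two outputs). The weight and bias values are made up purely to illustrate the shapes involved; they are not from the notes.

```python
import math

def sigmoid(v):
    """Logistic activation commonly used in back-propagation networks."""
    return 1.0 / (1.0 + math.exp(-v))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # w_hidden[j] holds the weights from all inputs into hidden unit j
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    # the hidden outputs become the inputs of the output layer
    out = [sigmoid(sum(w * h for w, h in zip(ws, hidden)) + b)
           for ws, b in zip(w_out, b_out)]
    return hidden, out

# made-up weights for a 3-2-2 network, purely illustrative
w_h = [[0.2, 0.4, -0.5], [-0.3, 0.1, 0.2]]
b_h = [-0.4, 0.2]
w_o = [[-0.3, -0.2], [0.1, 0.3]]
b_o = [0.1, -0.1]

hidden, out = forward([1.0, 0.0, 1.0], w_h, b_h, w_o, b_o)
```

Note that each layer is computed entirely from the previous layer's outputs, which is the one-directional information flow described above.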
4. The Back Propagation Algorithm

The Back Propagation NN uses a supervised training mode. The algorithm learns the weights for a multilayer network. Training can be summarized as follows:

Step 1: Initialize the weights. The weight of every connection between two neurons is initialized to a small random number, and each neuron's bias is likewise initialized to a random number. Each input sample is then processed starting from Step 2.

Step 2: Propagate the input forward. The training sample is presented to the input layer of the network, and the output of every neuron is computed in turn. Each unit's net input is a linear combination of its inputs, and its output is obtained by applying the activation function:
  I_j = Σ_i w_ij O_i + θ_j,  O_j = 1 / (1 + e^(−I_j))
where w_ij is the weight of the connection from unit i to unit j, O_i is the output of unit i, and θ_j is the bias of unit j.

Step 3: Propagate the error backward. Step 2 moves forward through the network and produces the actual output at the output layer, which is compared with the expected output to obtain the error of each output unit:
  Err_j = O_j (1 − O_j)(T_j − O_j)
where T_j is the expected output of output unit j. The error then has to be propagated from back to front: the error of a unit j in an earlier layer is computed from the errors of all units k in the following layer that it connects to,
  Err_j = O_j (1 − O_j) Σ_k Err_k w_jk
applied in turn from the last hidden layer back to the first, giving the error of every hidden neuron.

Step 4: Adjust the weights and biases. Once the errors of all neurons have been computed, the network weights and neuron biases are updated together. Weights are adjusted starting from the connections between the input layer and the first hidden layer and proceeding layer by layer, each weight by:
  w_ij = w_ij + Δw_ij = w_ij + l · Err_j · O_i
and each neuron's bias is updated by:
  θ_j = θ_j + Δθ_j = θ_j + l · Err_j
where l is the learning rate.

Step 5: Check for termination. For each sample, if the final output error is within an acceptable range or the number of iterations t has reached a given threshold, select the next sample and return to Step 2. Otherwise, increment the iteration count by 1 and return to Step 2 to continue training on the current sample.

5. Worked Example

A feedforward neural network is shown in the figure below. The learning rate l is 0.9, the current training sample is x = {1, 0, 1}, and its expected class label is 1. The table below gives the network's current connection weights and neuron biases. Work through the training process of this network on the current sample.
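To make Steps 2 through 4 concrete, the sketch below runs one full training pass in Python for a 3-2-1 network (inputs 1 to 3, hidden units 4 and 5, output unit 6) with x = {1, 0, 1}, expected label 1, and l = 0.9, as in the worked example. The original weight table did not survive extraction, so the initial weights and biases here are placeholders chosen only to illustrate the procedure.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

l = 0.9                        # learning rate from the worked example
x = {1: 1.0, 2: 0.0, 3: 1.0}   # training sample x = {1, 0, 1}
target = 1.0                   # expected class label

# Placeholder initial weights and biases (the original table was lost);
# a key (i, j) means "connection from unit i to unit j".
w = {(1, 4): 0.2, (2, 4): 0.4, (3, 4): -0.5,
     (1, 5): -0.3, (2, 5): 0.1, (3, 5): 0.2,
     (4, 6): -0.3, (5, 6): -0.2}
theta = {4: -0.4, 5: 0.2, 6: 0.1}

# Step 2: forward propagation, I_j = sum_i w_ij*O_i + theta_j, O_j = sigmoid(I_j)
o = dict(x)
for j, inputs in ((4, (1, 2, 3)), (5, (1, 2, 3)), (6, (4, 5))):
    o[j] = sigmoid(sum(w[i, j] * o[i] for i in inputs) + theta[j])

# Step 3: error terms, output unit first, then the hidden layer
err = {6: o[6] * (1 - o[6]) * (target - o[6])}
for j in (4, 5):
    err[j] = o[j] * (1 - o[j]) * err[6] * w[j, 6]

# Step 4: update every weight and bias with the rules from the text
for (i, j) in list(w):
    w[i, j] += l * err[j] * o[i]
for j in theta:
    theta[j] += l * err[j]
```

With these placeholder values the forward pass gives O_6 ≈ 0.474 and the output error term Err_6 ≈ 0.131; per Step 5, the pass would be repeated until the output error falls within an acceptable range.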