
Graduation Project (English Translation), approximately 4650 Chinese characters
Original title: The Viterbi Algorithm and Probability of Error for Soft and Hard-Decision Decoding
Translated title: 维特比算法和软硬判决译码的差错控制
March 2014

Abstract

A convolutional code can be decoded with the Viterbi algorithm. Because the complexity of a Viterbi decoder grows exponentially with the traceback depth, the traceback depth has a strong effect on decoder complexity and can even make the decoder impractical; existing references give only a rough range for the traceback depth, so a specific value is difficult to fix for hardware implementation and performance simulation. The decoder is a maximum-likelihood sequence estimator (MLSE), so decoding a convolutional code amounts to searching the trellis for the most probable sequence. Depending on whether the detector makes hard or soft decisions, the metric used in the trellis search may be either the Hamming distance or the Euclidean distance. Since a convolutional code does not have a fixed length, its performance is derived from the probability of error for sequences that merge with the all-zero sequence for the first time at a given node of the trellis. The first-event error probability is defined as the probability that a path merging with the all-zero path at node B has a metric that exceeds the metric of the all-zero path for the first time.

Keywords: convolutional codes; Viterbi decoding; trellis; probability of error

1. Optimum Decoding of Convolutional Codes - The Viterbi Algorithm

In decoding a block code transmitted over a memoryless channel, we compute the distances (Hamming distance for hard-decision decoding, Euclidean distance for soft-decision decoding) between the received word and the $2^k$ possible transmitted code words, and we select as the decoder output the code word closest to the received word. This decision rule requires the computation of $2^k$ distance metrics and is optimum in the sense that it minimizes the probability of error for the binary symmetric channel with $p < 1/2$ and for the additive white Gaussian noise channel.

Unlike a block code, which has a fixed length n, a convolutional encoder is basically a finite-state machine. Hence its optimum decoder is a maximum-likelihood sequence estimator (MLSE) of the same type described in Section 5.1.4 for signals with memory, such as NRZI and CPM. Therefore, optimum decoding of a convolutional code involves searching the trellis for the most probable sequence. Depending on whether the detector following the demodulator makes hard or soft decisions, the metric used in the trellis search is either a Hamming metric or a Euclidean metric. We elaborate below using the trellis of Figure 8.2-5 for the convolutional code shown in Figure 8.2-2.

Consider the two paths in the trellis that begin at the initial state a and return to state a after three state transitions (three branches). They correspond to the information sequences 000 and 100 and to the transmitted sequences 000 000 000 and 111 001 011, respectively. We denote the transmitted bits by $c_{jm}$, j = 1, 2, 3; m = 1, 2, 3, where the subscript j indicates the jth branch and the subscript m the mth bit of that branch. Correspondingly, $r_{jm}$, j = 1, 2, 3; m = 1, 2, 3, denotes the output of the demodulator. With hard-decision decoding, the detector output for each transmitted bit is either 0 or 1. On the other hand, with soft-decision decoding and a coded sequence transmitted by binary coherent PSK, the decoder input is

$r_{jm} = \sqrt{E_c}\,(2c_{jm} - 1) + n_{jm}$   (8.2-9)

where $n_{jm}$ denotes the additive noise and $E_c$ is the transmitted signal energy per code bit. The metric for the jth branch of the ith path through the trellis is defined as the logarithm of the joint probability of the received sequence $r_{jm}$, m = 1, 2, 3, conditioned on the transmitted sequence $c_{jm}^{(i)}$, m = 1, 2, 3, of the ith path, i.e.,

$\mu_j^{(i)} = \log P(\mathbf{r}_j \mid \mathbf{c}_j^{(i)})$,   j = 1, 2, 3   (8.2-10)

The metric for the ith path consisting of B branches through the trellis is defined as

$PM^{(i)} = \sum_{j=1}^{B} \mu_j^{(i)}$   (8.2-11)

The criterion for deciding between two paths through the trellis is to select the one with the larger metric. This rule maximizes the probability of a correct decision or, equivalently, it minimizes the probability of error for the sequence of information bits. For example, suppose the demodulator performing hard-decision decoding outputs the received sequence 101 000 100. Let i = 0 denote the three-branch all-zero path and i = 1 the second three-branch path, which begins in the initial state a and merges with the all-zero path at state a after three transitions. The metrics of these two paths are

$PM^{(0)} = 6\log(1-p) + 3\log p$,   $PM^{(1)} = 4\log(1-p) + 5\log p$   (8.2-12)

where p is the bit error probability. Assuming that $p < 1/2$, $PM^{(0)}$ is larger than $PM^{(1)}$ at the merging node a and will remain larger for any path that stems from node a thereafter. This means that the path corresponding to $PM^{(1)}$ can be dropped from further consideration; the path corresponding to $PM^{(0)}$ is called the survivor.
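As a numerical check of the metrics in Eq. (8.2-12), the short Python sketch below re-encodes the two candidate information sequences and evaluates the hard-decision metric $d\log p + (9 - d)\log(1-p)$ against the received sequence 101 000 100. The generator taps (100, 101, 111) are an assumption of this illustration (they are not stated in this excerpt); they are chosen because they reproduce the transmitted branch sequence 111 001 011 quoted above.

import math

# Assumed rate-1/3, constraint-length-3 generator taps (not given explicitly in the text);
# they reproduce the transmitted sequence 111 001 011 for the information sequence 100.
GENERATORS = [(1, 0, 0), (1, 0, 1), (1, 1, 1)]

def encode(info_bits):
    # Encode with the assumed rate-1/3 convolutional encoder, starting from the zero state.
    reg = [0, 0, 0]
    out = []
    for b in info_bits:
        reg = [b] + reg[:2]                      # shift the new bit into the register
        out += [sum(g * r for g, r in zip(gen, reg)) % 2 for gen in GENERATORS]
    return out

def hard_metric(code_bits, received_bits, p):
    # Path metric of Eq. (8.2-12): log-likelihood of the received bits on a BSC with error probability p.
    d = sum(c != r for c, r in zip(code_bits, received_bits))   # Hamming distance
    n = len(code_bits)
    return d * math.log(p) + (n - d) * math.log(1 - p)

received = [1, 0, 1,  0, 0, 0,  1, 0, 0]   # the received sequence 101 000 100 from the text
p = 0.1                                    # any p < 1/2 gives the same ordering of the metrics
pm0 = hard_metric(encode([0, 0, 0]), received, p)   # all-zero path, i = 0 (distance 3)
pm1 = hard_metric(encode([1, 0, 0]), received, p)   # competing path, i = 1 (distance 5)
print(pm0, pm1)                            # pm0 > pm1, so the all-zero path is the survivor at node a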

Similarly, one of the two paths that merge at state b can be eliminated on the basis of their two metrics, and the same procedure is repeated at states c and d. As a result, after the first three state transitions only four paths remain, one terminating at each state, and each surviving path has a corresponding metric. This procedure is repeated at every stage of the trellis as new signals are received in the subsequent time intervals.

In general, when a binary convolutional code with k = 1 and constraint length K is decoded by the Viterbi algorithm, there are $2^{K-1}$ states. Hence at each stage there are $2^{K-1}$ surviving paths and $2^{K-1}$ path metrics, one for each survivor. More generally, a binary convolutional code in which k information bits at a time are shifted into an encoder made up of K (k-bit) shift-register stages generates a trellis with $2^{k(K-1)}$ states. Decoding such a code with the Viterbi algorithm therefore requires keeping $2^{k(K-1)}$ surviving paths and $2^{k(K-1)}$ path metrics. At every stage of the trellis, $2^k$ paths merge at each node. Since a metric must be computed for every path converging on a node, $2^k$ metrics are computed per node, and of the $2^k$ merging paths only one survives, namely the most probable (minimum-distance) path. The amount of computation per decoding stage thus grows exponentially with k and K, which limits the application of the Viterbi algorithm to relatively small values of k and K.
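The state-by-state bookkeeping described above (one add-compare-select per node, with one survivor and one metric kept per state) is summarized in the minimal hard-decision Viterbi sketch below. It is an illustration under the same assumption as the previous sketch (rate-1/3, K = 3 code with generator taps 100, 101, 111) and uses the Hamming distance as the branch metric, so smaller metrics are better.

# Minimal hard-decision Viterbi decoder for the assumed rate-1/3, K = 3 code.
GENERATORS = [(1, 0, 0), (1, 0, 1), (1, 1, 1)]

def branch(state, bit):
    # Next state and the 3 output bits when 'bit' enters the encoder in state (s1, s2).
    reg = (bit, state[0], state[1])
    out = tuple(sum(g * r for g, r in zip(gen, reg)) % 2 for gen in GENERATORS)
    return (bit, state[0]), out            # the new state keeps the two most recent inputs

def viterbi_decode(received):
    # received: flat list of hard bits, 3 per branch. Returns the minimum-distance information bits.
    states = [(a, b) for a in (0, 1) for b in (0, 1)]                    # 2^(K-1) = 4 states
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}   # start in state 00
    paths = {s: [] for s in states}
    for j in range(0, len(received), 3):
        r = received[j:j + 3]
        new_metric, new_paths = {}, {}
        for s in states:                                     # add-compare-select at each node
            best = None
            for prev in states:
                for bit in (0, 1):
                    nxt, out = branch(prev, bit)
                    if nxt != s:
                        continue
                    m = metric[prev] + sum(o != x for o, x in zip(out, r))
                    if best is None or m < best[0]:
                        best = (m, paths[prev] + [bit])
            new_metric[s], new_paths[s] = best               # keep only the survivor
        metric, paths = new_metric, new_paths
    return paths[min(metric, key=metric.get)]                # survivor with the best final metric

print(viterbi_decode([1, 0, 1,  0, 0, 0,  1, 0, 0]))
# -> [1, 0, 1]; the overall minimum-distance sequence need not be the all-zero path, since the
# example in the text only compares the two particular paths that merge at state a.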

When a long convolutionally encoded information sequence is decoded, the decoding delay is usually too long for most practical applications, and the memory required to store the full length of every surviving sequence is too large and too expensive. The solution to this problem, as described in Section 5.1.4, is to modify the Viterbi algorithm so that it maintains a fixed decoding delay without significantly degrading its optimum performance. The modification is to retain, at any given time t, only the most recent δ decoded information bits (symbols) of each surviving sequence. As each new information bit (symbol) is received, the decoder compares the metrics of the surviving sequences, finds the survivor with the largest metric, traces back δ branches in the trellis (in time), and outputs the bit on that survivor at that instant as the decoded value of the received bit (symbol). If δ is chosen large enough, after tracing back δ branches all surviving sequences will contain the same decoded bit (symbol); that is, all survivors at time t are very likely to stem from the same node at time t - δ. Experiments (computer simulations) show that with a delay δ ≥ 5K the performance degradation relative to the optimum Viterbi algorithm is negligible.
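A minimal sketch of this fixed-delay modification is given below. It is an assumption-level illustration (the value delta = 5*K and the helper name viterbi_step are mine, not the thesis's): once more than delta branches have been processed, the decoder emits the bit that sits delta branches back on the currently best survivor.

# Sketch of the fixed-decoding-delay modification: emit the bit located `delta` branches
# back on the currently best survivor instead of waiting for the end of the sequence.
# Hypothetical helper: viterbi_step(metric, paths, r) performs one add-compare-select
# stage (as in the previous sketch) and returns the updated metrics and survivor paths.

K = 3
delta = 5 * K            # traceback depth; the simulations cited in the text suggest delta >= 5K

def decode_stream(branches, metric, paths, viterbi_step):
    # branches: iterable of received 3-bit groups. Yields decoded bits with a delay of `delta` branches.
    for t, r in enumerate(branches):
        metric, paths = viterbi_step(metric, paths, r)   # one trellis stage
        if t >= delta:
            best_state = min(metric, key=metric.get)     # survivor with the best metric
            yield paths[best_state][t - delta]           # bit decided `delta` branches back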

2. Probability of Error for Soft-Decision Decoding

The topic of this section is the error-probability performance of the Viterbi algorithm when soft-decision decoding is used on the additive white Gaussian noise channel. In deriving the error probability of a convolutional code, the linearity of the code simplifies the derivation: we assume that the all-zero sequence is transmitted and compute the probability of deciding in favor of another sequence. The coded sequence is assumed to be transmitted by binary PSK (or four-phase PSK) with coherent detection at the demodulator, and the binary coded digits on the jth branch of the convolutional code are denoted by $c_{jm}$, m = 1, 2, ..., n, as defined in Section 8.2.2. The demodulator output, which is the input to the Viterbi decoder, is the sequence $r_{jm}$, m = 1, 2, ..., n; j = 1, 2, ..., where $r_{jm}$ is defined by Eq. (8.2-9). The soft-decision Viterbi decoder forms the branch metrics defined by Eq. (8.2-14), from which the path metrics are computed as

$CM^{(i)} = \sum_{j=1}^{B} \sum_{m=1}^{n} (2c_{jm}^{(i)} - 1)\, r_{jm}$   (8.2-16)

where i denotes any one of the competing paths at each node and B is the number of branches (information symbols) in a path. For example, the all-zero path, denoted by i = 0, has the path metric $CM^{(0)}$ given by Eq. (8.2-17).

Since the convolutional code does not have a fixed length, its performance is derived from the probability of error for sequences that merge with the all-zero sequence for the first time at a given node of the trellis. The first-event error probability is defined as the probability that a path merging with the all-zero path at node B has a metric that exceeds the metric of the all-zero path for the first time. Suppose that the incorrect path, labeled i = 1, that merges with the all-zero path differs from it in d bits, i.e., there are d 1s on the path i = 1 and the rest of its bits are 0. Comparing the pair of metrics $CM^{(1)}$ and $CM^{(0)}$ gives the pairwise error probability

$P_2(d) = P(CM^{(1)} \ge CM^{(0)})$   (8.2-18)

Since the coded bits of the two paths are identical except in the d positions where they differ, Eq. (8.2-18) simplifies to Eq. (8.2-19), in which the index l runs over the d bits in which the two paths differ and the set $\{r'_l\}$ denotes the decoder inputs corresponding to these d bits. The $\{r'_l\}$ are independent, identically distributed Gaussian random variables (each with mean $-\sqrt{E_c}$ when the all-zero sequence is sent). Hence the pairwise error probability for two paths that differ in d bits is

$P_2(d) = Q\big(\sqrt{2\gamma_b R_c d}\big)$   (8.2-20)

where $\gamma_b$ is the received SNR per bit and $R_c$ is the code rate.
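Equation (8.2-20) is straightforward to evaluate numerically. The sketch below is an illustration of mine (it uses scipy's complementary error function to express Q(x)); the SNR value and the distances are arbitrary example numbers, not values from the text.

import math
from scipy.special import erfc

def Q(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2)).
    return 0.5 * erfc(x / math.sqrt(2))

def p2_soft(d, gamma_b, Rc):
    # Pairwise error probability of Eq. (8.2-20) for two paths that differ in d bits.
    return Q(math.sqrt(2 * gamma_b * Rc * d))

gamma_b = 10 ** (4 / 10)          # per-bit SNR of 4 dB, an arbitrary example value
Rc = 1 / 3                        # code rate of the rate-1/3 example code
for d in (6, 8, 10):
    print(d, p2_soft(d, gamma_b, Rc))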

Although we have derived the first-event error probability for an incorrect path that is at distance d from the all-zero path, there may be many paths with different distances that merge with the all-zero path at a given node B. In fact, the transfer function T(D) provides a complete description of all the paths that merge with the all-zero path at node B at the various distances. Hence we can sum the pairwise error probability of Eq. (8.2-20) over all possible path distances to obtain an upper bound on the first-event error probability:

$P_e \le \sum_{d=d_{free}}^{\infty} a_d\, Q\big(\sqrt{2\gamma_b R_c d}\big)$   (8.2-21)

where $a_d$ denotes the number of paths of distance d that merge with the all-zero path for the first time. There are two reasons why Eq. (8.2-21) is an upper bound on the first-event error probability. One is that the events that cause the errors are not independent, as can be seen from the trellis. The other is that, by summing over all possible d, the convolutional code is implicitly assumed to be infinitely long. If the code is truncated periodically after B nodes, the upper bound of Eq. (8.2-21) can be improved by summing the error events over $d_{free} \le d \le B$ only. This improvement is helpful in determining the performance of short convolutional codes, but its effect is negligible when B is large.

If the Q function is upper-bounded by an exponential, namely

$Q\big(\sqrt{2\gamma_b R_c d}\big) \le e^{-\gamma_b R_c d}$   (8.2-22)

then the upper bound of Eq. (8.2-21) can be expressed in a slightly different form. Substituting Eq. (8.2-22) into Eq. (8.2-21), the first-event error probability is upper-bounded by

$P_e < T(D)\big|_{D = e^{-\gamma_b R_c}}$   (8.2-23)

Although the first-event error probability provides one measure of the performance of a convolutional code, a more useful measure is the bit error probability.
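Before turning to the bit error probability, note that the two first-event bounds just derived, Eqs. (8.2-21) and (8.2-23), translate directly into a short numerical routine. The distance-spectrum coefficients a_d used below are placeholder values for illustration only; for a real code they would be read off the expansion of its transfer function T(D).

import math
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / math.sqrt(2))

def first_event_bounds(a, gamma_b, Rc):
    # Union bound of Eq. (8.2-21) and the exponential relaxation obtained from Eq. (8.2-22),
    # both evaluated over a truncated distance spectrum.
    # a: dict mapping distance d -> number of first-merging paths a_d.
    q_bound = sum(a_d * Q(math.sqrt(2 * gamma_b * Rc * d)) for d, a_d in a.items())
    exp_bound = sum(a_d * math.exp(-gamma_b * Rc * d) for d, a_d in a.items())
    return q_bound, exp_bound

spectrum = {8: 2, 10: 5, 12: 13}   # hypothetical truncated spectrum, not taken from the text
print(first_event_bounds(spectrum, gamma_b=10 ** (4 / 10), Rc=1 / 3))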

The approach used to bound the first-event error probability can also be used to bound the bit error probability. When an incorrect path is selected, the information bits in which it differs from the correct path are decoded incorrectly. The exponent of the factor N in the transfer function T(D, N) indicates the number of information bit errors (the number of 1s) on the incorrect path that merges with the all-zero path at node B. Multiplying the pairwise error probability by the number of incorrectly decoded information bits contributed by the incorrect path at the merging node gives the bit error rate for that path. The average bit error probability is upper-bounded by multiplying each pairwise error probability by the number of information bit errors on the corresponding incorrect paths that merge with the correct path at node B, and summing over all d. Differentiating T(D, N) with respect to N produces exactly the multiplication factors corresponding to the number of information bit errors on each incorrect path. In general, T(D, N) can be expressed as

$T(D, N) = \sum_{d=d_{free}}^{\infty} a_d\, D^d N^{f(d)}$   (8.2-24)

where f(d) denotes the exponent of N as a function of d. Taking the derivative of T(D, N) with respect to N and setting N = 1 gives

$\dfrac{dT(D, N)}{dN}\Big|_{N=1} = \sum_{d=d_{free}}^{\infty} a_d f(d)\, D^d = \sum_{d=d_{free}}^{\infty} \beta_d\, D^d$   (8.2-25)

where $\beta_d = a_d f(d)$. Hence the bit error probability for k = 1 is upper-bounded by

$P_b \le \sum_{d=d_{free}}^{\infty} \beta_d\, Q\big(\sqrt{2\gamma_b R_c d}\big)$   (8.2-26)

If the Q function is upper-bounded by an exponential as in Eq. (8.2-22), Eq. (8.2-26) simplifies to

$P_b < \dfrac{dT(D, N)}{dN}\Big|_{N=1,\; D=e^{-\gamma_b R_c}}$   (8.2-27)

If k > 1, the corresponding bit error probability is obtained by dividing Eqs. (8.2-26) and (8.2-27) by k.
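The bit-error bound of Eq. (8.2-26) differs from the first-event bound only in its weighting coefficients, so it can be evaluated in the same way. The beta_d values below are placeholders standing in for the coefficients of the expansion of dT(D, N)/dN at N = 1; they are not taken from the text.

import math
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / math.sqrt(2))

def bit_error_bound(beta, gamma_b, Rc, k=1):
    # Upper bound of Eq. (8.2-26) on the bit error probability, divided by k when k > 1.
    # beta: dict mapping distance d -> beta_d = a_d * f(d) (truncated expansion of dT/dN).
    return sum(b * Q(math.sqrt(2 * gamma_b * Rc * d)) for d, b in beta.items()) / k

beta = {8: 4, 10: 12, 12: 40}     # placeholder coefficients, for illustration only
print(bit_error_bound(beta, gamma_b=10 ** (4 / 10), Rc=1 / 3))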

The error probability expressions above were derived under the assumption that the coded bits are transmitted by binary coherent PSK. The results also hold for four-phase coherent PSK, since this modulation/demodulation technique is equivalent to two independent (phase-quadrature) binary PSK systems. Other modulation and demodulation techniques, such as coherent and noncoherent binary FSK, can also be used, but the pairwise error probability $P_2(d)$ must then be recomputed. In other words, a change in the modulation/demodulation technique used to transmit the coded information sequence affects only the computation of $P_2(d)$; the derivation of $P_b$ remains the same.

Although the derivation of the error probability for Viterbi decoding given above applies to binary convolutional codes, it is easily extended to nonbinary convolutional codes, in which each nonbinary symbol is mapped into a distinct waveform. In that case the coefficients in the expansion of the derivative of T(D, N) in Eq. (8.2-25) represent the number of symbol errors on two paths separated by a distance of d symbols (distance measured in symbols). Denoting by $P_2(d)$ the error probability in a pairwise comparison of two paths separated by d symbols, the symbol error probability for k-bit symbols is upper-bounded by the corresponding weighted sum of these pairwise error probabilities. This symbol error probability can be converted into an equivalent bit error probability; for example, when $2^k$ orthogonal waveforms are used to transmit the k-bit symbols, the equivalent bit error probability is the symbol error probability multiplied by $2^{k-1}/(2^k - 1)$, as described in Chapter 5.

3. Probability of Error for Hard-Decision Decoding

As in the treatment of soft-decision decoding, we begin by determining the first-event error probability. The all-zero path is assumed to be transmitted, and the path that is compared with the all-zero path at some node B is assumed to differ from it in d bits. If d is odd, the all-zero path is correctly selected whenever the number of errors in the received sequence is less than (d + 1)/2; otherwise the incorrect path is selected. Hence the probability of selecting the incorrect path is

$P_2(d) = \sum_{k=(d+1)/2}^{d} \binom{d}{k} p^k (1-p)^{d-k}$   (8.2-28)

where p is the bit error probability of the binary symmetric channel. If d is even, the incorrect path is selected when the number of errors exceeds d/2. If the number of errors equals d/2, the metrics of the two paths are identical and one of the two paths is selected at random, so an error is made half of the time. Consequently, the total probability of selecting the incorrect path is

$P_2(d) = \sum_{k=d/2+1}^{d} \binom{d}{k} p^k (1-p)^{d-k} + \tfrac{1}{2}\binom{d}{d/2} p^{d/2} (1-p)^{d/2}$   (8.2-29)

As stated in Section 8.2.3, at a given node there are many paths with different distances that merge with the all-zero path, so a simple and exact expression for the error probability cannot be written.
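The hard-decision pairwise error probabilities of Eqs. (8.2-28) and (8.2-29), together with the bound $[4p(1-p)]^{d/2}$ quoted below as Eq. (8.2-31), are easy to evaluate. The sketch below is an illustration of mine, not part of the original text; the values of p and d are arbitrary.

from math import comb

def p2_hard(d, p):
    # Pairwise error probability for hard decisions, Eqs. (8.2-28) and (8.2-29).
    if d % 2 == 1:                                   # d odd, Eq. (8.2-28)
        return sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range((d + 1) // 2, d + 1))
    # d even, Eq. (8.2-29): a tie at exactly d/2 errors is broken at random
    tail = sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range(d // 2 + 1, d + 1))
    return tail + 0.5 * comb(d, d // 2) * p**(d // 2) * (1 - p)**(d // 2)

def p2_hard_bound(d, p):
    # Upper bound of Eq. (8.2-31): P2(d) < [4p(1-p)]^(d/2).
    return (4 * p * (1 - p)) ** (d / 2)

p = 0.01
for d in (7, 8):
    print(d, p2_hard(d, p), p2_hard_bound(d, p))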

However, an upper bound on the first-event error probability can be obtained by summing the pairwise error probabilities of all possible paths that merge with the all-zero path at the given node. This gives the union bound

$P_e < \sum_{d=d_{free}}^{\infty} a_d\, P_2(d)$   (8.2-30)

where $a_d$ denotes the number of paths corresponding to the distance d. These coefficients are exactly the coefficients in the expansion of the transfer function T(D) or T(D, N). Instead of using the expressions of Eqs. (8.2-28) and (8.2-29) for $P_2(d)$, we may use the upper bound

$P_2(d) < [4p(1-p)]^{d/2}$   (8.2-31)

which was defined in Section 8.1.5. Using this bound in Eq. (8.2-30), a looser upper bound on the first-event error probability is obtained:

$P_e < \sum_{d=d_{free}}^{\infty} a_d\, [4p(1-p)]^{d/2}$   (8.2-32)
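For completeness, the union bound of Eq. (8.2-30) and its looser form of Eq. (8.2-32) can be evaluated from a distance spectrum in the same way as the soft-decision bounds; a minimal sketch follows. The spectrum values are hypothetical, and the exact P2(d) of the previous sketch can be supplied through the p2 argument.

def union_bound_hard(a, p, p2=None):
    # Eq. (8.2-30) with an exact P2(d) (if `p2` is supplied), or Eq. (8.2-32) with the
    # bound [4p(1-p)]^(d/2) otherwise. `a` maps distance d -> number of paths a_d.
    if p2 is None:
        p2 = lambda d, p: (4 * p * (1 - p)) ** (d / 2)
    return sum(a_d * p2(d, p) for d, a_d in a.items())

print(union_bound_hard({8: 2, 10: 5, 12: 13}, p=0.01))   # hypothetical truncated spectrum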

The Viterbi Algorithm and Probability of Error for Soft and Hard-Decision Decoding (original English text)

Abstract

The decoder is largely affected by the traceback depth, because the decoder complexity increases exponentially with the traceback depth. Many references analyze theoretically the performance loss caused by the traceback depth, but the range they give is so wide that it is difficult to choose a precise value. The decoder is a maximum-likelihood sequence estimator (MLSE). Therefore, optimum decoding of a convolutional code involves a search through the trellis for the most probable sequence. Depending on whether the detector following the demodulator performs hard or soft decisions, the corresponding metric in the trellis search may be either a Hamming metric or a Euclidean metric, respectively. Since the convolutional code does not necessarily have a fixed length, we derive its performance from the probability of error for sequences that merge with the all-zero sequence for the first time at a given node in the trellis. In particular, we define the first-event error probability as the probability that another path that merges with the all-zero path at node B has a metric that exceeds the metric of the all-zero path for the first time.

Keywords: Optimum Decoding; the Viterbi Algorithm; trellis; Probability of Error

1. Optimum Decoding of Convolutional Codes - The Viterbi Algorithm

In the decoding of a block code for a memoryless channel, we computed the distances (Hamming distance for hard-decision decoding and Euclidean distance for soft-decision decoding) between the received code word and the $2^k$ possible transmitted code words. Then we selected the code word that was closest in distance to the received code word. This decision rule, which requires the computation of $2^k$ metrics, is optimum in the sense that it results in a minimum probability of error for the binary symmetric channel with $p < 1/2$ and the additive white Gaussian noise channel.

Unlike a block code, which has a fixed length n, a convolutional encoder is basically a finite-state machine. Hence the optimum decoder is a maximum-likelihood sequence estimator (MLSE) of the type described in Section 5.1.4 for signals with memory, such as NRZI and CPM. Therefore, optimum decoding of a convolutional code involves a search through the trellis for the most probable sequence. Depending on whether the detector following the demodulator performs hard or soft decisions, the corresponding metric in the trellis search may be either a Hamming metric or a Euclidean metric, respectively.

We elaborate below, using the trellis in Figure 8.2-5 for the convolutional code shown in Figure 8.2-2.

Consider the two paths in the trellis that begin at the initial state a and remerge at state a after three state transitions (three branches), corresponding to the two information sequences 000 and 100 and the transmitted sequences 000 000 000 and 111 001 011, respectively. We denote the transmitted bits by $c_{jm}$, j = 1, 2, 3; m = 1, 2, 3, where the index j indicates the jth branch and the index m the mth bit in that branch. Correspondingly, we define $r_{jm}$, j = 1, 2, 3; m = 1, 2, 3, as the output of the demodulator. If the decoder performs hard-decision decoding, the detector output for each transmitted bit is either 0 or 1. On the other hand, if soft-decision decoding is employed and the coded sequence is transmitted by binary coherent PSK, the input to the decoder is

$r_{jm} = \sqrt{E_c}\,(2c_{jm} - 1) + n_{jm}$   (8.2-9)

where $n_{jm}$ represents the additive noise and $E_c$ is the transmitted signal energy for each code bit.

A metric is defined for the jth branch of the ith path through the trellis as the logarithm of the joint probability of the sequence $r_{jm}$, m = 1, 2, 3, conditioned on the transmitted sequence $c_{jm}^{(i)}$, m = 1, 2, 3, for the ith path. That is,

$\mu_j^{(i)} = \log P(\mathbf{r}_j \mid \mathbf{c}_j^{(i)})$,   j = 1, 2, 3   (8.2-10)

Furthermore, a metric for the ith path consisting of B branches through the trellis is defined as

$PM^{(i)} = \sum_{j=1}^{B} \mu_j^{(i)}$   (8.2-11)

The criterion for deciding between two paths through the trellis is to select the one having the larger metric. This rule maximizes the probability of a correct decision or, equivalently, it minimizes the probability of error for the sequence of information bits. For example, suppose that hard-decision decoding is performed by the demodulator, yielding the received sequence 101 000 100. Let i = 0 denote the three-branch all-zero path and i = 1 the second three-branch path, which begins in the initial state a and remerges with the all-zero path at state a after three transitions. The metrics for these two paths are

$PM^{(0)} = 6\log(1-p) + 3\log p$,   $PM^{(1)} = 4\log(1-p) + 5\log p$   (8.2-12)

where p is the probability of a bit error. Assuming that $p < 1/2$, the metric $PM^{(0)}$ is larger than $PM^{(1)}$ at the merged node a after three transitions, and it will continue to be larger for any path that stems from node a. This means that the path corresponding to $PM^{(1)}$ can be discarded from further consideration. The path corresponding to the metric $PM^{(0)}$ is the survivor. Similarly, one of the two paths that merge at state b can be eliminated on the basis of the two corresponding metrics. This procedure is repeated at states c and d.

As a result, after the first three transitions, there are four surviving paths, one terminating at each state, and a corresponding metric for each survivor. This procedure is repeated at each stage of the trellis as new signals are received in subsequent time intervals.

In general, when a binary convolutional code with k = 1 and constraint length K is decoded by means of the Viterbi algorithm, there are $2^{K-1}$ states. Hence, there are $2^{K-1}$ surviving paths at each stage and $2^{K-1}$ metrics, one for each surviving path. Furthermore, a binary convolutional code in which k bits at a time are shifted into an encoder that consists of K (k-bit) shift-register stages generates a trellis that has $2^{k(K-1)}$ states. Consequently, the decoding of such a code by means of the Viterbi algorithm requires keeping track of $2^{k(K-1)}$ surviving paths and $2^{k(K-1)}$ metrics. At each stage of the trellis, there are $2^k$ paths that merge at each node. Since each path that converges at a common node requires the computation of a metric, there are $2^k$ metrics computed for each node. Of the $2^k$ paths that merge at each node, only one survives, and this is the most probable (minimum-distance) path. Thus, the number of computations in decoding performed at each stage increases exponentially with k and K. This exponential increase in computational burden limits the use of the Viterbi algorithm to relatively small values of K and k.

The decoding delay in decoding a long information sequence that has been convolutionally encoded is usually too long for most practical applications. Moreover, the memory required to store the entire length of the surviving sequences is large and expensive. As indicated in Section 5.1.4, a solution to this problem is to modify the Viterbi algorithm in a way that results in a fixed decoding delay without significantly affecting the optimal performance of the algorithm. Recall that the modification is to retain, at any given time t, only the most recent δ decoded information bits (symbols) in each surviving sequence. As each new information bit (symbol) is received, a final decision is made on the bit (symbol) received δ branches back in the trellis, by comparing the metrics of the surviving sequences and deciding in favor of the bit on the sequence having the largest metric. If δ is chosen sufficiently large, all surviving sequences will contain the identical decoded bit (symbol) δ branches back in time; that is, all surviving sequences at time t are highly likely to stem from the same node at time t - δ. Computer simulations have shown that a delay δ ≥ 5K results in a negligible degradation of performance relative to the optimum Viterbi algorithm.
