Thomas M. Cover, Elements of Information Theory: Solutions to Selected Exercises

2.2 Entropy of functions. Let X be a random variable taking on a finite number of values. What is the (general) inequality relationship between H(X) and H(Y) if (a) Y = 2^X? (b) Y = cos X?

Solution: Let y = g(x). Then p(y) = Σ_{x: g(x)=y} p(x). Consider any set of x's that map onto a single y. For this set,

    Σ_{x: g(x)=y} p(x) log p(x) ≤ Σ_{x: g(x)=y} p(x) log p(y) = p(y) log p(y),

since log is a monotone increasing function and p(x) ≤ Σ_{x: g(x)=y} p(x) = p(y). Extending this argument to the entire range of X (and Y), we obtain

    H(X) = -Σ_x p(x) log p(x) ≥ -Σ_y p(y) log p(y) = H(Y),

with equality iff g is one-to-one with probability one.
(a) Y = 2^X is one-to-one, and hence the entropy, which is just a function of the probabilities (not of the values of the random variable), does not change, i.e., H(X) = H(Y).
(b) Y = cos X is not necessarily one-to-one. Hence all that we can say is that H(X) ≥ H(Y), with equality if the cosine is one-to-one on the range of X.

2.16 Example of joint entropy. Let p(x, y) be given by

    X \ Y |  0    1
    ------+----------
      0   | 1/3  1/3
      1   |  0   1/3

Find
(a) H(X), H(Y).
(b) H(X|Y), H(Y|X).
(c) H(X, Y).
(d) H(Y) - H(Y|X).
(e) I(X; Y).
(f) Draw a Venn diagram for the quantities in (a) through (e).

Solution:
(a) H(X) = (2/3) log(3/2) + (1/3) log 3 = 0.918 bits = H(Y).
(b) H(X|Y) = (1/3) H(X|Y=0) + (2/3) H(X|Y=1) = (1/3)(0) + (2/3)(1) = 0.667 bits = H(Y|X).
(c) H(X, Y) = 3 × (1/3) log 3 = log 3 = 1.585 bits.
(d) H(Y) - H(Y|X) = 0.918 - 0.667 = 0.251 bits.
(e) I(X; Y) = H(Y) - H(Y|X) = 0.251 bits.
(f) See Figure 1 of the original document (a Venn diagram of H(X), H(Y), H(X, Y), H(X|Y), H(Y|X) and I(X; Y); the figure is not reproduced here).
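These numbers are easy to check by computing the entropies directly from the joint table. Below is a minimal Python sketch of that check; the helper name H and the variable names are my own, not from the text.

    import numpy as np

    def H(p):
        """Entropy in bits of a probability vector (0 log 0 taken as 0)."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    # Joint distribution p(x, y) from Problem 2.16: rows are x, columns are y.
    pxy = np.array([[1/3, 1/3],
                    [0.0, 1/3]])

    px = pxy.sum(axis=1)            # marginal of X
    py = pxy.sum(axis=0)            # marginal of Y
    Hx, Hy = H(px), H(py)
    Hxy = H(pxy.flatten())          # joint entropy H(X, Y)
    Hx_given_y = Hxy - Hy           # H(X|Y) = H(X,Y) - H(Y)
    Hy_given_x = Hxy - Hx           # H(Y|X) = H(X,Y) - H(X)
    Ixy = Hx + Hy - Hxy             # mutual information I(X; Y)

    print(Hx, Hy)                   # both ≈ 0.918
    print(Hx_given_y, Hy_given_x)   # both ≈ 0.667
    print(Hxy, Ixy)                 # ≈ 1.585 and ≈ 0.251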

2.29 Inequalities. Let X, Y and Z be joint random variables. Prove the following inequalities and find conditions for equality.
(a) H(X, Y | Z) ≥ H(X | Z)
(b) I(X, Y; Z) ≥ I(X; Z)
(c) H(X, Y, Z) - H(X, Y) ≤ H(X, Z) - H(X)
(d) I(X; Z | Y) ≥ I(Z; Y | X) - I(Z; Y) + I(X; Z)

Solution:
(a) Using the chain rule for conditional entropy,

    H(X, Y | Z) = H(X | Z) + H(Y | X, Z) ≥ H(X | Z),

with equality iff H(Y | X, Z) = 0, that is, when Y is a function of X and Z.
(b) Using the chain rule for mutual information,

    I(X, Y; Z) = I(X; Z) + I(Y; Z | X) ≥ I(X; Z),

with equality iff I(Y; Z | X) = 0, that is, when Y and Z are conditionally independent given X.
(c) Using first the chain rule for entropy and then the definition of conditional mutual information,

    H(X, Y, Z) - H(X, Y) = H(Z | X, Y) ≤ H(Z | X) = H(X, Z) - H(X),

with equality iff I(Y; Z | X) = 0, that is, when Y and Z are conditionally independent given X.
(d) Using the chain rule for mutual information in two ways,

    I(X; Z | Y) + I(Z; Y) = I(X, Y; Z) = I(Z; Y | X) + I(X; Z),

so I(X; Z | Y) = I(Z; Y | X) - I(Z; Y) + I(X; Z), and therefore this inequality is actually an equality in all cases.
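As a sanity check on the identity in part (d), the short sketch below draws a random joint pmf over three small alphabets and compares both sides numerically; the distribution and all names are my own illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    p = rng.random((3, 3, 3))
    p /= p.sum()                        # random joint pmf p(x, y, z); axes 0, 1, 2 = X, Y, Z

    def H(*keep):
        """Entropy in bits of the marginal of p over the listed axes."""
        drop = tuple(a for a in (0, 1, 2) if a not in keep)
        m = p.sum(axis=drop) if drop else p
        m = m[m > 0]
        return float(-np.sum(m * np.log2(m)))

    I_xz_given_y = H(0, 1) + H(1, 2) - H(1) - H(0, 1, 2)
    I_zy_given_x = H(0, 2) + H(0, 1) - H(0) - H(0, 1, 2)
    I_zy = H(1) + H(2) - H(1, 2)
    I_xz = H(0) + H(2) - H(0, 2)

    # Part (d) is an identity, so the two sides should agree up to rounding error.
    print(np.isclose(I_xz_given_y, I_zy_given_x - I_zy + I_xz))   # True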

4.5 Entropy rates of Markov chains.
(a) Find the entropy rate of the two-state Markov chain with transition matrix

    P = [ 1 - p01    p01     ]
        [   p10      1 - p10 ]

(b) What values of p01, p10 maximize the rate of part (a)?
(c) Find the entropy rate of the two-state Markov chain with transition matrix

    P = [ 1 - p    p ]
        [   1      0 ]

(d) Find the maximum value of the entropy rate of the Markov chain of part (c). We expect that the maximizing value of p should be less than 1/2, since the 0 state permits more information to be generated than the 1 state.

Solution:
(a) The stationary distribution is easily calculated: μ0 = p10/(p01 + p10), μ1 = p01/(p01 + p10). Therefore the entropy rate is

    H(X2 | X1) = μ0 H(p01) + μ1 H(p10) = (p10 H(p01) + p01 H(p10)) / (p01 + p10).

(b) The entropy rate is at most 1 bit because the process has only two states. This rate can be achieved if (and only if) p01 = p10 = 1/2, in which case the process is actually i.i.d. with Pr(Xi = 0) = Pr(Xi = 1) = 1/2.
(c) As a special case of the general two-state Markov chain, the entropy rate is

    H(X2 | X1) = μ0 H(p) + μ1 H(1) = H(p) / (p + 1).

(d) By straightforward calculus, we find that the maximum value of the entropy rate of part (c) occurs for p = (3 - √5)/2 ≈ 0.382, the value of p satisfying (1 - p)² = p. The maximum value is H(p)/(p + 1) = log((1 + √5)/2) ≈ 0.694 bits, the base-2 logarithm of the golden ratio.
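The closed form in (d) is easy to confirm numerically. A minimal sketch (variable names are mine) that evaluates the entropy rate H(p)/(1 + p) on a fine grid and compares the maximizer and maximum with the closed-form values:

    import numpy as np

    p = np.linspace(1e-4, 1 - 1e-4, 9999)
    # Entropy rate of the chain of part (c): binary entropy H(p) weighted by the stationary mass 1/(1+p).
    rates = (-p * np.log2(p) - (1 - p) * np.log2(1 - p)) / (1 + p)
    i = rates.argmax()
    print(p[i], rates[i])                               # ≈ 0.382, ≈ 0.694
    print((3 - np.sqrt(5)) / 2, np.log2((1 + np.sqrt(5)) / 2))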

5.4 Huffman coding. Consider the random variable

    X:     x1    x2    x3    x4    x5    x6    x7
    p(x): 0.49  0.26  0.12  0.04  0.04  0.03  0.02

(a) Find a binary Huffman code for X.
(b) Find the expected code length for this encoding.
(c) Find a ternary Huffman code for X.

Solution:
(a) Repeatedly merging the two least likely symbols gives codeword lengths (1, 2, 3, 5, 5, 5, 5); one resulting binary Huffman code is

    x1 → 1,  x2 → 00,  x3 → 011,  x4 → 01000,  x5 → 01001,  x6 → 01010,  x7 → 01011.

(b) The expected length of the codewords for the binary Huffman code is

    L = 0.49(1) + 0.26(2) + 0.12(3) + (0.04 + 0.04 + 0.03 + 0.02)(5) = 2.02 bits

(compare with the entropy H(X) ≈ 2.01 bits).
(c) Merging the three least likely symbols at each step (there are 7 symbols, so no dummy symbol is needed) gives one possible ternary Huffman code

    x1 → 0,  x2 → 1,  x3 → 20,  x4 → 22,  x5 → 210,  x6 → 211,  x7 → 212,

with expected length 0.49 + 0.26 + 2(0.12 + 0.04) + 3(0.04 + 0.03 + 0.02) = 1.34 ternary symbols.
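Parts (a) and (b) can be double-checked by building the Huffman code programmatically. The sketch below is a generic textbook construction (not code from the solutions): it keeps a heap of subtrees and returns the codeword lengths, from which the expected length follows.

    import heapq

    def huffman_lengths(probs):
        """Return binary Huffman codeword lengths for the given probabilities."""
        # Each heap entry: (probability, tie-breaker, indices of the leaves in the subtree).
        heap = [(p, i, [i]) for i, p in enumerate(probs)]
        heapq.heapify(heap)
        lengths = [0] * len(probs)
        counter = len(probs)
        while len(heap) > 1:
            p1, _, leaves1 = heapq.heappop(heap)
            p2, _, leaves2 = heapq.heappop(heap)
            for leaf in leaves1 + leaves2:   # every merge adds one bit to each leaf below it
                lengths[leaf] += 1
            heapq.heappush(heap, (p1 + p2, counter, leaves1 + leaves2))
            counter += 1
        return lengths

    probs = [0.49, 0.26, 0.12, 0.04, 0.04, 0.03, 0.02]
    lengths = huffman_lengths(probs)
    print(lengths)                                       # [1, 2, 3, 5, 5, 5, 5]
    print(sum(p * l for p, l in zip(probs, lengths)))    # ≈ 2.02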

5.9 Optimal code lengths that require one bit above entropy. The source coding theorem shows that the optimal code for a random variable X has an expected length less than H(X) + 1. Give an example of a random variable for which the expected length of the optimal code is close to H(X) + 1, i.e., for any ε > 0, construct a distribution for which the optimal code has L > H(X) + 1 - ε.

Solution: There is a trivial example that requires almost 1 bit above its entropy. Let X be a binary random variable with probability of X = 1 close to 1. Then the entropy of X is close to 0, but the length of its optimal code is 1 bit, which is almost 1 bit above its entropy.

5.25 Shannon code. Consider the following method for generating a code for a random variable X which takes on m values {1, 2, ..., m} with probabilities p1, p2, ..., pm. Assume that the probabilities are ordered so that p1 ≥ p2 ≥ ... ≥ pm. Define

    F_i = Σ_{k < i} p_k,

the sum of the probabilities of all symbols less than i. Then the codeword for i is the number F_i ∈ [0, 1) rounded off to l_i bits, where l_i = ⌈log(1/p_i)⌉.
(a) Show that the code constructed by this process is prefix-free and that the average length satisfies H(X) ≤ L < H(X) + 1.
(b) Construct the code for the probability distribution (0.5, 0.25, 0.125, 0.125).

Solution:
(a) Since l_i = ⌈log(1/p_i)⌉, we have

    log(1/p_i) ≤ l_i < log(1/p_i) + 1,

which implies that

    H(X) ≤ L = Σ_i p_i l_i < H(X) + 1.

By the choice of l_i, we have 2^(-l_i) ≤ p_i < 2^(-(l_i - 1)). Thus F_j, j > i, differs from F_i by at least p_i ≥ 2^(-l_i), and will therefore differ from F_i in at least one place in the first l_i bits of the binary expansion of F_i. Thus the codeword for j, j > i, which has length l_j ≥ l_i, differs from the codeword for i at least once in the first l_i places. Thus no codeword is a prefix of any other codeword.
(b) We build the following table:

    Symbol | Probability | F_i (decimal) | F_i (binary) | l_i | Codeword
       1   |    0.5      |     0.0       |    0.0       |  1  |   0
       2   |    0.25     |     0.5       |    0.10      |  2  |   10
       3   |    0.125    |     0.75      |    0.110     |  3  |   110
       4   |    0.125    |     0.875     |    0.111     |  3  |   111
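Part (b) is easy to reproduce in code: truncate each cumulative probability F_i to ⌈log2(1/p_i)⌉ bits. The sketch below does exactly that; the function name and bit-extraction helper are my own.

    import math

    def shannon_code(probs):
        """Shannon code: cumulative probability F_i truncated to ceil(log2(1/p_i)) bits."""
        assert all(probs[i] >= probs[i + 1] for i in range(len(probs) - 1)), "sort descending first"
        codewords = []
        F = 0.0
        for p in probs:
            l = math.ceil(math.log2(1 / p))
            # Take the first l bits of the binary expansion of F.
            bits = ''.join(str(int(F * 2 ** (k + 1)) % 2) for k in range(l))
            codewords.append(bits)
            F += p
        return codewords

    print(shannon_code([0.5, 0.25, 0.125, 0.125]))   # ['0', '10', '110', '111']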

3.5 AEP. Let X1, X2, ... be independent, identically distributed random variables drawn according to the probability mass function p(x), x ∈ {1, 2, ..., m}. Thus p(x1, x2, ..., xn) = Π_{i=1}^n p(xi). We know that -(1/n) log p(X1, X2, ..., Xn) → H(X) in probability. Let q(x1, x2, ..., xn) = Π_{i=1}^n q(xi), where q is another probability mass function on {1, 2, ..., m}.
(a) Evaluate lim -(1/n) log q(X1, X2, ..., Xn), where X1, X2, ... are i.i.d. ~ p(x).

Solution: Since the X1, X2, ..., Xn are i.i.d., so are q(X1), q(X2), ..., q(Xn), and hence we can apply the strong law of large numbers to obtain

    lim -(1/n) log q(X1, X2, ..., Xn) = lim -(1/n) Σ_i log q(Xi)
                                      = -E[log q(X)]   with probability 1
                                      = -Σ_x p(x) log q(x)
                                      = Σ_x p(x) log (p(x)/q(x)) - Σ_x p(x) log p(x)
                                      = D(p||q) + H(p).
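The limit D(p||q) + H(p) can also be seen empirically with a quick Monte Carlo sketch; the particular distributions p and q below are arbitrary choices of mine for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    p = np.array([0.5, 0.3, 0.2])         # true source distribution
    q = np.array([0.2, 0.3, 0.5])         # mismatched distribution

    n = 200_000
    x = rng.choice(len(p), size=n, p=p)   # X_1, ..., X_n i.i.d. ~ p
    empirical = -np.mean(np.log2(q[x]))   # -(1/n) log q(X_1, ..., X_n)

    H_p = -np.sum(p * np.log2(p))
    D_pq = np.sum(p * np.log2(p / q))
    print(empirical, D_pq + H_p)          # the two values should agree closely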

8.1 Preprocessing the output. One is given a communication channel with transition probabilities p(y|x) and channel capacity C = max_{p(x)} I(X; Y). A helpful statistician preprocesses the output by forming Y' = g(Y). He claims that this will strictly improve the capacity.
(a) Show that he is wrong.
(b) Under what condition does he not strictly decrease the capacity?

Solution:
(a) The statistician calculates Y' = g(Y). Since X → Y → Y' forms a Markov chain, we can apply the data processing inequality. Hence for every distribution on X,

    I(X; Y') ≤ I(X; Y).

Let p'(x) be the distribution on X that maximizes I(X; Y'). Then

    C' = max_{p(x)} I(X; Y') = I(X; Y')_{p'(x)} ≤ I(X; Y)_{p'(x)} ≤ max_{p(x)} I(X; Y) = C.

Thus the statistician is wrong, and processing the output does not increase capacity.
(b) We have equality in the above sequence of inequalities only if we have equality in the data processing inequality, i.e., for the distribution p'(x) that maximizes I(X; Y'), we have X → Y' → Y forming a Markov chain.
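A small numerical illustration of (a): the channel matrix W and the deterministic post-processing G below are my own toy choices, and capacity is estimated by brute force over binary input distributions, before and after merging two output symbols.

    import numpy as np

    # Binary-input channel with three output symbols (rows: x = 0, 1).
    W = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.2, 0.7]])
    # Deterministic post-processing g: merge output symbols 1 and 2 into one.
    G = np.array([[1, 0],
                  [0, 1],
                  [0, 1]])
    W_post = W @ G                      # transition matrix of the processed channel

    def mutual_info(px, W):
        """I(X; Y) in bits for input pmf px and channel matrix W."""
        joint = px[:, None] * W
        py = joint.sum(axis=0)
        mask = joint > 0
        return float(np.sum(joint[mask] * np.log2(joint[mask] / (px[:, None] * py[None, :])[mask])))

    grid = np.linspace(0.001, 0.999, 999)
    C  = max(mutual_info(np.array([a, 1 - a]), W)      for a in grid)
    Cp = max(mutual_info(np.array([a, 1 - a]), W_post) for a in grid)
    print(C, Cp, Cp <= C + 1e-12)       # the processed capacity never exceeds the original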

8.3 An additive noise channel. Find the channel capacity of the following discrete memoryless channel:

    Y = X + Z,   where Pr{Z = 0} = Pr{Z = a} = 1/2.

The alphabet for X is {0, 1}. Assume that Z is independent of X. Observe that the channel capacity depends on the value of a.

Solution: A sum channel: Y = X + Z, with X ∈ {0, 1} and Z ∈ {0, a}. We have to distinguish various cases depending on the value of a.

a = 0: In this case Y = X, and max I(X; Y) = max H(X) = 1. Hence the capacity is 1 bit per transmission.

a ≠ 0, ±1: In this case Y has four possible values, 0, 1, a and 1 + a. Knowing Y, we know the X which was sent, and hence H(X|Y) = 0. Hence max I(X; Y) = max H(X) = 1, and the capacity is also 1 bit per transmission.

a = 1: In this case Y has three possible output values, 0, 1 and 2, and the channel is identical to the binary erasure channel with erasure probability 1/2. The capacity of this channel is 1 - 1/2 = 1/2 bit per transmission.

a = -1: This is similar to the case a = 1, and the capacity is also 1/2 bit per transmission.
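The dependence on a can be seen numerically by brute-forcing the capacity of Y = X + Z over Bernoulli(π) inputs. The sketch below (all names are mine) builds the channel matrix for a given a and searches over a grid of input distributions.

    import numpy as np

    def capacity_sum_channel(a, grid=np.linspace(0.001, 0.999, 999)):
        """Capacity of Y = X + Z, X in {0,1}, Z uniform on {0, a}, by brute force over Bern(pi) inputs."""
        outputs = sorted({0 + 0, 0 + a, 1 + 0, 1 + a})
        idx = {y: k for k, y in enumerate(outputs)}
        W = np.zeros((2, len(outputs)))            # channel matrix p(y|x)
        for x in (0, 1):
            for z in (0, a):
                W[x, idx[x + z]] += 0.5
        best = 0.0
        for pi in grid:
            px = np.array([pi, 1 - pi])
            joint = px[:, None] * W
            py = joint.sum(axis=0)
            mask = joint > 0
            I = np.sum(joint[mask] * np.log2(joint[mask] / (px[:, None] * py[None, :])[mask]))
            best = max(best, float(I))
        return best

    for a in (0, 0.5, 1, -1, 2):
        print(a, round(capacity_sum_channel(a), 3))   # 1, 1, 0.5, 0.5, 1 bits per transmission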

8.5 Channel capacity. Consider the discrete memoryless channel Y = X + Z (mod 11), where Z takes the values 1, 2, 3 with probability 1/3 each, and X ∈ {0, 1, ..., 10}. Assume that Z is independent of X.
(a) Find the capacity.
(b) What is the maximizing p*(x)?

Solution: The capacity of the channel is

    C = max_{p(x)} I(X; Y) = max_{p(x)} (H(Y) - H(Z)) = log 11 - log 3,

which is obtained when Y has a uniform distribution, which occurs when X has a uniform distribution.
(a) The capacity of the channel is C = log 11 - log 3 ≈ 1.87 bits/transmission.
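For the mod-11 channel of Problem 8.5, the value log 11 - log 3 is easy to confirm numerically; a brief sketch (variable names are mine) computes I(X; Y) under a uniform input directly.

    import numpy as np

    # Channel of Problem 8.5: Y = X + Z (mod 11), Z uniform on {1, 2, 3}.
    m = 11
    W = np.zeros((m, m))                      # W[x, y] = p(y | x)
    for x in range(m):
        for z in (1, 2, 3):
            W[x, (x + z) % m] = 1 / 3

    px = np.full(m, 1 / m)                    # uniform input distribution
    joint = px[:, None] * W
    py = joint.sum(axis=0)
    mask = joint > 0
    I = np.sum(joint[mask] * np.log2(joint[mask] / (px[:, None] * py[None, :])[mask]))
    print(I, np.log2(11) - np.log2(3))        # both ≈ 1.874 bits/transmission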

(b) The capacity is achieved by a uniform distribution on the inputs: p(x) = 1/11 for x = 0, 1, ..., 10.

8.12 Time-varying channels. Consider a time-varying discrete memoryless channel. Let Y1, Y2, ..., Yn be conditionally independent given X1, X2, ..., Xn, with conditional distribution given by p(y|x) = Π_{i=1}^n p_i(y_i|x_i). Let X = (X1, X2, ..., Xn), Y = (Y1, Y2, ..., Yn). Find max_{p(x)} I(X; Y).

Solution:

    I(X; Y) = H(Y) - H(Y|X)
            = H(Y) - Σ_i H(Yi | Xi)
            ≤ Σ_i H(Yi) - Σ_i H(Yi | Xi)
            = Σ_i I(Xi; Yi)
            ≤ Σ_i C_i,

where C_i = max_{p(xi)} I(Xi; Yi) is the capacity of the channel at time i, with equality if X1, ..., Xn are chosen independently, each with the capacity-achieving input distribution of its channel. Hence max_{p(x)} I(X; Y) = Σ_{i=1}^n C_i. (In particular, if the channel at time i is a binary symmetric channel with crossover probability p_i, this sum is Σ_i (1 - H(p_i)), achieved by i.i.d. Bernoulli(1/2) inputs.)

10.2 A channel with two independent looks at Y. Let Y1 and Y2 be conditionally independent and conditionally identically distributed given X.
(a) Show that I(X; Y1, Y2) = 2 I(X; Y1) - I(Y1; Y2).
(b) Conclude that the capacity of the channel X → (Y1, Y2) is less than twice the capacity of the channel X → Y1.

Solution:
(a)

    I(X; Y1, Y2) = H(Y1, Y2) - H(Y1, Y2 | X)
                 = H(Y1) + H(Y2) - I(Y1; Y2) - H(Y1 | X) - H(Y2 | X)
                   (by the conditional independence of Y1 and Y2 given X)
                 = I(X; Y1) + I(X; Y2) - I(Y1; Y2)
                 = 2 I(X; Y1) - I(Y1; Y2)
                   (since Y1 and Y2 are conditionally identically distributed given X).

(b) The capacity of the single look channel X → Y1 is C1 = max_{p(x)} I(X; Y1). The capacity of the channel X → (Y1, Y2) is

    C2 = max_{p(x)} I(X; Y1, Y2) = max_{p(x)} [2 I(X; Y1) - I(Y1; Y2)] ≤ max_{p(x)} 2 I(X; Y1) = 2 C1.

10.3 The two-look Gaussian channel. Consider the ordinary Shannon Gaussian channel with two correlated looks at X, i.e., Y = (Y1, Y2), where

    Y1 = X + Z1,
    Y2 = X + Z2,

with a power constraint P on X, and (Z1, Z2) ~ N(0, K), where

    K = [ N    Nρ ]
        [ Nρ   N  ].

Find the capacity C for
(a) ρ = 1
(b) ρ = 0
(c) ρ = -1

Solution: It is clear that the input distribution that maximizes the capacity is X ~ N(0, P). Evaluating the mutual information for this distribution,

    C = max I(X; Y1, Y2)
      = h(Y1, Y2) - h(Y1, Y2 | X)
      = h(Y1, Y2) - h(Z1, Z2 | X)
      = h(Y1, Y2) - h(Z1, Z2).

Now since (Z1, Z2) ~ N(0, K), we have

    h(Z1, Z2) = (1/2) log (2πe)^2 |K| = (1/2) log (2πe)^2 N^2 (1 - ρ^2).

Since Y1 = X + Z1 and Y2 = X + Z2, we have

    (Y1, Y2) ~ N(0, K_Y),   K_Y = [ P + N     P + Nρ ]
                                  [ P + Nρ    P + N  ],

and

    h(Y1, Y2) = (1/2) log (2πe)^2 |K_Y| = (1/2) log (2πe)^2 (N^2 (1 - ρ^2) + 2PN(1 - ρ)).

Hence

    C = (1/2) log ( 1 + 2P / (N(1 + ρ)) ).

(a) ρ = 1: C = (1/2) log(1 + P/N), which is the capacity of a single look channel.
(b) ρ = 0: C = (1/2) log(1 + 2P/N), which corresponds to using twice the power in a single look. The capacity is the same as the capacity of the channel X → (Y1 + Y2)/2.
(c) ρ = -1: C = ∞. This is not surprising, since if we add Y1 and Y2 we obtain Y1 + Y2 = 2X, so we can recover X exactly.
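A quick way to sanity-check the closed form is to evaluate (1/2) log2(|K_Y|/|K|) directly for a few values of ρ; the values of P and N below are arbitrary illustrative choices, and ρ = ±1 is avoided because the determinants vanish there.

    import numpy as np

    def two_look_capacity(P, N, rho):
        """C = h(Y1, Y2) - h(Z1, Z2) = 0.5 * log2(|K_Y| / |K|) for the two-look Gaussian channel."""
        K   = np.array([[N, N * rho], [N * rho, N]])
        K_Y = np.array([[P + N, P + N * rho], [P + N * rho, P + N]])
        return 0.5 * np.log2(np.linalg.det(K_Y) / np.linalg.det(K))

    P, N = 10.0, 1.0
    for rho in (0.999, 0.5, 0.0, -0.999):
        closed_form = 0.5 * np.log2(1 + 2 * P / (N * (1 + rho)))
        print(rho, round(two_look_capacity(P, N, rho), 4), round(closed_form, 4))
    # As rho -> 1 this approaches 0.5*log2(1 + P/N); as rho -> -1 it diverges (X is recoverable exactly).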

10.4 Parallel channels and waterfilling. Consider a pair of parallel Gaussian channels, i.e.,

    Y1 = X1 + Z1,   Y2 = X2 + Z2,

where Z1 ~ N(0, σ1²) and Z2 ~ N(0, σ2²) are independent, and there is a power constraint E(X1² + X2²) ≤ 2P. Assume that σ1² > σ2². At what power does the channel stop behaving like a single channel with noise variance σ2², and begin behaving like a pair of channels?

Solution: We will put all the signal power into the channel with less noise until the total power of noise + signal in that channel equals the noise power in the other channel. After that, we will split any additional power evenly between the two channels. Thus the combined channel begins to behave like a pair of parallel channels when the signal power is equal to the difference of the two noise powers, i.e., when 2P = σ1² - σ2².
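The threshold 2P = σ1² - σ2² corresponds to the water level in the waterfilling solution just reaching the noisier channel. Below is a small sketch of the standard waterfilling allocation (bisection on the water level); the function name and the parameter values are my own illustrative choices.

    import numpy as np

    def waterfill(total_power, noise):
        """Waterfilling allocation: P_i = max(nu - N_i, 0) with sum P_i = total_power."""
        noise = np.asarray(noise, dtype=float)
        lo, hi = noise.min(), noise.max() + total_power
        for _ in range(100):                      # bisection on the water level nu
            nu = 0.5 * (lo + hi)
            if np.maximum(nu - noise, 0).sum() > total_power:
                hi = nu
            else:
                lo = nu
        return np.maximum(nu - noise, 0)

    sigma1_sq, sigma2_sq = 4.0, 1.0               # noise powers, sigma1^2 > sigma2^2
    for twoP in (1.0, 3.0, 5.0):                  # total power 2P below, at, and above sigma1^2 - sigma2^2
        print(twoP, waterfill(twoP, [sigma1_sq, sigma2_sq]))
    # Below 2P = 3 all power goes to the quieter channel; above it both channels receive power.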
