Paper Translation: Sensor Fusion for Automobile Applications

Sensor Fusion for Automobile Applications

Personnel: Y. Fang (I. Masaki, B.K.P. Horn)
Sponsorship: Intelligent Transportation Research Center at MIT's MTL

Introduction

To increase the safety and efficiency of transportation systems, many automobile applications need to detect detailed obstacle information. Highway environment interpretation is important in intelligent transportation systems (ITS): it is expected to provide 3D segmentation information for the current road situation, i.e., the X, Y positions of objects in images and their distance Z. The need for real-time dynamic-scene processing places high demands on the sensors in an ITS. In a complicated driving environment, a single sensor is typically not enough to meet all of these demands because of limitations in reliability, weather, and ambient lighting. Radar provides high distance resolution but limited horizontal resolution. A binocular vision system provides better horizontal resolution, but the miscorrespondence problem makes it hard to recover accurate and robust Z-distance information, and video cameras do not behave well in bad weather. Instead of developing a specialized imaging radar to meet the high ITS requirements, a sensor fusion system composed of several low-cost, low-performance sensors, i.e., radar and stereo cameras, can take advantage of the benefits of both. Typical 2D segmentation algorithms for vision systems are challenged by noisy static backgrounds and by variation in object position and size, which leads to false segmentations or segmentation errors. Typical tracking algorithms cannot remove the errors of the initial static segmentation, since there are significant changes between successive video frames.
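The stereo depth limitation mentioned above follows from the classic pinhole relation between depth and disparity; a minimal sketch (the focal-length and baseline values are illustrative assumptions, not from the original report):

```python
def depth_from_disparity(disparity_px: float,
                         focal_px: float = 800.0,   # assumed focal length, pixels
                         baseline_m: float = 0.3) -> float:
    """Pinhole stereo relation: Z = f * B / d.

    A small disparity error translates into a large depth error at
    long range, which is why stereo alone struggles to give robust
    Z information for distant obstacles.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With these assumed parameters, a 4-pixel disparity maps to 60 m,
# and a 1-pixel miscorrespondence at that range shifts Z by 20 m.
print(depth_from_disparity(4.0))   # 60.0
print(depth_from_disparity(3.0))   # 80.0
```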

In order to provide accurate 3D segmentation information, we should not simply associate the distance information from the radar with the 2D segmentation information from the video camera; the expectation is that each sensor in the fusion system performs better than it would alone.

Algorithm

Our fusion system introduces distance information into the 2D segmentation process to improve its target segmentation performance. The relationship between an object's distance and its stereo disparity is used to separate the original edge map of the stereo images into several distance-based edge layers; within each layer we then detect whether there is an object, and where it is, by segmenting clusters of image pixels with similar ranges. To guarantee robustness, a special morphological closing operation is introduced to delineate the vertical edges of candidate objects. We first dilate the edges to elongate them, so that the boundaries of target objects become longer than noisy edges; an erosion operation then removes the short edges. Because the longest vertical edges are typically located at object boundaries, this distance-range-based segmentation method can detect targets with high accuracy and robustness, especially for vehicles in highway driving scenarios.
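A minimal sketch of the two steps just described, distance-based edge-layer separation followed by a vertical dilate/erode length filter; the function names, layer bounds, and structuring-element sizes are illustrative assumptions, not the authors' code:

```python
from collections import defaultdict

def separate_edge_layers(edge_pixels, layer_bounds):
    """Bin edge pixels (x, y, disparity) into distance-based edge layers.

    layer_bounds is a list of (lo, hi) disparity ranges; a pixel whose
    disparity falls inside a range is assigned to that layer.
    """
    layers = defaultdict(list)
    for x, y, d in edge_pixels:
        for i, (lo, hi) in enumerate(layer_bounds):
            if lo <= d < hi:
                layers[i].append((x, y))
    return layers

def filter_short_vertical_edges(pixels, min_run=3, dilate=1):
    """Dilate each vertical edge run by `dilate` pixels, then erase runs
    shorter than `min_run + 2*dilate`: long object boundaries survive,
    short noisy edges are removed (the dilate/erode idea in the text)."""
    cols = defaultdict(set)
    for x, y in pixels:
        cols[x].add(y)
    kept = []
    for x, ys in cols.items():
        grown = set()
        for y in ys:                      # dilation along the column
            for dy in range(-dilate, dilate + 1):
                grown.add(y + dy)
        run = []
        for y in sorted(grown):           # erosion via run-length check
            if run and y != run[-1] + 1:
                if len(run) >= min_run + 2 * dilate:
                    kept.extend((x, y2) for y2 in run if y2 in ys)
                run = []
            run.append(y)
        if len(run) >= min_run + 2 * dilate:
            kept.extend((x, y2) for y2 in run if y2 in ys)
    return kept
```

For example, a six-pixel vertical boundary passes the filter while a two-pixel noise edge in another column is erased.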

For urban driving situations, heavy background clutter such as trees usually causes miscorrespondence, leading to edge-separation errors. The false boundary edges in the background area can be even longer than the true object boundary edges, so it is hard to eliminate false bounding boxes in background areas without also eliminating foreground objects, and the noisy background makes it harder to segment objects of different sizes. To enhance segmentation performance, a background-removal procedure is proposed: without loss of generality, objects beyond some distance range are treated as background, so pixels with small disparity characterize the background. Sometimes there is ambiguity in assigning edge pixels to different edge layers, and without further information it is hard to decide among the multiple choices. Some algorithms simply pick one at random, which may be wrong in many situations. Typically, to avoid losing potential foreground pixels, ambiguous edge pixels are assigned to all candidate distance layers, and the edge-length filters suppress the resulting ambiguity noise. When background noise is serious, however, the algorithm keeps only edge pixels that have no multiple choices. Eliminating background pixels in this way also loses significant pixels of the target objects, making the segmented region smaller than its real size, so a motion-based expansion of the segmentation region is needed to compensate for the performance degradation.
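The background-removal rule above can be sketched as a disparity threshold, with a strict mode for heavy background noise that also drops ambiguously assigned pixels; the threshold value and data layout are illustrative assumptions:

```python
def remove_background(edge_pixels, min_disparity=2.0, strict=False):
    """Drop edge pixels whose disparity is below `min_disparity`
    (i.e., far away, hence background). Each pixel carries a list of
    candidate layer indices; in `strict` mode (heavy background noise)
    pixels with more than one candidate layer are dropped too, at the
    cost of shrinking the segmented regions."""
    kept = []
    for x, y, disparity, candidate_layers in edge_pixels:
        if disparity < min_disparity:
            continue                     # small disparity => background
        if strict and len(candidate_layers) > 1:
            continue                     # ambiguous assignment dropped
        kept.append((x, y, disparity, candidate_layers))
    return kept
```

The shrinkage caused by `strict=True` is exactly what the motion-based region expansion described next compensates for.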

The original segmentation result is used as a set of initial segmentation seeds, from which larger segmentation bounding boxes are expanded; the enlarging process is controlled by the similarity between the seed boxes and the surrounding edge pixels. With such region-growing operations, the accurate target sizes are captured. The proposed depth/motion-based segmentation procedure successfully removes the impact of background noise and captures objects of different sizes.

Summary

We presented a new sensor-fusion-based 3D segmentation algorithm that detects target distance and 2D location. The system consists of the following components: “distance-based edge-layer separation,” “background edge-pixel removal,” “target position detection,” and “motion-based object expansion.” The system first detects the rough depth range of all targets of interest; then we propose a new object segmentation method based on both motion and distance information.

The segmentation algorithm is composed of two phases. “Distance-based edge-layer separation” and “background detection” form the first phase, which captures significant edge pixels of objects in the distance layers of interest while rejecting noise from the background and from other distance-based edge layers. The original image edge map is thus decomposed into several distance-based edge maps, and heavy background noise can be removed. The advantage of this phase is that detecting targets sequentially in different edge maps is easier than segmenting all targets simultaneously in one busy edge map. The second phase is a new depth/motion-based segmentation and expansion that accurately captures objects of different sizes. Using motion information for the decomposed edge layers (“motion-based region expansion”), it further differentiates the target objects from noise in other distance layers, which helps to detect objects of different sizes and to identify moving objects. The algorithm increases the accuracy and reliability of object segmentation and motion detection under heavy background noise, and it offers precise segmentation when detecting multiple objects of different sizes as well as non-rigid targets such as pedestrians. The performance is satisfying and robust while the computational load is low. The algorithm not only improves static image segmentation but also provides a good basis for further tracking in video sequences, showing that fusing stereo-vision and motion-vision algorithms helps to achieve high accuracy and reliability under heavy background noise.

Figure: (a) Highway environment interpretation. (b) Segmentation result for the highway environment. (c) Segmentation result for the urban driving environment.
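The four components listed in the summary can be composed into a single pipeline. A minimal sketch under stated assumptions: the stage implementations are hypothetical stand-ins for the operations described in the text, not the authors' actual code:

```python
def segment_3d(stereo_edge_pixels, layer_bounds, motion_field):
    """Hypothetical end-to-end pipeline mirroring the four stages:
    edge-layer separation -> background removal -> position detection
    -> motion-based expansion."""
    # 1. Distance-based edge-layer separation.
    layers = {i: [] for i in range(len(layer_bounds))}
    for x, y, d in stereo_edge_pixels:
        for i, (lo, hi) in enumerate(layer_bounds):
            if lo <= d < hi:
                layers[i].append((x, y))
    # 2. Background edge-pixel removal: the farthest layer
    #    (smallest disparities) is treated as background.
    layers.pop(0, None)
    # 3. Target position detection: one bounding box per non-empty layer.
    boxes = {}
    for i, pts in layers.items():
        if pts:
            xs = [p[0] for p in pts]
            ys = [p[1] for p in pts]
            boxes[i] = (min(xs), min(ys), max(xs), max(ys))
    # 4. Motion-based object expansion: grow each box by the local
    #    motion magnitude (a stand-in for similarity-driven growing).
    grown = {}
    for i, (x0, y0, x1, y1) in boxes.items():
        m = motion_field.get(i, 0)
        grown[i] = (x0 - m, y0 - m, x1 + m, y1 + m)
    return grown
```

The per-layer loop makes the stated advantage concrete: each distance layer is a much sparser edge map than the combined one, so a simple bounding-box detector suffices per layer.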

