Sensor Fusion for Automobile Applications

Personnel: Y. Fang (I. Masaki, B.K.P. Horn)
Sponsorship: Intelligent Transportation Research Center at MIT's MTL

Introduction

To increase the safety and efficiency of transportation systems, many automobile applications need to detect detailed obstacle information. Highway environment interpretation is important in intelligent transportation systems (ITS). It is expected to provide 3D segmentation information for the current road situation, i.e., the X and Y positions of objects in images and the distance Z. The need for real-time dynamic scene processing places high demands on the sensors in intelligent transportation systems. In a complicated driving environment, a single sensor is typically not enough to meet all these demands because of limitations in reliability, weather, and ambient lighting. Radar provides high distance resolution but limited horizontal resolution. A binocular vision system can provide better horizontal resolution, but the miscorrespondence problem makes it hard to obtain accurate and robust Z-distance information. Furthermore, video cameras do not perform well in bad weather. Instead of developing a specialized imaging radar to meet the high ITS requirements, a sensor fusion system is composed of several low-cost, low-performance sensors, i.e., radar and stereo cameras, and can take advantage of the benefits of both. Typical 2D segmentation algorithms for vision systems are challenged by noisy static backgrounds and by variation in object position and size, which leads to false segmentation or segmentation errors. Typical tracking algorithms cannot remove the errors of the initial static segmentation, since there are significant changes between successive video frames. In order to provide accurate 3D segmentation information, we should not simply combine distance information from the radar with 2D segmentation information from the video camera; rather, each sensor in the fusion system is expected to perform better than it would alone.

Algorithm

Our fusion system introduces distance information into the 2D segmentation process to improve its target segmentation performance. The relationship between an object's distance and its stereo disparity is used to separate the original edge map of the stereo images into several distance-based edge layers, in which we then detect whether there is an object and where it is by segmenting clusters of image pixels with similar range. To guarantee robustness, a special morphological closing operation is introduced to delineate the vertical edges of candidate objects: we first dilate the edges to elongate them, so that the boundaries of target objects become longer than noisy edges, and then an erosion operation removes the short edges. Since the longest vertical edges are typically located at object boundaries, this distance-range-based segmentation method detects targets with high accuracy and robustness, especially vehicles in highway driving scenarios.

For urban driving situations, heavy background noise, such as trees, usually causes miscorrespondence, leading to edge-separation errors. The false boundary edge lines in the background area can be even longer than the object boundary edge lines, so it is hard to eliminate false bounding boxes in background areas without also eliminating foreground objects, and the noisy background makes it harder to segment objects of different sizes. To improve segmentation performance, a background-removal procedure is proposed. Without loss of generality, objects beyond some distance range are treated as background, so pixels with small disparity carry the characteristics of the background. Sometimes there is ambiguity in assigning an edge pixel to a distance layer, and without further information it is hard to decide among the multiple choices. Some algorithms simply pick one at random, which is often wrong. Typically, to avoid losing potential foreground pixels, ambiguous edge pixels are assigned to all candidate distance layers, and the edge-length filter suppresses the resulting noise. However, when the background noise is severe, the algorithm keeps only edge pixels without multiple candidates. Eliminating background pixels this way also discards significant pixels of the target objects, making the segmented region smaller than its real size.
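The sketch below, continuing the assumptions above, shows one way the background-removal and layer-assignment rules just described could be implemented; the disparity threshold and the representation of candidate layers as boolean masks are assumptions for illustration.

```python
import numpy as np


def remove_background_edges(edge_map, disparity, d_min=4):
    """Drop edge pixels whose disparity is below d_min.

    Small disparity means large depth, so such pixels are treated as
    background beyond the distance range of interest (d_min is an
    illustrative threshold)."""
    keep = (edge_map > 0) & (disparity >= d_min)
    return np.where(keep, 255, 0).astype(np.uint8)


def assign_ambiguous_pixels(edge_map, candidates, strict=False):
    """Build distance layers from possibly ambiguous assignments.

    candidates is a list of boolean masks, one per distance layer,
    marking the edge pixels that *could* belong to that layer. By
    default a pixel is copied into every candidate layer and the
    edge-length filter suppresses the noise; in strict mode (heavy
    background noise) only unambiguous pixels are kept."""
    claims = np.sum(np.stack(candidates), axis=0)  # layers claiming each pixel
    layers = []
    for cand in candidates:
        keep = cand & (edge_map > 0)
        if strict:
            keep &= claims == 1
        layers.append(np.where(keep, 255, 0).astype(np.uint8))
    return layers
```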

Thus, a motion-based segmentation-region expansion is needed to compensate for this loss. The original segmentation result is used as a set of initial object seeds, from which larger segmentation bounding boxes expand; the enlargement is controlled by the similarity between the seed box and the surrounding edge pixels. With such region-growing operations, the accurate target sizes are captured. The proposed depth/motion-based segmentation procedure successfully removes the impact of background noise and captures objects of different sizes.
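A minimal region-growing sketch of this expansion step is given below. The report does not spell out the similarity measure, so here it is assumed to be the mean optical-flow difference between the seed box and the one-pixel ring being added; the flow field, the tolerance, and the one-pixel growth step are all illustrative choices.

```python
import numpy as np


def grow_box(seed_box, flow, edge_layer, motion_tol=1.0, max_steps=40):
    """Expand a seed bounding box while the surrounding edge pixels
    move the way the seed region does.

    seed_box   : (x0, y0, x1, y1) from the initial depth-based segmentation
    flow       : H x W x 2 per-pixel motion field between two frames
    edge_layer : binary edge map of the distance layer the seed came from
    motion_tol : allowed deviation (pixels/frame) of border motion from
                 the seed's mean motion before growth stops
    """
    h, w = edge_layer.shape
    x0, y0, x1, y1 = seed_box
    seed_motion = flow[y0:y1, x0:x1].reshape(-1, 2).mean(axis=0)

    for _ in range(max_steps):
        nx0, ny0 = max(x0 - 1, 0), max(y0 - 1, 0)
        nx1, ny1 = min(x1 + 1, w), min(y1 + 1, h)
        # edge pixels in the one-pixel ring that this expansion would add
        ring = np.zeros((h, w), dtype=bool)
        ring[ny0:ny1, nx0:nx1] = True
        ring[y0:y1, x0:x1] = False
        ring &= edge_layer > 0
        if not ring.any():
            break
        ring_motion = flow[ring].mean(axis=0)
        if np.linalg.norm(ring_motion - seed_motion) > motion_tol:
            break  # the surrounding pixels no longer move with the object
        x0, y0, x1, y1 = nx0, ny0, nx1, ny1
    return x0, y0, x1, y1
```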

Summary

We presented a new sensor-fusion-based 3D segmentation algorithm that detects target distance and 2D location. The system consists of the following components: "distance-based edge-layer separation," "background edge-pixel removal," "target position detection," and "motion-based object expansion." The system first detects the rough depth range of all targets of interest. We then propose a new object segmentation method based on both motion and distance information. The segmentation algorithm is composed of two phases. "Distance-based edge-layer separation" and "background detection" form the first phase, which captures the significant edge pixels of objects in the distance layers of interest while rejecting noise from the background and from other distance-based edge layers. The original image edge map is thus decomposed into several distance-based edge maps, and heavy background noise is removed. The advantage of this phase is that detecting targets sequentially in different edge maps is easier than segmenting all targets simultaneously in one busy edge map. The second phase is a new depth/motion-based segmentation and expansion step that accurately captures objects of different sizes. Using motion information for the decomposed edge layers ("motion-based region expansion"), it further distinguishes target objects from noise in other distance layers, which helps to detect objects of different sizes and to identify moving objects. The algorithm increases the accuracy and reliability of object segmentation and motion detection under heavy background noise. It offers precise segmentation when detecting multiple objects of different sizes and non-rigid targets such as pedestrians. The performance is satisfactory and robust while the computational load is low. The algorithm not only improves static image segmentation but also provides a good basis for subsequent tracking in video sequences. It shows that fusing stereo-vision and motion-vision algorithms helps achieve high accuracy and reliability under heavy background noise.
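Of the four components listed above, only "target position detection" has not been sketched. The report does not describe how boxes are extracted from a cleaned distance layer, so the version below simply assumes OpenCV connected-component analysis, with an illustrative area threshold.

```python
import cv2


def detect_target_positions(layer, min_area=200):
    """Return candidate object bounding boxes in one cleaned distance layer.

    Connected components of the surviving long vertical edge pixels are
    taken as object candidates; tiny components are discarded as noise."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(layer, connectivity=8)
    boxes = []
    for i in range(1, n):  # label 0 is the image background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, x + w, y + h))
    return boxes
```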

Figure: (a) Highway environment interpretation. (b) Segmentation result for the highway environment. (c) Segmentation result for the urban driving environment.

