Image Enhancement: Foreign Literature and Translation

Appendix A: Foreign Literature

An Effective Automatic Image Enhancement Method

ABSTRACT

The Otsu method is suited to two conditions: (1) two or more classes with distinctive gray values; (2) classes without distinctive gray values but with similar areas. However, when the gray-value differences among classes are not distinct and the object is small relative to the background, the separability among the classes is insufficient. To overcome this problem, this paper presents an improved spatial low-pass filter with a parameter, together with an unsupervised method of automatic parameter selection for image enhancement based on the Otsu method. The method combines image enhancement and image segmentation into one procedure through a discriminant criterion. The optimal parameter of the filter is selected by the given discriminant criterion so as to maximize the separability between object and background, and the optimal threshold for image segmentation is computed at the same time. The method is applied to detect surface defects of containers. Experiments illustrate the validity of the method.

KEYWORDS image processing; automated image enhancement; image segmentation; automated visual inspection

1 Introduction

Automated visual inspection of cracked containers (AVICC) is a practical application of machine vision technology. To realize our goal, four essential operations must be dealt with: image preprocessing, object detection, feature description, and final classification of cracked objects. Image enhancement aims to provide a result more suitable than the original image for a specific application. In this paper the objective of enhancement, followed by image segmentation, is to obtain an image with more content about the object of interest and less content about noise and background.

Gonzalez [1] discusses how image enhancement approaches fall into two main categories, namely spatial domain and frequency domain methods. Burton [2] applies an image averaging technique to a face recognition system, making it able to recognise familiar faces easily across large variations in image quality. Centeno [3] proposes an adaptive image enhancement algorithm that reverses the processing order of image enhancement and segmentation in order to avoid sharpening noise and blurring borders. Munteanu [4] applies artificial intelligence technology to image enhancement, providing a denoising function. In addition to spatial domain methods, frequency domain processing techniques are based on modifying the Fourier transform of an image. Bakir [5] discusses image enhancement for medical image processing in frequency space. Wang [6] presents a global multiscale analysis of images based on the Haar wavelet technique for image denoising. Recently, Agaian [7] has proposed image enhancement methods based on the properties of the logarithmic transform domain histogram and histogram equalization. We apply spatial processing here in order to guarantee the real-time performance and sufficient accuracy of the system.

Segmentation is discussed in [8]. The simplest approach, represented by Otsu [9], uses only gray-level histogram analysis to maximize the separability of the resultant classes. Kuntimad [10] describes a method for segmenting digital images using pulse coupled neural networks (PCNN). Salzenstein [11] compares recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. Because segmentation is an ill-defined problem, there is no unique segmentation of an image.
Evaluation of segmentation algorithms has thus far been largely subjective. Ranjith [12] demonstrates how a recently proposed measure of similarity can be used to perform a quantitative comparison among image segmentation algorithms.

In this paper, we present an improved spatial low-pass filter with a tunable parameter in the mask, so that the mask elements no longer sum to unity. The optimal parameter for the filter is determined by an improved discriminant criterion based on the one given in [9]. By convolving images with this mask, the uninteresting background can be removed easily while the object is left largely intact. The remainder of the paper is organized as follows: Sect.2 presents the theory of the proposed enhancement and the algorithm; Sect.3 illustrates the validity of the method of Sect.2; finally, conclusions and discussion are presented in Sect.4.

2 Image Enhancement

2.1 Analysis of Prior Knowledge

The quality of preprocessing directly influences the subsequent work, namely feature description. Therefore, the characteristics of the input images should be analysed first. A standard image of a cracked container is shown in Fig.1 (a). From the image, we see that the cracked part occupies a small region. Much noise, such as rust, shadows, and smears, appears within the background. At a coarse glance, however, we find that the gray level of the hole is distinctly lower than that of the other parts. Further study shows that the gray level of the pixels around the edge of the hole is the minimum. Fig.1 (b) displays the histogram of Fig.1 (a), with the edge of the hole marked.

Fig.1 (a) a standard gray-level image of a cracked container; (b) the histogram of Fig.1 (a), indicating the gray-level region of the hole's edge.

2.2 Formulation

This section presents the principal content of the paper. A traditional spatial filter uses a 3×3 mask, the elements of which sum to unity, to convolve with the input image. This method can deal with the cases described by equation (1):

$G(x, y) = I(x, y) + N(x, y)$  (1)

where I is the image of interest, N is Gaussian white noise, and (x, y) denotes a pair of coordinates. N can be eliminated by blurring G. Our objective, however, is to eliminate not only white noise but also any other uninteresting background. Thus equation (1) is extended to equation (2):

$G(x, y) = I'(x, y) + N'(x, y)$  (2)

where I' is the object and N' consists of the white noise and all the other parts except I'. Fig.2 (c) displays an improved mask with a parameter Para. We will later illustrate that tuning Para properly facilitates object segmentation. The smoothing function used is shown in equation (3):

(3)

where F(x, y) denotes the smoothing filter, that is, the mask shown in Fig.2 (c). Now, we consider only gray-level images and define Mg as the maximum gray level of an image. Then the following equations are set to distinguish the object of interest from the non-object:

(4)

In essence, the convolution operator is a low-pass filtering process, which blurs an image by sliding a mask across the image and leaving the filtering response at the position corresponding to the central location of the mask. One question arises: why not enhance the value of each pixel directly by the same scale, given the distinct gray levels between object and background? The reason is that such an approach does not consider the relationship between adjacent pixels. When an isolated noise point occurs, enhancing its gray value directly preserves the noise point. Experiments show that the latter method leaves many noise points that cannot be removed, whereas the former method does not.
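To make the filtering step concrete, the following Python/NumPy sketch implements one plausible reading of the parameterized mask: a 3×3 mask whose nine elements all equal Para (so Para = 1/9 reproduces the plain averaging filter used as the starting value in Sect.3), with the filter response clipped at the maximum gray level Mg in the spirit of equation (4). The exact mask layout and the form of equation (4) are not recoverable from the extracted text, so the mask structure and the clipping rule here are assumptions for illustration only.

import numpy as np

def smooth_with_para(gray, para, mg=255):
    # Convolve a gray-level image with a 3x3 mask whose nine elements all
    # equal `para`; para = 1/9 gives the ordinary averaging filter, and
    # larger values scale the local average up (assumed mask layout).
    gray = gray.astype(np.float64)
    padded = np.pad(gray, 1, mode='edge')            # replicate border pixels
    response = np.zeros_like(gray)
    for dy in (-1, 0, 1):                            # slide the mask over the image
        for dx in (-1, 0, 1):
            response += para * padded[1 + dy:1 + dy + gray.shape[0],
                                      1 + dx:1 + dx + gray.shape[1]]
    # Clip at the maximum gray level Mg, so that a bright background saturates
    # while the dark defect stays below it (assumed reading of equation (4)).
    return np.clip(response, 0, mg)

With Para above 1/9, the bright background is pushed up toward Mg and flattened, while the darker crack region keeps its contrast, which is what makes the later threshold selection easier.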
Now, we search for the optimal parameter Para so as to maximize the separability between object and background. Let a given image be represented in L gray levels. The number of pixels at level i is denoted by n_i and the total number of pixels by N. The probability of each level is denoted by p_i as follows [9]:

$p_i = n_i / N$  (5)

Suppose that we partition the pixels into two classes C0 and C1 (object and background) by a threshold at level k: C0 denotes pixels with levels [1, ..., k], and C1 denotes pixels with levels [k+1, ..., L]. Then the probabilities of class occurrence w0, w1, the class mean levels u0, u1, the total mean level uT, and the class variances are given by

$w_0 = \sum_{i=1}^{k} p_i$  (6)

$w_1 = \sum_{i=k+1}^{L} p_i = 1 - w_0$  (7)

$u_0 = \frac{1}{w_0} \sum_{i=1}^{k} i\, p_i$  (8)

$u_1 = \frac{1}{w_1} \sum_{i=k+1}^{L} i\, p_i$  (9)

$u_T = \sum_{i=1}^{L} i\, p_i$  (10)

$\sigma_0^2 = \frac{1}{w_0} \sum_{i=1}^{k} (i - u_0)^2 p_i$  (11)

$\sigma_1^2 = \frac{1}{w_1} \sum_{i=k+1}^{L} (i - u_1)^2 p_i$  (12)

The procedure for obtaining the optimal Para is based on obtaining the optimal threshold for every filtered image. The optimal threshold is determined by maximizing the separability between object and background using the following discriminant criterion measure, as in [9]:

$\eta(k) = \sigma_B^2(k) / \sigma_T^2$  (13)

where

$\sigma_B^2(k) = w_0 (u_0 - u_T)^2 + w_1 (u_1 - u_T)^2$  (14)

and

$\sigma_T^2 = \sum_{i=1}^{L} (i - u_T)^2 p_i$  (15)

are the between-class variance and the total variance of levels, respectively. The optimal threshold k* that maximizes η is selected in the following sequential search using equations (5)-(14):

$\eta(k^*) = \max_{1 \le k < L} \eta(k)$  (16)

Equation (16) is a discriminant criterion for selecting the gray level that maximizes the separability between object and background for a given picture. In this paper, a parameter Para is introduced, so equations (6)-(9), (11)-(14), and (16) are parameterized by Para and k, while equations (10) and (15) are parameterized by Para only. Equation (13) can therefore be rewritten as

$\eta(k, Para) = \sigma_B^2(k, Para) / \sigma_T^2(Para)$  (17)

where $\sigma_T^2(Para)$ is no longer a constant and is not negligible, although some computation reduction can still be applied to $\sigma_B^2$ and $\sigma_T^2$. What we want to acquire is the properly filtered picture containing a vivid object by searching over the parameter Para, so the discriminant criterion is improved as follows:

$\eta(k^*, Para^*) = \max_{Para} \max_{1 \le k < L} \eta(k, Para)$  (18)

In the above formulation, the parameter Para plays an important role: the optimal Para maximizes the separability between object and background, and it makes the Otsu segmentation method effective for segmenting a small object from a large background without a distinctive gray-value difference between them, which can be observed later from the image histograms after image enhancement.

2.3 Existence Discussion of Para and k*

The problem above is reduced to searching for a threshold k* under a given Para that maximizes the discriminant criterion in equation (18). The condition discussed is an image with at least two classes. Consequently, the following two cases do not occur: (1) w0 or w1 is zero originally, without setting Para, in which case there is only one class; (2) w0 or w1 becomes zero as Para increases, in which case there is finally only one class. The case of concern is the remaining one, with two classes present; thus, there is a certain Para with a proper k that makes the discriminant criterion maximal.

3 Experiments

This paper deals with monochrome images. First, the initial values are set: Para = 1/9 (beginning with an averaging filter for the 3×3 mask) and Mg = L = 256 (the range of gray levels, as used in equations (4) and (5)). Using the algorithm above, we compute the value of the discriminant criterion η(k*) for each Para. Subsequently, the optimal Para is obtained by comparing all the computed η(k*) values, and thereby the filtered image I'f that is most proper to be segmented is obtained.
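The search procedure just described can be sketched as follows in Python/NumPy, reusing the illustrative smooth_with_para filter from the earlier sketch: for each candidate Para the image is filtered, Otsu's criterion η(k) = σB²(k)/σT² is computed from the histogram of the filtered image, and the Para whose maximum η(k*) is largest is kept together with its threshold k*. Sweeping Para in steps of 1/9 is an assumption suggested by the initial value Para = 1/9 and the reported optimum 5/9; the paper does not state the step size.

import numpy as np

def otsu_criterion(gray, mg=255):
    # Return (eta_max, k_star): the maximum of eta(k) = sigma_B^2(k) / sigma_T^2
    # and the threshold k* attaining it, computed from the gray-level histogram.
    # Levels are indexed 0..mg here, whereas the paper uses 1..L.
    hist, _ = np.histogram(gray, bins=mg + 1, range=(0, mg))
    p = hist / hist.sum()                        # p_i = n_i / N, equation (5)
    levels = np.arange(mg + 1)
    omega = np.cumsum(p)                         # w0(k): probability of class C0
    mu = np.cumsum(levels * p)                   # first-order cumulative moment
    mu_t = mu[-1]                                # total mean level
    sigma_t2 = np.sum((levels - mu_t) ** 2 * p)  # total variance of levels
    if sigma_t2 == 0:                            # constant image: no separability
        return 0.0, 0
    denom = omega * (1.0 - omega)
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b2 = np.where(denom > 0, (mu_t * omega - mu) ** 2 / denom, 0.0)
    k_star = int(np.argmax(sigma_b2))            # threshold maximizing eta(k)
    return float(sigma_b2[k_star] / sigma_t2), k_star

def enhance_and_segment(gray, mg=255, para_values=None):
    # Sweep the filter parameter, keep the filtered image whose optimal
    # criterion eta(k*) is largest, and return it with its threshold.
    if para_values is None:
        para_values = [n / 9 for n in range(1, 10)]   # assumed sweep 1/9 .. 9/9
    best = None
    for para in para_values:
        filtered = smooth_with_para(gray, para, mg)
        eta, k_star = otsu_criterion(filtered, mg)
        if best is None or eta > best[0]:
            best = (eta, para, k_star, filtered)
    eta, para, k_star, filtered = best
    return filtered, k_star, para, eta

Thresholding the returned image at k* then separates the crack from the background, which is the segmentation step the paper performs after enhancement.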
Here, we take images of cracked containers as an example. Fig.3 and Fig.4 show the experimental process, in which the first rows show the filtered pictures, the second rows show the corresponding histograms, and the third rows show the curves of the corresponding discriminant criterion. The last columns are the optimal results of image enhancement, from which we can observe that almost all the noise, such as rust, shadows, and smears, is removed, leaving the cracked parts intact. Tab.1 and Tab.2 present the varying course of η(k*) along with Para for Fig.3 and Fig.4, respectively. When Para increases to 5/9, η(k*) reaches its maximum in both examples and the most proper filtered images are obtained. When Para continues to increase, η(k*) decreases and the integrity of the cracked part is seriously damaged, as shown in the last two columns of Fig.5.

4 Conclusion

This paper overcomes the disadvantage of the Otsu method under the following condition: when the gray-value differences among classes are not distinct and the object is small relative to the background, the separability among classes is insufficient. The paper proposes an effective image enhancement method in the spatial domain. We define all non-objects as noise, which leads us to design an effective filter that removes the noise in one pass. We propose an improved mask, based on the gray-level characteristics of cracked containers, that pushes the gray values of non-objects above a threshold while leaving the object below it. The filtered image most proper to be segmented is computed automatically using the improved discriminant criterion, following the principle of maximizing the separability between the object of interest and the uninteresting background. After the proposed image enhancement, the subsequent operations can be carried out easily. Experiments illustrate that the proposed method is valid and effective.

译文 (Translation): An Effective Automatic Image Enhancement Method

1 Introduction

Automated visual inspection of cracked containers (AVICC) is an application of machine vision technology. To realize our goal, four essential operations must be used: image preprocessing, object detection, feature description, and final classification of cracked objects. Image enhancement is intended to provide a result more suitable for the specific application than the original image. The main objective of the enhancement in this paper is to obtain higher-quality content about the object of interest while reducing noise as much as possible. Gonzalez describes image enhancement methods as falling into two main categories: spatial domain and frequency domain methods. Burton applies an image averaging technique to a face recognition system, enabling it to recognize faces across large variations. Centeno proposes an adaptive image enhancement algorithm that changes the order of image enhancement and segmentation in order to avoid sharpening noise and blurring borders. Munteanu applies artificial intelligence technology to provide denoising image enhancement. Besides spatial domain methods there are frequency domain methods, and a similarity measure can also be used for quantitative comparison of image segmentation algorithms.

In this paper, we propose an improved spatial low-pass filter. The discriminant criterion that determines the optimal filter parameter follows reference [9]. Convolving images with this mask easily removes the uninteresting background while preserving the part of interest. The rest of the paper is organized as follows: Sect.2 presents the theory and algorithm for enhancing an input image; Sect.3 demonstrates the validity of the method of Sect.2; finally, Sect.4 gives the conclusions.

2 Image Enhancement

2.1 Analysis of Prior Knowledge

The quality of the preprocessed image directly influences the subsequent work. Therefore, the relevant characteristics of the input images should be given. A standard image of a cracked container is shown in Fig.1 (a). From the image, we see that the cracked part occupies only a small region. Much noise, such as rust, shadows, and smears, appears in the picture. At a coarse glance, however, we find that the gray level of the hole is distinctly lower than that of the other parts. Further study of the gray levels shows that the pixels around the edge of the hole have the minimum values. (a) is the standard gray-level image of a cracked container; (b) is the histogram of Fig.1 (a).

2.2 Formulation

This section presents the principal content. The traditional spatial filter convolves the input image with a 3×3 mask. This method can handle the cases described by equation (1): (1) where I is the part we are interested in, N is Gaussian white noise, and (x, y) denotes a pair of coordinates. By blurring G we can eliminate N. Our purpose, however, is to eliminate not only the white noise but also the other irrelevant background. Therefore equation (1) is improved by equation (2): (2) where I' is what we want to obtain and N' is the noise. Fig.2 (c) shows an improved mask with a parameter. We will explain later that tuning it properly facilitates object segmentation. The smoothing function can be expressed by equation (3): (3) where F(x, y) denotes the smoothing filter, that is, the mask shown in Fig.2 (c). Now we consider only gray-level images and define Mg as the maximum gray level. The following equations are used to distinguish the object of interest from the non-object: (4) In essence, the convolution operator is a low-pass filtering process that blurs an image by convolving it with a mask. But why would each ...