Binarization Based on Maximum and Average Gray Values

*Saúl Manuel Domínguez Nicolás*

*DOI: http://dx.doi.org/10.5772/intechopen.99932*

#### **Abstract**

Many image processing techniques use binarization for object detection in images where the objects and background are clearly distinguished by their brightness values. The threshold level may be assigned globally or, if the method is adaptive, calculated locally. This chapter describes how to determine the optimal binarization threshold from an image using its mean and extreme gray values, exchanging the mean gray values relating to automatic analysis for a standard histogram equalization. The approach can evaluate a wide range of image features, even when the gray values in both the object of interest and the background of the image are not uniform.

**Keywords:** image processing, non-uniform images, mean gray values, extreme gray values, histogram equalization

### **1. Introduction**

In image processing, one aims to obtain information of interest from an image in order to achieve robust and reliable descriptors. In many image processing algorithms, segmentation is widely used to identify regions of input images, which is important as it may be necessary for the required analysis. Thresholding is the most commonly used technique in image segmentation; it is a binarization method used for object detection when background and objects differ in their brightness values. The threshold values used in a binarization can be chosen manually or automatically. In the manual form, trial experiments are necessary to find appropriate threshold values. Automatic selection combines the image information to obtain the optimal threshold value. Otsu's algorithm [1] uses the image histogram to obtain the threshold values. There are also algorithms based on edges, regions, and hybrids, which define their threshold values according to the information used. Canny edge detection [2], Sobel edge detection, and Laplacian edge detection [3] are algorithms based on edge information, since structures are depicted by edge points. These algorithms suppress the noise in the image to try to find edge pixels. For example, Laplacian edge detection uses the second-derivative information of the image intensity, while the Canny edge detector uses the gradient magnitude to find the edge pixels. The pixel intensities are the fundamental operands of these algorithms, so the detected boundary is made up of discrete pixels and can therefore be incomplete or discontinuous. Thus, post-processing techniques such as morphological operations are applied to connect the generated discontinuities. However, the edges of organs in medical images are not clearly defined, due to noise influence and the partial volume effect. Therefore, a pre-processing step is used before the later threshold-based algorithm [4, 5].
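As a concrete illustration of automatic, histogram-based threshold selection, the following sketch implements Otsu's criterion in plain NumPy on a synthetic bimodal image. The function name, the assumption of gray values normalized to [0, 1], and the synthetic test data are illustrative choices, not taken from the cited works.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the threshold that maximizes the between-class variance (Otsu)."""
    hist, edges = np.histogram(image, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                  # probability of each gray-level bin
    omega = np.cumsum(p)                   # class-0 probability up to bin t
    mu = np.cumsum(p * np.arange(bins))    # cumulative mean up to bin t
    mu_t = mu[-1]                          # global mean
    # Between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)       # ignore empty classes
    t = int(np.argmax(sigma_b))
    return edges[t + 1]                    # threshold in gray-value units

# Synthetic bimodal image: a dark object on a bright background
rng = np.random.default_rng(0)
img = rng.normal(0.8, 0.05, (64, 64))
img[20:40, 20:40] = rng.normal(0.2, 0.05, (20, 20))
img = np.clip(img, 0.0, 1.0)

t = otsu_threshold(img)
binary = img < t   # object pixels are darker than the threshold
```

The threshold lands between the two histogram modes, so the dark square is separated from the bright background without any manually tuned parameter.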

Region growing algorithms [6–10] exploit the fact that features inside a structure tend to be homogeneous. The grouping is initiated by the similarity of seeds in the desired regions, growing throughout the image according to the properties of neighboring pixels. The regions of the input image can be grown either by using a seed in the desired region with a local criterion, or through the distribution of seeds in different regions with a global criterion. Nevertheless, due to their reliance on intensity, these algorithms have problems undoing the influence of the partial volume effect.
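A minimal sketch of seeded region growing with a local criterion could look as follows; the function name, the 4-connected neighborhood, the intensity tolerance, and the synthetic image are illustrative assumptions, not a reproduction of the cited algorithms.

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=0.1):
    """Grow a region from `seed`, accepting 4-connected neighbors whose
    gray value lies within `tol` of the seed's value (local criterion)."""
    h, w = image.shape
    seed_val = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(image[ny, nx] - seed_val) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Homogeneous dark square (0.2) on a bright background (0.9)
img = np.full((32, 32), 0.9)
img[8:24, 8:24] = 0.2
region = region_grow(img, seed=(16, 16), tol=0.1)
```

Starting from a single seed inside the dark square, the region grows to exactly the homogeneous 16 × 16 block and stops at the intensity discontinuity.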

Hybrid algorithms use different image properties to complete the segmentation. One family of hybrid algorithms is the watershed, which uses morphological filters, gradient information, and image intensity to complete the segmentation [11–13]. In these algorithms, the gradient magnitude is seen as elevation, and the gray-scale images are considered as reliefs. Pixels with a local maximum gradient define the watershed lines, which enclose the pixels that define a region of the image. The complete segmentation of an image can be successfully produced through watershed algorithms. However, when the images are noisy, these algorithms tend to present over-segmentation problems. Successful experiments on the segmentation of knee cartilage images using the marker imposition technique have been reported [11]. The C-means algorithm [13] is used to avoid over-segmentation problems and improve the performance of the watershed algorithm.

At the end of the 1990s, some algorithms based on binarization via thresholding [14–17] began to be used to obtain the basic mechanical properties of materials through Vickers hardness testing [18–22]. In addition, morphological filters have been employed to eliminate speckles in the segmentation [14–17]. Other segmentation methods have been considered for obtaining mechanical properties in Vickers hardness testing, such as template edge matching [23, 24] and dual-resolution active contours segmentation [25]. These methods are suitable for indentation images with high contrast and straight edges. Moreover, high computational complexity, multiple user-specified parameters, and contour detection that may collapse in low-contrast images are challenges in algorithms based on edge- and line-oriented contour detection [26]. However, in the last three years, algorithms [27, 28] have been reported that detect objects whose edges are not exactly straight lines in low-contrast images. These algorithms use thresholding based on the extreme and mean gray values as the binarization criterion, which distinguishes them from other binarization techniques [14, 17, 24] used to segment similar images.

The purpose of this chapter is to show the reader thresholding algorithms based on the extreme and mean gray values as the binarization criterion, which are applied to detect objects of interest whose edges are not exactly straight lines, and where the gray values in both the object and background of the image are not uniform. These algorithms can also be applied to images with very low contrast, in comparison with the limited capabilities of other algorithms [23, 24, 27–31] to detect objects in this type of image.

This chapter is organized as follows: Section 2 describes image segmentation that semi-automatically evaluates maximum and average gray values as the binarization criterion. Next, Section 3 describes image segmentation that automatically evaluates maximum and average gray values as the binarization criterion. Section 4 includes examples of the techniques. Finally, the conclusions are reported in Section 5.

### **2. Image segmentation semi-automatically evaluating maximum and average gray values as binarization criteria**

At the end of 2018, an algorithm was reported that segments images using maximum and average gray values as the binarization criterion [27]. The algorithm's aim was to detect corners, so as to locate the vertices of the object of interest, which is called the indentation. Each input image is treated as a 2D monochromatic digital image with gray values between 0 and *Max*, where high values correspond to bright pixels (*Max* = white) and low values to dark pixels (0 = black). Each image contains exactly one indentation of approximately rhombic shape (see **Figure 1**), whose size, position, and exact orientation in the image are unknown. The indentation is assumed to be a dark region on a brighter background.

Image segmentation in the semi-automatic algorithm starts with binarization, using the average gray value of the input image as threshold and the difference to the maximum gray value as discriminant criterion. Both the average and the maximum gray value are global characteristics determined from the input image. Denoting the input image by $F = f(x, y)$, $x = x_1, \dots, x_{\max}$; $y = y_1, \dots, y_{\max}$ (see **Figure 1**), its average gray value is given by $f_{mean} = \frac{1}{x_{\max} y_{\max}} \sum_{x, y} f(x, y)$, and its maximum gray value by $f_{\max} = \max_{x, y} [f(x, y)]$.

Thus, the binarization criterion for every input pixel $p = (x, y)$ is: $p$ is considered a pixel of interest whenever $|f(x, y) - f_{\max}| > f_{mean}$.

The result is a binary image $G = g(x, y)$ with $x = x_1, \dots, x_{\max}$, $y = y_1, \dots, y_{\max}$, where each pixel of interest is represented as black and all other pixels as white. Therefore, the indentation region will be a subset of the black pixels. In this type of image, representing the indentation region as black coincides with the fact that this region is dark in the original image.
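A minimal NumPy sketch of this binarization criterion follows, assuming gray values normalized to [0, 1]; the function name and the synthetic image (a dark square standing in for the indentation, on a mid-gray background with one bright highlight) are illustrative assumptions.

```python
import numpy as np

def binarize_max_mean(f):
    """Binarize using the average gray value as threshold and the distance
    to the maximum gray value as discriminant: pixel (x, y) is of interest
    whenever |f(x, y) - f_max| > f_mean."""
    f_mean = f.mean()   # global average gray value
    f_max = f.max()     # global maximum gray value
    interest = np.abs(f - f_max) > f_mean
    # Pixels of interest are rendered black (0.0), all others white (1.0)
    return np.where(interest, 0.0, 1.0)

# Synthetic input: mid-gray background (0.4), one bright highlight (0.6),
# and a dark 10x10 "indentation" region (0.05)
img = np.full((40, 40), 0.4)
img[0, 0] = 0.6
img[15:25, 15:25] = 0.05
g = binarize_max_mean(img)
```

Here $f_{mean} \approx 0.38$ and $f_{\max} = 0.6$, so only the dark region satisfies $|f(x, y) - f_{\max}| > f_{mean}$ and appears black in the output, matching the convention that the indentation is dark in the original image.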

The first image of **Figure 2** has the values $f_{mean} = 0.3350$ and $f_{\max} = 0.5843$, with which the binarization criterion is evaluated for every input pixel of the image. Similarly, the second image of **Figure 2** presents the values $f_{mean} = 0.3202$ and $f_{\max} = 0.6157$. Finally, the values $f_{mean} = 0.3102$ and $f_{\max} = 0.6057$ are obtained from the third image of **Figure 2**. The result is the binary image shown in the second column corresponding to each image of **Figure 2**. However, in the binary image there can be many pixels detected as false positives of the region of interest, as shown in **Figure 2**. Thus, a morphological filter is applied to delete these black pixels not belonging to the indentation.
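One common morphological filter for removing such false positives is a binary opening (erosion followed by dilation), which deletes isolated pixels while preserving the larger indentation region. The sketch below uses a 3 × 3 structuring element and represents pixels of interest as `True` in a boolean mask; the function names, the element size, and the synthetic mask are illustrative assumptions rather than the exact filter of [27].

```python
import numpy as np

def erode(mask):
    """3x3 erosion: a pixel survives only if its whole 3x3 neighborhood is set."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def dilate(mask):
    """3x3 dilation: a pixel is set if any pixel in its 3x3 neighborhood is set."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def opening(mask):
    """Binary opening: erosion followed by dilation."""
    return dilate(erode(mask))

# Mask with a 10x10 detected region plus two isolated false-positive pixels
mask = np.zeros((30, 30), dtype=bool)
mask[10:20, 10:20] = True
mask[2, 2] = True
mask[25, 5] = True
clean = opening(mask)
```

The erosion removes the two speckles outright, and the dilation restores the 10 × 10 region to its original extent, leaving only the pixels belonging to the indentation-like block.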

**Figure 1.** *Example of Indentation image.*

**Figure 2.** *Indentation images and their binary images.*
