**3. Feature extraction procedure from the image**

Image processing belongs to the field of signal processing in which input and output signals are both images.

Feature extraction aims to reduce the amount of data required to represent a large data set accurately. A feature can be defined as a function of one or more measurements which represents a property of the considered object (Choras, 2007). Features can be classified as *low-level features* and *high-level features*.

The *low-level features* are features which can be extracted automatically from an image without any information about its shape. A widely used approach is the so-called *edge detection*, which is adopted in order to identify points in a digital image at which the image brightness changes sharply; edge detection also highlights image contrast. The boundaries of features within an image can be discovered by detecting contrast as a difference in intensity. Trucco & Verri (Trucco & Verri, 1998) identified three main steps to perform edge detection: noise smoothing, edge enhancement and edge localization. Noise smoothing, also called noise reduction, eliminates as much noise as possible without destroying the edges of the image. Edge enhancement produces images with large intensity values at edge pixels and low intensity values elsewhere. Finally, edge localization decides which local maxima among the filter outputs are effective edges and which are produced by noise (Roque et al., 2010).

The Sobel edge detection operator (Sobel, 1970) was the most popular operator until the development of edge detection techniques with a stronger theoretical basis. It consists of two masks that identify the edges in vector form. The inputs of the Sobel approach are an image *I* and a threshold *t*. Once the noise smoothing filters have been applied, the corresponding linear filter is applied to the smoothed image by using a pair of 3x3 convolution masks, one estimating the gradient in the *x*-direction (columns) and the other estimating the gradient in the *y*-direction (rows).

$$
\begin{bmatrix}
-1 & 0 & 1 \\
-2 & 0 & 2 \\
-1 & 0 & 1
\end{bmatrix}
\qquad
\begin{bmatrix}
-1 & -2 & -1 \\
0 & 0 & 0 \\
1 & 2 & 1
\end{bmatrix}
\tag{1}
$$

The output of the two masks defined above is represented by two images *I1* and *I2*. Through equation (2), the magnitude of the intensity gradient is estimated for each pixel *(i,j)*.

$$p(i,j) = \sqrt{I_1(i,j)^2 + I_2(i,j)^2} \tag{2}$$

Finally, the pixels whose gradient magnitude *p(i,j)* exceeds the threshold *t* are identified as edges.
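The Sobel procedure above can be sketched in a few lines of Python. This is a minimal illustration, assuming a grayscale image stored as a 2-D NumPy array; the function name, the edge-replicating border handling and the toy threshold are illustrative choices, not part of the original formulation:

```python
import numpy as np

def sobel_edges(image, t):
    # The two 3x3 masks of equation (1): gradient along x (columns)
    # and along y (rows). A correlation is used here; the sign flip
    # with respect to true convolution is irrelevant for the magnitude.
    gx = np.array([[-1., 0, 1], [-2, 0, 2], [-1, 0, 1]])
    gy = np.array([[-1., -2, -1], [0, 0, 0], [1, 2, 1]])
    h, w = image.shape
    i1 = np.zeros((h, w))
    i2 = np.zeros((h, w))
    padded = np.pad(image, 1, mode="edge")  # edge-replicating border
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            i1[i, j] = np.sum(window * gx)
            i2[i, j] = np.sum(window * gy)
    p = np.sqrt(i1 ** 2 + i2 ** 2)  # gradient magnitude, equation (2)
    return p > t                    # pixels above t are labelled as edges

# A vertical step edge between a dark and a bright half:
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img, t=2.0)
```

Only the two columns straddling the intensity step exceed the threshold, so the detected edge follows the brightness discontinuity.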

The Canny edge detection operator (Canny, 1986) is probably the most popular edge detection technique at the moment. It was designed by taking into account three main purposes:

- best possible detection with no spurious responses;
- good localisation, with minimal distance between the detected and the effective edge position;
- a single response per edge, in order to delete multiple responses to a single edge.

The first requirement reduces the response to noise through optimal smoothing; Canny demonstrated that Gaussian filtering is optimal for edge detection. The second requirement is introduced to improve accuracy: it is used to detect edges in the right position. This result is obtained by a process of non-maximum suppression (similar to peak detection), which keeps only the points located at the top of a crest of edge data. Finally, the third requirement concerns the position of a single edge point when a change in brightness occurs.

*High-level feature extraction* is used to find shapes in computer images. To better understand this approach, suppose that the image to be analysed represents a human face. If we want to recognise the face automatically, we can extract its component features, for example the eyes, the mouth and the nose. To detect them we can exploit their shape information: for instance, we know that the white part of the eye is ellipsoidal, and so on. Shape extraction includes finding the position, the orientation and the size. In many applications the analysis can be helped by the way the shapes are arranged: in face analysis we expect to find the eyes above the nose and the mouth below it.

*Thresholding* is a simple shape extraction technique. It is used when the brightness of the shape is known: the pixels forming the shape can be detected by categorising pixels according to a fixed intensity threshold. Its main advantages are its simplicity and its low computational cost, but the approach is sensitive to changes in illumination, which is a considerable limitation. When the illumination level changes linearly, the adoption of histogram equalisation would provide an invariant image; unfortunately, this approach is highly sensitive to noise, again rendering the threshold comparison impracticable. An alternative technique consists in subtracting the background from the image before applying the threshold comparison; this approach requires a priori knowledge of the background. Threshold comparison and subtraction have the main advantage of being simple and fast, but the performances of both techniques are sensitive to partial shape data, noise and variations in illumination.
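Both variants, plain threshold comparison and background subtraction, can be sketched as follows (a toy example assuming grayscale images stored as NumPy arrays; the function names and the threshold value are illustrative):

```python
import numpy as np

def threshold_shape(image, t):
    # Plain threshold comparison: pixels brighter than t form the shape.
    return image > t

def subtract_and_threshold(image, background, t):
    # Background subtraction variant: requires a priori knowledge of the
    # background; the absolute difference is thresholded instead.
    return np.abs(image.astype(float) - background.astype(float)) > t

background = np.full((4, 4), 10.0)
scene = background.copy()
scene[1:3, 1:3] = 200.0          # a bright 2x2 "shape" on the background
mask = subtract_and_threshold(scene, background, t=50)
# mask is True exactly on the four shape pixels
```

The subtraction variant is insensitive to the absolute brightness of the background, but, as noted above, both variants remain sensitive to noise and illumination changes.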

Another popular shape extraction technique is the so-called *template matching*, which consists in matching a template to an image. The template is a sub-image that represents the shape to be found. The template is centred on an image point and the number of template points that match the underlying image points is counted; the procedure is repeated over the whole image, and the points which lead to the best match are the candidate locations of the shape within the image. Template matching can be seen as a method of parameter estimation, where the parameters define the position of the template; the main disadvantage of this approach is its high computational cost.
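The counting-based matching just described can be sketched as follows (a toy illustration on binary images, assuming NumPy arrays; an exact pixel-equality score is used here for simplicity, whereas practical implementations typically use correlation-based measures):

```python
import numpy as np

def match_template(image, template):
    """Slide the template over every position in the image, score each
    position by the number of exactly matching pixels, and return the
    best top-left corner together with its score."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = -1, None
    for i in range(ih - th + 1):            # exhaustive search: this is
        for j in range(iw - tw + 1):        # the high computational cost
            score = np.sum(image[i:i + th, j:j + tw] == template)
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score

img = np.zeros((6, 6), dtype=int)
img[2:4, 3:5] = 1                           # the shape to be found
tpl = np.array([[1, 1], [1, 1]])            # the template
pos, score = match_template(img, tpl)       # pos == (2, 3), score == 4
```

The nested search over all positions makes the cost proportional to the image area times the template area, which is the disadvantage mentioned above.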

Another popular technique for locating shapes in images is the *Hough Transform* (Hough, 1962). This method was introduced by Hough to find bubble tracks, and Rosenfeld (Rosenfeld, 1969) subsequently recognised its potential as an image processing method. It is widely used to extract lines, circles and ellipses, and its main advantage is that it can achieve the same results as the template matching approach while being faster (Princen et al., 1992). The Hough Transform defines a mapping from the image points into an accumulator space called the Hough space; the mapping is obtained in a computationally efficient manner based on the function that represents the target shape. This approach requires considerable storage and computational resources, though much less than template matching. When the shape to be extracted is more complex than lines and circles, or the image cannot be partitioned into geometric primitives, a *Generalised Hough*

*Transform* (GHT) approach can be used (Ballard, 1981). The GHT can be implemented based on the discrete representation given by tabular functions.

Fuzzy Inference Systems Applied to Image Classification in the Industrial Field 249

… sensors from both the static and the dynamic point of view (Ferrari & Piuri, 2003). The sensor modules may be able to self-calibrate and also to reduce unexpected non-linearities. Eventual errors can also be detected and, if necessary, corrected (Wandell et al., 2002). Images are usually acquired by cameras in digital format.

**4.2 Data pre-processing** 

The main aim of signal pre-processing is to reduce the noise and to exploit the inherent information provided by the signals. Many conventional pre-processing techniques have been proposed in the literature (Proakis & Manolakis, 1996; Rabiner & Gold, 1975), including computational intelligence techniques; in this context, a good survey of neural and fuzzy approaches for signal pre-processing is due to Widrow and Stearns (Widrow & Stearns, 1985). If the captured data consist of an image, the pre-processing phase is used to correct imperfect acquisition and source-image conditions. In any system implementing machine vision functionalities, a pre-processing phase is recommended in order to correct image acquisition errors or to improve characteristics for visual inspection.

Image pre-processing is a phase which, through several operations, improves the image by suppressing undesirable distortions or enhancing features relevant to the subsequent analysis tasks. Note that image pre-processing does not add information content to the image (Haralik & Shapiro, 1992; Hlavak et al., 1998) but exploits redundancy, based on the idea that neighbouring pixels of a real object correspond to similar brightness values. A distorted pixel can be removed from the image and reinserted with a value equal to the average of the neighbouring pixels.

The main operations included in the image pre-processing phase are summarised as follows: cropping, filtering, smoothing, brightness thresholding and edge detection.

*Cropping* is introduced to remove some parts of the image in order to highlight the regions of interest.

*Image filtering* exploits a small neighbourhood of each pixel of the input image in order to provide a new brightness value in the output image.

*Smoothing* techniques are used to reduce noise or fluctuations occurring in the image. To achieve this, it is necessary to suppress high frequencies in the Fourier transform domain.

*Brightness thresholding* is a fundamental operation to extract pertinent information. It consists of a grey-scale transformation whose result is a binary image. This approach is based on segmentation and separates objects from their background.

*Edge detection* is a very important step in image pre-processing. Edges are pixels lying where the image intensity changes sharply. The edge detection methods have been treated in more detail in the previous section.
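As an illustration of the smoothing operation mentioned above, high frequencies can be suppressed in the Fourier transform domain with a simple ideal low-pass mask (a sketch assuming a grayscale image stored as a NumPy array; the `keep_fraction` parameter is an illustrative choice):

```python
import numpy as np

def fourier_smooth(image, keep_fraction=0.2):
    """Low-pass smoothing in the Fourier transform domain: keep only a
    fraction of the low frequencies along each axis and transform back."""
    F = np.fft.fft2(image)
    h, w = image.shape
    kh, kw = int(h * keep_fraction), int(w * keep_fraction)
    mask = np.zeros_like(F)
    # In the unshifted FFT layout the low frequencies live in the corners.
    mask[:kh, :kw] = 1
    mask[:kh, -kw:] = 1
    mask[-kh:, :kw] = 1
    mask[-kh:, -kw:] = 1
    return np.real(np.fft.ifft2(F * mask))

# Smoothing a noisy constant image reduces the pixel-to-pixel fluctuations
# while preserving the mean brightness (the DC coefficient is kept):
rng = np.random.default_rng(0)
noisy = np.ones((32, 32)) + 0.1 * rng.standard_normal((32, 32))
smooth = fourier_smooth(noisy)
```

An ideal (sharp) frequency cut-off can introduce ringing artefacts; in practice a smoother low-pass profile, such as the Gaussian used earlier for the Canny operator, is often preferred.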
