**Search Algorithm for Image Recognition Based on Learning Algorithm for Multivariate Data Analysis**

Juan G. Zambrano, E. Guzmán-Ramírez and Oleksiy Pogrebnyak

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52179

## **1. Introduction**

An image or a pattern can be recognized using prior knowledge or the statistical information extracted from the image or the pattern. The systems for image recognition and classification have diverse applications, e.g. autonomous robot navigation [1], image tracking radar [2], face recognition [3], biometrics [4], intelligent transportation, license plate recognition, character recognition [5] and fingerprints [6].

The problem of automatic image recognition is a composite task that involves detection and localization of objects in a cluttered background, segmentation, normalization, recognition and verification. Depending on the nature of the application, e.g. the sizes of the training and testing databases, clutter and variability of the background, noise, occlusion and, finally, speed requirements, some of these subtasks can be very challenging. Assuming that segmentation and normalization have been done, we focus on the subtask of object recognition and verification, and demonstrate the performance using several sets of images.

Diverse paradigms have been used in the development of algorithms for image recognition, some of them are: artificial neural networks [7, 8], principal component analysis [9, 10], fuzzy models [11, 12], genetic algorithms [13, 14] and Auto-Associative memory [15]. The following paragraphs describe some work done with these paradigms.

Abrishambaf *et al.* designed a fingerprint recognition system based on Cellular Neural Networks (CNN). The system includes a preprocessing phase, where the input fingerprint image is enhanced, and a recognition phase, where the enhanced fingerprint image is matched with the fingerprints in the database. Both the preprocessing and recognition phases are realized by means of CNN approaches. A novel application of a skeletonization method is used to perform ridgeline thinning, which improves the quality of the extracted lines for further processing and hence increases the overall system performance [6].


In [16], Yang and Park developed a fingerprint verification system based on a set of invariant moment features and a nonlinear Back Propagation Neural Network (BPNN) verifier. They used an image-based method with invariant moment features for fingerprint verification to overcome the demerits of traditional minutiae-based methods and other image-based methods. The proposed system contains two stages: an off-line stage for template processing and an on-line stage for testing with input fingerprints. The system preprocesses fingerprints and reliably detects a unique reference point to determine a Region of Interest (ROI). A total of four sets of seven invariant moment features are extracted from four partitioned sub-images of an ROI. Matching between the feature vectors of a test fingerprint and those of a template fingerprint in the database is evaluated by a nonlinear BPNN, and its performance is compared with other methods in terms of absolute distance as a similarity measure. The experimental results show that the proposed method with BPNN matching has a higher matching accuracy, while the method with absolute distance has a faster matching speed. Comparison results with other well-known methods also show that the proposed method outperforms them in verification accuracy.

In [17] the authors present a classifier based on a Radial Basis Function Network (RBFN) to detect frontal views of faces. The technique is separated into three main steps, namely: preprocessing, feature extraction, and classification and recognition. The curvelet transform and Linear Discriminant Analysis (LDA) are used to extract features from facial images first, and the RBFN is used to classify the facial images based on those features. The use of an RBFN also reduces the number of misclassifications caused by non-linearly separable classes. 200 images are taken from the ORL database, and parameters such as recognition rate, acceptance ratio and execution time performance are calculated. It is shown that neural network based face recognition is robust and performs well, with a recognition rate of 98.6% and an acceptance ratio of 85%.

Bhowmik *et al.* designed an efficient fusion technique for automatic face recognition. Fusion of visual and thermal images has been done to take advantage of thermal images as well as visual images. By employing fusion, a new image can be obtained which provides the most detailed, reliable and discriminating information. In this method, fused images are generated from visual and thermal face images in the first step. In the second step, the fused images are projected onto eigenspace and finally classified using a radial basis function neural network. In the experiments, the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) benchmark database of thermal and visual face images has been used. Experimental results show that the proposed approach performs well in recognizing unknown individuals, with a maximum success rate of 96% [8].

Zeng and Liu described the state of the art of important advances in type-2 fuzzy sets for pattern recognition [18]. The success of type-2 fuzzy sets has been largely attributed to their three-dimensional membership functions, which handle more uncertainties in real-world problems. In pattern recognition, both feature and hypothesis spaces have uncertainties, which motivates the integration of type-2 fuzzy sets with conventional classifiers to achieve better performance in terms of robustness, generalization ability, or recognition accuracy.

A face recognition system for personal identification and verification using a Genetic Algorithm (GA) and a Back-propagation Neural Network (BPNN) is described in [19]. The system consists of three steps. At the very outset, some pre-processing is applied to the input image. Secondly, face features are extracted; these are taken as the input of the BPNN and the GA in the third step, where classification is carried out using the BPNN and GA. The proposed approaches are tested on a number of face images. Experimental results demonstrate the high performance of these algorithms.


In [20], Blahuta *et al.* applied pattern recognition to a finite set of brainstem ultrasound images to generate neuro solutions for medical problems. For the analysis of these images, the method of Principal Component Analysis (PCA) was used. This is one of many methods for image processing, specifically for pattern recognition, where feature extraction is necessary. The authors also used artificial neural networks (ANN) for this problem and compared the results. The method was implemented in the NeuroSolutions software, a very sophisticated ANN simulator, with a PCA multilayer (ML) NN topology.

Pandit and Gupta proposed a Neural Network model that is utilized to train a system for image recognition. The NN model uses an Auto-Associative memory for training. The model reads the image in the form of a matrix and evaluates the weight matrix associated with the image. After the training process is done, whenever the image is provided to the system, the model recognizes it appropriately. The evaluated weight matrix is used for image pattern matching. It is noticed that the developed model is accurate enough to recognize the image even if the image is distorted or some portion/data is missing from it. This model eliminates the long, time-consuming process of image recognition [15].

In [21], the authors present the design of three types of neural networks with different features for image recognition, including traditional backpropagation networks, radial basis function networks and counterpropagation networks. The design complexity and generalization ability of the three types of neural network architectures are tested and compared on a digit image recognition problem. Traditional backpropagation networks require a very complex training process before being applied to classification or approximation. Radial basis function networks simplify the training process through their specially organized 3-layer architecture. Counterpropagation networks do not need a training process at all and can be designed directly by extracting all the parameters from the input data. The experimental results show the good noise tolerance of both RBF networks and counterpropagation networks on the image recognition problem, and also point out the poor generalization ability of traditional backpropagation networks. The excellent noise rejection ability makes RBF networks well suited for image data preprocessing before recognition.

The remaining sections of this Chapter are organized as follows. In the next Section, a brief theoretical background of the Learning Algorithm for Multivariate Data Analysis (LAMDA) is given. In Section 3 we describe the proposed search algorithm for image recognition based on the LAMDA algorithm. Then, in Section 4 we present the implementation results obtained by the proposed approach. Finally, Section 5 contains the conclusions of this Chapter.

## **2. Learning Algorithm for Multivariate Data Analysis**

The Learning Algorithm for Multivariate Data Analysis (LAMDA) is an incremental conceptual clustering method based on fuzzy logic, which can be applied in the processes of formation and recognition of concepts (classes). LAMDA has the following features [22-24]:

**•** The descriptors can be qualitative, quantitative or a combination of both.

**•** The previous knowledge of the number of classes is not necessary (unsupervised learning).

**•** LAMDA can use a supervised learning stage followed by an unsupervised one; for this reason, it is possible to achieve an evolutionary classification.

**•** Formation and recognition of concepts are based on the maximum adequacy (MA) rule.

**•** This methodology has the possibility to control the selectivity of the classification (exigency level) through the parameter *α*.

**•** LAMDA models the concept of maximum entropy (homogeneity). This concept is represented by a class denominated the Non-Informative Class (NIC). The NIC concept plays the role of a threshold of decision in the concept formation process.

Traditionally, the concept of similarity between objects has been considered fundamental to determine whether the descriptors are members of a class or not. LAMDA does not use similarity measures between objects in order to group them; instead, it calculates a degree of adequacy. This concept is expressed as a membership function between the descriptor and any of the previously established classes [22, 25].

#### **2.1. Operation of LAMDA**

The objects *X* (input vectors) and the classes *C* are represented by a number of descriptors denoted by (*d*1, ..., *dn*). Every *di* takes its value inside the set *Dk*; the *n*-ary product of the *Dk*, written as *D*1 × ... × *Dp*, with {(*d*1, ..., *dn*) : *di* ∈ *Dk* for 1 ≤ *i* ≤ *n*, 1 ≤ *k* ≤ *p*}, is denominated the Universe (*U*).

The set of objects can be described by *X* = {*x<sup>j</sup>* : *j* = 1, 2, ..., *M*}, and any object can be represented by a vector *x<sup>j</sup>* = (*x*1, ..., *xn*) where *xi* ∈ *U*, so every component *xi* corresponds to the value given by the descriptor *di* for the object *x<sup>j</sup>*. The set of classes can be described by *C* = {*c<sup>l</sup>* : *l* = 1, 2, ..., *N*}, and any class can be represented by a vector *c<sup>l</sup>* = (*c*1, ..., *cn*) where *ci* ∈ *U*, so every component *ci* corresponds to the value given by the descriptor *di* for the class *c<sup>l</sup>* [23].

#### *2.1.1. Marginal Adequacy Degree*


Given an object *x<sup>j</sup>* and a class *c<sup>l</sup>*, LAMDA computes for every descriptor the so-called *marginal adequacy degree (MAD)* between the value of component *xi* of object *x<sup>j</sup>* and the value that the component *ci* takes in *c<sup>l</sup>*, which is denoted as:

$$MAD\left(x_i^j / c_i^l\right): \mathbf{x}^j \times \mathbf{c}^l \to \left[0,1\right]^n \tag{1}$$

Hence, one MAD vector can be associated with an object *x<sup>j</sup>* (see Figure 1). To maintain consistency with fuzzy logic, the descriptors must be normalized using (2). This stage generates *N* MADs, and this process is repeated iteratively for every object with all classes [26].

$$x_i = \frac{\tilde{x}_i - x_{\min}}{x_{\max} - x_{\min}} = \frac{\tilde{x}_i}{2^L - 1} \tag{2}$$
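As a concrete illustration of (2), descriptors taken from an 8-bit image can be normalized as follows (a minimal Python sketch; the function name is ours, not part of the LAMDA literature):

```python
def normalize_descriptors(x_raw, L=8):
    """Normalize raw descriptors into [0, 1] following Eq. (2).

    For image data, x_min = 0 and x_max = 2**L - 1, where L is the
    number of bits per pixel (L = 8 for ordinary grayscale images).
    """
    scale = 2 ** L - 1
    return [x / scale for x in x_raw]

# An 8-bit pixel vector: 0 maps to 0.0 and 255 maps to 1.0.
print(normalize_descriptors([0, 51, 102, 255]))  # [0.0, 0.2, 0.4, 1.0]
```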

**Figure 1.** LAMDA basic structure.

Membership functions, denoted as *μX*(*x*), are used to associate a degree of membership of each of the elements of the domain to the corresponding fuzzy set. This degree of membership indicates the certainty (or uncertainty) that the element belongs to that set. Membership functions for fuzzy sets can be of any shape or type as determined by experts in the domain over which the sets are defined. They must only satisfy the following constraints [27].

**•** A membership function must be bounded from below by 0 and from above by 1.

**•** The range of a membership function must therefore be [0, 1].

**•** For each *x* ∈ *U*, the membership function must be unique. That is, the same element cannot map to different degrees of membership for the same fuzzy set.
The MAD is a membership function derived from a fuzzy generalization of a binomial probability law [26]. As before, *x<sup>j</sup>* = (*x*1, ..., *xn*), and let *E* be a non-empty, proper subset of *X*. We have an experiment where the result is considered a "success" if the outcome *xi* is in *E*; otherwise, the result is considered a "failure". Let *P*(*E*) = *ρ* be the probability of success, so *P*(*E*′) = *q* = 1 − *ρ* is the probability of failure; intermediate values then have a degree of success or failure. The probability mass function of *X* is defined as [28].

$$f(x) = \rho^{x}\left(1-\rho\right)^{1-x} \tag{3}$$
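Evaluating (3) at the crisp endpoints recovers the ordinary binomial law, while intermediate values of x in [0, 1] yield a graded adequacy. A minimal sketch (our illustration; the function name and the sample value of ρ are assumptions):

```python
def mad_binomial(x, rho):
    """Fuzzy binomial MAD from Eq. (3): f(x) = rho**x * (1 - rho)**(1 - x).

    x   : normalized descriptor value in [0, 1]
    rho : class parameter in (0, 1)
    Intermediate x interpolates between the two crisp outcomes.
    """
    return (rho ** x) * ((1.0 - rho) ** (1.0 - x))

# Crisp endpoints recover the ordinary binomial law:
print(mad_binomial(1.0, 0.75))  # 0.75  (i.e. rho)
print(mad_binomial(0.0, 0.75))  # 0.25  (i.e. 1 - rho)
```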


where *ρ* ∈ [0, 1]. The following Fuzzy Probability Distributions are typically used by the LAMDA methodology to calculate the MADs [25], [29]:

**•** Fuzzy Binomial Distribution.

**•** Fuzzy Binomial-Center Distribution.

**•** Fuzzy Binomial-Distance Distribution.

**•** Gaussian Distribution.


#### *2.1.2. Global Adequacy Degree*

Global Adequacy degree (GAD) is obtained by aggregating or summarizing all the marginal information previously calculated (see Figure 1), using mathematical aggregation operators (T-norms and S-conorms), given *N* MADs of an object *x<sup>j</sup>* relative to class *c<sup>l</sup>*, through a linear convex T-S function *L<sub>α</sub><sup>T,S</sup>*. Some T-norms and their dual S-conorms used in the LAMDA methodology are shown in Table 1 [22, 23].

The aggregation operators are mathematical objects that have the function of reducing a set of numbers into a unique representative number. This is simply a function which assigns a real number *y* to any *n*-tuple (*x*1, *x*2, ..., *xn*) of real numbers, *y* = *A*(*x*1, *x*2, ..., *xn*) [30].

The T-norms and S-conorms are two families specialized in aggregation under uncertainty. They can also be seen as a generalization of the Boolean logic connectives to multi-valued logic. The T-norms generalize the conjunctive 'AND' (intersection) operator and the S-conorms generalize the disjunctive 'OR' (union) operator [30].

The linear convex T-S function is part of the so-called compensatory functions, and is utilized to combine a T-norm and an S-conorm in order to compensate for their opposite effects. Zimmermann and Zysno [30] discovered that, in a decision-making context, humans follow neither exactly the behavior of a T-norm nor that of an S-conorm when aggregating. In order to get closer to the human aggregation process, they proposed an operator on the unit interval based on T-norms and S-conorms.



| **Name** | **T-Norm (Intersection)** | **S-Conorm (Union)** |
|---|---|---|
| Min-Max | $\min(x_1, \ldots, x_n)$ | $\max(x_1, \ldots, x_n)$ |
| Product | $\prod_{i=1}^{n} x_i$ | $1 - \prod_{i=1}^{n}(1 - x_i)$ |
| Lukasiewicz | $\max\{1 - n + \sum_{i=1}^{n} x_i,\, 0\}$ | $\min\{\sum_{i=1}^{n} x_i,\, 1\}$ |
| Yager | $1 - \min\{(\sum_{i=1}^{n}(1 - x_i)^{1/\lambda})^{\lambda},\, 1\}$ | $\min\{(\sum_{i=1}^{n} x_i^{1/\lambda})^{\lambda},\, 1\}$ |
| Hamacher | $1 \big/ \left(1 + \sum_{i=1}^{n} \frac{1 - x_i}{x_i}\right)$, or $0$ if some $x_i = 0$ | $\sum_{i=1}^{n} \frac{x_i}{1 - x_i} \big/ \left(1 + \sum_{i=1}^{n} \frac{x_i}{1 - x_i}\right)$, or $1$ if some $x_i = 1$ |

**Table 1.** T-norms and S-conorms.
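The first three rows of Table 1 can be rendered directly in code (an illustrative sketch; the function names are ours):

```python
import math

def t_min(xs):            # Min-Max T-norm (fuzzy AND)
    return min(xs)

def s_max(xs):            # Min-Max S-conorm (fuzzy OR)
    return max(xs)

def t_product(xs):        # Product T-norm
    return math.prod(xs)

def s_product(xs):        # Product S-conorm (probabilistic sum)
    return 1.0 - math.prod(1.0 - x for x in xs)

def t_lukasiewicz(xs):    # Lukasiewicz T-norm
    return max(1 - len(xs) + sum(xs), 0.0)

def s_lukasiewicz(xs):    # Lukasiewicz S-conorm
    return min(sum(xs), 1.0)

mads = [0.5, 0.5, 0.25]
print(t_min(mads), s_max(mads))                  # 0.25 0.5
print(t_product(mads), s_product(mads))          # 0.0625 0.8125
print(t_lukasiewicz(mads), s_lukasiewicz(mads))  # 0.0 1.0
```

Note that every T-norm result is no larger than its dual S-conorm result on the same inputs, which is what makes the convex combination in (4) a genuine compensation between the two.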

 


One class of non-associative T-norm and T-conorm-based compensatory operators is the linear convex T-S function [31]:

$$L_{\alpha}^{T,S}(x_1, \ldots, x_n) = \alpha \cdot T(x_1, \ldots, x_n) + (1 - \alpha) \cdot S(x_1, \ldots, x_n) \tag{4}$$

where $\alpha \in [0, 1]$, $T \le L_{\alpha}^{T,S} \le S$, $T = L_{1}^{T,S}$ (intersection) and $S = L_{0}^{T,S}$ (union). The parameter $\alpha$ is called the exigency level [22, 25].
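The compensation behavior of (4) can be sketched as follows (our illustration using the Min-Max pair from Table 1; the function name is an assumption):

```python
def gad_linear_convex(mads, alpha, t_norm=min, s_conorm=max):
    """Linear convex T-S aggregation from Eq. (4).

    alpha = 1 recovers the pure T-norm (strict intersection),
    alpha = 0 recovers the pure S-conorm (permissive union).
    """
    assert 0.0 <= alpha <= 1.0
    return alpha * t_norm(mads) + (1.0 - alpha) * s_conorm(mads)

mads = [0.5, 0.25, 0.75]
print(gad_linear_convex(mads, alpha=1.0))  # 0.25 (pure T-norm)
print(gad_linear_convex(mads, alpha=0.0))  # 0.75 (pure S-conorm)
print(gad_linear_convex(mads, alpha=0.5))  # 0.5  (compensation)
```

Raising the exigency level α pushes the aggregation toward the strict intersection, so an object must fit *all* descriptors well to obtain a high GAD.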

Finally, once the GAD of the object $x^j$ with respect to every class has been computed, and according to the MA rule, $x^j$ is placed in the class with the highest adequacy degree [23]. The MA rule is defined as

$$MA = \max\left( GAD_{c^1}(x^j), GAD_{c^2}(x^j), \ldots, GAD_{c^N}(x^j) \right) \tag{5}$$

LAMDA has been applied to different domains: medical images [32], pattern recognition [33], detection and diagnosis of failures in industrial processes [34], biological processes [35], distribution systems of electrical energy [36], processes for drinking water production [29], monitoring and diagnosis of industrial processes [37], selection of sensors [38], and vector quantization [39].

## **3. Image recognition based on Learning Algorithm for Multivariate Data Analysis**

In this section the image recognition algorithm based on LAMDA is described. Our proposal is divided into two phases, training and recognition. In the training phase, a codebook is generated based on the LAMDA algorithm; let us name it the LAMDA codebook. In the recognition phase, we propose a search algorithm based on LAMDA and show its application to the image recognition process.

## **3.1. Training phase**

The LAMDA codebook is calculated in two stages, see Figure 2.

**Figure 2.** LAMDA codebook generation scheme

Stage 1. *LAMDA codebook generation.* At this stage, a codebook based on the LAMDA algorithm is generated. This stage is a supervised process; the training set used in the codebook generation is formed by a set of images.

Let $x = \left[ x_i \right]_n$ be a vector which represents an image; the training set is defined as $A = \{ x^j : j = 1, 2, \ldots, M \}$. The result of this stage is a codebook denoted as $C = \{ c^l : l = 1, 2, \ldots, N \}$, where $c^l = \left[ c_i \right]_n$.

Stage 2. *LAMDA codebook normalization.* Before using the LAMDA codebook, it must be normalized:

$$c_i = \frac{\tilde{c}_i - c_{\min}}{c_{\max} - c_{\min}} = \frac{\tilde{c}_i}{2^L - 1} \tag{6}$$

where $i = 1, 2, \ldots, n$, $\tilde{c}_i$ is the descriptor before normalization, $c_i$ is the normalized descriptor, $0 \le c_i \le 1$, $c_{\min} = 0$ and $c_{\max} = 2^L - 1$; in the context of image processing, $L$ is the number of bits necessary to represent the value of a pixel. The limits (minimum and maximum) of the descriptor values are the limits of the data set.
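In practice, Eq. (6) reduces to dividing each raw descriptor by $2^L - 1$. A minimal sketch (the function name is ours; `L=8` assumes 8-bit gray-scale pixels):

```python
import numpy as np

def normalize_descriptors(raw, L=8):
    """Map raw pixel descriptors from [0, 2^L - 1] into [0, 1], as in Eq. (6)."""
    c_min, c_max = 0, 2 ** L - 1
    return (np.asarray(raw, dtype=float) - c_min) / (c_max - c_min)

codeword = [0, 127, 255]
print(normalize_descriptors(codeword))  # values in [0, 1]
```

The same routine also serves for the input-image normalization of the recognition phase, since Eq. (7) below has the identical form.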

## **3.2. Search algorithm for image recognition based on LAMDA**

The proposed search algorithm performs the recognition task according to a membership criterion, computed in four stages.

Stage 1. *Image normalization:* Before using the descriptors of the image in the LAMDA search algorithm, they must be normalized:

$$x_i = \frac{\tilde{x}_i - x_{\min}}{x_{\max} - x_{\min}} = \frac{\tilde{x}_i}{2^L - 1} \tag{7}$$

where $i = 1, 2, \ldots, n$, $\tilde{x}_i$ is the descriptor before normalization, $x_i$ is the normalized descriptor, $0 \le x_i \le 1$, $x_{\min} = 0$ and $x_{\max} = 2^L - 1$; $L$ is the number of bits necessary to represent the value of a pixel. The limits (minimum and maximum) of the descriptor values are the limits of the data set.

Stage 2. *Marginal Adequacy Degree (MAD)*. MADs are calculated for each descriptor $x_i^j$ of each input vector $x^j$ with each descriptor $c_i^l$ of each class $c^l$. For this purpose, we can use one of the following fuzzy probability distributions:

*Fuzzy Binomial Distribution:*


$$MAD(x_i^j / c_i^l) = \left( \rho_i^l \right)^{x_i^j} \left( 1 - \rho_i^l \right)^{1 - x_i^j} \tag{8}$$

where $i = 1, 2, \ldots, n$; $j = 1, 2, \ldots, M$; and $l = 1, 2, \ldots, N$. For all fuzzy probability distributions, $\rho_i^l = c_i^l$.

*Fuzzy Binomial-Center Distribution:*

$$MAD(x_i^j / c_i^l) = \frac{\left( \rho_i^l \right)^{x_i^j} \left( 1 - \rho_i^l \right)^{1 - x_i^j}}{\left( x_i^j \right)^{x_i^j} \left( 1 - x_i^j \right)^{1 - x_i^j}} \tag{9}$$

*Fuzzy Binomial-Distance Distribution:*

$$MAD(x_i^j / c_i^l) = \left( a \right)^{\left( 1 - x_{dist} \right)} \left( 1 - a \right)^{x_{dist}} \tag{10}$$

where $a = \max(\rho_i^l, 1 - \rho_i^l)$, $\lfloor \cdot \rfloor$ denotes a rounding operation to the largest previous integer value, and $x_{dist} = \mathrm{abs}(x_i^j - \rho_i^l)$.

*Gaussian Function:*

$$MAD(x_i^j / c_i^l) = e^{-\frac{1}{2} \frac{\left( x_i^j - \rho_i^l \right)^2}{\sigma^2}} \tag{11}$$


where $\sigma^2 = \frac{1}{n-1} \sum_{i=1}^{n} \left( x_i^j - \bar{x} \right)^2$ and $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i^j$ are the variance and arithmetic mean of the vector $x^j$, respectively.
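As a reference, the four MAD functions (8)-(11) can be sketched in a few lines. This is our illustrative implementation (with $\rho_i^l = c_i^l$ as stated above); the small clipping constant in the binomial-center case is our own guard against $0^0$-type indeterminacies, not part of the chapter:

```python
import numpy as np

EPS = 1e-6  # our guard: keeps x away from exactly 0 or 1 in Eq. (9)

def mad_binomial(x, rho):
    """Fuzzy binomial distribution, Eq. (8)."""
    return rho ** x * (1.0 - rho) ** (1.0 - x)

def mad_binomial_center(x, rho):
    """Fuzzy binomial-center distribution, Eq. (9)."""
    x = np.clip(x, EPS, 1.0 - EPS)
    return mad_binomial(x, rho) / (x ** x * (1.0 - x) ** (1.0 - x))

def mad_binomial_distance(x, rho):
    """Fuzzy binomial-distance distribution, Eq. (10)."""
    a = np.maximum(rho, 1.0 - rho)
    x_dist = np.abs(x - rho)
    return a ** (1.0 - x_dist) * (1.0 - a) ** x_dist

def mad_gaussian(x, rho, sigma2):
    """Gaussian function, Eq. (11); sigma2 is the variance of the input vector."""
    return np.exp(-0.5 * (x - rho) ** 2 / sigma2)
```

All four return values in $[0, 1]$ that grow as the normalized descriptor approaches the class descriptor, which is what the aggregation stage below consumes.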

Stage 3. *Global Adequacy Degree (GAD)*. This stage determines the grade of membership of each input vector $x^j$ to each class $c^l$ by means of the linear convex function (12) and the use of mathematical aggregation operators (T-norms and S-conorms); these are shown in Table 2.

$$GAD_{c^l}(x^j) = L_{\alpha}^{T,S} = \alpha \cdot T\left( MAD(x_1^j / c_1^l), \ldots, MAD(x_n^j / c_n^l) \right) + (1 - \alpha) \cdot S\left( MAD(x_1^j / c_1^l), \ldots, MAD(x_n^j / c_n^l) \right) \tag{12}$$


| **Operator** | **T-Norm (Intersection)** | **S-Conorm (Union)** |
|---|---|---|
| Min-Max | $\min_i \left( MAD(x_i^j / c_i^l) \right)$ | $\max_i \left( MAD(x_i^j / c_i^l) \right)$ |
| Product | $\prod_{i=1}^{n} MAD(x_i^j / c_i^l)$ | $1 - \prod_{i=1}^{n} \left( 1 - MAD(x_i^j / c_i^l) \right)$ |

**Table 2.** Mathematical aggregation operators
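The operator pairs of Table 2 plug directly into Eq. (12). A hedged sketch of the GAD of one object with respect to one class (the function and parameter names are ours):

```python
import numpy as np

def gad(mads, alpha, operator="min-max"):
    """Global adequacy degree, Eq. (12): alpha*T(MADs) + (1 - alpha)*S(MADs)."""
    mads = np.asarray(mads, dtype=float)
    if operator == "min-max":
        t, s = mads.min(), mads.max()              # Table 2, first row
    elif operator == "product":
        t = mads.prod()                            # Table 2, second row
        s = 1.0 - np.prod(1.0 - mads)
    else:
        raise ValueError(f"unknown operator: {operator}")
    return alpha * t + (1.0 - alpha) * s

mads = [0.9, 0.8, 0.95]
print(gad(mads, alpha=1.0))                        # pure T-norm (min)
print(gad(mads, alpha=1.0, operator="product"))    # pure T-norm (product)
```

Note how the product T-norm shrinks quickly as $n$ grows, which anticipates the poor behavior of the product operator reported in the results section.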

Stage 4. *Obtaining the index*. Finally, this stage generates the index of the class to which the input vector belongs. The index is determined by the GAD that presents the maximum value (MA rule):

$$index = \max\left( GAD_{c^1}(x^j), GAD_{c^2}(x^j), \ldots, GAD_{c^N}(x^j) \right) \tag{13}$$

Figure 3 shows the proposed VQ scheme that makes use of the LAMDA algorithm and the codebook generated by the LAMDA algorithm.
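Putting the four stages together, the recognition phase can be sketched end-to-end as follows. This is our illustrative implementation using the binomial distribution with the min-max operator; the helper names and the toy codebook are ours, not the chapter's:

```python
import numpy as np

def classify(image, codebook, L=8, alpha=1.0):
    """Return the index of the class with the highest GAD (Eqs. 7, 8, 12, 13)."""
    x = np.asarray(image, dtype=float) / (2 ** L - 1)   # Stage 1: normalization, Eq. (7)
    gads = []
    for c in codebook:                                  # one class c^l per codeword
        rho = np.asarray(c, dtype=float) / (2 ** L - 1)
        mads = rho ** x * (1.0 - rho) ** (1.0 - x)      # Stage 2: binomial MADs, Eq. (8)
        gads.append(alpha * mads.min() + (1.0 - alpha) * mads.max())  # Stage 3: GAD
    return int(np.argmax(gads))                         # Stage 4: MA rule, Eq. (13)

# Two toy 3-pixel "images"; values kept away from 0 and 255 so the powers stay non-degenerate.
codebook = [[10, 10, 10], [240, 245, 250]]
print(classify([250, 240, 255], codebook))  # bright input -> the bright class
```

A distorted version of a training image can then be presented to `classify`, and the returned index identifies the matching codeword.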


**Figure 3.** Search algorithm LAMDA

## **4. Results**


In this section, the findings of the implementation of the LAMDA search algorithm for gray-scale image recognition are presented. In this implementation, only the binomial and binomial-center fuzzy probability distributions and the min-max and product aggregation operators are used, because they have a lower computational complexity.

**Figure 4.** Images of set-1, (a) original image. Altered images, erosive noise (b) 60%, (c) 100%; mixed noise (d) 30%, (e) 40%

| **Fuzzy distribution** | **Aggregation operator** | **Exigency level (α)** | **Original (0%)** | **Erosive noise 60%** | **Erosive noise 100%** | **Mixed noise 30%** | **Mixed noise 40%** |
|---|---|---|---|---|---|---|---|
| Binomial | Min-max | 1 | 100% | 100% | 100% | 100% | 100% |
| Binomial | Product | 0-1 | 0% | 0% | 0% | 0% | 0% |
| Binomial center | Min-max | 1 | 100% | 100% | 100% | 100% | 100% |
| Binomial center | Product | 1 | 100% | 100% | 0% | 0% | 0% |

**Table 3.** Performance results (recognition rate) shown by the proposed search algorithm with altered versions of the test images of set-1

**Figure 5.** Images of set-2, (a) original image. Altered images, erosive noise (b) 60%, (c) 100%; mixed noise (d) 30%, (e) 40%

For this experiment we chose two test sets of images, called set-1 and set-2, and their altered versions (see Figures 4, 5). We say that an altered version $\tilde{x}^{\gamma}$ of the image $x^{\gamma}$ has undergone an *erosive* change whenever $\tilde{x}^{\gamma} \le x^{\gamma}$, a *dilative* change whenever $\tilde{x}^{\gamma} \ge x^{\gamma}$, and a *mixed* change when it includes a mixture of erosive and dilative changes. These images were used to train the LAMDA codebook. At this stage, it was determined by means of some tests that the best results for the test images of set-1 were obtained if we used only the original images and the altered versions with 60% erosive noise. In the case of the test images of set-2, to obtain the best results we used only the original images and the altered versions with 60% and 100% erosive noise.
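For instance, an erosive alteration ($\tilde{x} \le x$) can be simulated by randomly pushing a fraction of the pixel values down. This sketch is ours and is not the noise generator used by the authors; the generator and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)  # generator and seed are our choice

def add_erosive_noise(image, fraction):
    """Alter roughly `fraction` of the pixels so that the result satisfies x_tilde <= x."""
    img = np.asarray(image, dtype=np.int64).copy()
    mask = rng.random(img.shape) < fraction     # pixels selected for alteration
    img[mask] = rng.integers(0, img[mask] + 1)  # each altered pixel drops into [0, x]
    return img

original = np.array([200, 150, 100, 50])
eroded = add_erosive_noise(original, 0.6)       # eroded <= original, element-wise
```

A dilative change would instead draw the new values from $[x, 2^L - 1]$, and a mixed change combines both kinds of alteration.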

To evaluate the proposed search algorithm performance, altered versions of these images distorted by random noise were presented to the classification stage of the search algorithm LAMDA (see Figures 4, 5).

The use of two fuzzy probability distributions and two aggregation operators allows four combinations. This way, four versions of the search algorithm LAMDA are obtained: binomial min-max, binomial product, binomial center min-max and binomial center product. Moreover, the exigency level (α) was varied in the range from 0 to 1 with step 0.1 to determine the value that provides the best results. Each version of LAMDA was evaluated using the two sets of test images. The results of this experiment are shown in Tables 3 and 4.

Table 3 shows the results obtained using the combinations binomial min-max, binomial product, binomial center min-max and binomial center product with the set of test images of set-1.






| **Fuzzy distribution** | **Aggregation operator** | **Exigency level (α)** | **Original (0%)** | **Erosive noise 60%** | **Erosive noise 100%** | **Mixed noise 30%** | **Mixed noise 40%** |
|---|---|---|---|---|---|---|---|
| Binomial | Min-max | 1 | 100% | 100% | 100% | 100% | 100% |
| Binomial | Product | 1 | 0% | 0% | 0% | 0% | 0% |
| Binomial center | Min-max | 1 | 100% | 100% | 100% | 100% | 100% |
| Binomial center | Product | 1 | 0% | 0% | 0% | 0% | 0% |

**Table 4.** Performance results (recognition rate) shown by the proposed search algorithm with altered versions of the test images of set-2

In the case of the combination of the binomial distribution with the aggregation operator min-max, the best results were obtained with an exigency level in the range from 0.8 to 1. We chose the exigency level equal to 1. As a result, the linear convex function is reduced by half and, consequently, the number of operations is reduced. On the other hand, the combination of the binomial distribution with the aggregation operator product was unable to perform the classification.

In the combination of the binomial center distribution with the aggregation operator min-max, the best results were obtained with an exigency level in the range from 0.1 to 1. We chose the exigency level equal to 1. This way, the linear convex function is reduced by half, thus reducing the number of operations.

On the other hand, using the combination of the binomial center distribution with the aggregation operator product, the best results were obtained with an exigency level equal to 1, although, as shown in Table 3, the classification is not efficient with the images altered with erosive noise of 100% and with mixed noise of 30% and 40%. Even so, with this combination better results were obtained than with the combination of the binomial distribution with the aggregation operator product.

\*Address all correspondence to: olek@cic.ipn.mx

1 Universidad Tecnológica de la Mixteca, México

*tion Engineering*, 4, 705-13.

*tics & Control*, 16(1), 34-8.

gy.; , 58, 174-8.

*plications*, 146-9.

19-24.

**References**

2 Centro de Investigación en Computación, IPN, México

[1] Kala, R., Shukla, A., Tiwari, R., Rungta, S., & Janghel, R. R. (2009). Mobile Robot Nav‐ igation Control in Moving Obstacle Environment Using Genetic Algorithm, Artificial Neural Networks and A\* Algorithm. *World Congress on Computer Science and Informa‐*

Search Algorithm for Image Recognition Based on Learning Algorithm for Multivariate Data Analysis

http://dx.doi.org/10.5772/ 52179

19

[2] Zhu, Y., Yuan, Q., Wang, Q., Fu, Y., & Wang, H. (2009). Radar HRRP Recognition Based on the Wavelet Transform and Multi-Neural Network Fusion. *Electronics Op‐*

[3] Esbati, H., & Shirazi, J. (2011). Face Recognition with PCA and KPCA using Elman Neural Network and SVM. World Academy of Science, Engineering and Technolo‐

[4] Bowyer, K. W., Hollingsworth, K., & Flynn, P. J. (2008). Image understanding for iris biometrics: A survey. *Computer Vision and Image Understanding*, 110(2), 281-307.

[5] Anagnostopoulos-N, C., Anagnostopoulos, E., Psoroulas, I. E., Loumos, I. D., Kaya‐ fas, V., & , E. (2008). License Plate Recognition From Still Images and Video Sequen‐ ces: A Survey. *IEEE Transactions on Intelligent Transportation Systems*, 9(3), 377-91.

[6] Abrishambaf, R., Demirel, H., & Kale, I. (2008). A Fully CNN Based Fingerprint Rec‐ ognition System. *11th International Workshop on Cellular Neural Networks and their Ap‐*

[7] Egmont-Petersen, M., Ridder, D., & Handels, H. (2002). Image processing with neu‐

[8] Bhowmik, M. K., Bhattacharjee, D., Nasipuri, M., Basu, D. K., & Kundu, M. (2009). Classification of Fused Images using Radial Basis Function Neural Network for Hu‐ man Face Recognition. *World Congress on Nature & Biologically Inspired Computing*,

[9] Yang, J., Zhang, D., Frangi, A., & Yang, J-y. (2004). Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition. *IEEE Transac‐*

[10] Gottumukkal, R., & Asari, V. (2004). An improved face recognition technique based

ral networks-a review. *Pattern Recognition Letters*, 35(10), 2279-301.

*tions on Pattern Analysis and Machine Intelligence*, 26(1), 131-7.

on modular PCA approach. *Pattern Recogn Letters*, 25(4), 429-36.

Table 4 show the results obtained using the combinations: binomial min-max, binomial product, binomial center min-max and binomial center product and using the set of test im‐ ages of the set-2.

For the combination of the binomial distribution with the aggregation operator min-max, the best results were obtained with a value of exigency level in the range from 0.7 to 1. With the exigency level equal to 1, the linear convex function is reduced by half thus reducing the number of operations. On the other hand, the combination of the binomial distribution with the aggregation operator product was unable to perform classification.

In the combination of the binomial center distribution with the aggregation operator minmax, the best results were obtained with a value of exigency level in the range from 0.1 to 1. Choosing the exigency level equal to 1, the linear convex function is reduced by half and the number of operations is reduced too. On the other hand, the combination of the binomial center distribution with the aggregation operator product was unable to perform classification.

## **5. Conclusions**

In this Chapter, we have proposed the use of LAMDA methodology as a search algorithm for image recognition. It is important to mention that we used LAMDA algorithm both in the training phase and in the recognition phase.

The advantage of the LAMDA algorithm is its versatility which allows obtaining different versions making the combination of fuzzy probability distributions and aggregation opera‐ tors. Furthermore, it also has the possibility to vary the exigency level, and we can locate the range or the value of the exigency level where the algorithm has better results.

As it was shown in Tables 3 and 4, the search algorithm is competitive, since acceptable re‐ sults were obtained in the combinations: binomial min-max, binomial center min-max with both sets of images. As you can see the product aggregation operator was not able to per‐ form the recognition. In both combinations the exigency level was equal to 1, this fact al‐ lowed to reduce the linear convex function.

Finally, from these two combinations it is better to choose the binomial min-max, because with this combination fewer operations are performed.

## **Author details**

Juan G. Zambrano1 , E. Guzmán-Ramírez2 and Oleksiy Pogrebnyak2\*

\*Address all correspondence to: mmortari@unb.br


## **References**

nation, the best results were obtained in comparison to the combination of the binomial dis‐

Table 4 show the results obtained using the combinations: binomial min-max, binomial product, binomial center min-max and binomial center product and using the set of test im‐

For the combination of the binomial distribution with the aggregation operator min-max, the best results were obtained with a value of exigency level in the range from 0.7 to 1. With the exigency level equal to 1, the linear convex function is reduced by half thus reducing the number of operations. On the other hand, the combination of the binomial distribution with

In the combination of the binomial center distribution with the aggregation operator minmax, the best results were obtained with a value of exigency level in the range from 0.1 to 1. Choosing the exigency level equal to 1, the linear convex function is reduced by half and the number of operations is reduced too. On the other hand, the combination of the binomial center distribution with the aggregation operator product was unable to perform classification.

In this Chapter, we have proposed the use of LAMDA methodology as a search algorithm for image recognition. It is important to mention that we used LAMDA algorithm both in

The advantage of the LAMDA algorithm is its versatility which allows obtaining different versions making the combination of fuzzy probability distributions and aggregation opera‐ tors. Furthermore, it also has the possibility to vary the exigency level, and we can locate the

As it was shown in Tables 3 and 4, the search algorithm is competitive, since acceptable re‐ sults were obtained in the combinations: binomial min-max, binomial center min-max with both sets of images. As you can see the product aggregation operator was not able to per‐ form the recognition. In both combinations the exigency level was equal to 1, this fact al‐

Finally, from these two combinations it is better to choose the binomial min-max, because

and Oleksiy Pogrebnyak2\*

range or the value of the exigency level where the algorithm has better results.

the aggregation operator product was unable to perform classification.

tribution with the aggregation operator product.

18 Search Algorithms for Engineering Optimization

the training phase and in the recognition phase.

lowed to reduce the linear convex function.

with this combination fewer operations are performed.

, E. Guzmán-Ramírez2

\*Address all correspondence to: mmortari@unb.br

ages of the set-2.

**5. Conclusions**

**Author details**

Juan G. Zambrano1


[11] Bezdek, J. C., Keller, J., Krisnapuram, R., & Pal, N. (2005). Fuzzy Models and Algorithms for Pattern Recognition and Image Processing (The Handbooks of Fuzzy Sets). *Springer-Verlag New York, Inc*.

[12] Mitchell, H. B. (2005). Pattern recognition using type-II fuzzy sets. *Inf. Sciences*, 170(2-4), 409-18.

[13] Bandyopadhyay, S., & Maulik, U. (2002). Genetic clustering for automatic evolution of clusters and application to image classification. *Pattern Recognition*, 35(6), 1197-208.

[14] Bhattacharya, M., & Das, A. (2010). Genetic Algorithm Based Feature Selection In a Recognition Scheme Using Adaptive Neuro Fuzzy Techniques. *Int. Journal of Computers, Communications & Control*, 4, 458-468.

[15] Pandit, M., & Gupta, M. (2011). Image Recognition With the Help of Auto-Associative Neural Network. *International Journal of Computer Science and Security*, 5(1), 54-63.

[16] Yang, J. C., & Park, D. S. (2008). Fingerprint Verification Based on Invariant Moment Features and Nonlinear BPNN. *International Journal of Control, Automation, and Systems*, 6(6), 800-8.

[17] Radha, V., & Nallammal, N. (2011). Neural Network Based Face Recognition Using RBFN Classifier. *Proceedings of the World Congress on Engineering and Computer Science*, 1.

[18] Zeng, J., & Liu-Q, Z. (2007). Type-2 Fuzzy Sets for Pattern Recognition: The State-of-the-Art. *Journal of Uncertain Systems*, 11(3), 163-77.

[19] Sarawat Anam, Md. Shohidul Islam, M. A., Kashem, M. N., Islam, M. R., & Islam, M. S. (2009). Face Recognition Using Genetic Algorithm and Back Propagation Neural Network. *Proceedings of the International MultiConference of Engineers and Computer Scientists*, 1.

[20] Blahuta, J., Soukup, T., & Cermak, P. (2011). The image recognition of brain-stem ultrasound images using a neural network based on PCA. *IEEE International Workshop on Medical Measurements and Applications Proceedings*, 5(2), 137-42.

[21] Yu, H., Xie, T., Hamilton, M., & Wilamowski, B. (2011). Comparison of different neural network architectures for digit image recognition. *4th International Conference on Human System Interactions*, 98-103.

[22] Piera, N., Desroches, P., & Aguilar-Martin, J. (1989). LAMDA: An Incremental Conceptual Clustering Method. *LAAS Laboratoire d'Automatique et d'Analyse des Systèmes, Report* 89420, 1-21.

[23] Piera, N., & Aguilar-Martin, J. (1991). Controlling Selectivity in Nonstandard Pattern Recognition Algorithms. *IEEE Transactions on Systems, Man and Cybernetics*, 21(1), 71-82.

[24] Aguilar-Martin, J., Sarrate, R., & Waissman, J. (2001). Knowledge-based Signal Analysis and Case-based Condition Monitoring of a Machine Tool. *Joint 9th IFSA World Congress and 20th NAFIPS International Conference Proceedings*, 1, 286-91.

[25] Aguilar-Martin, J., Agell, N., Sánchez, M., & Prats, F. (2002). Analysis of Tensions in a Population Based on the Adequacy Concept. *5th Catalonian Conference on Artificial Intelligence, CCIA*, 2504, 17-28.

[26] Waissman, J., Ben-Youssef, C., & Vázquez, G. (2005). Fuzzy Automata Identification Based on Knowledge Discovery in Datasets for Supervision of a WWT Process. *3rd International Conference on Sciences of Electronic Technologies of Information and Telecommunications*.

[27] Engelbrecht, A. P. (2007). Computational Intelligence: An Introduction. *John Wiley & Sons Ltd*.

[28] Buckley, J. J. (2005). Simulating Fuzzy Systems. *Kacprzyk J, editor: Springer-Verlag Berlin Heidelberg*.

[29] Hernández, H. R. (2006). Supervision et diagnostic des procédés de production d'eau potable. *PhD thesis, l'Institut National des Sciences Appliquées de Toulouse*.

[30] Detyniecki, M. (2000). Mathematical Aggregation Operators and their Application to Video Querying. *PhD thesis, Université Pierre et Marie Curie*.

[31] Beliakov, G., Pradera, A., & Calvo, T. (2007). Aggregation Functions: A Guide for Practitioners. *Kacprzyk J, editor: Springer-Verlag Berlin Heidelberg*.

[32] Chan, M., Aguilar-Martin, J., Piera, N., Celsis, P., & Vergnes, J. (1989). Classification techniques for feature extraction in low resolution tomographic evolutives images: Application to cerebral blood flow estimation. *12th Conf. GRESTI, Groupe d'Etudes du Traitement du Signal et des Images*.

[33] Piera, N., Desroches, P., & Aguilar-Martin, J. (1990). Variation points in pattern recognition. *Pattern Recognition Letters*, 11, 519-24.

[34] Kempowsky, T. (2004). Surveillance de Procedes a Base de Methodes de Classification: Conception d'un Outil d'aide Pour la Detection et le Diagnostic des Defaillances. *PhD Thesis, l'Institut National des Sciences Appliquées de Toulouse*.

[35] Atine, J.-C., Doncescu, A., & Aguilar-Martin, J. (2005). A Fuzzy Clustering Approach for Supervision of Biological Processes by Image Processing. *EUSFLAT European Society for Fuzzy Logic and Technology*, 1057-63.

[36] Mora, J. J. (2006). Localización de fallas en sistemas de distribución de energía eléctrica usando métodos basados en el modelo y métodos basados en el conocimiento. *PhD Thesis, Universidad de Girona*.

[37] Isaza, C. V. (2007). Diagnostic par Techniques d'apprentissage Floues: Conception d'une Methode de Validation et d'Optimisation des Partitions. *PhD Thesis, l'Université de Toulouse*.

[38] Orantes, A., Kempowsky, T., Le Lann, M.-V., Prat, L., Elgue, S., Gourdon, C., & Cabassud, M. (2007). Selection of sensors by a new methodology coupling a classification technique and entropy criteria. *Chemical Engineering Research & Design Journal*, 825-38.

[39] Guzmán, E., Zambrano, J. G., García, I., & Pogrebnyak, O. (2011). LAMDA Methodology Applied to Image Vector Quantization. *Computer Recognition Systems 4*, 95, 347-56.

**Chapter 2**

## **Ant Algorithms for Adaptive Edge Detection**

Aleksandar Jevtić and Bo Li

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52792

© 2013 Jevtić and Li; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


## **1. Introduction**


Edge detection is a pre-processing step in applications of computer and robot vision. It transforms the input image to a binary image that indicates either the presence or the absence of an edge. Therefore, the edge detectors represent a special group of search algorithms with the objective of finding the pixels belonging to true edges. The search is performed following certain criteria, as the edge pixels are found in regions of an image where the distinct intensity changes or discontinuities occur (e.g. in color, gray-intensity level, texture, etc.).

In application domains such as robotics, vision-based sensors are widely used to provide information about the environment. On mobile robots, images from sensors are processed to detect and track the objects of interest and allow safe navigation. The purpose of edge detection is to segment the image in order to extract the features and objects of interest. No matter what method is applied, the objective remains the same: to change the representation of the original image into something easier to analyze. Digital images may be obtained under different lighting conditions and using different sensors. These may produce noise and deteriorate the segmentation results.

In recent years, algorithms based on swarming behavior of animal colonies in nature have been applied to edge detection. Swarm Intelligence algorithms use the bottom-up approach; the patterns that appear at the system level are the result of local interactions between its lower-level components [2]. The initial purpose of Swarm Intelligence algorithms was to solve optimization problems [7], but recent studies show they can be a useful image-processing tool. The emerging properties inherent to swarm intelligence make these algorithms adaptive to the changing image patterns. This is a useful feature for real-time image processing.

In this work, two edge-detection methods inspired by the foraging behavior of natural ant colonies are presented. Ants use pheromone trails to mark the path to the food source. In digital images, pixels define the discrete space in which the artificial ants move and the edge pixels represent the food. The edge detection operation is performed on a set of grayscale images. The first proposed method extracts the edges from the original grayscale image. The second method finds the missing broken-edge segments and can be used as a complementary tool in order to improve the edge-detection results. Finally, the study on the adaptability of the first edge detector is performed using a set of grayscale images as a dynamically changing environment.


The chapter is organized as follows. Section 2 provides an overview of the state-of-the-art edge detectors. Section 3 introduces the basic Ant System algorithm. In Section 4, the proposed Ant System-based edge detector is described; the discussion of the simulation results is also given in this section. Section 5 describes the proposed broken-edge linking algorithm and the corresponding simulation results. The study on the adaptability of the proposed Ant System-based edge detector is given in Section 6. Finally, the conclusions are drawn in Section 7.

## **2. Related work**

Edges represent important contour features in the image since they are the boundaries where distinct intensity changes or discontinuities occur. In practice, it is difficult to design an edge detector capable of finding all the true edges in an image. Edge detectors give ambiguous information about the location of object boundaries, which is why they are usually evaluated subjectively by observers [30].

Several conventional edge detection methods have been widely cited in the literature. The Prewitt operator [25] extracts contour features by fitting a Least Squares Error (LSE) quadratic surface over an image window and differentiating the fitted surface. The edge detectors proposed in [31] and [3] use local gradient operators, sometimes with additional smoothing for noise removal. The Laplacian operator [9] applies a second-order differential operator to find edge points based on the zero-crossing properties of the processed edge points.

Although conventional edge detectors usually perform linear filtering operations, various nonlinear methods have been proposed. In [23], the authors proposed an edge detection method based on Parameterized Logarithmic Image Processing (PLIP) and a four-directional Sobel detector, achieving a higher level of independence from scene illumination. In [10], an edge detector based on bilateral filtering was proposed, which achieves better performance than single Gaussian filtering. In [21], the authors proposed using Coordinate Logic Filters (CLF) to extract the edges from images; CLF constitute a class of nonlinear digital filters based on the execution of Coordinate Logic Operations (CLO). An alternative method for calculating CLF using Coordinate Logic Transforms (CLT) was introduced in [4], where the authors presented a new threshold-based technique for the detection of edges in grayscale images.

In recent years, Swarm Intelligence algorithms have shown their full potential in terms of flexibility and autonomy, especially when it comes to the design and control of complex systems that consist of a large number of agents. Metaheuristics such as Ant Colony Optimization (ACO) [6], Particle Swarm Optimization (PSO) [17] and the Bees Algorithm (BA) [24] include sets of algorithms that demonstrate emergent behavior as a result of local interactions between the members of the swarm. They tend to be decentralized, self-organized, autonomous and adaptive to changes in the environment. The adaptability and the ability to learn are very important for systems that are designed to be autonomous.

ACO is a metaheuristic that exploits the self-organizing nature of real ant colonies and their foraging behavior to solve discrete optimization problems. The learning ability, in natural and artificial ant colonies, consists in storing information about the environment by laying pheromone on the path that leads to a food source. The emerging pheromone structures serve as the swarm's external memory that can be used by any of its members. Although a single ant can only detect the local environment, the designer of a swarm-based system can observe the emergent global patterns that are a result of the cooperative behavior.

ACO algorithms have been applied to image processing. Some of the proposed applications include image retrieval [28] and image segmentation [11, 14, 18]. Several ACO-based edge detection methods have also been proposed in the literature. Among others, these include modifications of the Ant System (AS) [22] or Ant Colony System (ACS) algorithms [1, 8, 32] for a digital-image habitat, combined with local gray-intensity comparison for different pixel-neighborhood matrices. Some studies showed that improved detection can be obtained using a hybrid approach with an artificial neural network classifier [26].

In order to apply artificial ant colonies to edge detection, one needs to set the rules for local interactions between the ants and define the "food" that the ants will search for. For the edge detection problem, the food is represented by the edge pixels in digital images.

## **3. Ant System algorithm**


Artificial ants, unlike their biological counterparts, move through a discrete environment defined with nodes, and they have memory. When traversing from one node to another, ants leave pheromone trails on the edges connecting the nodes. The pheromone trails attract other ants that lay more pheromone, which consequently leads to pheromone trail accumulation. Negative feedback is applied through pheromone evaporation that, importantly, restrains the ants from taking the same route and allows continuous search for better solutions.

Ant System (AS) is the first ACO algorithm proposed in the literature, and it was initially applied to the Travelling Salesman Problem (TSP) [5]. A general definition of the TSP is the following: for a given set of cities with known distances between them, the goal is to find the shortest tour that visits each city once and only once. In more formal terms, the goal is to find the Hamiltonian tour of minimal length on a fully connected graph.

AS consists of a colony of artificial ants that move between the nodes (cities) in search of the minimal route. The probability of displacing the *k*th ant from node *i* to node *j* is given by:

$$p\_{ij}^k = \begin{cases} \frac{(\tau\_{ij})^\alpha (\eta\_{ij})^\beta}{\sum\_{h \notin \text{tabu}\_k} (\tau\_{ih})^\alpha (\eta\_{ih})^\beta} & \text{if } j \notin \text{tabu}\_k \\\\ 0 & \text{otherwise} \end{cases} \tag{1}$$

where *τij* and *ηij* are the intensity of the pheromone trail on edge (*i*, *j*) and the visibility of node *j* from node *i*, respectively, and *α* and *β* are control parameters (*α*, *β* > 0; *α*, *β* ∈ ℜ). The tabu*<sup>k</sup>* list contains the nodes that have already been visited by the *k*th ant. The definition of the node's visibility is application-related, and for the TSP it is set to be inversely proportional to the Euclidean distance between the nodes:

$$
\eta\_{ij} = \frac{1}{d\_{ij}} \tag{2}
$$

It can be concluded from the equations (1) and (2) that the ants favor the edges that are shorter and contain a higher concentration of pheromone.
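The transition rule of equations (1) and (2) can be illustrated with a short Python sketch. This is our own illustrative code, not from the chapter; the function names, the toy three-city instance, and the parameter values are all our assumptions.

```python
import random

def transition_probabilities(tau, eta, visited, i, alpha=1.0, beta=2.0):
    """Eq. (1): probability p_ij^k for an ant at node i.

    tau, eta: dicts mapping directed edges (i, j) to pheromone intensity
    and visibility (Eq. (2): eta_ij = 1 / d_ij); visited: the tabu list.
    """
    candidates = [j for (a, j) in tau if a == i and j not in visited]
    weights = {j: (tau[(i, j)] ** alpha) * (eta[(i, j)] ** beta)
               for j in candidates}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def choose_next(tau, eta, visited, i):
    """Roulette-wheel selection of the next node according to Eq. (1)."""
    probs = transition_probabilities(tau, eta, visited, i)
    nodes, p = zip(*probs.items())
    return random.choices(nodes, weights=p)[0]

# Toy 3-city instance: distances d_ij define the visibility eta_ij = 1/d_ij.
d = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 1.0}
d.update({(j, i): v for (i, j), v in d.items()})   # symmetric distances
tau = {e: 1.0 for e in d}                          # uniform initial pheromone
eta = {e: 1.0 / v for e, v in d.items()}
print(transition_probabilities(tau, eta, visited={0}, i=0))
```

With uniform pheromone, the closer city receives the higher probability, as expected from the visibility term.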

AS is performed in iterations. At the end of each iteration, pheromone values are updated by all the ants that have built a solution in the iteration itself. The pheromone update rule is described with the following equation:

$$
\tau\_{ij(new)} = (1 - \rho)\tau\_{ij(old)} + \sum\_{k=1}^{m} \Delta \tau\_{ij}^k \tag{3}
$$


where *ρ* is the pheromone evaporation rate (0 < *ρ* < 1, *ρ* ∈ ℜ), *m* is the number of ants in the colony, and Δ*τij<sup>k</sup>* is the amount of pheromone laid on edge (*i*, *j*) by the *k*th ant, given by:

$$
\Delta \tau\_{ij}^k = \begin{cases}
\frac{Q}{L\_k} & \text{if edge } (i, j) \text{ is traversed by the } k\text{th ant} \\
0 & \text{otherwise}
\end{cases}
\tag{4}
$$

where *Lk* is the length of the tour found by the *k*th ant, and *Q* is a scaling constant (*Q* > 0, *Q* ∈ ℜ).

The algorithm stops when the satisfactory solution is found or when the maximum number of iterations is reached.
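The per-iteration pheromone update of equations (3) and (4) can be sketched as follows. This is a minimal illustration of ours (the chapter gives no code); the data layout of `tau` and `tours` is an assumption.

```python
def update_pheromone(tau, tours, rho=0.5, Q=1.0):
    """Eq. (3): evaporate all trails, then add the deposits of Eq. (4).

    tau: dict mapping edges to pheromone intensity.
    tours: one (edge_list, tour_length) pair per ant in the colony.
    """
    for e in tau:                    # evaporation: tau_new = (1 - rho) * tau_old
        tau[e] *= (1.0 - rho)
    for edges, L_k in tours:         # each ant deposits Q / L_k on its tour
        for e in edges:
            tau[e] += Q / L_k
    return tau

# One ant that completed the triangular tour 0 -> 1 -> 2 -> 0 of length 4:
tau = {(0, 1): 1.0, (1, 2): 1.0, (2, 0): 1.0}
tours = [([(0, 1), (1, 2), (2, 0)], 4.0)]
tau = update_pheromone(tau, tours, rho=0.5, Q=1.0)
# each traversed edge: 1.0 * (1 - 0.5) + 1.0 / 4.0 = 0.75
```

Shorter tours deposit more pheromone per edge, which biases the next iteration's transition probabilities (Eq. (1)) toward better solutions.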

## **4. Ant System-based edge detector**

In this section, the AS-based edge detector proposed by [15] is described. The method generates a set of images from the original grayscale image using a nonlinear image enhancement technique called Multiscale Adaptive Gain [19], and then the modified AS algorithm is applied to detect the edges on each of the extracted images. The resulting set of pheromone-trail matrices is summed to produce the output image. Threshold and edge thinning, which are optional steps, are finally applied to obtain a binary edge image. The block diagram of the proposed method is shown in Figure 1.

#### **4.1. Multiscale Adaptive Gain**

Image enhancement techniques emphasize important features in the image while reducing the noise. Multiscale Adaptive Gain is applied to obtain contrast enhancement by suppressing pixels with the grey intensity values of very small amplitude and enhancing only those pixels with values larger than a certain threshold within each level of the transform space. The nonlinear operation is described with the following equation:

$$G(I) = A\left[\operatorname{sigm}(k(I - B)) - \operatorname{sigm}(-k(I + B))\right] \tag{5}$$

where


$$A = \frac{1}{\operatorname{sign}(k(1 - B)) - \operatorname{sign}(-k(1 + B))}\tag{6}$$

where *I* = *I*(*i*, *j*) is the grey value of the pixel at (*i*, *j*) of the input image and *sigm*(*x*) is defined as

$$\text{sign}(\mathbf{x}) = \frac{1}{1 + e^{-\mathbf{x}}} \tag{7}$$

and *B* and *k* control the threshold and rate of enhancement, respectively. (0 < *B* < 1, *B* ∈ ℜ; *k* ∈ ℵ). The transformation function (5) relative to the original image pixel values is shown in Figure 2. It can be observed that *G*(*I*) is continuous and monotonically increasing; therefore, the enhancement will not introduce new discontinuities into the reconstructed image.
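As a concrete illustration, equations (5)-(7) fit in a few lines. This is a minimal sketch assuming the input intensities have already been scaled to the [-1, 1] range shown in Figure 2; the function names are ours, not the chapter's:

```python
import numpy as np

def sigm(x):
    """Sigmoid function of equation (7)."""
    return 1.0 / (1.0 + np.exp(-x))

def multiscale_adaptive_gain(I, B=0.45, k=20):
    """Nonlinear enhancement G(I) of equations (5)-(6).

    `I` is a scalar or array scaled to [-1, 1]; `B` (0 < B < 1) sets the
    enhancement threshold and `k` the rate of enhancement. The constant
    `A` normalises the output so that G(1) = 1 and G(-1) = -1.
    """
    A = 1.0 / (sigm(k * (1 - B)) - sigm(-k * (1 + B)))
    return A * (sigm(k * (I - B)) - sigm(-k * (I + B)))
```

Because *G* is continuous and strictly increasing, applying it to an image reorders no intensities; it only stretches the contrast around the threshold *B*.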

### **4.2. Ant System algorithm for edge detection**

The generic Ant System algorithm described in Section 3 was used as a base for the proposed edge detector. In digital images, the discrete environment in which the ants move is defined by the pixels, i.e. their gray-intensity values, 0 ≤ *I*(*i*, *j*) ≤ *Imax*, *i* = 1, 2, . . . , *N*; *j* = 1, 2, . . . , *M*. The ant's possible moves to the neighboring pixels are shown in Figure 3.

**Figure 1.** Block diagram of the proposed edge detection method

**Figure 2.** Transformation function *G*(*I*) in respect to the original image pixel values: (a) *B* = 0.45; *k* = 10, 20 and 40; (b) *B* = 0.2, 0.45 and 0.7; *k* = 20.

Unlike the cities' visibility in the TSP, the visibility of the pixel at (*i*, *j*) is defined as follows:

$$\eta\_{\rm ij} = \frac{1}{I\_{\rm max}} \cdot \max\begin{bmatrix} \left| I(i-1, j-1) - I(i+1, j+1) \right| \\ \left| I(i-1, j+1) - I(i+1, j-1) \right| \\ \left| I(i, j-1) - I(i, j+1) \right| \\ \left| I(i-1, j) - I(i+1, j) \right| \end{bmatrix} \tag{8}$$


Ant Algorithms for Adaptive Edge Detection http://dx.doi.org/10.5772/52792


where *Imax* is the maximum gray-intensity value in the image (0 ≤ *Imax* ≤ 255). Higher visibility values are obtained for pixels in regions of distinct gray-intensity changes, which makes those pixels more attractive to the ants.
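Equation (8) translates directly into code. The sketch below loops over interior pixels only; how the chapter treats border pixels is not specified, so they are left at zero visibility here:

```python
import numpy as np

def visibility(I):
    """Visibility matrix of equation (8): for each interior pixel, the
    largest absolute intensity difference over the four pixel pairs
    straddling (i, j), normalised by the maximum intensity Imax."""
    I = I.astype(float)
    eta = np.zeros_like(I)
    Imax = I.max() or 1.0          # guard against an all-zero image
    for i in range(1, I.shape[0] - 1):
        for j in range(1, I.shape[1] - 1):
            eta[i, j] = max(
                abs(I[i - 1, j - 1] - I[i + 1, j + 1]),
                abs(I[i - 1, j + 1] - I[i + 1, j - 1]),
                abs(I[i, j - 1] - I[i, j + 1]),
                abs(I[i - 1, j] - I[i + 1, j]),
            ) / Imax
    return eta
```

The same four-pair formula reappears in Section 5 as the *grayscale visibility* of equation (13), so this helper serves both detectors.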

The AS algorithm is an iterative process which includes the following steps:

1. Initialization: the number of ants, proportional to √(*N* · *M*), is randomly distributed on the pixels in the image. Only one ant is allowed to reside on a pixel within the same iteration. An initial non-zero pheromone trail value, *τ*<sub>0</sub>, is assigned to each pixel; otherwise the ants would never start the search.

**Figure 3.** Proposed pixel transition model

2. Pixel transition rule: Unlike their biological counterparts, artificial ants have memory. Tabu*<sup>k</sup>* represents the list of pixels that the *k*th ant has already visited. If an ant is surrounded by pixels that are either in the tabu list or occupied by other ants, it is randomly displaced to another unoccupied pixel that is not in the tabu list. Otherwise, the displacement probability of the *k*th ant to a neighboring pixel (*i*, *j*) is given by:

$$p_{(i,j)}^{k} = \begin{cases} \dfrac{(\tau_{ij})^{\alpha}(\eta_{ij})^{\beta}}{\sum_{u}\sum_{v}(\tau_{uv})^{\alpha}(\eta_{uv})^{\beta}} & (i,j) \text{ and } (u,v) \text{ are allowed nodes} \\ 0 & \text{otherwise} \end{cases} \tag{9}$$

where *τij* and *ηij* are the intensity of the pheromone trail and the visibility of the pixel at (*i*, *j*), respectively, and *α* and *β* are control parameters (*α*, *β* > 0; *α*, *β* ∈ ℜ).

3. Pheromone update rule: Negative feedback is implemented through pheromone evaporation according to:

$$
\tau\_{\rm ij(new)} = (1 - \rho)\tau\_{\rm ij(old)} + \Delta\tau\_{\rm ij} \tag{10}
$$

where


$$
\Delta \tau\_{ij} = \sum\_{k=1}^{m} \Delta \tau\_{ij}^{k} \tag{11}
$$

and

$$\Delta\tau_{ij}^{k} = \begin{cases} \eta_{ij} & \text{if } \eta_{ij} \ge T \text{ and the } k\text{th ant displaces to pixel } (i,j) \\ 0 & \text{otherwise} \end{cases} \tag{12}$$

*T* is a threshold value which prevents ants from staying on the background pixels, hence enforcing the search for the true edges. The existence of the pheromone evaporation rate, *ρ*, prevents algorithm stagnation. The pheromone trail evaporates exponentially from the repeatedly non-visited pixels.

4. Stopping criterion: Steps 2 and 3 are repeated in a loop, and the algorithm stops when the maximum number of iterations is reached.
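Steps 1-4 can be condensed into a runnable sketch. This is an illustration, not the authors' code: the relocation of trapped ants and the handling of all-zero weights are our simplifications, and the parameter defaults are the values quoted later in Section 4.3:

```python
import numpy as np

def as_edge_detect(eta, n_ants=None, n_iters=100, tau0=0.01, alpha=1.0,
                   beta=10.0, rho=0.05, T=0.08, tabu_len=10, seed=None):
    """Sketch of steps 1-4. `eta` is the visibility matrix of
    equation (8); the returned matrix is the pheromone trail."""
    rng = np.random.default_rng(seed)
    N, M = eta.shape
    if n_ants is None:
        n_ants = int(np.sqrt(N * M))          # step 1: colony size
    tau = np.full((N, M), tau0)               # step 1: initial pheromone
    flat = rng.choice(N * M, size=n_ants, replace=False)
    ants = [(int(p) // M, int(p) % M) for p in flat]
    tabu = [[pos] for pos in ants]
    for _ in range(n_iters):
        delta = np.zeros_like(tau)
        for k, (i, j) in enumerate(ants):
            # step 2: free 8-neighbours not in this ant's tabu list
            nbrs = [(u, v)
                    for u in range(max(i - 1, 0), min(i + 2, N))
                    for v in range(max(j - 1, 0), min(j + 2, M))
                    if (u, v) != (i, j) and (u, v) not in tabu[k]
                    and (u, v) not in ants]
            if not nbrs:   # trapped: random relocation (simplified here)
                i, j = int(rng.integers(N)), int(rng.integers(M))
            else:
                w = np.array([tau[u, v] ** alpha * eta[u, v] ** beta
                              for u, v in nbrs])
                if w.sum() > 0:               # transition rule, eq. (9)
                    i, j = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
                else:                         # all-zero weights: wander
                    i, j = nbrs[rng.integers(len(nbrs))]
                if eta[i, j] >= T:            # step 3 deposit, eq. (12)
                    delta[i, j] += eta[i, j]
            ants[k] = (i, j)
            tabu[k] = (tabu[k] + [(i, j)])[-tabu_len:]
        tau = (1.0 - rho) * tau + delta       # update, eqs. (10)-(11)
    return tau
```

With the visibility of equation (8) as `eta`, the returned `tau` plays the role of one pheromone-trail matrix in the Figure 1 pipeline.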

### **4.3. Simulation results and discussion**

The proposed method was tested on four different grayscale images of 256 × 256 pixels resolution: "Cameraman", "Lena", "House" and "Peppers". As seen from the block diagram in Figure 1, first the Multiscale Adaptive Gain defined in (5) is applied to the input image: 0 ≤ *I*(*i*, *j*) ≤ *Imax*, *i* = 1, 2, . . . , *N*; *j* = 1, 2, . . . , *M*. (*N* = *M* = 256.) The values of *B* and *k* were varied to obtain a set of nine enhanced images: *B* = {0.2, 0.45, 0.7}; *k* = {10, 20, 40}.


**Figure 4.** Effects of the transformation function *G*(*I*); "Cameraman", 256 × 256 pixels: (a) original image; (b) *B* = 0.2, *k* = 10; (c) *B* = 0.45, *k* = 20; (d) *B* = 0.7, *k* = 40.

The effects of the transformation function on the image "Cameraman" are shown in Figure 4. It can be observed that, by changing the transformation function's parameters, some features in the image become highlighted while others get attenuated.

Afterwards, the AS-based edge detector is applied to each of the nine enhanced images. The algorithm's parameters are set as proposed in [22]: *τ*<sub>0</sub> = 0.01, *α* = 1, *β* = 10, *ρ* = 0.05 and *T* = 0.08. The number of ants, equal to √(*N* · *M*) = 256, was randomly distributed over the pixels in the image with the condition that no two ants were placed on the same pixel. The memory (tabu list) length for each ant was set to 10. The algorithm was stopped after 100 iterations, generating a pheromone-trail matrix of the same resolution as the original image. After each of the nine enhanced images was processed, the sum of the pheromone-trail matrices produced the final pheromone-trail image (Figure 5(e)–(h)). The parameter values, such as the number of ants, the memory length, and the number of iterations, were obtained by trial and error, and their further optimization will be a part of future work.
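The pipeline of Figure 1 (nine enhancements, one detector run each, summed trails) can be outlined as follows. The names are illustrative, and `detect` stands in for the AS-based detector; any callable returning a matrix of the image's size exercises the same plumbing:

```python
import itertools
import numpy as np

def run_pipeline(img, detect):
    """Sketch of the Figure 1 pipeline: enhance the input with the nine
    (B, k) settings B = {0.2, 0.45, 0.7}, k = {10, 20, 40}, run an edge
    detector on each enhanced image, and sum the resulting
    pheromone-trail matrices."""
    def sigm(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gain(I, B, k):                         # equations (5)-(6)
        A = 1.0 / (sigm(k * (1 - B)) - sigm(-k * (1 + B)))
        return A * (sigm(k * (I - B)) - sigm(-k * (I + B)))

    total = np.zeros_like(img, dtype=float)
    for B, k in itertools.product((0.2, 0.45, 0.7), (10, 20, 40)):
        total += detect(gain(img, B, k))
    return total
```

For example, `run_pipeline(img, np.abs)` substitutes a trivial detector just to check the shapes flow through; the chapter's detector would be plugged in at `detect`.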

The effectiveness of the proposed method was compared with the ant-based edge detectors proposed by Tian *et al.* [32] and Nezamabadi-pour *et al.* [22]; the results are shown in Figure 6. To provide a fair comparison, the threshold and morphological edge-thinning operations were omitted. The simulation results show that the proposed method outperforms the other two methods in terms of the visual quality of the extracted edge information and sensitivity to weaker edges. The qualitative results of the edge detector proposed in [8] were presented after applying the thinning step, hence a fair comparison with the results presented here could not be made. It is worth mentioning that the number of iterations used in the experiments in [8] (1000 iterations) was much higher than required by our algorithm. The performance evaluation is given in Subsection 4.3.1.

The main contribution of the proposed edge-detection method is the preprocessing step and the parallel execution of the Ant System-based edge detector on a set of images that finally produces the output edge image. The execution time of the proposed method is too high for real-time image processing, which would require additional optimization of the algorithm's code in a different programming environment. The presented experiments were performed in Matlab, which offers an easy high-level implementation but is inefficient in terms of speed.

**Figure 5.** Qualitative results of the proposed method, 256 × 256 pixel images: (a) "Cameraman" original image; (b) "House" original image; (c) "Lena" original image; (d) "Peppers" original image; (e) "Cameraman" pheromone trail image; (f) "House" pheromone trail image; (g) "Lena" pheromone trail image; (h) "Peppers" pheromone trail image.

#### *4.3.1. Performance evaluation*


In the complexity-performance trade-off, it was found that varying the values of the algorithm's parameters can affect its performance. A set of experiments was performed on a synthetic test image (Figure 7) to show how the number of ants and iterations relate to the number of detected edge points. The results of this analysis are shown in Figure 8. The number of ants is proportional to the square root of the image resolution, *n* = √(*N* · *M*). The number of edge points was 780.

It can be observed that when the number of ants was increased, the number of iterations required to achieve a similar performance was reduced. Figure 8 shows that the algorithm needs more than 130 iterations to reach good performance when the number of ants was set to 1 · *n*. However, the results not presented here showed that the algorithm was able to detect the maximal number of edge pixels after 400 iterations. Future work may include the optimization of parameters with respect to the computation time.

**Figure 6.** Comparative results with other ant-based edge detectors, "Lena" 256 × 256 pixels: (a) original image; (b) Tian *et al.*; (c) Nezamabadi-pour *et al.*; (d) the proposed method.

**Figure 7.** Test image, 256 × 256 pixels: (a) original image; (b) ground-truth edge image.

**Figure 8.** Extracted features vs. number of iterations for different ant-colony size.

## **5. Ant System-based broken-edge linking algorithm**

Conventional image edge detection always results in missing edge segments. Broken-edge linking is an improvement technique that is complementary to edge detection. It is used to connect the broken edges in order to form the closed contours that separate the regions of interest. The detection of the missing edge segments is a challenging task. A missing segment is sought between two endpoints where the edge is broken. The noise that is present in the original image may limit the performance of edge-linking algorithms.

Many broken-edge linking techniques have been proposed to compensate for the edges that are not fully connected by the conventional edge detectors. [16] applied morphological image enhancement techniques to detect and preserve thin-edge features in the low-contrast regions of an image. [33] applied the Sequential Edge-Linking (SEL) algorithm that provided full connectivity of the edges, but for a rather simplified two-region edge-detection problem. The authors proposed this method to extract the contour of a breast as the region of interest in a mammogram. [29] applied adaptive structuring elements to dilate the broken edges along their slope directions. [20] proposed an improvement to the traditional Ant Colony Optimization (ACO) based method for broken-edge linking to reduce the computational cost.


In this section, the Ant System-based broken-edge linking algorithm proposed by [13] is presented. Two inputs are used: the Sobel edge image and the original grayscale image. The Sobel edge image is a binary image obtained after applying the Sobel edge operator [31] to the original grayscale image. The endpoints extracted from this image will later be used as the starting pixels for the ants' routes.

The original image is used to produce the *grayscale visibility* matrix, which for the pixel at (*i*, *j*) is calculated as follows:

$$\xi_{ij} = \frac{1}{I_{max}} \cdot \max \begin{bmatrix} \left| I(i-1, j-1) - I(i+1, j+1) \right| \\ \left| I(i-1, j+1) - I(i+1, j-1) \right| \\ \left| I(i, j-1) - I(i, j+1) \right| \\ \left| I(i-1, j) - I(i+1, j) \right| \end{bmatrix} \tag{13}$$

where *Imax* is the maximum gray value in the image, so *ξij* is normalized (0 ≤ *ξij* ≤ 1). For the pixels in regions of distinct gray intensity changes the higher values are obtained. The matrix of *grayscale visibility* will be the initial pheromone trail matrix. It is also used to calculate the fitness value of a route chosen by ant. The resulting image will contain the routes (connecting edges) with the highest fitness values found as optimal routes between the endpoints. In order to discard non-optimal routes, a fitness threshold is applied. Finally, the output image is the improved image that is a sum of the Sobel edge image and the connecting edges. The block diagram of the proposed method is shown in Figure 9.
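The final composition described above can be sketched as follows. Reading the "sum" of the binary Sobel image and the connecting edges as a logical OR is our assumption, and the function and argument names are illustrative:

```python
import numpy as np

def compose_output(sobel_edges, routes, fitness, f_threshold):
    """Sketch of the composition step: routes whose fitness clears the
    threshold are rasterised as connecting edges and OR-ed with the
    binary Sobel edge image. `routes` is a list of pixel-coordinate
    lists; `fitness` maps a route to its score, as in equation (19)."""
    out = sobel_edges.astype(bool).copy()
    for route in routes:
        if fitness(route) >= f_threshold:
            for i, j in route:
                out[i, j] = True
    return out
```

Any scoring callable fits the `fitness` slot; passing `len`, for instance, would keep only routes above a minimum length rather than the chapter's fitness of equation (19).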

The proposed AS-based algorithm for broken-edge linking includes the following steps:

1. Initialization: The number of ants equals the number of endpoints found in the Sobel edge image, and each endpoint will be the starting pixel of a different ant. The initial pheromone trail for each pixel is set to its *grayscale visibility* value.

2. Pixel transition rule: The possible transitions of an ant to the neighboring pixels are defined by the 8-connection pixel transition model shown in Figure 3. The admissible neighboring pixels for the *k*th ant to move to are the ones not in the tabu*<sup>k</sup>* list. The probability for the *k*th ant to move from pixel (*r*, *s*) to pixel (*i*, *j*) is calculated as follows:

$$p_{(r,s)(i,j)}^{k} = \begin{cases} \dfrac{(\tau_{ij})^{\alpha}(\eta_{ij})^{\beta}}{\sum_{u}\sum_{v}(\tau_{uv})^{\alpha}(\eta_{uv})^{\beta}} & \text{if } (i,j) \text{ and } (u,v) \notin \text{tabu}_k, \; r-1 \le i,u \le r+1, \; s-1 \le j,v \le s+1 \\ 0 & \text{otherwise} \end{cases} \tag{14}$$

**Figure 9.** Block diagram of the proposed edge linking method

where *τij* and *ηij* are the intensity of the pheromone trail and the visibility of the pixel at (*i*, *j*), respectively, and *α* and *β* are control parameters (*α*, *β* > 0; *α*, *β* ∈ ℜ). The visibility of a pixel should not be misinterpreted as its *grayscale visibility*, and for the pixel at (*i*, *j*) it is defined as:

$$
\eta\_{\rm ij} = \frac{1}{d\_{\rm ij}} \tag{15}
$$


where *dij* is the Euclidean distance of the pixel at (*i*, *j*) from the closest endpoint.

3. Pheromone update rule: Negative feedback is implemented through pheromone evaporation according to:

$$\tau_{ij}(new) = (1 - \rho)\tau_{ij}(old) + \Delta\tau_{ij} \tag{16}$$

where *ρ* is the pheromone evaporation rate (0 < *ρ* < 1; *ρ* ∈ ℜ), and

$$
\Delta \tau\_{ij} = \sum\_{k=1}^{m} \Delta \tau\_{ij}^{k} \tag{17}
$$

where

$$\Delta\tau_{ij}^{k} = \begin{cases} \dfrac{f_k}{Q} & \text{if the } k\text{th ant displaces to pixel } (i,j) \\ 0 & \text{otherwise} \end{cases} \tag{18}$$

**Figure 10.** Qualitative results of the proposed edge-linking method, "Peppers" 256 × 256 pixels: (a) original image; (b) Sobel edge image; (c) resulting image of the proposed method; (d) improved edge image.

The fitness value of a pixel, *fk*, is equal to the fitness value of the route it belongs to. The proposed fitness function is given by:

$$f_k = \frac{\bar{\xi}}{\sigma_{\xi} \cdot N_p} \tag{19}$$

where *ξ̄* and *σ<sub>ξ</sub>* are the mean value and the standard deviation of the *grayscale visibility* of the pixels in the route, and *N<sub>p</sub>* is the total number of pixels belonging to that route. Pheromone evaporation prevents algorithm stagnation; the pheromone trail evaporates exponentially from the repeatedly non-visited pixels.

4. Stopping criterion: Steps 2 and 3 are repeated in a loop, and the algorithm stops when the maximum number of iterations is reached. An iteration ends when all the ants finish the search for the endpoints, by either finding one or getting stuck and being unable to advance to any adjacent pixel.
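The four steps, together with the fitness of equation (19), can be sketched as follows. This is a deliberately simplified single-pass illustration: one ant per endpoint, no pheromone update between iterations (equations (16)-(18) are omitted), and fitness thresholding of the returned routes is left to the caller:

```python
import numpy as np

def route_fitness(route_visibility):
    """Fitness of a route, equation (19). The small epsilon guarding the
    zero-deviation case of a perfectly uniform route is a choice of this
    sketch; the chapter does not address it."""
    xs = np.asarray(route_visibility, dtype=float)
    return xs.mean() / ((xs.std() + 1e-12) * xs.size)

def link_edges(xi, endpoints, alpha=10.0, beta=1.0, max_steps=50, seed=None):
    """Sketch of the linking loop: one ant per endpoint (step 1) walks by
    the transition rule of equation (14), steered by the inverse-distance
    visibility of equation (15), until it reaches another endpoint or
    gets stuck (step 4)."""
    rng = np.random.default_rng(seed)
    N, M = xi.shape
    tau = xi.astype(float)           # initial pheromone = grayscale visibility
    routes = []
    for start in endpoints:
        others = [e for e in endpoints if e != start]
        pos, route, tabu = start, [start], {start}
        for _ in range(max_steps):
            i, j = pos
            nbrs = [(u, v)
                    for u in range(max(i - 1, 0), min(i + 2, N))
                    for v in range(max(j - 1, 0), min(j + 2, M))
                    if (u, v) != (i, j) and (u, v) not in tabu]
            if not nbrs:
                break                # stuck: the ant gives up
            # eq. (15): 1 / Euclidean distance to the closest endpoint
            eta = [1.0 / max(min(np.hypot(u - r, v - s) for r, s in others), 1.0)
                   for u, v in nbrs]
            w = np.array([tau[u, v] ** alpha * e ** beta
                          for (u, v), e in zip(nbrs, eta)])
            if w.sum() > 0:          # transition rule, eq. (14)
                pos = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
            else:
                pos = nbrs[rng.integers(len(nbrs))]
            route.append(pos)
            tabu.add(pos)
            if pos in others:        # reached another endpoint
                routes.append(route)
                break
    return routes
```

Scoring each returned route with `route_fitness` over the grayscale visibilities of its pixels, and keeping only routes above a threshold, reproduces the selection step described above.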

### **5.1. Simulation results and discussion**


The simulation results of the proposed algorithm applied to the "Peppers" image of 256 × 256 pixels are shown in Figure 10. The algorithm detects the missing edge segments (Figure 10(c)) as optimal routes consisting of edge pixels. The initial pheromone trail for each pixel was set to its *grayscale visibility* value. In this manner, the pixels belonging to true edges have a higher probability of being chosen by ants on their initial routes, which shortens the time needed to find a satisfactory solution, or improves the solution found for a fixed number of iterations. The results were obtained after 100 iterations; this number was chosen on a trial-and-error basis.

The designated values *α* = 10 and *β* = 1 were determined on a trial-and-error basis. A large *α*/*β* ratio forces the ants to choose the strongest edges. The existence of the control parameter *β* is important since it inclines the ant's route towards the closest endpoint. Experimental results showed that, by setting the *β* value to zero, it took more steps for the ants to find the endpoints, which made the computation time longer. In some cases, the ants were not even able to find a satisfactory solution in a reasonable number of steps, or they simply got stuck between already visited pixels.

The effect of the *α*/*β* parameter ratio on the resulting image is best presented in Figure 11. It can be observed that the endpoint in the upper-left corner of the ROI image (Figure 11(c)–(e)) was not connected to any of the closer endpoints, and that the ants successfully found the more remote endpoint which was the correct one. The existence of the *β* parameter keeps the ants away from the low-contrast regions, such as the region of low gray-intensity pixels between two closer endpoints.

Ant Algorithms for Adaptive Edge Detection http://dx.doi.org/10.5772/52792 37

**Figure 11.** Effect of the control parameters on correct connection of the endpoints: "Peppers" 256 × 256 image: (a) original image with marked region of interest (ROI); (b) Sobel edge image with marked ROI; (c) enlarged ROI: Sobel edge image; (d) enlarged ROI: pheromone trails image; (e) enlarged ROI: improved edge image; (f) improved edge image with marked ROI.


The ant's memory, i.e. the length of the tabu list, was set to 10. A larger memory would improve the quality of the resulting binary image, but would also prolong the computation time. The designated value was large enough to keep the ants from being stuck in small pixel circles.
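A tabu list of this kind can be sketched as a fixed-length queue (an illustrative Python sketch, not the authors' implementation):

```python
from collections import deque

class AntMemory:
    """Fixed-length tabu list: the last `size` visited pixels are
    excluded from the candidate moves, which keeps an ant from
    circling inside small pixel loops."""
    def __init__(self, size=10):
        self.visited = deque(maxlen=size)  # oldest entries drop out

    def visit(self, pixel):
        self.visited.append(pixel)

    def allowed(self, candidates):
        # Candidate moves still permitted under the tabu rule.
        return [p for p in candidates if p not in self.visited]
```

Because the deque has a fixed `maxlen`, a pixel becomes eligible again once it falls out of the ant's memory, so the list bounds memory use as well as loop length.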

The fitness value of a route depends on the mean value and the standard deviation of the grayscale visibility of the pixels in the route, and on the total number of pixels belonging to that route, as defined in (19). Routes with a higher mean grayscale visibility are the stronger edges, as the gray-level contrast of their adjacent pixels is higher. A smaller standard deviation of the grayscale visibility of the pixels in the route results in a higher fitness value; this gives more importance to routes consisting of pixels that belong to the same edge, and thus keeps the ants from crossing between edges and leaving pheromone trails on non-edge pixels. Finally, shorter routes are more favorable as a solution, so keeping the total number of pixels in the route small yields higher fitness values.
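Equation (19) is not reproduced here, but a fitness of the described shape, growing with the mean visibility and shrinking with its standard deviation and the route length, can be sketched as follows (one plausible form; the exact weighting in (19) may differ):

```python
import numpy as np

def route_fitness(route_visibility):
    # Higher mean grayscale visibility -> stronger edge -> higher fitness.
    # Higher standard deviation or longer route -> lower fitness.
    v = np.asarray(route_visibility, dtype=float)
    return v.mean() / ((1.0 + v.std()) * len(v))
```

Under this form, a short uniform high-contrast route scores above a long route that alternates between strong and weak pixels, matching the qualitative behavior described above.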

The number of iterations was set to 100, which gave satisfactory results within an acceptable computation time. Lower-resolution images, for example 128 × 128 pixels, allowed a larger number of iterations to be used, since a smaller number of ants was processed for a smaller number of relatively close endpoints. The execution time of the algorithm was not optimal and was measured in minutes. One reason is that the algorithm code was not written in an optimal manner, since Matlab as a programming environment is intended for easy high-level algorithm implementation rather than for fast code execution.

In order to test the proposed method on different input images, simulations were performed on "House", "Lena" and "Cameraman" images of size 256 × 256 pixels. The results confirm the effectiveness of the method, as shown in Figure 12. It can be noticed that the found edge segments are often not unidirectional, which indicates that the fitness function was adequately defined and the ants found the true edges. The main contribution of the proposed broken-edge-linking method is in using a bottom-up approach that avoids using a global threshold to find the missing segments.

## **6. Adaptability of the proposed edge detector**

The adaptability and the ability to learn are important features of autonomous systems. In ant colonies, natural and artificial, learning consists in changing the environment by laying pheromone trails while searching for food. The structures that emerge from the accumulated pheromone represent the stored information about the environment that can be used by any member of the swarm. Although a single ant has no knowledge of the global pattern, the designer of such a swarm-based system is a privileged observer of the emergence that comes as a result of the cooperative behavior.


**Figure 12.** Qualitative results of the proposed method, 256 × 256-pixel images: (a) "House" original image; (b) "House": Sobel edge image; (c) "House": result of the proposed method; (d) "House": improved edge image; (e) "Lena": original image; (f) "Lena": Sobel edge image; (g) "Lena": result of the proposed method; (h) "Lena": improved edge image; (i) "Cameraman": original image; (j) "Cameraman": Sobel edge image; (k) "Cameraman": result of the proposed method; (l) "Cameraman": improved edge image.

The resulting mass behavior in swarms is hard to predict. Although adaptability can be demonstrated in a variety of applications, such as image segmentation [27], a general theoretical framework for the design and control of swarms does not exist. Artificial swarms use a bottom-up approach, meaning that the designer of such a distributed multi-agent system needs to set the rules for local interactions between the agents themselves and, if required, between the agents and the environment. The indirect communication via the environment is referred to as *stigmergy*; in the case of ant colonies, it consists in pheromone-laying and pheromone-following. For each specific application, the food that the ants search for must also be defined.

This section presents a study on the adaptability of the algorithm proposed in Section 4 [12]. Experiments with two different sets of grayscale images were performed. In the first experimental setup, a set of three different grayscale images was used to test the adaptability of the proposed AS-based edge detector. The images were obtained by applying a Multiscale Adaptive Gain contrast enhancement to the 256 × 256 pixel "Cameraman" image (see Figure 4). Every *Ni* = 100 iterations, one image from the set was replaced by another. The response of the artificial ant colony to the change in the environment was a different distribution of pheromone trails; 100 iterations per image were enough for the new pheromone structure to be established. The algorithm parameters used in the experiments were determined empirically: *τ*<sup>0</sup> = 0.01, *ρ* = 0.5, *α* = 1, *β* = 10, *T* = 0.08, and the tabu list length was set to 10. The parameters could be optimized for better edge detection, but that is of no importance for this study, since every image change would result in a change of the pheromone trail structure regardless. Simulation results are shown in Figure 13.

**Figure 13.** Adaptive edge detection on enhanced "Cameraman" images, 256 × 256 pixels: (a) enhanced image 1; (b) *t*=5 iterations; (c) *t*=10 iterations; (d) *t*=50 iterations; (e) *t*=100 iterations; (f) enhanced image 2; (g) *t*=105 iterations; (h) *t*=110 iterations; (i) *t*=150 iterations; (j) *t*=200 iterations; (k) enhanced image 3; (l) *t*=205 iterations; (m) *t*=210 iterations; (n) *t*=250 iterations; (o) *t*=300 iterations.
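The image-replacement schedule used in these experiments can be sketched as a simple driver loop (illustrative only; `as_iteration` stands in for one Ant System edge-detection step and is not the authors' code):

```python
def run_experiment(images, as_iteration, pheromone, ni=100):
    # Swap the input image (the ants' environment) every `ni` iterations
    # while the same pheromone field keeps evolving across the swaps.
    t = 0
    for image in images:           # e.g. the enhanced "Cameraman" set
        for _ in range(ni):        # Ni = 100 iterations per image
            t += 1
            pheromone = as_iteration(image, pheromone, t)
    return pheromone
```

The key point is that the pheromone field is carried over when the image changes, so the colony must redistribute the trails onto the newly emerged edges rather than start from scratch.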

The results show that the Ant System-based edge detector was capable of detecting the changes that occurred as a result of replacing one image from the set with another. The experiments were repeated for a set of four widely used test grayscale images: "Cameraman", "Lena", "House", and "Peppers". The images were used as inputs to the algorithm in that order. Every *Ni* = 100 iterations one image was replaced by the next one from the set. Again, the change in the environment produced by the change of input image resulted in different pheromone patterns, which is shown in Figure 14.

It can be observed that the new pheromone trails accumulated on the pixels belonging to the newly-emerged edges, while the pheromone trails where the edges were no longer present gradually disappeared. In order to obtain a quicker transition between different pheromone distributions, the evaporation rate *ρ* was set to a higher value than for the edge-detection simulations (*ρ* = 0.5). This resulted in the disappearance of the "weakest" edges and introduced slightly poorer overall performance of the proposed edge detector. The experimental results show that the algorithm is able to adapt to a dynamically changing environment, resulting in different pheromone trail patterns. Even though only images were used in the experiments, the study could be extended to any other type of digital habitat, which can lead to a new set of applications for adaptive artificial ant colonies.

**Figure 14.** Adaptive edge detection on four test images, 256 × 256 pixels: (a) "Cameraman"; (b) *t*=5 iterations; (c) *t*=10 iterations; (d) *t*=50 iterations; (e) *t*=100 iterations; (f) "House"; (g) *t*=105 iterations; (h) *t*=110 iterations; (i) *t*=150 iterations; (j) *t*=200 iterations; (k) "Peppers"; (l) *t*=205 iterations; (m) *t*=210 iterations; (n) *t*=250 iterations; (o) *t*=300 iterations; (p) "Lena"; (q) *t*=305 iterations; (r) *t*=310 iterations; (s) *t*=350 iterations; (t) *t*=400 iterations.

One of the possible applications for the adaptive edge detector could be real-time image processing, where online image preprocessing could be used to obtain better image segmentation. By applying various image enhancement techniques, such as contrast enhancement, certain features in the image could be amplified while others could be reduced or even removed. This would enable easier detection of the regions of interest in the image.

## **7. Conclusions**

Two edge-detection methods inspired by the ants' foraging behavior were proposed. The first method takes a grayscale image as input and produces as output a pheromone image marking the locations of the edge pixels. The second method finds the missing edge segments after edge detection has been applied, and can be used as a complementary tool with any edge detector. In our work, the Sobel edge detector was used to produce the binary edge image.

The first method combines a nonlinear contrast enhancement technique, Multiscale Adaptive Gain, with the Ant System algorithm. A set of enhanced images was obtained by applying the Multiscale Adaptive Gain, and the Ant System algorithm generated pheromone patterns where the true edges were found. The experiments showed that our method outperformed other ACO-based edge detectors in terms of the visual quality of the extracted edge information and the sensitivity in finding weaker edges. The quantitative analysis showed that the performance could be further optimized by varying the number of ants and iterations.

The adaptability of the proposed edge detector was demonstrated in a dynamically changing environment made of a set of digital grayscale images. The algorithm responded to the changes by generating pheromone patterns according to the distribution of the newly-created edges. It also proved to be robust since even an ant colony of a smaller size could detect the edges, even though the number of detected edge pixels was reduced.

The second proposed method uses the ant colony to search for the edge segments that connect pairs of endpoints. A novel fitness function was proposed to evaluate the found segments. It depends on two variables: the pixels' grayscale visibility and the edge-segment length. The fitness function produces higher values for segments that consist of a smaller number of pixels whose grayscale visibility has a higher mean value and a lower variance. Another novelty was to use the grayscale visibility matrix as the initial pheromone trail matrix, so that the pixels belonging to true edges have a higher probability of being chosen by the ants on their initial routes, which reduced the computational load. The proposed broken-edge linking method was tested as a complementary tool for the Sobel edge detector, and it significantly improved the output edge image.

Future research will include optimization and automatic selection of the proposed methods' parameters for improved edge detection results; until now, these parameters were obtained experimentally. An exhaustive analysis of the edge detection method's adaptability will be performed in order to apply it to other digital habitats. Also, optimizing the methods for faster execution would make them suitable for real-time image processing.

## **Acknowledgements**

This work has been financed by the EU-funded Initial Training Network (ITN) in the Marie-Curie People Programme (FP7): INTRO (INTeractive RObotics research network), grant agreement no.: 238486.

## **8. Acronyms**

ACO: Ant Colony Optimization
ACS: Ant Colony System
AS: Ant System
BA: Bees Algorithm
CLF: Coordinate Logic Filters
CLO: Coordinate Logic Operations
CLT: Coordinate Logic Transforms
LSE: Least Square Error
PLIP: Parameterized Logarithmic Image Processing
PSO: Particle Swarm Optimization
ROI: Region Of Interest
SEL: Sequential Edge-Linking
TSP: Traveling Salesman Problem

## **Author details**

Aleksandar Jevtić<sup>1⋆</sup> and Bo Li<sup>2</sup>

<sup>⋆</sup> Address all correspondence to: aleksandar.jevtic@robosoft.fr; bo.li@tfe.umu.se

1 Robosoft, Bidart, France

2 Dept. of Applied Physics and Electronics, Umeå University, Umeå, Sweden

## **References**

[1] Baterina, A. V. & Oppus, C. [2010]. Image edge detection using ant colony optimization, *WSEAS Transactions on Signal Processing* 6(2): 58–67.

[2] Bonabeau, E., Dorigo, M. & Theraulaz, G. [1999]. *Swarm Intelligence: From Natural to Artificial Systems*, Oxford University Press, New York.

[3] Canny, J. [1986]. A computational approach to edge detection, *IEEE Transactions on Pattern Analysis & Machine Intelligence* 8: 679–714.

[4] Danahy, E. E., Panetta, K. A. & Agaian, S. S. [2007]. Coordinate logic transforms and their use in the detection of edges within binary and grayscale images, *IEEE International Conference on Image Processing, 2007. ICIP 2007.*, Vol. 3, pp. III–53–III–56.

[5] Dorigo, M., Maniezzo, V. & Colorni, A. [1996]. Ant system: optimization by a colony of cooperating agents, *IEEE Transactions on Systems, Man, and Cybernetics - Part B* 26(1): 29–41.

[6] Dorigo, M. & Stützle, T. [2004]. *Ant colony optimization*, MIT Press, Cambridge.

[7] Engelbrecht, A. P. [2005]. *Fundamentals of Computational Swarm Intelligence*, John Wiley & Sons, Ltd, Chichester, UK.

[8] Etemad, S. A. & White, T. [2011]. An ant-inspired algorithm for detection of image edge features, *Applied Soft Computing* 11(8): 4883–4893.

[9] Gonzalez, R. C. & Woods, R. E. [2008]. *Digital Image Processing*, 3rd edn, Prentice Hall, New Jersey, USA.

[10] He, X., Jia, W., Hur, N., Wu, Q., Kim, J. & Hintz, T. [2006]. Bilateral edge detection on a virtual hexagonal structure, *in* G. Bebis, R. Boyle, B. Parvin, D. Koracin, P. Remagnino, A. Nefian, G. Meenakshisundaram, V. Pascucci, J. Zara, J. Molineros, H. Theisel & T. Malzbender (eds), *Advances in Visual Computing*, Vol. 4292 of *Lecture Notes in Computer Science*, Springer Berlin / Heidelberg, pp. 176–185.

[11] Huang, P., Cao, H. & Luo, S. [2008]. An artificial ant colonies approach to medical image segmentation, *Computer Methods & Programs in Biomedicine* 92: 267–273.

[12] Jevtić, A. & Andina, D. [2010]. Adaptive artificial ant colonies for edge detection in digital images, *Proceedings of the 36th Annual Conference on IEEE Industrial Electronics Society, IECON 2010*, pp. 2813–2816.

[13] Jevtić, A., Melgar, I. & Andina, D. [2009]. Ant based edge linking algorithm, *Proceedings of 35th Annual Conference of the IEEE Industrial Electronics Society (IECON 2009)*, Porto, Portugal, pp. 3353–3358.

[14] Jevtić, A., Quintanilla-Dominguez, J., Barrón-Adame, J.-M. & Andina, D. [2011]. Image segmentation using ant system-based clustering algorithm, *in* E. Corchado, V. Snasel, J. Sedano, A. E. Hassanien, J. L. Calvo & D. Slezak (eds), *SOCO 2011 - 6th International Conference on Soft Computing Models in Industrial and Environmental Applications*, Vol. 87, Springer Berlin / Heidelberg, pp. 35–45.

[15] Jevtić, A., Quintanilla-Domínguez, J., Cortina-Januchs, M. G. & Andina, D. [2009]. Edge detection using ant colony search algorithm and multiscale contrast enhancement, *Proceedings of 2009 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2009)*, San Antonio, TX, USA, pp. 2193–2198.

[16] Jiang, J. A., Chuang, C. L., Lu, Y. L. & Fahn, C. S. [2007]. Mathematical-morphology-based edge detectors for detection of thin edges in low-contrast regions, *IET Image Processing* 1(3): 269–277.

[17] Kennedy, J. & Eberhart, R. C. [1995]. Particle swarm optimisation, *Proceedings of IEEE International Conference on Neural Networks Vol. IV*, IEEE service center, Piscataway, NJ, USA, pp. 1942–1948.

[18] Khajehpour, P., Lucas, C. & Araabi, B. N. [2005]. Hierarchical image segmentation using ant colony and chemical computing approach, *in* L. Wang, K. Chen & Y. S. Ong (eds), *Advances in Natural Computation*, Vol. 3611 of *Lecture Notes in Computer Science*, Springer Berlin / Heidelberg, pp. 1250–1258.

[19] Laine, A. F., Schuler, S., Fan, J. & Huda, W. [1994]. Mammographic feature enhancement by multiscale analysis, *IEEE Transactions on Medical Imaging* 13(4): 725–740.

[20] Lu, D. S. & Chen, C. C. [2008]. Edge detection improvement by ant colony optimization, *Pattern Recognition Letters* 29(4): 416–425.

[21] Mertzios, B. G. & Tsirikolias, K. [2001]. Applications of coordinate logic filters in image analysis and pattern recognition, *Proceedings of the 2nd International Symposium on Image and Signal Processing and Analysis, 2001. ISPA 2001.*, pp. 125–130.

**Chapter 3**

**Content-Based Image Feature Description and Retrieving**

Nai-Chung Yang, Chung-Ming Kuo and Wei-Han Chang

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/45841

> © 2013 Yang et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **1. Introduction**

With the growth in the number of color images, developing an efficient image retrieval system has received much attention in recent years. The first step in retrieving relevant information from image and video databases is the selection of appropriate feature representations (e.g. color, texture, shape) so that the feature attributes are both consistent in feature space and perceptually close to the user [1]. Many CBIR systems, which adopt different low-level features and similarity measures, have been proposed in the literature [2-5]. In general, perceptually similar images are not necessarily similar in terms of low-level features [6]. Hence, these content-based systems capture pre-attentive similarity rather than semantic similarity [7]. In order to achieve a more efficient CBIR system, active research is currently focused on two complementary approaches: the region-based approach [4, 8-10] and relevance feedback [6, 11-13].

Typically, the region-based approaches segment each image into several regions with homogeneous visual properties, and enable users to rate the relevant regions for constructing a new query. In general, an incorrect segmentation may result in an inaccurate representation. However, automatically extracting image objects is still a challenging issue, especially for a database containing a collection of heterogeneous images. For example, Jing et al. [8] integrate several effective relevance feedback algorithms into a region-based image retrieval system, which incorporates the properties of all the segmented regions to perform many-to-many relationships of regional similarity measure. However, some semantic information will be disregarded without considering similar regions in the same image. In another study [10], Vu et al. proposed a region-of-interest (ROI) technique which is a sampling-based ap‐

