**3. Adaptive Neuro-Fuzzy Inference System (ANFIS)**

Although some non-linear control problems can be handled using neural control schemes, in situations that require precise tracking of fast trajectories for non-linear systems with strong nonlinearities and large uncertainties, neural control schemes are severely inadequate (Denai et al., 2004). Adaptive Neuro-Fuzzy Inference Systems are realized by an appropriate combination of neural and fuzzy systems and provide a valuable modeling approach for complex systems (Denai et al., 2004; Rezaeeian et al., 2008; Hanafy, 2010).

The proper selection of the number, the type and the parameters of the fuzzy membership functions and rules is crucial for achieving the desired performance, and in most situations it is difficult. Yet, in many applications it has been done through trial and error. This fact highlights the significance of tuning the fuzzy system. Adaptive Neuro-Fuzzy Inference Systems are Fuzzy Sugeno models put in the framework of adaptive systems to facilitate learning and adaptation. Such a framework makes the FLC more systematic and less reliant on expert knowledge. To present the ANFIS architecture, let us consider two fuzzy rules based on a first order Sugeno model:

**Rule 1**: if (x is *A*<sup>1</sup> ) and (y is *B*<sup>1</sup> ) then

$$f\_1 = p\_1 x + q\_1 y + r\_1$$

**Rule 2**: if (x is *A*<sup>2</sup> ) and (y is *B*<sup>2</sup> ) then

$$f\_2 = p\_2 x + q\_2 y + r\_2$$

The ANFIS architecture used to implement these two rules is shown in Figure 3. Note that a circle indicates a fixed node, whereas a square indicates an adaptive node (its parameters are changed during training). In the following presentation, $O\_{L,i}$ denotes the output of node i in layer L.

Fig. 3. Construct of ANFIS.

**Layer 1:** All the nodes in this layer are adaptive nodes. The output of node i is the degree of membership of the input to the fuzzy membership function (MF) represented by the node:

$$O\_{1,i} = \mu\_{Ai}(x) \qquad \qquad \qquad \qquad \text{i=1,2}$$


$$O\_{1,i} = \mu\_{Bi-2}(y) \qquad \qquad \qquad \text{i=3,4} \tag{1}$$

*Ai* and *Bi* can be any appropriate fuzzy sets in parameter form. For example, if the bell MF is used, then

$$\mu\_{Ai}(x) = \frac{1}{1 + \left[ \left(\frac{x - c\_i}{a\_i}\right)^2 \right]^{b\_i}} \qquad \text{i=1,2} \tag{2}$$

where *ai*, *bi*, and *ci* are the parameters of the MF.

**Layer 2**: The nodes in this layer are fixed (not adaptive). These are labeled M to indicate that they play the role of a simple multiplier. The outputs of these nodes are given by:

$$O\_{2,i} = w\_i = \mu\_{Ai}(x)\,\mu\_{Bi}(y) \qquad \text{i=1,2} \tag{3}$$

The output of each node in this layer represents the firing strength of the rule.

**Layer 3**: Nodes in this layer are also fixed nodes. They are labeled N to indicate that they perform a normalization of the firing strengths from the previous layer. The output of each node in this layer is given by:

$$O\_{3,i} = \overline{w}\_i = \frac{w\_i}{w\_1 + w\_2} \qquad \qquad \text{i=1,2} \tag{4}$$

**Layer 4**: All the nodes in this layer are adaptive nodes. The output of each node is simply the product of the normalized firing strength and a first order polynomial:

$$O\_{4,i} = \overline{w}\_i f\_i = \overline{w}\_i (p\_i x + q\_i y + r\_i) \qquad \text{i=1,2} \tag{5}$$

Where pi, qi, and ri are design parameters (called consequent parameters, since they deal with the then-part of the fuzzy rule).

**Layer 5**: This layer has only one node labeled S to indicate that it performs the function of a simple summer. The output of this single node is given by:

$$O\_{5,i} = f = \sum\_{i=1}^{2} \overline{w}\_i f\_i = \frac{\sum\_{i=1}^{2} w\_i f\_i}{\sum\_{i=1}^{2} w\_i} \tag{6}$$

In this ANFIS architecture, there are two adaptive layers (Layers 1 and 4). Layer 1 has three modifiable parameters (ai, bi, and ci) pertaining to the input MFs; these parameters are called premise parameters.


Layer 4 also has three modifiable parameters (pi, qi, and ri) pertaining to the first order polynomial; these parameters are called consequent parameters.
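To make the layer-by-layer computation concrete, the following sketch (ours, not the chapter's code; the function names and parameter values are purely illustrative) implements the forward pass of the two-rule ANFIS of Fig. 3 through Equations (1)-(6):

```python
def bell_mf(x, a, b, c):
    """Generalized bell membership function, Equation (2)."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    """Forward pass of the two-rule, first-order Sugeno ANFIS of Fig. 3."""
    # Layer 1: membership degrees, Equation (1)
    mu_A = [bell_mf(x, *premise['A1']), bell_mf(x, *premise['A2'])]
    mu_B = [bell_mf(y, *premise['B1']), bell_mf(y, *premise['B2'])]

    # Layer 2: firing strengths w_i, Equation (3)
    w = [mu_A[0] * mu_B[0], mu_A[1] * mu_B[1]]

    # Layer 3: normalized firing strengths, Equation (4)
    w_bar = [wi / sum(w) for wi in w]

    # Layers 4 and 5: weighted rule outputs and their sum, Equations (5) and (6)
    return sum(wb * (p * x + q * y + r)
               for wb, (p, q, r) in zip(w_bar, consequent))

# Illustrative (untuned) premise parameters (a, b, c) and consequents (p, q, r)
premise = {'A1': (1.0, 2.0, 0.0), 'A2': (1.0, 2.0, 1.0),
           'B1': (1.0, 2.0, 0.0), 'B2': (1.0, 2.0, 1.0)}
consequent = [(0.5, 0.3, 0.1), (-0.2, 0.8, 0.4)]
print(anfis_forward(0.3, 0.7, premise, consequent))
```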

In order to improve the training efficiency, a hybrid learning algorithm is applied to adjust the parameters of the input and output membership functions. The antecedent parameters (those related to the input membership functions) and the consequent parameters (those related to the output membership functions) are the two parameter sets in the architecture that should be tuned. If the premise parameters are assumed fixed, the output of ANFIS becomes a linear combination of the consequent parameters, so the output can be written as:

$$f = \overline{w}\_1 f\_1 + \overline{w}\_2 f\_2 \tag{7}$$

Substituting Equation (5) into Equation (7), the output can be rearranged as:

$$f = (\overline{w}\_1 \mathbf{x}) p\_1 + (\overline{w}\_1 y) q\_1 + (\overline{w}\_1) r\_1 + (\overline{w}\_2 \mathbf{x}) p\_2 + (\overline{w}\_2 y) q\_2 + (\overline{w}\_2) r\_2 \tag{8}$$

So, the consequent parameters can be tuned by the least-squares method. On the other hand, if the consequent parameters are fixed, the premise parameters can be adjusted by the gradient descent method. ANFIS therefore utilizes a hybrid learning algorithm in which the least-squares method identifies the consequent parameters in the forward pass, and the gradient descent method determines the premise parameters in the backward pass.
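As an illustration of the forward half of this hybrid rule (a sketch with hypothetical names, assuming the same two-rule structure and bell MFs as above), the consequent parameters can be identified by building the linear system of Equation (8) over a batch of training pairs and solving it in the least-squares sense:

```python
import numpy as np

def bell_mf(x, a, b, c):
    # Generalized bell MF, Equation (2)
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def estimate_consequents(samples, targets, premise):
    """Least-squares identification of (p1, q1, r1, p2, q2, r2) via Equation (8),
    with the premise parameters held fixed (forward pass of the hybrid learning)."""
    rows = []
    for x, y in samples:
        w = [bell_mf(x, *premise['A1']) * bell_mf(y, *premise['B1']),
             bell_mf(x, *premise['A2']) * bell_mf(y, *premise['B2'])]
        w_bar = [wi / sum(w) for wi in w]
        # One row per sample: f = (w1 x)p1 + (w1 y)q1 + w1 r1 + (w2 x)p2 + (w2 y)q2 + w2 r2
        rows.append([w_bar[0] * x, w_bar[0] * y, w_bar[0],
                     w_bar[1] * x, w_bar[1] * y, w_bar[1]])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta  # [p1, q1, r1, p2, q2, r2]

# Toy usage: fit the consequents of an arbitrary target surface
premise = {'A1': (1.0, 2.0, 0.0), 'A2': (1.0, 2.0, 1.0),
           'B1': (1.0, 2.0, 0.0), 'B2': (1.0, 2.0, 1.0)}
samples = [(x / 4.0, y / 2.0) for x in range(5) for y in range(3)]
targets = [x + 2.0 * y for x, y in samples]
print(estimate_consequents(samples, targets, premise))
```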

In addition, many recent developments in evolutionary algorithms have provided several strategies for NFIS design. Three main strategies, namely Pittsburgh-type, Michigan-type, and iterative rule learning genetic fuzzy systems, focus on generating and learning fuzzy rules in genetic fuzzy systems (Lin et al., 2008).

#### **4. Fuzzy controllers using subtractive clustering**

A common way of developing a Fuzzy Controller is by determining the rule base and some appropriate fuzzy sets over the controller's input and output ranges. An efficient approach, namely Fuzzy Subtractive Clustering, is used here, which minimizes the number of rules of the Fuzzy Logic Controller. This technique provides a mechanism to obtain a reduced rule set covering the whole input/output space as well as membership functions for each input variable. In (Chopra et al., 2006), the fuzzy subtractive clustering approach is shown to reduce 49 rules to 8 rules; simulation of a wide range of linear and nonlinear processes is carried out and the results are compared with an existing Fuzzy Logic Controller with 49 rules.

#### **4.1 Introduction to cluster analysis**

By definition, cluster analysis is the grouping of objects into homogeneous groups based on shared object features. Clustering of numerical data forms the basis of many classification and system-modeling algorithms. The purpose of clustering is to identify natural groupings of data from a large data set to produce a concise representation of a system's behavior. Clustering algorithms typically require the user to pre-specify the number of cluster centers and their initial locations. The locations of the cluster centers are then adapted so that they better represent the set of data points covering the range of data behavior. The Fuzzy C-Means (FCM) algorithm (Bezdek, 1990) is a well-known example of such a clustering algorithm. For these algorithms, the quality of the solution depends strongly on the choice of initial values, i.e., the number of cluster centers and their initial locations (Nikhil et al., 1997).

In (Yager & Filev, 1994), the authors proposed a simple and effective algorithm, called the mountain method, for estimating the number and initial locations of cluster centers. Their method is based on gridding the data space and computing a potential value for each grid point based on its distances to the actual data points; the grid point with the highest potential value is chosen as the first cluster center, and the potential of all grid points is reduced according to their distance from the cluster center. The next cluster center is then placed at the grid point with the highest remaining potential value. This procedure of acquiring a new cluster center and reducing the potential of the surrounding grid points is repeated until the potential of all grid points falls below a threshold. Although this method is simple and effective, the computation grows exponentially with the dimension of the problem. The author in (Chiu, 1994) proposed an extension of the mountain method, called subtractive clustering, in which each data point, rather than each grid point, is considered as a potential cluster center. Using this method, the number of effective "grid points" to be evaluated is simply equal to the number of data points, independent of the dimension of the problem. Another advantage of this method is that it eliminates the need to specify a grid resolution, in which trade-offs between accuracy and computational complexity must be considered.

#### **4.2 The subtractive clustering method**

To extract rules from data, we first separate the training data into groups according to their respective class. Consider a group of n data points {X1, X2,…, Xn} for a specific class, where Xi is a vector in the input feature space. Assume that the feature space is normalized so that all data are bounded by a unit hypercube. We consider each data point as a potential cluster center for the group and define a measure of the potential of data point Xi to serve as a cluster center as

$$P\_i = \sum\_{j=1}^{n} e^{-\alpha \left\| \mathbf{x}\_i - \mathbf{x}\_j \right\|^2} \tag{9}$$

Where


$$\alpha = \frac{4}{r\_a^2} \tag{10}$$

Here $\left\| \cdot \right\|$ denotes the Euclidean distance, and ra is a positive constant. Thus, the measure of the potential of a data point is a function of its distances to all other data points in its group. A data point with many neighboring data points will have a high potential value. The constant ra is effectively a normalized radius defining a neighborhood; data points outside this radius have little influence on the potential. Note that because the data space is normalized, ra = 1.0 is equal to the length of one side of the data space. After the potential of every data point in the group has been computed, we select the data point with the highest potential as the first cluster center. Let x1\* be the location of the first cluster center and P1\* be its potential value. We then revise the potential of each data point xi in the group by the formula



$$P\_i \Longleftarrow P\_i - P\_1^\* e^{-\beta \left\| x\_i - x\_1^\* \right\|^2} \tag{11}$$

Where

$$
\beta = \frac{4}{r\_b^2} \tag{12}
$$

and rb is a positive constant. Thus, we subtract an amount of potential from each data point as a function of its distance from the first cluster center. The data points near the first cluster center will have greatly reduced potential and will therefore be unlikely to be selected as the next cluster center for the group. The constant rb is effectively the radius defining the neighborhood that will have measurable reductions in potential. To avoid obtaining closely spaced cluster centers, we typically choose rb = 1.25 ra (Chopra et al., 2006).

When the potential of all data points in the group has been reduced according to Equation (11), we select the data point with the highest remaining potential as the second cluster center. We then further reduce the potential of each data point according to its distance from the second cluster center. In general, after the k-th cluster center has been obtained, we revise the potential of each data point by the formula

$$P\_i \Longleftarrow P\_i - P\_k^\* e^{-\beta \left\| x\_i - x\_k^\* \right\|^2} \tag{13}$$

Where xk\* is the location of the k-th cluster center and Pk\* is its potential value. The process of acquiring a new cluster center and reducing potentials repeats until the remaining potential of all data points in the group falls below some fraction of the potential of the first cluster center P1\*. Typically, one can use Pk\* < 0.15 P1\* as the stopping criterion (Chiu, 1997).
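Putting Equations (9) to (13) and this stopping test together, a minimal subtractive-clustering loop for one (already normalized) group of data might look as follows; this is an illustrative sketch with names of our own choosing, and the radii follow the values quoted above:

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, stop_ratio=0.15):
    """Cluster centers of a normalized data group, following Equations (9)-(13)."""
    rb = 1.25 * ra                      # neighbourhood for potential reduction
    alpha, beta = 4.0 / ra ** 2, 4.0 / rb ** 2

    # Initial potentials, Equations (9) and (10)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    P = np.exp(-alpha * d2).sum(axis=1)

    centers, P1 = [], None
    while True:
        k = int(np.argmax(P))
        if P1 is None:
            P1 = P[k]                   # potential of the first cluster center
        elif P[k] < stop_ratio * P1:    # stopping criterion Pk* < 0.15 P1*
            break
        centers.append(X[k])
        # Subtract potential around the newly found center, Equations (11) and (13)
        P = P - P[k] * np.exp(-beta * np.sum((X - X[k]) ** 2, axis=1))
    return np.array(centers)

# Toy example in the unit square: the two obvious groups are recovered
X = np.array([[0.10, 0.10], [0.15, 0.12], [0.12, 0.08],
              [0.80, 0.85], [0.82, 0.90]])
print(subtractive_clustering(X))
```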

Each cluster center found in the training data of a given class identifies a region in the feature space that is well populated by members of that class. Thus, we can translate each cluster center into a fuzzy rule for identifying the class.

Suppose cluster center xi\* was found in the group of data for class c1; this cluster center provides the rule:

Rule *i*: If {x is near xi\*} then class is c1.

The degree of fulfillment of {x is near xi\*} is defined as

$$\mu\_i = e^{-\alpha \left\| \mathbf{x} - \mathbf{x}\_i^\* \right\|^2} \tag{14}$$

Where α is the constant defined by Equation (10).
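A small illustrative sketch of the resulting classifier (hypothetical names; it assumes each extracted rule is stored as a pair of cluster center and class label) evaluates Equation (14) for every rule and returns the class of the rule with the highest degree of fulfillment, i.e., the winner-take-all decision described in the next paragraph:

```python
import numpy as np

def classify(x, rules, ra=0.5):
    """Assign x to the class of the rule with the highest degree of fulfillment.

    Each rule is a (cluster_center, class_label) pair; the degree of
    fulfillment follows Equation (14) with alpha taken from Equation (10)."""
    alpha = 4.0 / ra ** 2
    degrees = [np.exp(-alpha * np.sum((x - center) ** 2)) for center, _ in rules]
    return rules[int(np.argmax(degrees))][1]

# Toy rule base: two centers found in class c1 data, one in class c2 data
rules = [(np.array([0.10, 0.10]), 'c1'),
         (np.array([0.30, 0.20]), 'c1'),
         (np.array([0.80, 0.90]), 'c2')]
print(classify(np.array([0.75, 0.85]), rules))   # -> c2
```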

By applying subtractive clustering to each class of data individually, we thus obtain a set of rules for identifying each class. The individual sets of rules can then be combined to form the rule base of the classifier. For example, suppose we found 2 cluster centers in the class c1 data and 5 cluster centers in the class c2 data; then the rule base will contain 2 rules that identify class c1 members and 5 rules that identify class c2 members. When performing classification, the output class of the classifier is simply determined by the rule with the highest degree of fulfillment.

**5. ANFIS control of an intelligent robotic gripper**

The effectiveness of Fuzzy Inference control will be illustrated here by applying the method to control the operation of a robotic gripper. The robotic gripper will first be described and its operation principle illustrated; then the application of the Adaptive Network Fuzzy Inference System control to the gripper system will be presented.

Generally, the main goal of a robotic gripper during the object grasping and lifting process is to apply sufficient force; otherwise the task becomes difficult or sometimes cannot be achieved at all. The problem can be posed as an optimization problem (Ottaviano et al., 2000; Bicchi & Kumar, 2000). Sensory systems are very important in this field. Two types of sensing are most actively being investigated to increase robot awareness: contact and non-contact sensing. The main type of non-contact sensing is vision sensing, where video camera data are processed to give the robot the object information. However, it is costly and gives no data concerning forces (Lorenz et al., 1990). Tactile sensing, on the other hand, has the capability to do proximity sensing as well as force sensing; it is less expensive, faster and needs less complex equipment (Choi et al., 2005). The basic principle of the Slip-Sensitive Reaction used in this work is that the gripper should be able to automatically react to object slipping during grasp by applying greater force. Much research has focused on the development of fingertip sensors to detect slippage and applied force (Dario & De Rossi, 1985; Friedrich et al., 2000), which requires complicated drive circuits and suffers from difficult data processing and calibration. Polyvinylidene fluoride (PVDF) piezoelectric sensors are presented in (Barsky et al., 1989) to detect contact normal force as well as slip. Also, an 8x8 photoresistor matrix array is introduced in (Ren et al., 2000) to detect slippage. A slip sensor based on the operation of an optical encoder, used to monitor the slip rate resulting from insufficient force, is presented in (Salami et al., 2000). However, it is expensive and places some constraints on the object to be lifted. Several researchers handle finger adaptation using more than one link per finger to achieve stable grasping (Seguna & Saliba, 2001; Dubey & Crowder, 2004). This results in a complicated mechanical system, leading to difficulty in control and slow response. Fuzzy controllers have been very successful in solving the grasping problem, as they do not need a mathematical model of the system (Dominguez-Lopez & Vila-Rosado, 2006). In this study, a new design and implementation of a robotic gripper with electric actuation using a brushless dc servo motor is presented. The adaptation of standard sensors in this work maintains the simplicity of the mechanical design and gripper operation while keeping a reasonable cost. The gripper control was achieved through two control schemes. System modeling was introduced using the ANFIS approach. A new grasping scenario is used in which we collect information about the masses of the grasped objects before starting the grasping process without any additional sensors. This is achieved through knowledge of the object pushing force, which allows applying an appropriate force and minimizing object displacement slip through implementation of the proposed fuzzy control.

**5.1 Gripper design and configuration**

A proper gripper design can simplify the overall robot system assembly, increase the overall system reliability, and decrease the cost of implementing the system. Hence, the design of the gripping system is very important for successful operation.
