Compensatory Adaptive Neural Fuzzy Inference System

*Rabah Mellah, Hocine Khati, Hand Talem and Said Guermah*

#### **Abstract**

The traditional approach to fuzzy design is based on knowledge acquired from expert operators and formulated into rules. However, operators may not be able to translate their knowledge and experience into a fuzzy logic controller. In addition, most adaptive fuzzy controllers present difficulties in determining appropriate fuzzy rules and membership functions. This chapter presents an adaptive neural-fuzzy controller equipped with compensatory fuzzy control, designed to adjust the membership functions and to optimize the adaptive reasoning through a compensatory learning algorithm. An analysis of stability and transparency based on a passivity framework is carried out. The resulting controllers are implemented on a two-degree-of-freedom robotic system. The simulation results show fairly high accuracy in position and velocity tracking, which highlights the effectiveness of the proposed controllers.

**Keywords:** control, fuzzy logic, neural-fuzzy, compensatory fuzzy, Kalman filter, manipulator robot

### **1. Introduction**

The advantage of fuzzy control is that a fuzzy system can model any continuous (sufficiently smooth) nonlinear function over a compact set with a decreasing modeling error [1]. Fuzzy logic resembles human reasoning in its use of imprecise information to make decisions. Many such problems can be formulated as the minimization of functionals defined over a class of admissible domains [2]. However, fuzzy clustering strategies suffer from the difficulty of deployment, a high computational cost, and the inability to update their parameters [3].

On the other hand, a major concern of researchers has turned towards the combination of fuzzy logic and neural networks. In this combination, fuzzy reasoning is carried out within a multilayered hierarchical neural network, with the parameters represented by connection weights or embedded in the unit functions; they are learned from actual data [4]. In the recent past, ANFIS (Adaptive Neural Fuzzy Inference System) models have become very popular for two reasons. The first is that, in calibrating nonlinear relationships, they offer advantages over conventional modeling techniques, namely the ability to handle large amounts of noisy data from dynamic and nonlinear systems, particularly when the underlying physical relationships are not fully understood. The second is that they facilitate the solving of linear systems that involve interpolation modeling, such as time series [5].

The reason authors use ANFIS is that it not only combines the characteristics of both methods but also eliminates some disadvantages of using either alone [6]. Unfortunately, conventional neural fuzzy systems can only optimize the fuzzy membership functions under specially defined, fixed fuzzy operators, which confines them to local rather than global optimization [7, 8]. Thus an adaptive neural fuzzy controller with compensatory fuzzy control is most suitable in an environment where the system dynamics change dramatically, become highly nonlinear, and are in principle not fully known.

In light of these observations, several optimal and systematic methods have been developed for the design of neural fuzzy controllers with compensatory fuzzy control. Among these methods, we have retained the compensatory adaptive neural fuzzy inference system approach, which consists in adjusting not only the fuzzy membership functions but also in dynamically optimizing the adaptive fuzzy reasoning. Besides, ANFIS is a class of adaptive networks that are functionally equivalent to a first-order Takagi-Sugeno fuzzy model.

Recently, compensatory adaptive neural fuzzy inference system control has gained more attention from the control community in general, as adaptive fuzzy systems are of crucial importance in several areas. The compensatory adaptive neural fuzzy inference system is well suited to dealing with nonlinearities and complexity, working on data characterized by incompleteness and inaccuracy. It therefore offers powerful capabilities, such as adaptive adjustment, parallelism, error tolerance, and generalization for the neural fuzzy controller. Thus, optimal methods are used to adjust and optimize the parameters of neural fuzzy controllers through an optimization algorithm in order to improve the control performance [9].

In this chapter, we will present and analyze in Section 2 the structure of the adaptive neural fuzzy inference system (ANFIS), based on concepts such as fuzzy logic and optimization techniques. This approach is adopted in order to remove a control constraint relating to the need for a model as faithful as possible, since modeling errors and model imperfections significantly degrade the performance of conventional control laws [10].

Section 3 presents the mathematical formalism appropriate to the proposed compensatory neural fuzzy inference system controller. The effectiveness of the proposed control is highlighted by simulation results in Section 4. Finally, the chapter closes with a summary and an outlook on future research directions in Section 5.

#### **2. Presentation of adaptive neural fuzzy inference system (ANFIS)**

Jang was the first to present ANFIS as a multi-layer adaptive network-based fuzzy inference system [11]. One can compare this method to a fuzzy inference system, except that it uses back-propagation to minimize the error. The operation of a FIS is similar to that of both fuzzy logic (FL) and artificial neural networks (ANN). In both ANN and FL, the input passes through the input layer (via the input membership function) and the output appears in the output layer (via the output membership function). This type of advanced fuzzy logic uses neural networks; hence, a learning algorithm can be used to change the parameters until an optimal solution is found. It follows that ANFIS uses either back-propagation or a combination of least-squares estimation and back-propagation to estimate the membership function parameters [12]. Neural-fuzzy systems have recently attracted more attention in research communities than other types of fuzzy expert systems, because they combine the learning ability of neural networks with the reasoning ability of fuzzy logic to successfully solve many nonlinear and complex real-world problems [13].

*Compensatory Adaptive Neural Fuzzy Inference System. DOI: http://dx.doi.org/10.5772/intechopen.96050*

The ANFIS regulator is computationally very efficient, lends itself to mathematical analysis, and works well with linear, adaptive, and optimization techniques. The fuzzy reasoning is performed with the min and prod operators [14].

The conclusions of fuzzy rules are numeric values calculated from the inputs, so the final value is obtained by performing a weighted average of the conclusions [15, 16].

To simplify understanding and without loss of generality, let us consider a fuzzy regulator with two inputs *x*<sub>1</sub> and *x*<sub>2</sub> and one output *u*. The input *x*<sub>1</sub> is associated with two fuzzy sets *A*<sub>1</sub> and *A*<sub>2</sub>, while the two fuzzy sets associated with the second input *x*<sub>2</sub> are *B*<sub>1</sub> and *B*<sub>2</sub>. The output *u* is modeled by a Sugeno-type fuzzy system composed of the following four rules [17]:

$$\text{Rule 1}: \text{if } x_1 \text{ is } A_1 \text{ and } x_2 \text{ is } B_1 \text{ then } u_1 = f_1(x_1, x_2) = a_1 x_1 + b_1 x_2 + c_1 \tag{1}$$

$$\text{Rule 2}: \text{if } x_1 \text{ is } A_1 \text{ and } x_2 \text{ is } B_2 \text{ then } u_2 = f_2(x_1, x_2) = a_2 x_1 + b_2 x_2 + c_2 \tag{2}$$

$$\text{Rule 3}: \text{if } x_1 \text{ is } A_2 \text{ and } x_2 \text{ is } B_1 \text{ then } u_3 = f_3(x_1, x_2) = a_3 x_1 + b_3 x_2 + c_3 \tag{3}$$

$$\text{Rule 4}: \text{if } x_1 \text{ is } A_2 \text{ and } x_2 \text{ is } B_2 \text{ then } u_4 = f_4(x_1, x_2) = a_4 x_1 + b_4 x_2 + c_4 \tag{4}$$

Let *O*<sub>*k*,*i*</sub> denote the node in the *i*th position of the *k*th layer. The node functions in the same layer belong to the same function family, as defined below.

The input layer is denoted Layer 1, and any node *i* in this layer is a square node whose node function describes a membership function. Hence *O*<sub>1,*i*</sub> is the membership function of *A<sub>i</sub>*, and it specifies the degree to which a given variable *x* satisfies its quantifier *A<sub>i</sub>*. We select membership functions whose maximum is equal to unity and whose minimum is equal to zero.

The structure of the regulator ANFIS is given by the following figure (**Figure 1**):

**Figure 1.** *Structure of the regulator ANFIS.*

Through this structure we can see five layers described as follows:

*Layer 1*: The node function at this layer is identical to the membership function used in the fuzzification process.

*Layer 2*: Each node generates the degree of activation (firing strength) of a rule:

$$\begin{cases} \mathbf{O}\_{2,1} = \mathbf{O}\_{1,1} . \mathbf{O}\_{1,3} = \mathbf{w}\_1 \\ \mathbf{O}\_{2,2} = \mathbf{O}\_{1,1} . \mathbf{O}\_{1,4} = \mathbf{w}\_2 \\ \mathbf{O}\_{2,3} = \mathbf{O}\_{1,2} . \mathbf{O}\_{1,3} = \mathbf{w}\_3 \\ \mathbf{O}\_{2,4} = \mathbf{O}\_{1,2} . \mathbf{O}\_{1,4} = \mathbf{w}\_4 \end{cases} \tag{5}$$

*Layer 3*: Each node of this layer is a circular node denoted by N. The node output represents the normalized activation degree of the *i*th rule.

$$O_{3,i} = \frac{w_i}{\sum_{k=1}^4 w_k} \text{ for } i = 1, \ldots, 4 \tag{6}$$

*Layer 4*: Each node of this layer is a square node with a function described as follows:

$$\mathbf{O}\_{4,i} = \mathbf{O}\_{3,i}\mathbf{f}\_i = \mathbf{v}\_i(\mathbf{a}\_i\mathbf{e} + \mathbf{b}\_i\Delta\mathbf{e} + \mathbf{c}\_i) \tag{7}$$

where *v<sub>i</sub>* is the output of node *i* of layer 3 and {*a<sub>i</sub>*, *b<sub>i</sub>*, *c<sub>i</sub>*} is the set of update parameters.

*Layer 5*: In this layer, there is only one node that determines the overall output by using the following expression:

$$O_{5} = \sum_{i=1}^{4} O_{4,i} \tag{8}$$
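As an illustration of Eqs. (5)–(8), the following Python sketch (our own naming, not the authors' implementation) evaluates layers 2 to 5, assuming the four layer-1 membership degrees of each input have already been computed:

```python
import numpy as np

def anfis_forward(mu_e, mu_de, params, e, de):
    """Layers 2-5 of the ANFIS of Figure 1.

    mu_e, mu_de : pairs of layer-1 membership degrees for e and delta-e
    params      : 4x3 array of consequence parameters [a_i, b_i, c_i]
    """
    # Layer 2 (Eq. 5): firing strength of each rule, product t-norm
    w = np.array([mu_e[0] * mu_de[0],
                  mu_e[0] * mu_de[1],
                  mu_e[1] * mu_de[0],
                  mu_e[1] * mu_de[1]])
    # Layer 3 (Eq. 6): normalized firing strengths
    v = w / w.sum()
    # Layer 4 (Eq. 7): weighted rule consequents v_i * (a_i e + b_i de + c_i)
    o4 = v * (params @ np.array([e, de, 1.0]))
    # Layer 5 (Eq. 8): overall output
    return o4.sum()
```

With equal membership degrees, the output reduces to the average of the rule consequents.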

Consider that *x*<sub>1</sub> and *x*<sub>2</sub> are the position error *e* and its derivative Δ*e*: [*x*<sub>1</sub>, *x*<sub>2</sub>] = [*e*, Δ*e*]. We associate two fuzzy sets with each of the inputs *x*<sub>1</sub> and *x*<sub>2</sub>, namely *N (Negative)* and *P (Positive)*. *μ<sub>N</sub>* and *μ<sub>P</sub>* represent the membership degrees of the variables *x<sub>i</sub>* with respect to the fuzzy subsets *A<sub>i</sub>* and *B<sub>i</sub>*, defined by the following membership functions (**Figure 2**) [17]:

For *i* = 1, 2:

$$\mu_N(x_i) = \begin{cases} 1, & \text{if } x_i < -1 \\ -0.5x_i + 0.5, & \text{if } -1 < x_i < 1 \\ 0, & \text{if } x_i > 1 \end{cases} \tag{9}$$

**Figure 2.** *Membership functions.*


$$\mu_P(x_i) = \begin{cases} 0, & \text{if } x_i < -1 \\ 0.5x_i + 0.5, & \text{if } -1 < x_i < 1 \\ 1, & \text{if } x_i > 1 \end{cases} \tag{10}$$
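The two piecewise-linear membership functions of Eqs. (9) and (10) can be coded directly; a minimal Python sketch (function names are our own):

```python
def mu_N(x):
    """Negative fuzzy set, Eq. (9)."""
    if x < -1.0:
        return 1.0
    if x > 1.0:
        return 0.0
    return -0.5 * x + 0.5

def mu_P(x):
    """Positive fuzzy set, Eq. (10)."""
    if x < -1.0:
        return 0.0
    if x > 1.0:
        return 1.0
    return 0.5 * x + 0.5
```

For every *x* the two degrees sum to one, so the pair forms a partition of unity over the input range.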

#### **2.1 Learning algorithm**

The learning process consists of identifying the consequence parameters *a<sub>i</sub>*, *b<sub>i</sub>* and *c<sub>i</sub>* for *i* = 1, …, 4. Let *y<sub>d</sub>* and *y* be respectively the desired and actual outputs of the system. In this work, the consequence parameters are adjusted by minimizing the following objective function:

$$J(k) = \frac{1}{2}(e)^2 \tag{11}$$

where *e* = *y<sub>d</sub>* − *y*.

In addition, let Φ*<sub>i</sub>* be the vector of parameters to be adjusted. Our objective is to find the parameters *a<sub>i</sub>*, *b<sub>i</sub>* and *c<sub>i</sub>* of the vector Φ*<sub>i</sub>* using the gradient descent method combined with the extended Kalman filter approach. This is equivalent to writing:

$$\Phi_i(k+1) = \Phi_i(k) - a(k)\frac{\partial J}{\partial \Phi_i} \tag{12}$$

We have:

$$\frac{\partial J}{\partial \Phi_i} = -e\frac{\partial y}{\partial \Phi_i} = -e\frac{\partial y}{\partial u}\frac{\partial u}{\partial \Phi_i} \tag{13}$$

From Eqs. (12) and (13), it follows:

$$
\Phi\_i(k+1) = \Phi\_i(k) + a(k)\frac{\partial y}{\partial u}\frac{\partial u}{\partial \Phi\_i}e \tag{14}
$$

In our case, *∂y*/*∂u* cannot be evaluated directly, but it can be estimated using the extended Kalman filter equations. Consequently, Eq. (14) can be written as:

$$
\Phi\_i(k+1) = \Phi\_i(k) + K'\Psi\_i e \tag{15}
$$

where

$$K' = a(k)\frac{\partial y}{\partial u} \tag{16}$$

$$\Psi_i = \frac{\partial u}{\partial \Phi_i} = \begin{bmatrix} \frac{\partial u}{\partial a_i} \\ \frac{\partial u}{\partial b_i} \\ \frac{\partial u}{\partial c_i} \end{bmatrix} \tag{17}$$

Eq. (15) can be identified with the extended Kalman filter equation:

$$\Phi\_i(k+1) = \Phi\_i(k) + K(k)e \tag{18}$$

where *K*(*k*) is the Kalman gain, defined as follows:

$$K(k) = \frac{P(k)H^T(k)}{H(k)P(k)H^T(k) + R(k)}\tag{19}$$

where *H*(*k*) is the Jacobian matrix (observation matrix of the system), *P*(*k*) is the covariance matrix of the estimation error, and *R*(*k*) is the covariance matrix of the process noise.

Taking *H*(*k*) = (Ψ*<sub>i</sub>*)<sup>*T*</sup>, *P*(*k*) = *λ*<sub>1</sub> and *R*(*k*) = *λ*<sub>2</sub>, the gain *K*(*k*) can be written as:

$$K(k) = \frac{\lambda\_1}{\left(\Psi\_i\right)^T \lambda\_1(\Psi\_i) + \lambda\_2} (\Psi\_i) = \frac{\lambda\_1}{\lambda\_1(\Psi\_i)^T(\Psi\_i) + \lambda\_2} (\Psi\_i) \tag{20}$$

Hence Eq. (18) reduces to:

$$\Phi\_i(k+1) = \Phi\_i(k) + \frac{\lambda\_1}{\lambda\_1(\Psi\_i)^T(\Psi\_i) + \lambda\_2}(\Psi\_i)e\tag{21}$$

By identification between Eqs. (15) and (21), we have:

$$K' = \frac{\lambda\_1}{\lambda\_1(\Psi\_i)^T(\Psi\_i) + \lambda\_2} \tag{22}$$

Finally, the vector of consequence parameters Φ*<sup>i</sup>* can be adjusted by the following relation:

$$
\Phi\_i(k+1) = \Phi\_i(k) + K'(\Psi\_i)e \tag{23}
$$
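The update of Eqs. (22) and (23) amounts to a single scalar-gain correction per sampling step; a minimal Python sketch (our own naming; the λ values passed in are illustrative tuning gains, not values from the chapter):

```python
import numpy as np

def update_consequents(phi, psi, e, lam1, lam2):
    """One step of Eqs. (22)-(23) for one rule.

    phi : parameter vector [a_i, b_i, c_i]
    psi : sensitivity vector du/dPhi_i of Eq. (17)
    e   : tracking error y_d - y
    """
    # Eq. (22): scalar gain derived from the extended Kalman filter
    k_prime = lam1 / (lam1 * psi @ psi + lam2)
    # Eq. (23): correct the consequence parameters
    return phi + k_prime * psi * e
```

Larger *λ*<sub>2</sub> damps the correction, so the ratio of the two gains sets the convergence rate.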

#### **2.2 Stability analysis of the control system**

From Eq. (23), for a very short sampling time *T<sub>e</sub>*, we can consider the relation:

$$\dot{\Phi}_i = \frac{\Phi_i(k+1) - \Phi_i(k)}{T_e} = \frac{K'(\Psi_i)e}{T_e} = K_1(\Psi_i)e \tag{24}$$

where *K*<sub>1</sub> = *K*′/*T<sub>e</sub>*. Hence:

$$
\dot{\Phi}\_i = K\_1(\Psi\_i)e\_u \tag{25}
$$

where *e<sub>u</sub>* = *K*<sub>1</sub>*e* is the error between the controller's desired output *u<sub>d</sub>* and actual output *u*.

Let Φ̃*<sub>i</sub>* = Φ*<sub>id</sub>* − Φ*<sub>i</sub>*, where Φ*<sub>i</sub>* is the vector of the consequence parameters and Φ*<sub>id</sub>* the vector of the desired consequence parameters. Then:

$$\dot{\tilde{\Phi}}_i = \dot{\Phi}_{id} - \dot{\Phi}_i = -(\Psi_i)e_u \tag{26}$$

For linear variation, the error *eu* is defined by [18]:

$$e_u = u_d - u = \sum_{i=1}^{4}\left((\Psi_i)^T \Phi_{id} - (\Psi_i)^T \Phi_i\right) = \sum_{i=1}^{4}\left((\Psi_i)^T(\Phi_{id} - \Phi_i)\right) = \sum_{i=1}^{4}\left((\Psi_i)^T \tilde{\Phi}_i\right) \tag{27}$$

Consider the following Lyapunov function [19–21]:

$$V = \frac{1}{2}\sum_{i=1}^{4}\left(\left(\tilde{\Phi}_i\right)^T\left(\tilde{\Phi}_i\right)\right) \tag{28}$$

Differentiating *V* with respect to time, we obtain [17]:

$$\dot{V} = \sum\_{i=1}^{4} \left( \left( \dot{\tilde{\Phi}}\_{i} \right)^{T} (\tilde{\Phi}\_{i}) \right) \tag{29}$$

From Eqs. (26), (27) and (29), we obtain:

$$\dot{V} = \sum_{i=1}^{4}\left(\left(-(\Psi_i)e_u\right)^T\left(\tilde{\Phi}_i\right)\right) = -(e_u)^T\sum_{i=1}^{4}\left((\Psi_i)^T\left(\tilde{\Phi}_i\right)\right) = -(e_u)^T(e_u) \tag{30}$$

Consequently, from Eq. (30), we find that *V̇* ≤ 0, so we conclude that the system is asymptotically stable in the sense of Lyapunov, according to LaSalle's theorem.

#### **3. Compensatory adaptive neural fuzzy inference system (CANFIS)**

The class of inference systems able to handle this type of analytic information in the conclusions of the inference rules was proposed by Sugeno and his colleagues.

As for our contribution here, it consists in adding a compensatory fuzzy part to adjust the consequence parameters and to dynamically optimize the adaptive fuzzy reasoning. In addition, ANFIS represents a class of adaptive networks that are functionally equivalent to a first-order Takagi-Sugeno fuzzy model. As before, taking a center-average defuzzifier mapping, the crisp value of the output *u* is given as:

$$u = \frac{\sum_{i=1}^{4}(a_i e + b_i \Delta e + c_i)w_i}{\sum_{i=1}^{4} w_i} \tag{31}$$

We consider the pessimistic and the optimistic operations, given respectively as follows:

$$z_i = w_i \tag{32}$$

$$m_i = [w_i]^{\frac{1}{2}} \tag{33}$$

By using these two operations, our contribution is to add the compensatory form formulated as [7]:

$$C_i(z_i, m_i, \gamma_i) = (z_i)^{1-\gamma_i}(m_i)^{\gamma_i} \tag{34}$$

where *γ<sub>i</sub>* ∈ [0, 1] is the compensatory degree. Finally, the crisp value of the compensatory neural-fuzzy inference is derived as [13, 14]:

$$u = \frac{\sum_{i=1}^{4}(a_i e + b_i \Delta e + c_i)[w_i]^{1-\frac{\gamma_i}{2}}}{\sum_{i=1}^{4}[w_i]^{1-\frac{\gamma_i}{2}}} \tag{35}$$

For simplicity, we define:

$$a\_i = 1 - \frac{\gamma\_i}{2} \tag{36}$$

Then we have:

$$u = \frac{\sum\_{i=1}^{4} (a\_i e + b\_i \Delta e + c\_i) [w\_i]^{a\_i}}{\sum\_{i=1}^{4} [w\_i]^{a\_i}} \tag{37}$$
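Equation (37) can be evaluated directly; a minimal Python sketch (our own naming — the exponent of Eq. (36) is called `alpha` here to avoid a clash with the consequent parameter *a<sub>i</sub>*):

```python
import numpy as np

def canfis_output(w, params, gamma, e, de):
    """Crisp compensatory output of Eq. (37).

    w      : firing strengths w_i of the four rules
    params : 4x3 array of consequence parameters [a_i, b_i, c_i]
    gamma  : compensatory degrees gamma_i in [0, 1]
    """
    alpha = 1.0 - gamma / 2.0            # exponent of Eq. (36)
    wc = w ** alpha                      # compensated firing strengths
    f = params @ np.array([e, de, 1.0])  # rule consequents
    return (f * wc).sum() / wc.sum()
```

For gamma = 0 the expression reduces to the plain weighted average of Eq. (31).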

The structure of the CANFIS controller with compensatory fuzzy for two input and one output, is shown by **Figure 3** [9].

#### **3.1 Learning algorithm**

Consider, as for ANFIS, a two-dimensional input data vector *x* = [*e*, Δ*e*] and a one-dimensional output data vector *u*<sub>2</sub>. In order to limit the computation time, we optimally adjust the consequence parameters and the compensatory degree by minimizing the following objective function:

**Figure 3.** *Structure of CANFIS controller.*

$$J(k) = \frac{1}{2}(y_d - y)^2 \tag{38}$$

where *y<sub>d</sub>* and *y* are respectively the desired and actual values of the system output. Now let Φ<sub>2*i*</sub>, for *i* = 1, …, 4, be the vector of update parameters. We aim to determine the vector Φ<sub>2*i*</sub> through the extended Kalman filter, which consists in linearizing the output around the control input at each sampling period. This is equivalent to writing [16, 19, 20]:

$$\frac{\partial J}{\partial \Phi_{2i}} = \frac{\partial J}{\partial u_2}\frac{\partial u_2}{\partial \Phi_{2i}} = -(y_d - y)\frac{\partial y}{\partial u_2}\frac{\partial u_2}{\partial \Phi_{2i}} = -K'\Psi_{2i}e \tag{39}$$

In which

$$
\Psi\_{2i} = \frac{\partial u\_2}{\partial \Phi\_{2i}}\tag{40}
$$

$$K' = \frac{\lambda_1}{\lambda_1 \Psi_{2i}^T \Psi_{2i} + \lambda_2} \tag{41}$$

where *λ*<sub>1</sub> and *λ*<sub>2</sub> are adaptation gains for varying the convergence rate. Further, to eliminate the constraint *γ<sub>i</sub>* ∈ [0, 1], we redefine *γ<sub>i</sub>* as follows [7]:

$$\gamma_i = \frac{(p_i)^2}{(p_i)^2 + (r_i)^2} \tag{42}$$

where *p<sub>i</sub>* and *r<sub>i</sub>* are update parameters such that *γ<sub>i</sub>* ∈ [0, 1]. Consequently, the vector of update parameters for CANFIS is given as (Φ<sub>2*i*</sub>)<sup>*T*</sup> = [*a<sub>i</sub>*, *b<sub>i</sub>*, *c<sub>i</sub>*, *p<sub>i</sub>*, *r<sub>i</sub>*]. According to this definition, we have [17]:

$$\frac{\partial \boldsymbol{u}\_2}{\partial \mathbf{a}\_i} = \frac{\boldsymbol{e}[\boldsymbol{w}\_i]^{a\_i}}{\sum\_{i=1}^4 [\boldsymbol{w}\_i]^{a\_i}} \tag{43}$$

$$\frac{\partial \boldsymbol{u}\_2}{\partial \boldsymbol{b}\_i} = \frac{\Delta \boldsymbol{e} [\boldsymbol{w}\_i]^{a\_i}}{\sum\_{i=1}^4 [\boldsymbol{w}\_i]^{a\_i}} \tag{44}$$

$$\frac{\partial \boldsymbol{u}\_2}{\partial \boldsymbol{c}\_i} = \frac{[\boldsymbol{w}\_i]^{a\_i}}{\sum\_{i=1}^4 [\boldsymbol{w}\_i]^{a\_i}} \tag{45}$$

$$\frac{\partial u\_2}{\partial \gamma\_i} = -\frac{1}{2} \left[ \sum\_{i=1}^4 (a\_i e + b\_i \Delta e + c\_i) \right] \frac{z\_i \ln \left( w\_i \right)}{\sum\_{i=1}^4 z\_i} \tag{46}$$

$$\frac{\partial u_2}{\partial p_i} = \left\{\frac{2p_i (r_i)^2}{\left((p_i)^2 + (r_i)^2\right)^2}\right\}\frac{\partial u_2}{\partial \gamma_i} \tag{47}$$

$$\frac{\partial u_2}{\partial r_i} = -\left\{\frac{2r_i (p_i)^2}{\left((p_i)^2 + (r_i)^2\right)^2}\right\}\frac{\partial u_2}{\partial \gamma_i} \tag{48}$$
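The derivatives of the reparameterised compensatory degree of Eq. (42) with respect to *p<sub>i</sub>* and *r<sub>i</sub>* can be checked numerically by central finite differences; a small Python sketch (our own naming):

```python
def gamma(p, r):
    """Compensatory degree of Eq. (42)."""
    return p**2 / (p**2 + r**2)

def dgamma_dp(p, r):
    """Analytic derivative of gamma with respect to p."""
    return 2.0 * p * r**2 / (p**2 + r**2)**2

def dgamma_dr(p, r):
    """Analytic derivative of gamma with respect to r."""
    return -2.0 * r * p**2 / (p**2 + r**2)**2

def check(p, r, h=1e-6):
    """Return absolute gaps between analytic and central-difference values."""
    num_p = (gamma(p + h, r) - gamma(p - h, r)) / (2.0 * h)
    num_r = (gamma(p, r + h) - gamma(p, r - h)) / (2.0 * h)
    return abs(num_p - dgamma_dp(p, r)), abs(num_r - dgamma_dr(p, r))
```

Increasing *p<sub>i</sub>* pushes *γ<sub>i</sub>* towards 1 (optimistic), while increasing *r<sub>i</sub>* pushes it towards 0 (pessimistic).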

Finally, the vector of parameters Φ2*<sup>i</sup>* is adjusted using the following equation:

$$
\Phi\_{2i}(k+1) = \Phi\_{2i}(k) + K^{\prime}\Psi\_{2i}e\tag{49}
$$

where

$$(\Psi_{2i})^T = \left[\frac{\partial u_2}{\partial a_i}, \frac{\partial u_2}{\partial b_i}, \frac{\partial u_2}{\partial c_i}, \frac{\partial u_2}{\partial p_i}, \frac{\partial u_2}{\partial r_i}\right]$$
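The consequence-parameter components of Ψ<sub>2*i*</sub> (Eqs. (43)–(45)) and the correction step of Eq. (49) can be sketched in Python (our own naming; the full Ψ<sub>2*i*</sub> would also stack the *p<sub>i</sub>* and *r<sub>i</sub>* components):

```python
import numpy as np

def psi_abc(w, gamma, e, de, i):
    """Partials of u2 w.r.t. a_i, b_i, c_i (Eqs. 43-45) for rule i."""
    wc = w ** (1.0 - gamma / 2.0)   # compensated firing strengths
    return np.array([e, de, 1.0]) * wc[i] / wc.sum()

def kalman_step(phi2, psi2, e, k_prime):
    """Eq. (49): extended-Kalman-style correction of the update vector."""
    return phi2 + k_prime * psi2 * e
```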

#### **3.2 Stability analysis of the control system**

From Eq. (49), for a very short sampling time *T<sub>e</sub>*, we can consider the relation:

$$\dot{\Phi}_{2i} = \frac{\Phi_{2i}(k+1) - \Phi_{2i}(k)}{T_e} = \frac{K'(\Psi_{2i})e}{T_e} = K_1(\Psi_{2i})e \tag{50}$$

where *K*<sub>1</sub> = *K*′/*T<sub>e</sub>*. Hence:

$$
\dot{\Phi}\_{2i} = K\_1(\Psi\_{2i}) e\_u \tag{51}
$$

where *e<sub>u</sub>* = *K*<sub>1</sub>*e* is the error between the controller's desired output *u<sub>d</sub>* and actual output *u*.

Let Φ̃<sub>2*i*</sub> = Φ<sub>2*id*</sub> − Φ<sub>2*i*</sub>, where Φ<sub>2*i*</sub> is the vector of the consequence parameters and Φ<sub>2*id*</sub> the vector of the desired consequence parameters. Then:

$$\dot{\tilde{\Phi}}_{2i} = \dot{\Phi}_{2id} - \dot{\Phi}_{2i} = -(\Psi_{2i})e_u \tag{52}$$

For linear variation, the error *eu* is defined by:

$$e_u = u_d - u = \sum_{i=1}^{4}\left((\Psi_{2i})^T \Phi_{2id} - (\Psi_{2i})^T \Phi_{2i}\right) = \sum_{i=1}^{4}\left((\Psi_{2i})^T(\Phi_{2id} - \Phi_{2i})\right) = \sum_{i=1}^{4}\left((\Psi_{2i})^T \tilde{\Phi}_{2i}\right) \tag{53}$$

Consider the following Lyapunov function:

$$V = \frac{1}{2} \sum\_{i=1}^{4} \left( \left( \tilde{\Phi}\_{2i} \right)^{T} \left( \tilde{\Phi}\_{2i} \right) \right) \tag{54}$$

Differentiating *V* with respect to time, we obtain:

$$\dot{V} = \sum_{i=1}^{4}\left(\left(\dot{\tilde{\Phi}}_{2i}\right)^T\left(\tilde{\Phi}_{2i}\right)\right) \tag{55}$$

From Eqs. (52), (53) and (55), we obtain:

$$\dot{V} = \sum_{i=1}^{4}\left(\left(-(\Psi_{2i})e_u\right)^T\left(\tilde{\Phi}_{2i}\right)\right) = -(e_u)^T\sum_{i=1}^{4}\left((\Psi_{2i})^T\left(\tilde{\Phi}_{2i}\right)\right) = -(e_u)^T(e_u) \tag{56}$$

Consequently, from Eq. (56), we find that *V̇* ≤ 0, so we conclude that the system is asymptotically stable in the sense of Lyapunov, according to LaSalle's theorem.

#### **4. Simulation results and interpretation**

We applied in simulation the neural-fuzzy control equipped with the compensator described above to a two-joint robot, in an environment described by the following joint trajectories:

$$q_{ir} = \frac{\pi}{6}(1 - \cos(6t)) \tag{57}$$


with *i* = 1, 2.
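The reference trajectory starts at rest and swings each joint between 0 and π/3 rad; a minimal Python sketch (our own naming, reading the cosine argument of Eq. (57) as 6*t*):

```python
import numpy as np

def q_ref(t):
    """Joint reference of Eq. (57), identical for both joints (i = 1, 2)."""
    q = (np.pi / 6.0) * (1.0 - np.cos(6.0 * t))
    return np.array([q, q])
```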

The compact form of the dynamic model relating to the two-joint robot is given as follows:

$$\tau = \underbrace{\begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}}_{M(q)} \ddot{q} + \underbrace{\begin{bmatrix} C_{11}(q,\dot{q}) & C_{12}(q,\dot{q}) \\ C_{21}(q,\dot{q}) & C_{22}(q,\dot{q}) \end{bmatrix}}_{C(q,\dot{q})} \dot{q} + \underbrace{\begin{bmatrix} G_1(q) \\ G_2(q) \end{bmatrix}}_{G(q)} \tag{58}$$

where:

*q* = [*q*<sub>1</sub>; *q*<sub>2</sub>]: vector of joint position variables;
*q̇* = [*q̇*<sub>1</sub>; *q̇*<sub>2</sub>]: vector of joint velocity variables;
*q̈* = [*q̈*<sub>1</sub>; *q̈*<sub>2</sub>]: vector of joint acceleration variables;
*τ* = [*τ*<sub>1</sub>; *τ*<sub>2</sub>]: vector of torques applied to the joints;
*M*(*q*): inertia matrix;
*C*(*q*, *q̇*): matrix of centripetal and Coriolis terms;
*G*(*q*): vector of gravitational effects.

$$M_{11}(q) = m_1 l_{c1}^2 + m_2 l_1^2 + m_2 l_{c2}^2 + 2m_2 l_1 l_{c2}\cos(q_2) + I_1 + I_2$$

$$M\_{12}(q) = M\_{21}(q) = m\_2l\_{c2}^2 + m\_2l\_1l\_{c2}\cos\left(q\_2\right) + I\_2$$

$$M\_{22}(q) = m\_2l\_{c2}^2 + I\_2$$

$$C\_{11}(q, \dot{q}) = -m\_2l\_1l\_{c2}\sin\left(q\_2\right)\dot{q}\_2$$

$$C\_{12}(q, \dot{q}) = -m\_2l\_1l\_{c2}\sin\left(q\_2\right)\left[\dot{q}\_1 + \dot{q}\_2\right]$$

$$C_{21}(q, \dot{q}) = m_2 l_1 l_{c2}\sin(q_2)\dot{q}_1$$

$$C\_{22}(q, \dot{q}) = 0$$

$$G\_1(q) = [m\_1l\_{c1} + m\_2l\_1]\text{gsin}\left(q\_1\right) + m\_2gl\_{c2}\sin\left(q\_1 + q\_2\right)$$

$$G\_2(q) = m\_2gl\_{c2}\sin\left(q\_1 + q\_2\right)$$

The parameters relating to the dynamic model of this robot are given in the following table (**Table 1**):


#### **Table 1.** *Robot parameters.*
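The dynamic model of Eq. (58) can be assembled as follows; a Python sketch of the standard two-link arm model with placeholder parameters (the mass, length and inertia values below are illustrative, not the values of Table 1):

```python
import numpy as np

# Placeholder parameters (illustrative, not the values of Table 1)
m1, m2 = 1.0, 1.0        # link masses [kg]
l1 = 1.0                 # length of link 1 [m]
lc1, lc2 = 0.5, 0.5      # centre-of-mass distances [m]
I1, I2 = 0.1, 0.1        # link inertias [kg.m^2]
g = 9.81                 # gravity [m/s^2]

def torque(q, dq, ddq):
    """Eq. (58): tau = M(q) ddq + C(q, dq) dq + G(q)."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    m12 = m2 * (lc2**2 + l1 * lc2 * c2) + I2
    # Inertia matrix M(q)
    M = np.array([
        [m1 * lc1**2 + m2 * (l1**2 + lc2**2 + 2.0 * l1 * lc2 * c2) + I1 + I2, m12],
        [m12, m2 * lc2**2 + I2]])
    # Centripetal/Coriolis matrix C(q, dq)
    C = np.array([
        [-m2 * l1 * lc2 * s2 * dq[1], -m2 * l1 * lc2 * s2 * (dq[0] + dq[1])],
        [ m2 * l1 * lc2 * s2 * dq[0], 0.0]])
    # Gravity vector G(q)
    G = np.array([
        (m1 * lc1 + m2 * l1) * g * np.sin(q[0]) + m2 * g * lc2 * np.sin(q[0] + q[1]),
        m2 * g * lc2 * np.sin(q[0] + q[1])])
    return M @ ddq + C @ dq + G
```

At rest in the downward configuration the gravity terms vanish, so the required torque is zero.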

**Figure 4.** *Motion errors tracking and torques behavior with neural-fuzzy without disturbances.*

**Figure 5.** *Motion errors tracking and torques behavior with compensatory neural-fuzzy without disturbances.*

*Compensatory Adaptive Neural Fuzzy Inference System DOI: http://dx.doi.org/10.5772/intechopen.96050*

**Figures 4** and **5** show the time evolution of the position and velocity errors and of the torque applied to each joint of the manipulator robot, with the neural-fuzzy controller and the compensatory neural-fuzzy controller respectively. Through these graphics, we can see that both the neural-fuzzy and the compensatory neural-fuzzy controllers provide good tracking performance.

On the one hand, we can observe that the tracking errors remain bounded by low values, and the dynamics of the position errors vary little compared to those of the velocity errors. This is physically explained by the fact that the position depends only on the environment, while the velocity depends on the Jacobian matrix in addition to the environment. On the other hand, the command trajectories are smooth, which facilitates their implementation. This is achieved through an appropriate choice of the parameters of the control structures.

In order to test the adaptation capacity and robustness of the proposed approach, we added in our simulation, at time *t* = 5 s, a combined friction and external torque disturbance for each joint, given as follows:

$$\tau_{di} = 38.3\dot{q}_i + 18.9\cos(q_i) \tag{59}$$

The results obtained are illustrated by **Figures 6** and **7**. We note that the tracking errors show peaks, especially at the moment the disturbances are introduced, which are quickly rejected by the neural-fuzzy and compensatory neural-fuzzy controllers. We can therefore conclude that the tracking performance is only slightly affected by these disturbances, owing to the low sensitivity of the proposed control strategy to disturbances in the input data.

**Figure 6.** *Motion errors tracking and torques behavior with neural –fuzzy with disturbances.*

**Figure 7.** *Motion errors tracking and torques behavior with compensatory neural –fuzzy with disturbances.*
