Recurrent Neural Networks and Soft Computing

The optimal I-term control is designed for the plant model augmented with the I-term:

$$X\_e(k+1) = A\_e X\_e(k) + B\_e U(k) \tag{59}$$

where Xe = [X | V]<sup>T</sup> is a state vector with dimension (L + N) and:

$$A\_e = \begin{bmatrix} A & 0 \\ CA & I \end{bmatrix}, \qquad B\_e = \begin{bmatrix} B \\ CB \end{bmatrix} \tag{60}$$

The optimal I-term control is given by:

$$U(k) = -\left[ B\_e^T P\_e B\_e + R \right]^{-1} B\_e^T P\_e A\_e X\_e(k) \tag{61}$$

where Pe is the solution of the discrete Riccati equation:

$$P\_e(k+1) = A\_e^T \left[ P\_e(k) - P\_e(k) B\_e \left( B\_e^T P\_e(k) B\_e + R \right)^{-1} B\_e^T P\_e(k) \right] A\_e + Q \tag{62}$$

Fig. 6. Block diagram of the real-time optimal control with I-term, containing RTNN identifier and optimal controller

The optimal control given above is rather complicated, and it is used here only for the purpose of comparison.

**7. Simulation results**

This section presents graphical and numerical simulation results of the system identification and of the direct, indirect (SM), and optimal control, with and without I-term. For lack of space, graphical results are given only for the X1 variable; the graphical results for the other variables exhibit similar behavior. The identification results are obtained by an RTNN identifier with BP or L-M learning. For the sake of comparison, identification results are given for both learning algorithms.

#### **7.1 Simulation results of the system identification**
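Before turning to the simulation results, the optimal I-term design of Eqs. (59)-(62) can be illustrated numerically. The sketch below iterates the discrete Riccati equation to a steady-state Pe and then forms the feedback of Eq. (61). The matrices A, B, C and the weights Q, R are small illustrative stand-ins, not the bioprocess model.

```python
import numpy as np

# Illustrative stand-ins for the plant matrices; NOT the bioprocess model.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])

# Augmented matrices as in Eq. (60): the state X is extended with the
# integral term V, giving Xe = [X | V]^T.
Ae = np.block([[A, np.zeros((2, 1))],
               [C @ A, np.eye(1)]])
Be = np.vstack([B, C @ B])

Q = np.eye(3)          # state weighting matrix
R = 0.1 * np.eye(1)    # control weighting matrix

# Iterate the discrete Riccati equation (62) until Pe reaches steady state.
Pe = np.eye(3)
for _ in range(1000):
    S = Be.T @ Pe @ Be + R
    Pe = Ae.T @ (Pe - Pe @ Be @ np.linalg.solve(S, Be.T @ Pe)) @ Ae + Q

# Optimal I-term feedback of Eq. (61):
# U(k) = -[Be^T Pe Be + R]^{-1} Be^T Pe Ae Xe(k)
K = np.linalg.solve(Be.T @ Pe @ Be + R, Be.T @ Pe @ Ae)
U = -K @ np.array([[1.0], [0.5], [0.0]])   # control for an example state Xe(k)
```

In the scheme of Fig. 6, the model matrices would be supplied on-line by the RTNN identifier, so this Riccati computation is repeated as the identified model changes.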

The RTNN-1 performed real-time neural system identification (parameter and state estimation) of 18 output plant variables: 4 variables for each collocation point z=0.2H, z=0.4H, z=0.6H, z=0.8H of the fixed bed, namely X1 (acidogenic bacteria), X2 (methanogenic bacteria), S1 (chemical oxygen demand), and S2 (volatile fatty acids), plus the following variables in the recirculation tank: S1T (chemical oxygen demand) and S2T (volatile fatty acids). For lack of space, some graphical results (see Fig. 7-9) are shown only for the X1 variable. The topology of the RTNN-1 is (2, 20, 18), and the activation functions are tanh(.) in both layers. The learning parameters of the L-M learning are as follows: the forgetting factor is 1, the regularization constant is ρ = 0.001, and the initial value of the P matrix is an identity matrix of dimension 420x420. For the BP learning algorithm, the learning constants are chosen as 0 and 0.4. The simulation results of the RTNN-1 system identification are obtained on-line during 400 days with a step of 0.5 day in four measurement points, using BP and L-M learning. The identification inputs used are a combination of three sinusoids:

$$S\_{1,in} = 0.5 + 0.02\sin\left(\frac{\pi}{100}t\right) + 0.1\sin\left(\frac{3\pi}{100}t\right) + 0.04\cos\left(\frac{\pi}{100}t\right) \tag{63}$$

$$S\_{2,in} = 0.5 + 0.1\sin\left(\frac{\pi}{100}t\right) + 0.1\sin\left(\frac{5\pi}{100}t\right) + 0.1\cos\left(\frac{8\pi}{100}t\right) \tag{64}$$
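For concreteness, the two identification inputs of Eqs. (63)-(64) can be generated as in the following NumPy sketch; the 0.5-day sampling over 400 days matches the simulation run described above, and the time unit (days) is assumed from the surrounding text.

```python
import numpy as np

# Identification input signals of Eqs. (63)-(64), sampled as in the
# simulation: 400 days with a step of 0.5 day (800 samples).
t = np.arange(0.0, 400.0, 0.5)

S1_in = (0.5
         + 0.02 * np.sin(np.pi * t / 100.0)
         + 0.10 * np.sin(3.0 * np.pi * t / 100.0)
         + 0.04 * np.cos(np.pi * t / 100.0))

S2_in = (0.5
         + 0.10 * np.sin(np.pi * t / 100.0)
         + 0.10 * np.sin(5.0 * np.pi * t / 100.0)
         + 0.10 * np.cos(8.0 * np.pi * t / 100.0))
```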

Fig. 7. Graphical simulation results of the neural identification of the plant output X1 vs. RTNN output in four measurement points for the total time of L-M learning: a) z=0.2H, b) z=0.4H, c) z=0.6H, d) z=0.8H

Centralized Distributed Parameter Bioprocess Identification and I-Term Control Using Recurrent Neural Network Model

Fig. 8. Graphical simulation results of the neural identification of the plant output X1 vs. RTNN output in four measurement points for the beginning of L-M learning: a) z=0.2H, b) z=0.4H, c) z=0.6H, d) z=0.8H

Fig. 9. Three dimensional plot of the neural identification results of the plant output X1 in four measurement points of L-M learning: z=0.2H, z=0.4H, z=0.6H, z=0.8H

Fig. 10. Graphical simulation results of the neural identification of the plant output X1 vs. RTNN output in four measurement points for the total time of BP learning: a) z=0.2H, b) z=0.4H, c) z=0.6H, d) z=0.8H

Fig. 11. Graphical simulation results of the neural identification of the plant output X1 vs. RTNN output in four measurement points for the beginning of BP learning: a) z=0.2H, b) z=0.4H, c) z=0.6H, d) z=0.8H

Fig. 12. Three dimensional plot of the neural identification results of the plant output X1 in four measurement points of BP learning: z=0.2H, z=0.4H, z=0.6H, z=0.8H

Table 2 and Table 3 compare the final Mean Squared Error (MSE%) results of the L-M and BP neural identification of the plant variables for the fixed bed and the recirculation tank. Note that the form of the plant process variables in the different measurement points is equal, but the amplitude is different, depending on the point position.

| Collocation point | X1 | X2 | S1 / S1T | S2 / S2T |
|---|---|---|---|---|
| z=0.2 | 5.0843E-7 | 1.8141E-6 | 1.3510E-4 | 2.5476E-4 |
| z=0.4 | 3.1428E-7 | 1.3934E-6 | 8.3839E-5 | 1.8217E-4 |
| z=0.6 | 1.9617E-7 | 9.6976E-7 | 5.2303E-5 | 1.2200E-4 |
| z=0.8 | 1.2669E-7 | 6.6515E-7 | 3.3940E-5 | 8.1905E-5 |
| Recirculation tank | | | 2.6318E-5 | 6.3791E-5 |

Table 2. MSE of the centralized RTNN approximation of the bioprocess output variables in the collocation points, using the L-M RTNN learning

| Collocation point | X1 | X2 | S1 / S1T | S2 / S2T |
|---|---|---|---|---|
| z=0.2 | 5.9981E-7 | 2.1006E-6 | 1.5901E-4 | 2.8282E-4 |
| z=0.4 | 3.7111E-7 | 1.6192E-6 | 9.8240E-5 | 2.0506E-4 |
| z=0.6 | 2.3145E-7 | 1.1308E-6 | 6.1119E-5 | 1.3908E-4 |
| z=0.8 | 1.4997E-7 | 7.7771E-7 | 3.9595E-5 | 9.4061E-5 |
| Recirculation tank | | | 3.0694E-5 | 7.3404E-5 |

Table 3. MSE of the centralized RTNN approximation of the bioprocess output variables in the collocation points, using the BP RTNN learning

The graphical and numerical results of the centralized RTNN identification (see Fig. 7-12 and Tables 2, 3) show good RTNN convergence and precise plant output tracking (the MSE is 2.5476E-4 for the L-M and 2.8282E-4 for the BP RTNN learning in the worst case).

#### **7.2 Simulation results of the centralized direct adaptive neural control with I-term**

The real-time DANC (see Fig. 4) contains a neural identifier RTNN-1 and a neural controller RTNN-2 with topology (40, 10, 2). Both RTNN-1 and RTNN-2 are trained by the L-M algorithm with the following parameters: RTNN-1 (forgetting factor 1, ρ = 0.0001, Po = 10I with dimension 420x420); RTNN-2 (forgetting factor 1, ρ = 0.01, Po = 0.8I with dimension 430x430). The simulation results of the DANC are obtained on-line during 1000 days with a step of 0.1 day. The control signals are shown in Fig. 13. Fig. 14-16 compare the plant output X1 with the reference signal in different measurement points. The form of the set points (a train of pulses with random amplitude) of the variable X1 is equal in the different measurement points, but the amplitude is different, depending on the point position. This means that the plant has a different signal amplification in each measurement point, which needs to be taken into consideration.

Fig. 13. Plant input control signals generated by the I-term DANC: a) Sin1, and b) Sin2

| Collocation point | X1 | X2 | S1 / S1T | S2 / S2T |
|---|---|---|---|---|
| z=0.2 | 2.2920E-8 | 1.3366E-7 | 5.9740E-6 | 1.7568E-5 |
| z=0.4 | 1.4517E-8 | 8.0704E-8 | 3.4003E-6 | 9.3272E-6 |
| z=0.6 | 8.5061E-9 | 4.3891E-8 | 1.9213E-6 | 4.9682E-6 |
| z=0.8 | 4.4770E-9 | 2.1242E-8 | 1.2789E-6 | 3.1322E-6 |
| Recirculation tank | | | 1.0067E-6 | 2.2073E-6 |

Table 4. MSE of the centralized I-term DANC of the bioprocess output variables in the collocation points, using the L-M RTNN learning

The graphical results of the I-term DANC given in Fig. 14-16 show a smooth exponential behavior. It can also be seen that the L-M learning converges fast and that the I-term removes the constant offset Of and the plant uncertainties. The obtained numerical results of the final MSE of the L-M learning (see Table 4) have small values (1.7568E-5 in the worst case).
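The MSE figures reported per variable and per collocation point in Tables 2-4 are mean squared errors between the plant output and the RTNN output over the run. A minimal sketch of that computation follows; the signals here are synthetic placeholders, not the chapter's data.

```python
import numpy as np

def mse(y_plant, y_model):
    """Mean squared error between a plant output and a model (RTNN) output."""
    y_plant = np.asarray(y_plant, dtype=float)
    y_model = np.asarray(y_model, dtype=float)
    return float(np.mean((y_plant - y_model) ** 2))

# Synthetic example: a "plant" trajectory and a model output with small,
# reproducible noise standing in for the residual identification error.
t = np.arange(0.0, 400.0, 0.5)
y_plant = np.sin(np.pi * t / 100.0)
y_model = y_plant + 0.001 * np.random.default_rng(0).standard_normal(t.size)

err = mse(y_plant, y_model)
```

For a table like Table 2, this computation would simply be repeated for each output variable at each collocation point.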
