**2. Synchronous computation of tracking mean field dynamics**

Under asynchronous updating, at each time step a numerical simulation selects one processing element and refines its mean activation while the mean activations of the others are held fixed. Let *ψ<sub>i</sub>*(⟨*s<sub>i</sub>*⟩) denote *ψ* with ⟨*s<sub>j</sub>*⟩, *j* ≠ *i*, fixed,

$$\psi\_{i}(\langle s\_{i}\rangle) = h\_{i}\langle s\_{i}\rangle + c\_{i} - \left[\frac{1+\langle s\_{i}\rangle}{2}\log\frac{1+\langle s\_{i}\rangle}{2} + \frac{1-\langle s\_{i}\rangle}{2}\log\frac{1-\langle s\_{i}\rangle}{2}\right],\tag{8}$$

where *β* = 1 is considered. ⟨*s<sub>i</sub>*⟩ = tanh(*h<sub>i</sub>*) minimizes equation (8). *ψ<sub>i</sub>*(⟨*s<sub>i</sub>*⟩) approximates the one-dimensional function obtained by cutting the functional surface of *ψ* along the direction of ⟨*s<sub>i</sub>*⟩ for fixed ⟨*s<sub>j</sub>*⟩, *j* ≠ *i*. Under asynchronous updating, the coefficient *h<sub>i</sub>* of the linear term in equation (8) always holds the instance determined by the most recently updated mean activations. Asynchronous updating is represented by

$$\langle s\_i \rangle \leftarrow f\left(\sum\_{j \neq i} w\_{ij} \left\langle s\_j \right\rangle + c\_i\right). \tag{9}$$
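As a concrete sketch of rule (9) with *f* = tanh, the following minimal Python illustration (the small symmetric weight matrix and biases are arbitrary toy values of our choosing, not from the text) revises one mean activation per time step while the others stay fixed, and checks the fixed-point condition at the end:

```python
import math

# Asynchronous mean-field updating, eq. (9): one unit is refined per step
# while the other mean activations are held fixed; f = tanh (beta = 1).
w = [[0.0, 0.3, -0.2],
     [0.3, 0.0, 0.1],
     [-0.2, 0.1, 0.0]]          # symmetric couplings, zero diagonal (toy values)
c = [0.1, -0.2, 0.05]           # external fields (toy values)

v = [0.0, 0.0, 0.0]             # mean activations <s_i>
for step in range(300):
    i = step % 3                # sweep units cyclically, one per time step
    h = sum(w[i][j] * v[j] for j in range(3) if j != i) + c[i]
    v[i] = math.tanh(h)         # eq. (9)

# At a fixed point every unit satisfies <s_i> = tanh(h_i).
residual = max(abs(v[i] - math.tanh(sum(w[i][j] * v[j]
               for j in range(3) if j != i) + c[i])) for i in range(3))
print(residual)  # ~0 once the dynamics has relaxed
```

The weak couplings make the update map a contraction, so the cyclic sweep relaxes to the unique mean-field fixed point.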

http://dx.doi.org/10.5772/57217

Tracking Mean Field Dynamics by Synchronous Computations of Recurrent Multilayer Perceptrons

The asynchronous cutting and approximating strategy is very different from synchronous updating that directly combines equations (2) and (3) for all *i*, such as

$$
\langle \mathbf{s} \rangle \leftarrow \tanh(W \langle \mathbf{s} \rangle + \mathbf{c}) \tag{10}
$$

where *W* collects all *wij*. Under synchronous updating, every *hi* uses the copy formed by the mean activations determined synchronously at the previous step. Numerical simulations have shown that synchronous updating based on equation (10) is infeasible for relaxing mean field dynamics.
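The failure mode can be seen in a small sketch (a two-unit Python toy with a strong negative coupling chosen by us for illustration; it is not an example from the text): naive synchronous updating of all units as in equation (10) falls into a period-2 oscillation, while asynchronous updating (9) of the same system settles to a fixed point.

```python
import math

# Two units with antiferromagnetic coupling w12 = w21 = -2, zero bias.
W = [[0.0, -2.0], [-2.0, 0.0]]

def sync_step(v):
    # eq. (10): every h_i uses the previous step's copy of all activations
    return [math.tanh(sum(W[i][j] * v[j] for j in range(2))) for i in range(2)]

def async_sweep(v):
    # eq. (9): units are revised one at a time, reusing fresh values
    v = list(v)
    for i in range(2):
        v[i] = math.tanh(sum(W[i][j] * v[j] for j in range(2) if j != i))
    return v

vs = [0.5, 0.5]
va = [0.5, 0.5]
for _ in range(100):
    vs = sync_step(vs)
    va = async_sweep(va)

fa = async_sweep(va)     # asynchronous updating has reached a fixed point
vs1 = sync_step(vs)      # synchronous updating is still moving ...
vs2 = sync_step(vs1)     # ... and returns to where it was: a period-2 cycle
print(max(abs(a - b) for a, b in zip(va, fa)))    # ~0
print(max(abs(a - b) for a, b in zip(vs, vs1)))   # large jump each step
print(max(abs(a - b) for a, b in zip(vs, vs2)))   # ~0 every second step
```

Starting from a symmetric state, the synchronous orbit flips sign forever, which is exactly the kind of non-relaxing behavior that motivates the delayed synchronous scheme developed below.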

### **2.1. Linear system**

Let *f* be a linear function; the asynchronous updating rule (9) is then equivalent to

$$x\_{i} \leftarrow \sum\_{j \neq i} a\_{ij} x\_{j} + c\_{i} \tag{11}$$

where *A* = [*aij*] is an *N* × *N* matrix with *aii* = 0 for all *i* = 1, ··· , *N*. To facilitate the presentation, we first give an example with *N* = 4. Figure 1 shows the data flow of asynchronous updating (11), where directed edges indicate the latest mean activations employed for updating. At each step, asynchronous updating revises only one mean activation. Without loss of generality, consecutive steps of updating mean activations can be listed as follows,

$$\begin{array}{l} x\_1[k+1] = 0 + a\_{12}x\_2[k] + a\_{13}x\_3[k] + a\_{14}x\_4[k] + c\_1 \\ x\_2[k+2] = a\_{21}x\_1[k+1] + 0 + a\_{23}x\_3[k] + a\_{24}x\_4[k] + c\_2 \\ x\_3[k+3] = a\_{31}x\_1[k+1] + a\_{32}x\_2[k+2] + 0 + a\_{34}x\_4[k] + c\_3 \\ x\_4[k+4] = a\_{41}x\_1[k+1] + a\_{42}x\_2[k+2] + a\_{43}x\_3[k+3] + 0 + c\_4 \end{array} \tag{12}$$

The system (12) is translated to synchronous updating by replacing *k* with *k* − 1, *k* − 2, *k* − 3 and *k* − 4, respectively, in the four rows, which yields equation (13)

$$\begin{array}{l} x\_{1}[k] = 0 + a\_{12}x\_{2}[k-1] + a\_{13}x\_{3}[k-1] + a\_{14}x\_{4}[k-1] + c\_{1} \\ x\_{2}[k] = a\_{21}x\_{1}[k-1] + 0 + a\_{23}x\_{3}[k-2] + a\_{24}x\_{4}[k-2] + c\_{2} \\ x\_{3}[k] = a\_{31}x\_{1}[k-2] + a\_{32}x\_{2}[k-1] + 0 + a\_{34}x\_{4}[k-3] + c\_{3} \\ x\_{4}[k] = a\_{41}x\_{1}[k-3] + a\_{42}x\_{2}[k-2] + a\_{43}x\_{3}[k-1] + 0 + c\_{4} \end{array} \tag{13}$$

**Figure 1.** Asynchronous update.


for *k* ≥ 3. The matrix form is expressed by

$$\mathbf{x}[k] = B\mathbf{u}[k] + \mathbf{c} \tag{14}$$

where **x**[*k*] = (*x*1[*k*], ··· , *x*4[*k*])*<sup>T</sup>* and

$$\mathbf{u}[k] = \begin{pmatrix} \mathbf{x}[k-1] \\ \mathbf{x}[k-2] \\ \mathbf{x}[k-3] \end{pmatrix},$$

*T* denotes transpose and

$$B = \begin{bmatrix} 0 & a\_{12} & a\_{13} & a\_{14} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ a\_{21} & 0 & 0 & 0 & 0 & 0 & a\_{23} & a\_{24} & 0 & 0 & 0 & 0 \\ 0 & a\_{32} & 0 & 0 & a\_{31} & 0 & 0 & 0 & 0 & 0 & 0 & a\_{34} \\ 0 & 0 & a\_{43} & 0 & 0 & a\_{42} & 0 & 0 & a\_{41} & 0 & 0 & 0 \end{bmatrix}.$$
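To make the block structure of *B* concrete, the short Python check below (our own illustration with random toy values, not code from the chapter) places each *a<sub>nm</sub>* into the submatrix whose index equals the delay of *x<sub>m</sub>* in row *n* of equation (13), and verifies that *B***u**[*k*] + **c** reproduces a direct, row-by-row evaluation of (13):

```python
import random

random.seed(0)
N = 4
# A = [a_ij] with zero diagonal, as in eq. (11); arbitrary small toy values
A = [[0.0 if i == j else random.uniform(-0.2, 0.2) for j in range(N)]
     for i in range(N)]
c = [random.uniform(-1, 1) for _ in range(N)]

# Arbitrary histories x[k-1], x[k-2], x[k-3] for the check
h1, h2, h3 = ([random.uniform(-1, 1) for _ in range(N)] for _ in range(3))

# Build B = [B1 B2 B3]: a_nm goes into submatrix B_d where d is the delay
# of x_m in row n of eq. (13): d = n - m for m < n, and d = n + 1 for m > n
# (0-based row/column indices).
B = [[[0.0] * N for _ in range(N)] for _ in range(3)]
for n in range(N):
    for m in range(N):
        if m != n:
            d = (n - m) if m < n else (n + 1)
            B[d - 1][n][m] = A[n][m]

u = h1 + h2 + h3                 # u[k] stacks the three history vectors
x_sync = [sum(B[d][n][m] * u[d * N + m] for d in range(3) for m in range(N))
          + c[n] for n in range(N)]

# Direct transcription of the four rows of eq. (13)
x_direct = [
    A[0][1]*h1[1] + A[0][2]*h1[2] + A[0][3]*h1[3] + c[0],
    A[1][0]*h1[0] + A[1][2]*h2[2] + A[1][3]*h2[3] + c[1],
    A[2][0]*h2[0] + A[2][1]*h1[1] + A[2][3]*h3[3] + c[2],
    A[3][0]*h3[0] + A[3][1]*h2[1] + A[3][2]*h1[2] + c[3],
]
print(max(abs(p - q) for p, q in zip(x_sync, x_direct)))  # ~0
```

The same delay rule generalizes to any *N* and is what the submatrices {*Bn*} in Figures 3 and 4 visualize.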


**Figure 2.** A linear recurrent system for synchronous computations. The triangle denotes time delay.

For initialization, **x**[0] is copied three times to form **u**[*N*] where *N* = 4. Figure 2 shows a recurrent linear network for synchronous computations of equation (14). The circular connection transmits the current output to the input layer at the upcoming step. In general, **u**[*k*] is given by

$$\mathbf{u}[k] = \begin{pmatrix} \mathbf{x}[k-1] \\ \mathbf{x}[k-2] \\ \vdots \\ \mathbf{x}[k-N+1] \end{pmatrix}.$$

which concatenates *N* − 1 consecutive steps of mean activations, and *B* = [*B*1 *B*2 ··· *BN*−1] is composed of *N* − 1 submatrices. Figures 3 and 4 show the structure of the matrices {*Bn*}, *n* = 1, ··· , *N* − 1, where distinct colors represent nonzero entries. Figure 5 shows the flow chart of creating matrix *B*, and Figure 6 shows the flow chart of simulating asynchronous updating by linear recurrent computations, where repmat is a MATLAB built-in function for matrix replication.
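A plausible Python transcription of this simulation loop (the chapter's own implementation is the MATLAB flow chart of Figure 6 with its function `myformB`; the loop below is our reconstruction under the stated delay pattern) initializes **u** by replicating **x**[0], as repmat does, then iterates **x**[*k*] = *B***u**[*k*] + **c** while shifting the history window:

```python
import random

random.seed(1)
N = 4
A = [[0.0 if i == j else random.uniform(-0.2, 0.2) for j in range(N)]
     for i in range(N)]
c = [random.uniform(-1, 1) for _ in range(N)]

def form_B(A, N):
    """Place a_nm into submatrix B_d, where d is the delay of x_m in row n
    of equation (13): d = n - m for m < n, and d = n + 1 for m > n
    (0-based indices). Analogue of the chapter's myformB flow chart."""
    B = [[[0.0] * N for _ in range(N)] for _ in range(N - 1)]
    for n in range(N):
        for m in range(N):
            if m != n:
                d = (n - m) if m < n else (n + 1)
                B[d - 1][n][m] = A[n][m]
    return B

B = form_B(A, N)

x = [random.uniform(-1, 1) for _ in range(N)]
u = [list(x) for _ in range(N - 1)]      # repmat-style: x[0] copied N-1 times

for _ in range(500):
    x = [sum(B[d][n][m] * u[d][m] for d in range(N - 1) for m in range(N))
         + c[n] for n in range(N)]
    u = [x] + u[:-1]                     # shift the history window

# The limit satisfies x = Ax + c, the fixed point of the original system (11).
residual = max(abs(x[n] - (sum(A[n][m] * x[m] for m in range(N)) + c[n]))
               for n in range(N))
print(residual)  # ~0
```

Because the toy weights give a sup-norm contraction, the delayed synchronous recurrence converges to the same fixed point as the asynchronous rule it emulates.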

### **2.2. Mean field dynamics**

Asynchronous updating (11) can be regarded as a special case of asynchronous updating (9) of mean field dynamics. Let *f* denote a post-nonlinear function and *vi* = ⟨*si*⟩ in general. Synchronous parallel updating is explored for emulating asynchronous updating (9) to track mean field dynamics.

The asynchronous updating rule is rewritten as follows,

$$v\_i \leftarrow f\left(\beta \left[\sum\_{j \neq i} w\_{ij} v\_j + c\_i\right]\right) \equiv g(v\_1, \cdots, v\_N, c\_i)$$


**Figure 3.** The representation of the matrices {*Bk*}, *k* = 1, ··· , 7, for *N* = 8.


**Figure 4.** A diagram for illustrating the structure of {*Bk*}, *k* = 1, ··· , 7.


**Figure 5.** The flow chart of forming *B*.

**Figure 6.** The flow chart of solving linear system by synchronous parallel computations.

where *vi* = ⟨*si*⟩. Let **v**[0] = (*v*1[0], *v*2[0], ··· , *vN*[0]) denote the initial mean configuration. The leave-one-out asynchronous updating is expressed as

$$\begin{array}{l} v\_1[k+1] = g(v\_2[k], v\_3[k], v\_4[k], \cdots, v\_N[k], c\_1) \\ v\_2[k+2] = g(v\_1[k+1], v\_3[k], v\_4[k], \cdots, v\_N[k], c\_2) \\ v\_3[k+3] = g(v\_1[k+1], v\_2[k+2], v\_4[k], \cdots, v\_N[k], c\_3) \\ \qquad\vdots \\ v\_n[k+n] = g(v\_1[k+1], v\_2[k+2], \cdots, v\_{n-1}[k+n-1], v\_{n+1}[k], \cdots, v\_N[k], c\_n) \\ \qquad\vdots \\ v\_N[k+N] = g(v\_1[k+1], v\_2[k+2], v\_3[k+3], \cdots, v\_{N-1}[k+N-1], c\_N) \end{array} \tag{15}$$

where *vi*[*k*] is the instance of *vi* at the *<sup>k</sup>*th step for *<sup>k</sup>* ≥ 0 and *ci* is a constant. The mean activation of each processing element is asynchronously updated. The system (15) is translated to synchronous updating by replacing index *k* + *n* with *k* in the row of updating *vn*

$$\begin{array}{l} v\_1[k] = g(v\_2[k-1], v\_3[k-1], v\_4[k-1], \cdots, v\_N[k-1], c\_1) \\ v\_2[k] = g(v\_1[k-1], v\_3[k-2], v\_4[k-2], \cdots, v\_N[k-2], c\_2) \\ v\_3[k] = g(v\_1[k-2], v\_2[k-1], v\_4[k-3], \cdots, v\_N[k-3], c\_3) \\ \qquad\vdots \\ v\_n[k] = g(v\_1[k-n+1], v\_2[k-n+2], \cdots, v\_{n-1}[k-1], v\_{n+1}[k-n], \cdots, v\_N[k-n], c\_n) \\ \qquad\vdots \\ v\_N[k] = g(v\_1[k-N+1], v\_2[k-N+2], v\_3[k-N+3], \cdots, v\_{N-1}[k-1], c\_N) \end{array} \tag{16}$$

where *k* ≥ *N*.
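As a numerical sanity check (our own Python sketch with a small random network; *β*, the weight range, and the iteration counts are arbitrary choices, not values from the text), the delayed synchronous system (16) and the leave-one-out asynchronous updating (15) relax to the same fixed point when the couplings are weak enough to give a contraction:

```python
import math
import random

random.seed(2)
N = 5
beta = 0.7
w = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        w[i][j] = w[j][i] = random.uniform(-0.25, 0.25)  # symmetric, zero diagonal
c = [random.uniform(-0.5, 0.5) for _ in range(N)]

def g(v, i):
    # g(v_1, ..., v_N, c_i) = tanh(beta [sum_{j != i} w_ij v_j + c_i])
    return math.tanh(beta * (sum(w[i][j] * v[j]
                                 for j in range(N) if j != i) + c[i]))

# Leave-one-out asynchronous updating, system (15): one unit per step
va = [0.0] * N
for _ in range(200):
    for i in range(N):
        va[i] = g(va, i)

# Delayed synchronous updating, system (16): row n reads v_m at delay
# n - m (m < n) or n + 1 (m > n), 0-based indices
hist = [[0.0] * N for _ in range(N - 1)]   # hist[d] = v[k-1-d]
for _ in range(400):
    vk = []
    for n in range(N):
        s = c[n]
        for m in range(N):
            if m != n:
                d = (n - m) if m < n else (n + 1)
                s += w[n][m] * hist[d - 1][m]
        vk.append(math.tanh(beta * s))
    hist = [vk] + hist[:-1]
vs = hist[0]

print(max(abs(a - b) for a, b in zip(va, vs)))  # ~0: same fixed point
```

The two trajectories are not identical step by step (the synchronous system is a pipelined rearrangement of the asynchronous one), but both settle on the same mean-field fixed point.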


The matrix *B* can be determined by the flow chart in Figure 5, translating the mean field dynamics to the following form

$$\mathbf{v}[k] = \tanh\left(\beta B\mathbf{u}[k]\right) \tag{17}$$

where

$$\mathbf{u}[k] = \begin{pmatrix} \mathbf{v}[k-1] \\ \mathbf{v}[k-2] \\ \vdots \\ \mathbf{v}[k-N+1] \end{pmatrix}.$$

and

$$\mathbf{v}[k] = \left(v\_1[k], v\_2[k], \dots, v\_N[k]\right)^T$$

denotes the mean configuration at the *k*th step.


**Figure 7.** Nonlinear recurrent multilayer perceptrons.

**Figure 8.** The flow chart of synchronous evolutionary simulations of mean field dynamics.

The structure of MIMO recurrent multilayer perceptrons is shown in Figure 7. The derived recurrent multilayer perceptrons track mean field dynamics by parallel and synchronous computations. As in the previous work [6], an annealing process is employed to schedule *β* from sufficiently small to large values for problem solving. Figure 8 shows the flow chart of simulating synchronous and parallel computations of recurrent multilayer perceptrons for tracking mean field dynamics.
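The annealing schedule can be sketched as follows (a Python toy on a two-unit ferromagnetic pair of our own choosing; the schedule constants and the inner relaxation count are illustrative, not the chapter's): *β* grows from a small value, and the updates are repeated until the saturation measure *χ*(**v**) = mean(**v**²) exceeds a threshold, as in the flow chart of Figure 8.

```python
import math
import random

random.seed(3)
# A two-unit ferromagnetic toy network (our own choice): at small beta the
# mean activations shrink toward zero, and as beta grows they saturate
# toward +1/-1, mimicking the annealed relaxation of Figure 8.
W = [[0.0, 1.0], [1.0, 0.0]]
v = [random.uniform(-0.1, 0.1) for _ in range(2)]

beta = 0.9
chi = 0.0
for _ in range(200):                     # annealing rounds (safety-bounded)
    for _ in range(10):                  # relax at the current temperature
        for i in range(2):
            v[i] = math.tanh(beta * sum(W[i][j] * v[j]
                                        for j in range(2) if j != i))
    chi = sum(x * x for x in v) / 2      # saturation measure chi(v)
    if chi > 0.98:                       # halting test, as in the flow chart
        break
    beta *= 1.1                          # schedule beta upward (1/beta downward)

print(chi)  # exceeds 0.98 once the activations have saturated
```

Below the critical *β* the activations stay near zero; past it they grow and finally saturate, at which point the halting test fires and both units share the same sign, reflecting the ferromagnetic coupling.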


**Figure 9.** Errors of asynchronous update and synchronous update along time steps.

### **3. Numerical simulation**


### **3.1. Solving linear systems**

The linear recurrent relation (14) is verified by numerical simulations for solving the following linear system,

$$\begin{array}{l} x\_1 = 0 + \frac{1}{10}x\_2 - \frac{1}{5}x\_3 + 0x\_4 + \frac{3}{5} \\\\ x\_2 = \frac{1}{11}x\_1 + 0 + \frac{1}{11}x\_3 - \frac{3}{11}x\_4 + \frac{25}{11} \\\\ x\_3 = -\frac{1}{5}x\_1 + \frac{1}{10}x\_2 + 0 - \frac{1}{10}x\_4 - \frac{11}{10} \\\\ x\_4 = 0 - \frac{3}{8}x\_2 + \frac{1}{8}x\_3 + 0 + \frac{15}{8} \end{array}$$

The flow charts in Figures 5 and 6 are implemented in MATLAB code. The initial value **x**[0] = [*x*1[0], *x*2[0], *x*3[0], *x*4[0]] is sampled uniformly from the hypercube [−1, 1]<sup>4</sup>. The experiment simultaneously simulates asynchronous updating (11) and synchronous updating (14) of the linear recurrence. Both asynchronous and synchronous updating attain the numerical solution [1.0404, 1.991, −1.2067, 0.9775]*<sup>T</sup>*. Figure 9(a) shows the errors of asynchronous updating and synchronous updating along time steps, and (b) shows the errors after the 25th step. The numerical results show that the error of asynchronous updating converges more slowly than that of synchronous updating, which illustrates the advantage of synchronous updating. When parallel computations such as vectorized code are employed, synchronous updating is more efficient than asynchronous updating for numerical simulations.
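This experiment can be reproduced in a few lines (our Python reimplementation; the chapter used MATLAB): both the asynchronous rule (11) and the delayed synchronous recurrence realized by (13)/(14), applied to the linear system above, reach the quoted solution.

```python
import random

random.seed(4)
# Coefficients of the linear system x = Ax + c, rows as printed above
A = [[0, 1/10, -1/5, 0],
     [1/11, 0, 1/11, -3/11],
     [-1/5, 1/10, 0, -1/10],
     [0, -3/8, 1/8, 0]]
c = [3/5, 25/11, -11/10, 15/8]
N = 4

x0 = [random.uniform(-1, 1) for _ in range(N)]   # sampled from [-1, 1]^4

# Asynchronous updating (11): revise one component at a time
xa = list(x0)
for _ in range(100):
    for i in range(N):
        xa[i] = sum(A[i][j] * xa[j] for j in range(N)) + c[i]

# Synchronous updating (14), realized through the per-row delays of eq. (13);
# the history u is initialized by copying x[0]
hist = [list(x0) for _ in range(N - 1)]          # hist[d] = x[k-1-d]
xs = list(x0)
for _ in range(200):
    xs = []
    for n in range(N):
        s = c[n]
        for m in range(N):
            if m != n:
                d = (n - m) if m < n else (n + 1)  # delay of x_m in row n
                s += A[n][m] * hist[d - 1][m]
        xs.append(s)
    hist = [xs] + hist[:-1]

expected = [1.0404, 1.991, -1.2067, 0.9775]
print([round(t, 4) for t in xa])  # close to expected
print([round(t, 4) for t in xs])  # close to expected
```

The iteration matrix has maximum absolute row sum 0.5, so both schemes are contractions and converge to the same solution from any start.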

### **3.2. Graph bisection problem**

The graph bisection problem [4] asks to partition *N* nodes into two equal-size sets such that the number of edges crossing the two sets is minimized. Let *si* ∈ {−1, 1} denote the membership of the *i*th node in the two non-overlapping sets and *Tij* denote the connectivity, where

$$T\_{ij} = \begin{cases} 1, \text{if nodes } i \text{ and } j \text{ are connected} \\ 0, \text{otherwise} \end{cases}$$

*si* denotes the partition of node *i* into the two disjoint subsets: node *i* is in one subset if *si* = 1 and belongs to the other if *si* = −1. As in [4], *E*(**s**) for problem solving is given by

$$E(\mathbf{s}) = -\frac{1}{2} \sum\_{i=1}^{N} \sum\_{j \neq i}^{N} T\_{ij} \mathbf{s}\_i \mathbf{s}\_j + \frac{a}{2} \left( \sum\_{i=1}^{N} s\_i \right)^2 \tag{18}$$

where *a* is the Lagrange multiplier which forces ∑*si* to zero. *Tijsisj* is zero if *Tij* = 0. Otherwise, it is 1 if nodes *i* and *j* belong to the same subset and −1 if node *i* belongs to one set and node *j* to the other. Therefore, the first term quantifies the number of net edges crossing the two subsets, and the last term enforces an equal-size cut. As in Appendix A, *E*(**s**) can be rewritten as

$$E(\mathbf{s}) = -\frac{1}{2} \sum\_{i=1}^{N} \sum\_{j \neq i}^{N} W\_{ij} s\_i s\_j \tag{19}$$

where *Wij* = *Tij* − *a* and *Wii* = 0.

We further explore the performance of synchronous updating by annealed recurrent multilayer perceptrons for graph bisection. In our simulations, each connection *Tij* between nodes *i* and *j* is set to one if a uniform random number within (0, 1) is less than 0.2, and to zero otherwise. The parameter *a* is set to 2. The halting condition is *χ*(**v**) > 0.99, where

$$\chi(\mathbf{v}) = \frac{1}{N} \sum\_{i=1}^{N} v\_i^2.$$

The temperature-like parameter *β* is always scheduled from sufficiently low to high values. Figure 10 shows the convergence of annealed asynchronous updating (9) and annealed synchronous updating (17) for tracking mean field dynamics of solving a 100-node graph bisection problem, where the blue and red curves respectively show the change of the stability and 1/*β* along time steps. Figure 11 shows the change of cutsize and free energy by the blue and red curves, respectively. The histograms of cutsize obtained by 50 executions of annealed asynchronous updating and annealed synchronous updating are plotted in Figure 12, where the mean cutsize of annealed synchronous updating is 361.84, comparable to the 358.5 of annealed asynchronous updating.

**Figure 10.** The change of the stability and 1/*β* for solving the graph bisection problem by synchronous update and asynchronous update.

**Figure 11.** The change of cutsize and free energy for solving the graph bisection problem by synchronous update and asynchronous update.

**4. Parallel and distributed processes of tracking mean field dynamics of sparse connectivity**

This section discusses the case of sparse interconnection among processing units, where a processing unit connects only with processing units in a small neighborhood. Sparsely interconnected processing units are partitioned into *K* clusters such that the cutting size of interconnections crossing distinct clusters is minimized. This formulates a typical problem of *K*-set partition of a sparse graph. Mean field dynamics for *K*-set graph partition has been proposed in [6]. As argued previously, parallel and synchronous computations by recurrent multilayer perceptrons can be obtained for tracking mean field dynamics of resolving *K*-set graph partition. Let {*Sk*}, *k* = 1, ··· , *K*, be the partitioned *K* clusters of sparsely interconnected
