**1. Introduction**


Mean field dynamics have been extensively applied to organizing neural networks in computational neuroscience since Hopfield pioneered collective decision making by interconnected processing elements for combinatorial optimization [1–6] and memory association [7, 8]. Both the nonlinear transfer functions and the synapses in a Hopfield neural network are a consequence of mean field dynamics, which characterize the mean configuration of a large-scale physical system at thermal equilibrium in statistical mechanics. In the past decades, mean field dynamics have been extensively applied to deriving interactive neural dynamics for solving complex tasks, such as combinatorial optimization [4, 6, 9], self-organization [10], clustering analysis [11, 12], independent component analysis [13], and regression [14, 15].

Mean field equations characterize feasible configurations for problem solving. Let *si* ∈ {−1, 1} denote a binary random variable modeling a stochastic two-alternative processing element, and let **s** = {*si*}*i* represent a configuration for problem solving. The feasibility of **s** for the problem at hand is inversely quantified by an energy function *E*(**s**); minimizing *E*(**s**) with respect to **s** amounts to seeking the optimal solution. Under the Boltzmann assumption, the joint probability of all *si* is proportional to exp(−*βE*(**s**)), where *β* denotes the inverse of a temperature-like parameter. As in previous works [4–6], the Kullback-Leibler divergence between the product of marginal probabilities and the joint probability of all *si* induces a tractable free energy function *ψ* that depends on the expectation of *si*, denoted by ⟨*si*⟩, for all *i*.
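For concreteness, the Boltzmann assumption can be illustrated on a toy three-unit system. The quadratic energy and the particular weights below are assumptions for this sketch, not the chapter's model: enumerating all 2³ configurations gives joint probabilities proportional to exp(−*βE*(**s**)) and the exact marginal expectations ⟨*si*⟩.

```python
# Toy sketch: Boltzmann probabilities and exact mean activations <s_i>
# for a 3-unit system with an assumed quadratic energy function.
import itertools
import math

W = [[0.0, 1.0, -0.5],
     [1.0, 0.0, 0.3],
     [-0.5, 0.3, 0.0]]          # symmetric synapses w_ij (toy values)
c = [0.2, -0.1, 0.0]            # external biases c_i (toy values)
beta = 1.0                      # inverse of the temperature-like parameter

def energy(s):
    # E(s) = -1/2 * sum_ij w_ij s_i s_j - sum_i c_i s_i
    quad = sum(W[i][j] * s[i] * s[j] for i in range(3) for j in range(3))
    return -0.5 * quad - sum(c[i] * s[i] for i in range(3))

# Joint probability of s is proportional to exp(-beta * E(s)).
configs = list(itertools.product([-1, 1], repeat=3))
weights = [math.exp(-beta * energy(s)) for s in configs]
Z = sum(weights)                # partition function (normalizing constant)

# Exact marginal expectations <s_i> = sum_s s_i * P(s)
means = [sum(s[i] * w for s, w in zip(configs, weights)) / Z for i in range(3)]
print(means)
```

Exhaustive enumeration is only feasible for tiny systems; the mean field equations below approximate these expectations without summing over all 2^N configurations.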

The following mean field dynamics exactly characterize the saddle point of a typical tractable free energy function,

$$u\_{i} = -\frac{\partial E(\langle \mathbf{s} \rangle)}{\partial \,\langle s\_{i} \rangle} \tag{1}$$

$$
\langle s\_i \rangle = f(u\_i) \equiv \tanh(\beta u\_i) \tag{2}
$$

©2012 Wu, Ban and Wu, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License.

where *ui* denotes an external field, *f* ≡ tanh is a sigmoid-like transfer function, and ⟨*si*⟩ denotes the mean activation. In previous works [4][6], *E* is quadratic and *ui* is a weighted sum of the mean activations other than ⟨*si*⟩, such as

$$u\_{i} = \sum\_{j \neq i} w\_{ij} \left\langle s\_{j} \right\rangle + c\_{i} \tag{3}$$


where *wij* denotes the synapse that connects neural processing elements *i* and *j*. For fixed *β*, equation (2) defines the transfer function of interconnected processing elements and equation (3) sketches the synapses. The realized information processes are distributed, with the computational features of fault tolerance and collective decision making. All interconnected processing elements in a Hopfield neural network operate asynchronously to seek a stable configuration under an annealing process [6] that carefully schedules *β* from sufficiently small to large values.

At each intermediate *β*, a stable configuration is the result of minimizing the mean energy function against maximizing the entropy, emulating thermal equilibrium in statistical mechanics. At the end of the annealing process, by equation (2), ⟨*si*⟩ ∈ {−1, 1} and the mean configuration ⟨**s**⟩ is a vector of *N* binary values, well representing a feasible solution for problem solving. Empirical results in previous works [4][6] have extensively shown that the physical-like annealing process guarantees effectiveness and reliability in seeking the global or near-global minimum of *E*(**s**) for problem solving. In previous works [15–18], mean field dynamics have been extended to multi-state Potts modeling and applied to unsupervised and supervised learning of neural networks for self-organization, independent component analysis, function approximation, and discriminant analysis.

However, from the perspective of numerical simulations, asynchronous operation of interconnected processing elements means one-by-one sequential updating of neural variables. It is more efficient to simulate synchronous and parallel updating of neural variables by vector codes. Multilayer perceptrons, or Adalines, have been organized for parallel and synchronous processes. Significant computational features include synchronous data transmission and parallel signal processing through multilayer perceptrons. A network of multilayer perceptrons is typically composed of input, hidden, and output layers, as well as interconnections among consecutive layers. The input **x** ∈ *R<sup>d</sup>* transmits through interconnections to form external fields,

$$h = A\mathbf{x} + \mathbf{c},\tag{4}$$

and the nonlinear transfer function translates *h* to activations of hidden units,

$$\mathbf{v} = F(h) = [f(h\_1), \dots, f(h\_M)]^T,\tag{5}$$

which is multiplied by a matrix of posterior weights, denoted by *R*, to form the network output

$$\mathbf{y} = R\mathbf{v}.\tag{6}$$

Equations (4)-(6) describe synchronous data transmission and parallel signal processing, by which it takes only three time clocks to translate **x** to **y**.

A recurrent network of multilayer perceptrons is further equipped with circular connections from the output layer to the input layer. Through these feedback connections, the current output becomes the network input at the upcoming time step. Let *R* be an identity matrix. Setting **x** to **y***n* and **y** to **y***n*+1 leads to the following recursive function realized by recurrent multilayer perceptrons,

$$\mathbf{y}\_{n+1} = F(A\mathbf{y}\_n) \tag{7}$$

Since perceptrons and adalines perform post-nonlinear projections, the organized multilayer neural network realizes a high-dimensional nonlinear mapping from the input domain to the output range, which has been shown to be significant for solving complex tasks beyond traditional linear systems. Recurrent multilayer perceptrons perform parallel and synchronous computations for realizing the behavior of a MIMO (multiple-input multiple-output) recurrent relation or characterizing nonlinear autoregression of time series. Recurrent multilayer perceptrons have been applied to nonlinear autoregressive modeling for chaotic time series prediction [19] and financial time series [20].

This work applies recurrent multilayer perceptrons to tracking mean field dynamics by synchronous and parallel computations. A systematic approach is proposed for translating the mean field dynamics (1) and (2) to the nonlinear recursive function (7) such that recurrent multilayer perceptrons can track the saddle point of *ψ* by parallel and synchronous computations. The strategy is to introduce time delays and auxiliary variables for expanding the local memories that store individual states, and to translate loosely or densely coupled first-order mean field equations to a system of post-nonlinear recursive functions, which can be evaluated directly by iterative synchronous computations of recurrent multilayer perceptrons.

Section 2 applies parallel and synchronous computations of recurrent multilayer perceptrons to tracking mean field dynamics. Asynchronous updating for tracking linear dynamics and mean field dynamics is translated to equivalent synchronous updating. Section 3 applies the transformation to derive synchronous updating for tracking mean field dynamics to solve the graph bisection problem and verifies the proposed approach by numerical simulations. Section 4 further presents a hybrid of asynchronous and synchronous processes for tracking large-scale mean field dynamics of sparse connectivity for problem solving.

**2. Synchronous computation of tracking mean field dynamics**

By asynchronous updating, at each time step numerical simulations select one processing element and refine its mean activation under fixed mean activations of the others. Let *ψi*(⟨*si*⟩) denote *ψ* with fixed ⟨*sj*⟩, *j* ≠ *i*,

$$\psi\_i(\langle s\_i \rangle) = h\_i \langle s\_i \rangle + c\_i - \left[ \frac{1 + \langle s\_i \rangle}{2} \log \frac{1 + \langle s\_i \rangle}{2} + \frac{1 - \langle s\_i \rangle}{2} \log \frac{1 - \langle s\_i \rangle}{2} \right], \tag{8}$$
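As a minimal illustration of asynchronous updating, the coordinate-wise refinement of equations (2) and (3) under an annealing schedule on *β* can be sketched as follows; the weights, biases, and schedule are toy assumptions for this sketch, not the chapter's simulation setup.

```python
# Minimal sketch of asynchronous mean-field updating, eqs (2)-(3):
# one unit at a time, under an annealing schedule on beta (toy weights).
import math
import random

W = [[0.0, 1.0, -0.5],
     [1.0, 0.0, 0.3],
     [-0.5, 0.3, 0.0]]          # synapses w_ij (toy values, zero diagonal)
c = [0.2, -0.1, 0.0]            # external biases c_i (toy values)
N = 3

random.seed(0)
m = [random.uniform(-0.1, 0.1) for _ in range(N)]   # mean activations <s_i>

for beta in [0.1 * 2 ** k for k in range(10)]:      # schedule: small -> large
    for _ in range(50):                             # updates at this temperature
        i = random.randrange(N)                     # select one element only
        u = sum(W[i][j] * m[j] for j in range(N) if j != i) + c[i]   # eq (3)
        m[i] = math.tanh(beta * u)                                   # eq (2)

print([round(x, 3) for x in m])
```

At the end of the schedule the mean activations saturate toward ±1, yielding a near-binary configuration, which is the sequential one-by-one updating that the synchronous formulation of this chapter is designed to replace.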

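Correspondingly, the synchronous tracking of equation (7) can be sketched as a recurrent iteration **y***n*+1 = *F*(*A***y***n*), updating all units at once. Everything below is an assumption of the sketch: the bias **c** is absorbed into an augmented constant component of the state, and nonnegative couplings are chosen so the synchronous iteration converges monotonically from zero rather than oscillating.

```python
# Sketch of synchronous tracking, eq (7): y_{n+1} = F(A y_n), with the bias
# absorbed into an augmented constant state component (toy weights assumed).
import math

W = [[0.0, 0.5, 0.2],
     [0.5, 0.0, 0.4],
     [0.2, 0.4, 0.0]]           # nonnegative couplings: monotone convergence
c = [0.3, 0.1, 0.2]             # nonnegative biases (toy values)
beta = 2.0
N = 3

# Augmented matrix A = [[beta*W, beta*c], [0, 1]]; the last row keeps the
# constant component fixed at 1, so A*y reproduces beta*(W m + c) for the units.
A = [[beta * W[i][j] for j in range(N)] + [beta * c[i]] for i in range(N)]
A.append([0.0] * N + [1.0])

def F(h):
    # tanh on the N unit components, identity on the constant component
    return [math.tanh(v) for v in h[:N]] + [h[N]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

y = [0.0, 0.0, 0.0, 1.0]        # initial state with augmented constant 1
for _ in range(100):            # synchronous updates: all units at once
    y = F(matvec(A, y))         # eq (7)

print([round(v, 3) for v in y[:N]])
```

Each iteration refreshes every mean activation in parallel from the previous state, in contrast to the one-by-one asynchronous scheme; after enough iterations the state settles at a fixed point of *F*(*A***y**).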