220 Recurrent Neural Networks and Soft Computing

For the unification of two cycles *R*<sub>1</sub> and *R*<sub>2</sub>, it is sufficient that the graph of the system has a cycle ABCD of length 4 such that the edge AB belongs to the cycle *R*<sub>1</sub> and the edge CD belongs to the cycle *R*<sub>2</sub> (Fig. 13).

The cycles *R*<sub>1</sub> and *R*<sub>2</sub> can be united into one cycle by the following algorithm:

1. Find the cycle ABCD possessing the above-noted property.

2. Eliminate the edge AB from the cycle and number the nodes of the cycle *R*<sub>1</sub> consecutively so that the node A is assigned the number 0 and the node B is assigned the number *L*<sub>1</sub> − 1, where *L*<sub>1</sub> is the length of the cycle *R*<sub>1</sub>. Include the edge BC into the cycle.

3. Eliminate the edge CD and number the nodes of the cycle *R*<sub>2</sub> consecutively so that the node C is assigned the number *L*<sub>1</sub> and the node D is assigned the number *L*<sub>1</sub> + *L*<sub>2</sub> − 1, where *L*<sub>2</sub> is the length of the cycle *R*<sub>2</sub>. Include the edge DA into the cycle. The unified cycle of length *L*<sub>1</sub> + *L*<sub>2</sub> is constructed.

The cycles *R*<sub>1</sub> and *R*<sub>2</sub>, as well as the resulting cycle, are marked by bold lines in Fig. 12. The edges not included in these cycles are marked by dotted lines.

For comparison, Table 6 gives the times (in seconds) of constructing Hamiltonian cycles in a 2D-mesh by the initial algorithm (*t*<sub>1</sub>) and by the algorithm with splitting of the cycle construction (*t*<sub>2</sub>) with the number of subgraphs *k* = 2. The times are measured for *p* = *n*. The cycle construction time can be reduced further by constructing the cycles in the subgraphs in parallel.

| *n* | 16 | 64 | 256 | 1024 |
|---|---|---|---|---|
| *t*<sub>1</sub> | 0.02 | 0.23 | 9.62 | 595.8 |
| *t*<sub>2</sub> | 0.01 | 0.03 | 2.5 | 156.19 |

Table 6. Comparison of cycle construction times in a 2D-mesh

The proposed approach can be applied to constructing Hamiltonian cycles in arbitrary unweighted undirected graphs without multiple edges and loops.

We can use the splitting method to construct Hamilton cycles in three-dimensional tori, because a three-dimensional torus can be considered as a connected set of two-dimensional tori. So the Hamilton cycle in a three-dimensional torus can be constructed as follows:

1. Construct the Hamilton cycles in all two-dimensional tori of the three-dimensional torus.

2. Unify the constructed cycles by the above unifying algorithm.

**4. Conclusion**

The problem of mapping graphs of parallel programs onto graphs of distributed computer systems by recurrent neural networks is formulated. The parameter values that guarantee the absence of incorrect solutions are determined experimentally. Optimal solutions are found for mapping a "line" graph onto a two-dimensional torus by introducing into the Lyapunov function penalty coefficients for program graph edges that are not mapped onto system graph edges.
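The penalty idea above can be sketched as a small Lyapunov-style cost function. This is a minimal illustration, not the chapter's exact formulation: the function name, the encoding of a mapping as a list (rather than a neuron matrix), and the coefficient values are all assumptions.

```python
# Minimal sketch of a Lyapunov-style cost for a candidate mapping.
# x[i] is the processor assigned to program vertex i.

def mapping_energy(x, prog_edges, sys_edges, b=1.0, c=2.0):
    """b weights the injectivity constraint; c is the penalty
    coefficient for program edges not mapped onto system edges."""
    # Constraint term: each processor should host at most one vertex.
    counts = {}
    for p in x:
        counts[p] = counts.get(p, 0) + 1
    overload = sum(k - 1 for k in counts.values() if k > 1)
    # Penalty term: program-graph edges whose images are not
    # system-graph edges (edges treated as undirected).
    adj = set(sys_edges) | {(q, p) for (p, q) in sys_edges}
    unmapped = sum(1 for (i, j) in prog_edges if (x[i], x[j]) not in adj)
    return b * overload + c * unmapped

# A 4-node "line" graph mapped onto a 4-cycle:
line = [(0, 1), (1, 2), (2, 3)]
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(mapping_energy([0, 1, 2, 3], line, ring))  # ideal mapping: 0.0
print(mapping_energy([0, 2, 1, 3], line, ring))  # two unmapped edges: 4.0
```

A gradient-descent network minimizing such a function drives `unmapped` toward zero, which is the role the penalty coefficients play in the Lyapunov function of the text.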

To increase the probability of finding an optimal mapping, a method of splitting the mapping is proposed. The essence of the method is the reduction of the solution matrix to a block-diagonal form. The Wang recurrent neural network, which converges faster than the Hopfield network, is used to exclude incorrect solutions of the problem of mapping the line graph onto a three-dimensional torus.
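The block-diagonal splitting can be illustrated with a short sketch; the helper names and the uniform block size are assumptions made for illustration only.

```python
# Illustrative sketch of the splitting method: the n-by-n solution
# matrix is restricted to k diagonal blocks, so the network optimizes
# k independent subproblems instead of one large one.

def block_diagonal_mask(n, k):
    """0/1 mask of allowed solution-matrix entries: program vertex i
    may be placed on processor p only if both fall in the same block
    (assumes k divides n)."""
    size = n // k
    return [[1 if i // size == p // size else 0 for p in range(n)]
            for i in range(n)]

def free_neurons(mask):
    """Number of matrix entries (neurons) left to optimize."""
    return sum(sum(row) for row in mask)

# Splitting an 8-vertex mapping into k = 2 blocks shrinks the search
# from 8*8 = 64 neurons to 2 * (4*4) = 32, i.e. n*n/k in general.
print(free_neurons(block_diagonal_mask(8, 1)))  # 64
print(free_neurons(block_diagonal_mask(8, 2)))  # 32
```

The k-fold reduction in the number of active neurons is what makes incorrect solutions easier to exclude: each sub-mapping is a smaller, better-conditioned optimization problem.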

An efficient algorithm based on the Wang recurrent neural network and the WTA ("winner takes all") principle is proposed for the construction of Hamiltonian cycles (ring program graphs) in regular graphs (2D- and 3D-tori and hypercubes) of distributed computer systems, as well as in 2D-tori disturbed by removing an arbitrary edge (an edge defect). The neural network parameters for the construction of Hamiltonian cycles, and of suboptimal cycles with a length close to that of Hamiltonian ones, are determined.

The resulting algorithm allows us to construct optimal Hamilton cycles in 3D-tori with up to 32768 nodes. This algorithm is relevant to modern supercomputers that use the 3D-torus topology for organizing inter-processor communications in the parallel solution of complicated problems.

The recurrent neural network (Hopfield and Wang) is a universal technique for the solution of optimization problems, but it is a local optimization technique, and additional modifications (for example, penalty coefficients and splitting) are needed to improve its scalability.

The proposed algorithm for the construction of Hamiltonian cycles is less universal but more powerful: it implements a global optimization approach and is therefore much more scalable than the traditional recurrent neural networks.

The traditional topology-aware mappings (Parhami, 2002; Yu, Chung & Moreira, 2006; Balaji, Gupta, Vishnu & Beckman, 2011) are constructed specifically for regular graphs (hypercubes and tori) of distributed computer systems. The proposed neural network algorithms are more universal and can be used for mapping program graphs onto graphs of distributed computer systems with defects of edges and nodes.

Optimization of Mapping Graphs of Parallel Programs onto Graphs of Distributed Computer Systems by Recurrent Neural Network 223

**References**

Jagota, A. (1999). Hopfield Neural Networks and Self-Stabilization, *Chicago Journal of Theoretical Computer Science*, Vol. 1999, Article 6, http://mitpress.mit.edu/CJTCS/

Korte, B. & Vygen, J. (2006). *Combinatorial Optimization: Theory and Algorithms*, Bonn, Germany: Springer

Malek, A. (2008). Applications of Recurrent Neural Networks to Optimization Problems, In: *Recurrent Neural Networks*, Xiaolin Hu & P. Balasubramaniam (Eds.), Croatia: Intech, pp. 255-288

Melamed, I.I. (1994). Neural Networks and Combinatorial Optimization, *Automation and Remote Control*, Vol. 55, No. 11, pp. 1553-1584

Ortega, J.M. (1988). *Introduction to Parallel and Vector Solution of Linear Systems*, New York: Plenum

Parhami, B. (2002). *Introduction to Parallel Processing: Algorithms and Architectures*, New York: Kluwer Academic Publishers

Serpen, G. & Patwardhan, A. (2007). Enhancing Computational Promise of Neural Optimization for Graph-Theoretic Problems in Real-Time Environments, *DCDIS A Supplement, Advances in Neural Networks*, Vol. 14, No. S1, pp. 168-176

Serpen, G. (2008). Hopfield Network as Static Optimizer: Learning the Weights and Eliminating the Guesswork, *Neural Processing Letters*, Vol. 27, No. 1, pp. 1-15

Siqueira, P.H., Steiner, M.T.A., & Scheer, S. (2007). A New Approach to Solve the Travelling Salesman Problem, *Neurocomputing*, Vol. 70, pp. 1013-1021

Siqueira, P.H., Steiner, M.T.A., & Scheer, S. (2010). Recurrent Neural Network with Soft 'Winner Takes All' Principle for the TSP, *Proceedings of the International Conference on Fuzzy Computation and 2nd International Conference on Neural Computation*, pp. 265-270, SciTePress

Smith, K.A. (1999). Neural Networks for Combinatorial Optimization: A Review of More Than a Decade of Research, *INFORMS Journal on Computing*, Vol. 11, No. 1, pp. 15-34

Tarkov, M.S. (2003). Mapping Parallel Program Structures onto Structures of Distributed Computer Systems, *Optoelectronics, Instrumentation and Data Processing*, Vol. 39, No. 3, pp. 72-83

Tarkov, M.S. (2005). Decentralized Control of Resources and Tasks in Robust Distributed Computer Systems, *Optoelectronics, Instrumentation and Data Processing*, Vol. 41, No. 5, pp. 69-77

Tarkov, M.S. (2006). *Neurocomputer Systems*, Moscow: Internet University of Information Technologies; Binom Knowledge Laboratory (in Russian)

Tel, G. (1994). *Introduction to Distributed Algorithms*, England: Cambridge University Press

Trafalis, T.B. & Kasap, S. (1999). Neural Network Approaches for Combinatorial Optimization Problems, In: *Handbook of Combinatorial Optimization*, D.-Z. Du & P.M. Pardalos (Eds.), pp. 259-293, Kluwer Academic Publishers

Wang, J. (1993). Analysis and Design of a Recurrent Neural Network for Linear Programming, *IEEE Trans. on Circuits and Systems-I: Fundamental Theory and Applications*, Vol. 40, No. 9, pp. 613-618
