**4. Evolutionary time series model for partitioning using Neuro-Memetic approach**

An evolutionary time-series model for partitioning a circuit is discussed using the Neuro-Memetic algorithm, owing to its local search capability.
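To make the memetic idea concrete (a genetic algorithm whose offspring are refined by local search), the following sketch evolves a toy bitstring objective. The mutation probability of 0.2 matches the value used later in this section; the function names, the toy fitness, and the remaining parameters are illustrative assumptions, not taken from the chapter.

```python
import random

def fitness(bits):
    # Toy objective: number of 1s (a stand-in for a circuit-partition score).
    return sum(bits)

def local_search(bits):
    # Hill-climb: flip each bit once, keep improvements (the "memetic" refinement).
    best = bits[:]
    for i in range(len(best)):
        cand = best[:]
        cand[i] ^= 1
        if fitness(cand) > fitness(best):
            best = cand
    return best

def memetic(pop_size=20, length=12, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # mutation probability 0.2, as in the chapter
                j = rng.randrange(length)
                child[j] ^= 1
            children.append(local_search(child))  # refine every offspring locally
        pop = parents + children
    return max(pop, key=fitness)

best = memetic()
print(fitness(best))  # → 12 (local search drives the toy objective to its optimum)
```

The local-search pass is what distinguishes this from a plain genetic algorithm: each child is pushed to a nearby local optimum before re-entering the population.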

#### **Sample Data Set**

A sample circuit and the corresponding chromosome representation are shown in Fig. 3 and Fig. 4.

Fig. 3. Sample Circuit

Fig. 4. Bipartition Circuit

Algorithms for CAD Tools VLSI Design 137

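The chromosome strings used for the sample circuit encode each cell's edge count as a 4-bit gene; summing the decoded genes reproduces the sub-circuit edge totals of 7 and 10 quoted for the example. A minimal decoding sketch (the function and variable names are illustrative, not from the chapter):

```python
def decode(chromosome):
    # Split into 4-bit genes; each gene is the binary edge count of one cell.
    return [int(gene, 2) for gene in chromosome.split()]

sub1 = decode("0010 0010 0011")   # cells A, B, C -> [2, 2, 3]
sub2 = decode("0011 0011 0100")   # cells D, E, F -> [3, 3, 4]

d1, d2 = sum(sub1), sum(sub2)
print(d1, d2)                     # → 7 10
best = "d1" if d1 < d2 else "d2"  # fewer edges means fewer interconnections
print(best)                       # → d1
```

The comparison mirrors the chapter's conclusion that the sample with data set d1 is the fitter one.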
Chromosome representation: Sub circuit 1 (A, B, C): 0010 0010 0011; Sub circuit 2 (D, E, F): 0011 0011 0100.

Sub circuit 1 (A, B, C) total edges = 7; Sub circuit 2 (D, E, F) total edges = 10.

| Cell | No. of edges | Bipartition |
|------|--------------|-------------|
| A    | 2            | 1           |
| B    | 2            | 1           |
| C    | 3            | 1           |
| D    | 3            | 0           |
| E    | 3            | 0           |
| F    | 4            | 0           |

Neuro-Memetic Model: The neuro-memetic model makes it possible to predict the sub circuit, with minimum interconnections, from a given circuit.

Training Procedure: The purpose of the training process is to adjust the input and output parameters of the NN (Neural Network) model so that the MAPE (Mean Absolute Percentage Error) measure is minimized. Training of feed-forward neural network models is usually performed using back-propagation learning algorithms. Most often, the search becomes trapped in a local minimum of the error surface and the desired convergence criterion is not met. Termination at a local minimum is a serious problem while the neural network is learning; such a neural network is not completely trained (Oxford Univ Press, 1995). Another issue where care must be taken is the receptiveness to over-fitting. Memetic algorithms, however, offer an efficient search method for intricate spaces (that is, spaces possessing many local optima). Their ability to find a better suboptimal solution, or to reach a local optimum with higher probability, makes them one of the preferred candidates to solve the learning problem.

Training with MA: The parameters of the neural network are tuned by a memetic algorithm (Krasnogor et al., 1998b) with arithmetic crossover and non-uniform mutation. A population (P) of 200 genotypes is considered. The genotypes are randomly initialized, the maximum number of iterations is fixed at 200, and the MA is run for 100 generations with the same population size. The best model was found after 63 generations. In this method, the probability of crossover is 0.6 and the probability of mutation is 0.2; these probabilities were chosen by trial and error through experiments for good performance. The new population thus generated replaces the current population, and the above procedure is repeated until a termination condition is satisfied. The number of iterations required to train the MA-based neural network is 2000. The range of the fitness function of the neural network is (0, 1).

Evaluate individuals using the fitness function: The objective of the fitness function is to minimize the prediction error. In order to prevent over-fitting and to give more exploration to the system, the fitness evaluation framework is changed to use the weight imbalance to calculate the fitness of a chromosome. The fitness of a chromosome for the normal class is evaluated as shown in the example below.

Take the testing samples: sub circuit 1 with data set d1, and sub circuit 2 with data set d2. Calculate the sum of (+) credit and (-) debit for each sample data d1 and d2:

For d1 = +2+2+3 = 7; d2 = +3+3+2 = 8. Since a smaller sum means fewer interconnections, it is found that the **sample fitness of data d1** is the best sample.

#### **4.1 Design of the system to recognize sub circuit with minimum interconnection**

The present task involves the development of a Neural Network which can be trained to recognize sub circuits with minimum interconnection between them, from a given large circuit. Following are the steps involved in the design of the system:

1. Create an input data file which consists of training pairs.
2. In data extraction, a circuit is bipartitioned and chromosomes are represented for each sub circuit.
3. Design the neural network based upon the requirement and availability.
4. Simulate the software for the network.
5. Initialize count = 0, fitness = 0, and the number of cycles.
6. Generation of initial population: the chromosome of an individual is formulated as a sequence of consecutive genes, each one coding an input parameter.
7. Initialize the weights for the network. Each weight should be set to a random value between -0.1 and 1.
8. Calculate the activation of the hidden nodes:

$$x_{jh} = \frac{1}{1 + e^{-\left(\sum w_{jkh}\right) \cdot x_{pn}}} \tag{8}$$

9. Calculate the output from the output layer:

$$x_{io} = \frac{1}{1 + e^{-\left(\sum w_{ijo}\right) \cdot x_{jh}}} \tag{9}$$

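Steps 8 and 9, Eqs. (8) and (9), are plain sigmoid forward propagation. The sketch below shows them for a tiny network; the weights and layer sizes are made up for illustration and are not the chapter's 12-input network.

```python
import math

def sigmoid(z):
    # Activation used in Eqs. (8) and (9).
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    # Eq. (8): hidden activation x_jh = sigmoid(sum_k w_jkh * x_k)
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    # Eq. (9): output x_io = sigmoid(sum_j w_ijo * x_jh)
    return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_out]

# Tiny illustrative network: 3 inputs, 2 hidden nodes, 1 output.
x = [1.0, 0.0, 1.0]
w_hidden = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
w_out = [[1.0, -1.0]]
print(forward(x, w_hidden, w_out))
```

Every activation lies in (0, 1), which is consistent with the fitness range quoted for the network.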


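The selection and variation steps of the MA below (roulette-wheel selection, arithmetic crossover, mutation) can be sketched as follows. The crossover and mutation probabilities (0.6 and 0.2) are the chapter's values; the population shape, toy fitness, and function names are illustrative assumptions.

```python
import random

def roulette(population, fitnesses, rng):
    # Roulette wheel: pick an individual with probability proportional to fitness.
    total = sum(fitnesses)
    r = rng.uniform(0, total)
    acc = 0.0
    for individual, f in zip(population, fitnesses):
        acc += f
        if r <= acc:
            return individual
    return population[-1]

def crossover(a, b, rng, p=0.6):
    # Arithmetic crossover on real-valued weight vectors.
    if rng.random() < p:
        alpha = rng.random()
        return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]
    return a[:]

def mutate(w, rng, p=0.2, scale=0.1):
    # Perturb one randomly chosen weight.
    w = w[:]
    if rng.random() < p:
        i = rng.randrange(len(w))
        w[i] += rng.uniform(-scale, scale)
    return w

rng = random.Random(0)
pop = [[rng.uniform(-0.1, 1.0) for _ in range(4)] for _ in range(6)]
fits = [1.0 / (1.0 + sum(abs(x) for x in w)) for w in pop]  # toy fitness in (0, 1)
p1, p2 = roulette(pop, fits, rng), roulette(pop, fits, rng)
child = mutate(crossover(p1, p2, rng), rng)
print(len(child))  # → 4
```

The child weight vector produced here is what "apply the new weights to each link" refers to in the genetic-operations step.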
10. Compare the actual output with the desired outputs and find a measure of the error. The genotypes are evaluated on the basis of the fitness function.
11. If (previous fitness < current fitness value), then store the current weights.
12. Count = Count + 1.
13. Selection: two parents are selected by using the roulette-wheel mechanism.
14. Genetic operations: crossover, mutation and reproduction to generate new weights (apply the new weights to each link).
15. If (number of cycles > count), go to Step 7.
16. Training stops when the error on the training set is reduced to an acceptable value.
17. Verify the capability of the neural network in recognition of sub circuits with minimum interconnection between them.
18. End.

Fig. 5. Recognize Sub Circuit with Minimum Interconnection

Development of Neural Network: In the context of recognition of sub circuits with minimum interconnection, a 3-layer neural network is employed to learn the input-output relationship using the MA. The input layer of neurons is responsible for inputting. The number of neurons in the output layer is determined by the size of the set of desired outputs, with each possible output represented by a separate neuron. The neural network contains 12 input nodes, 20 neurons in the first hidden layer, 14 neurons in the second hidden layer, and 2 neurons in the output layer, resulting in a 12-20-14-2 back-propagation neural network. The sigmoid function is used as the activation function, and the Memetic Algorithm is employed for learning (Holstein & Moscato, 1999). For back-propagation with momentum and an adaptive learning rate, the learning rate is 0.2 and the momentum constant is 0.9. During the training process a performance of 0.00156323 was obtained at 2000 epochs.

**5. Neuro–EM and neuro-k-means clustering approach for VLSI design partitioning**

This section is focused on the use of the clustering methods k-means (J. B. MacQueen, 1967) and the Expectation-Maximization (EM) methodology (Kaban & Girolami, 2000). The system consists of three parts, dealing with data extraction, the learning stage and the recognition stage. In data extraction, a circuit is bipartitioned and partitioned into 10 clusters, a user-defined value, by using k-means (J. B. MacQueen, 1967) and the EM methodology (Kaban & Girolami, 2000), respectively. In the recognition stage the parameters, that is, the centroids and probabilities, are fed into the generalized delta rule algorithm separately to train the network to recognize sub-circuits with the lowest amount of interconnections between them. Block diagrams of the model to recognize sub-circuits with minimum interconnections, using the two techniques k-means and the EM methodology with a neural network, are shown in Fig. 6 and Fig. 7.

**5.1 Neuro-EM model**

In the recognition stage the parameters, that is, centroid and probability, are fed into the generalized delta rule algorithm separately to train the network to recognize sub circuits with minimum interconnection between them. The block diagram of the model for partitioning a circuit is depicted in Fig. 8.

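The clustering stage of Section 5 can be sketched with a plain k-means pass; EM would replace the hard assignments below with soft responsibilities and return probabilities instead of centroids. The data, the value of k, and the function names here are toy assumptions, not the chapter's 10-cluster circuit data.

```python
def kmeans(points, k, iters=20):
    # Plain k-means (MacQueen): hard-assign each point to its nearest centroid,
    # then recompute centroids. The resulting centroids are the parameters
    # handed to the neural-network recognition stage.
    centroids = [points[i * len(points) // k][:] for i in range(k)]  # deterministic init
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = [sum(vals) / len(cl) for vals in zip(*cl)]
    return centroids, clusters

# Toy 2-D "cell features" forming two well-separated groups.
pts = [[0.0, 0.1], [0.1, 0.0], [0.2, 0.1], [5.0, 5.1], [5.1, 4.9], [4.9, 5.0]]
cents, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

Each resulting cluster plays the role of a candidate sub-circuit; the centroids summarize it for the generalized-delta-rule training that follows.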