**5.4 Design of the system to recognize sub-circuits with minimum interconnections**

The present task involves the development of a neural network that can be trained to recognize sub-circuits with minimum interconnections between them from a given large circuit.

Following are the steps involved in the design of the system:

1. Create an input data file which consists of training pairs.
2. In data extraction, the circuit is bipartitioned and data are represented for each sub-circuit.
3. Centroid and probability features are extracted from the K-means and EM algorithms, respectively.
4. Design the neural network based upon the requirements and availability.
5. Simulate the software for the network.
6. Train the network using the input data files until the error falls below the tolerance level.
7. Verify the capability of the neural network in the recognition of test data.


The EM algorithm, which supplies the probability feature, is similar to the K-means procedure in that sets of parameters are re-computed until the desired convergence value is achieved. The general procedure is:

1. Initialize the parameters.
2. Use the probability density function for the normal distribution to compute the cluster probability for each instance. For example, in the two-cluster case there are two probability distribution formulae, each having different mean and standard deviation values.
3. Use the probability scores to re-estimate the parameters.
4. Return to step 2.

The algorithm terminates when the formula that measures cluster quality no longer improves. The output of this algorithm is the probability for each cluster: EM assigns a probability distribution to each instance, which indicates the probability of it belonging to each of the clusters.
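As a concrete illustration, the following is a minimal sketch of the two-cluster EM procedure on one-dimensional data, using NumPy and SciPy. The function name, the synthetic initialization, and the use of the log-likelihood as the cluster-quality measure are assumptions made for the example, not part of the original design.

```python
import numpy as np
from scipy.stats import norm

def em_two_clusters(x, max_iter=100, tol=1e-6):
    """Two-cluster EM for 1-D data, following the four steps above."""
    # Step 1: initialize parameters (means, std devs, mixing weights).
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.full(2, x.std() + 1e-9)
    pi = np.array([0.5, 0.5])
    prev_quality = -np.inf
    for _ in range(max_iter):
        # Step 2: normal pdfs give each instance's cluster probabilities,
        # one (mean, std dev) pair per cluster.
        dens = pi * norm.pdf(x[:, None], mu, sigma)       # shape (n, 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # Step 3: re-estimate the parameters from the probability scores.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
        pi = nk / len(x)
        # Step 4 / termination: stop when the quality measure
        # (log-likelihood here) no longer improves.
        quality = np.log(dens.sum(axis=1)).sum()
        if quality - prev_quality < tol:
            break
        prev_quality = quality
    return resp, mu, sigma   # per-instance cluster probabilities
```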

In the context of recognizing sub-circuits with minimum interconnections between them, artificial neurons are structured into three usual types of layers, input, hidden, and output, to create the artificial neural network. The input-layer neurons are responsible for inputting the feature vector, that is, the centroid and probability extracted from the K-means and EM algorithms respectively. The number of neurons in the output layer is determined by the size of the set of desired outputs, with each possible output being represented by a separate neuron. Between these two layers there can be many hidden layers, which contain many neurons in various interconnected structures.

#### Algorithm:

The learning algorithm of the back-propagation network is given by the generalized delta rule.

Step 1. The algorithm takes the input vector of features into the back-propagation network.

Step 2. Let $K$ be the number of nodes in the input layer, determined by the length of the training vectors, that is, the number of features $N$; denote the input activations by $x_k$. Let $J$ be the number of nodes in the hidden layer and $I$ the number of nodes in the output layer. Denote the activation of the hidden layer by $x_j^h$ and that of the output layer by $x_i^o$. The weights connecting the input layer to the hidden layer are $w_{jk}^h$, and the weights connecting the hidden layer to the output layer are $w_{ij}^o$.



Step 3. Initialize the weights of the network. Each weight should be set to a random value between –0.1 and 1.

Step 4. Calculate the activations of the hidden nodes:

$$x_j^h = g\left(\sum_k w_{jk}^h \, x_k\right) = \frac{1}{1 + e^{-\sum_k w_{jk}^h \, x_k}} \tag{12}$$

Step 5. Calculate the outputs from the output layer:

$$x_i^o = g\left(\sum_j w_{ij}^o \, x_j^h\right) = \frac{1}{1 + e^{-\sum_j w_{ij}^o \, x_j^h}} \tag{13}$$

Step 6. Compare the actual outputs with the desired outputs and compute a measure of the error.

Step 7. From this comparison, determine in which direction (+ or –) to change each weight in order to reduce the error.

Step 8. Determine the amount by which to change each weight, apply the corrections to the weights, and repeat all of the above steps with all training vectors until the error for every vector in the training set is reduced to an acceptable value.

Step 9. End.
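A minimal sketch of Steps 1–9 is given below, using the sigmoid activations of Eqs. (12) and (13) and a mean-squared-error stopping test. The function names, the learning rate `eta`, and the ±0.1 initialization range are illustrative assumptions rather than values fixed by the text.

```python
import numpy as np

def sigmoid(z):
    """g(z) = 1 / (1 + e^(-z)), as in Eqs. (12) and (13)."""
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, T, n_hidden=4, eta=0.5, tol=1e-3, max_epochs=10000):
    """Generalized delta rule for one hidden layer (Steps 1-9)."""
    rng = np.random.default_rng(0)
    n, K = X.shape                 # K input nodes = number of features N
    I = T.shape[1]                 # I output nodes
    # Step 3: small random initial weights (here in the range +/-0.1).
    w_h = rng.uniform(-0.1, 0.1, (n_hidden, K))   # w_jk^h
    w_o = rng.uniform(-0.1, 0.1, (I, n_hidden))   # w_ij^o
    for _ in range(max_epochs):
        xh = sigmoid(X @ w_h.T)    # Step 4: hidden activations, Eq. (12)
        xo = sigmoid(xh @ w_o.T)   # Step 5: output activations, Eq. (13)
        err = T - xo               # Step 6: measure of error
        if np.mean(err ** 2) < tol:        # acceptable error reached
            break
        # Steps 7-8: signed deltas give the direction and amount of change.
        delta_o = err * xo * (1.0 - xo)
        delta_h = (delta_o @ w_o) * xh * (1.0 - xh)
        w_o += eta * (delta_o.T @ xh) / n
        w_h += eta * (delta_h.T @ X) / n
    return w_h, w_o
```

Calling `train_bp` on the (centroid, probability) feature vectors, with one output neuron per sub-circuit class, would reproduce the training loop of Steps 4–8.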

#### **6. Evaluation of fuzzy ARTMAP with DBSCAN in a VLSI partitioning application**

This section describes a new model for partitioning a circuit using DBSCAN and the fuzzy ARTMAP neural network.

#### **6.1 Overview of ARTMAP**

The first principle of Adaptive Resonance Theory (ART) was introduced by Grossberg in 1976 (Carpenter, 1997); its structure resembles that of feed-forward networks. The simplest variety of ART network accepts only binary inputs and is called ART-1 (Grossberg, 1987, 2003). It was then extended to support continuous inputs, called ART-2 (Carpenter & Grossberg, 1987). ARTMAP (Carpenter et al., 1987), also known as Predictive ART, combines two slightly modified ART-1 or ART-2 units into a supervised learning structure, where the first unit takes the input data and the second unit takes the correct output data, which is then used to make the minimum possible adjustment of the vigilance parameter in the first unit in order to make the correct classification.

The basic ART system is an unsupervised learning model. It typically consists of a comparison field and a recognition field composed of neurons, a vigilance parameter, and a reset module. The vigilance parameter has considerable influence on the system: higher vigilance produces highly detailed memories (many fine-grained categories), while lower vigilance results in more general memories (fewer, more general categories). The comparison field takes an input vector (a one-dimensional array of values) and transfers it to its best match in the recognition field, that is, the single neuron whose weight vector most closely matches the input vector. Each recognition-field neuron outputs a negative signal (proportional to that neuron's quality of match to the input vector) to each of the other recognition-field neurons and inhibits their output accordingly. In this way the recognition field exhibits lateral inhibition, allowing each neuron in it to represent a category to which input vectors are classified. After the input vector is classified, the reset module compares the strength of the recognition match to the vigilance parameter. If the vigilance threshold is met, training commences; otherwise, the firing recognition neuron is inhibited until a new input vector is applied. Training commences only upon completion of a search procedure, in which recognition neurons are disabled one by one by the reset function until the vigilance parameter is satisfied by a recognition match. If no committed recognition neuron's match meets the vigilance threshold, then an uncommitted neuron is committed and adjusted towards matching the input vector.


There are two basic methods of training ART-based neural networks: slow and fast. In the slow learning method, the degree of training of the recognition neuron's weights towards the input vector is computed with continuous values and differential equations, and is thus dependent on the length of time the input vector is presented. The basic structure of the ART-based neural network is shown in Fig. 12. With fast learning, algebraic equations are used to calculate the degree of weight adjustment to be made, and binary values are used. While fast learning is effective and efficient for a variety of tasks, the slow learning method is more biologically plausible and can be used with continuous-time networks (that is, when the input vector can vary continuously). Fig. 13 shows the fast-learning ART-based neural network.

$I = (i_1, i_2, i_3, \ldots, i_m)$

Fig. 12. Basic ART structure

Fig. 13. Fast learning ART-based neural network
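As a concrete illustration of fast learning and the vigilance test, here is a highly simplified, one-pass, ART-1-style sketch for binary inputs. The function name, the match-ranking rule, and the single-pass structure are illustrative simplifications, not the full ART dynamics.

```python
import numpy as np

def art1_fast(inputs, vigilance=0.7):
    """Simplified one-pass, fast-learning ART-1 sketch for binary inputs."""
    categories = []        # weight vectors of committed recognition neurons
    labels = []
    for pattern in inputs:
        x = np.asarray(pattern, dtype=bool)
        # Search procedure: try committed neurons, best overlap first.
        order = sorted(range(len(categories)),
                       key=lambda j: -int((categories[j] & x).sum()))
        for j in order:
            match = (categories[j] & x).sum() / max(int(x.sum()), 1)
            if match >= vigilance:                  # vigilance test satisfied
                categories[j] = categories[j] & x   # fast learning: intersect
                labels.append(j)
                break
        else:
            # No committed neuron met vigilance: commit an uncommitted one.
            categories.append(x.copy())
            labels.append(len(categories) - 1)
    return categories, labels

# Patterns close to an existing category join it; a dissimilar pattern
# commits a new recognition neuron.
cats, labs = art1_fast([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]])
```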

#### **6.2 Fuzzy ARTMAP**

Fuzzy logic combined with Adaptive Resonance Theory gives fuzzy ARTMAP, a class of neural network that performs supervised training of pattern recognition and maps in response to the input vectors generated. Fuzzy ART (Carpenter et al., 1991) incorporates fuzzy logic into ART's pattern recognition, thus enhancing generalizability. An optional (and very useful) feature of fuzzy ART is complement coding, a means of incorporating the absence of features into pattern classifications, which goes a long way towards preventing inefficient and unnecessary category proliferation. The performance of fuzzy ARTMAP depends on a set of user-defined hyper-parameters, and these parameters should normally be fine-tuned to each specific problem (Carpenter et al., 1992). The influence of hyper-parameter values is rarely addressed in the ARTMAP literature. Moreover, the few techniques found in the literature for automated hyper-parameter optimization, for example (Canuto et al., 2000; Dubrawski, 1997; Gamba & DellAcqua, 2003; C. Lim, 1999), focus mostly on the vigilance parameter, even though there are four inter-dependent parameters (vigilance, learning, choice, and match tracking). A popular choice consists in setting hyper-parameter values such that network resources (the number of internal category neurons, the number of training epochs, etc.) are minimized (Carpenter, 1997). This choice of parameters may, however, lead to overtraining and significantly degrade the network. An effective supervised learning strategy could involve jointly optimizing both the network (weights and architecture) and all of its hyper-parameter values for a given problem, based on a consistent performance objective. Fuzzy ARTMAP neural networks are known to suffer from overtraining or overfitting, which is directly connected to the category proliferation problem. Overtraining generally occurs when a neural network has learned not only the basic mapping of the associated training-subset patterns, but also the subtle nuances and even the errors specific to the training subset. If too much learning occurs, the network tends to memorize the training subset and loses its ability to generalize on unknown patterns. The impact of overtraining on fuzzy ARTMAP performance is twofold: an increase in the generalization error and in the resource requirements.
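To illustrate complement coding and the role of the vigilance and learning parameters, here is a small sketch of a single fuzzy ART category test-and-update. The function names and default parameter values are assumptions for the example.

```python
import numpy as np

def complement_code(a):
    """Complement coding: append (1 - a) so the absence of a feature
    is represented explicitly, which curbs category proliferation."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def fuzzy_art_step(x, w, rho=0.75, beta=1.0):
    """Test one category w against input x and update it on resonance.
    rho is the vigilance parameter, beta the learning parameter."""
    m = np.minimum(x, w)                  # fuzzy AND of input and weights
    if m.sum() / x.sum() >= rho:          # match meets vigilance: resonance
        return beta * m + (1.0 - beta) * w, True   # fast learning if beta = 1
    return w, False                       # mismatch: reset, try next category

x = complement_code([0.2, 0.9])           # becomes [0.2, 0.9, 0.8, 0.1]
w = np.ones_like(x)                       # an uncommitted category
w, resonated = fuzzy_art_step(x, w)
```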

#### **6.3 DBSCAN (Density-Based Spatial Clustering of Applications with Noise)**

DBSCAN is a data clustering algorithm proposed by Martin Ester, Hans-Peter Kriegel, Jörg Sander and Xiaowei Xu in 1996 (Ester, 1996). It is a density-based clustering algorithm because it finds a number of clusters starting from the estimated density distribution of the corresponding nodes. DBSCAN is one of the most common clustering algorithms and also one of the most cited in the scientific literature.
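A minimal sketch of density-based clustering on hypothetical circuit-node coordinates, using scikit-learn's `DBSCAN`, follows; the synthetic coordinates and the `eps` and `min_samples` values are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical 2-D coordinates for circuit nodes (e.g. from placement).
rng = np.random.default_rng(0)
coords = np.vstack([rng.normal(0.0, 0.05, (20, 2)),
                    rng.normal(1.0, 0.05, (20, 2))])

# eps is the neighbourhood radius, min_samples the density threshold.
labels = DBSCAN(eps=0.15, min_samples=3).fit_predict(coords)
# Label -1 marks noise; non-negative labels are density-based clusters.
```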

