**6.2 Fuzzy ARTMAP**

The combination of fuzzy logic with Adaptive Resonance Theory gives Fuzzy ARTMAP, a class of neural network that performs supervised learning of recognition patterns and maps them in response to the input vectors generated. Fuzzy ART (Carpenter et al., 1991) incorporates fuzzy logic into ART's pattern recognition, thus enhancing generalizability. An optional (and very useful) feature of fuzzy ART is complement coding, a means of incorporating the absence of features into pattern classifications, which goes a long way towards preventing inefficient and unnecessary category proliferation. The performance of fuzzy ARTMAP depends on a set of user-defined hyper-parameters, and these parameters should normally be fine-tuned to each specific problem (Carpenter et al., 1992). The influence of hyper-parameter values is rarely addressed in the ARTMAP literature. Moreover, the few techniques found in the literature for automated hyper-parameter optimization, for example (Canuto et al., 2000; Dubrawski, 1997; Gamba & Dell'Acqua, 2003; C. Lim, 1999), focus mostly on the vigilance parameter, even though there are four inter-dependent parameters (vigilance, learning, choice, and match tracking). A popular choice consists in setting hyper-parameter values such that network resources (the number of internal category neurons, the number of training epochs, etc.) are minimized (Carpenter, 1997). This choice of parameters may, however, lead to overtraining and significantly degrade the network's performance. An effective supervised learning strategy could involve jointly optimizing both the network (weights and architecture) and all its hyper-parameter values for a given problem, based on a consistent performance objective. Fuzzy ARTMAP neural networks are known to suffer from overtraining or overfitting, which is directly connected to the category proliferation problem.
Overtraining generally occurs when a neural network has learned not only the basic mapping associated with the training-subset patterns, but also the subtle nuances and even the errors specific to the training subset. If too much learning occurs, the network tends to memorize the training subset and loses its ability to generalize on unknown patterns. The impact of overtraining on fuzzy ARTMAP performance is twofold: an increase in the generalization error and an increase in the resource requirements.
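The fuzzy ART operations mentioned above (complement coding, the choice function with its choice parameter, and the vigilance/match test) can be sketched as follows. This is a minimal illustration, not the chapter's implementation; the function names, parameter values and toy vectors are assumptions chosen for demonstration:

```python
import numpy as np

def complement_code(a):
    """Complement coding: append (1 - a) so the absence of features is
    represented explicitly, which helps curb category proliferation."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def fuzzy_and(x, w):
    """Fuzzy AND: component-wise minimum."""
    return np.minimum(x, w)

def choose_category(x, weights, alpha=0.001, rho=0.75):
    """Return the index of the best category that passes the vigilance
    test, or None if no committed category resonates (in full fuzzy ART
    a new category would then be created).
    alpha: choice parameter; rho: vigilance parameter (both illustrative)."""
    # Choice function T_j = |x ^ w_j| / (alpha + |w_j|)
    scores = [fuzzy_and(x, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(scores)[::-1]:        # highest activation first
        # Match (vigilance) test: |x ^ w_j| / |x| >= rho
        if fuzzy_and(x, weights[j]).sum() / x.sum() >= rho:
            return j
    return None

# Toy usage with two committed categories (illustrative values only)
x = complement_code([0.2, 0.9])
weights = [complement_code([0.25, 0.85]), complement_code([0.9, 0.1])]
print(choose_category(x, weights))   # -> 0: the input resonates with category 0
```

Raising the vigilance `rho` makes the match test stricter, so more inputs fail to resonate and more categories get created, which is exactly the category-proliferation trade-off discussed above.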

#### **6.3 DBSCAN (Density-Based Spatial Clustering Of Applications with Noise)**

DBSCAN is a data clustering algorithm proposed by Martin Ester, Hans-Peter Kriegel, Jörg Sander and Xiaowei Xu in 1996 (Ester, 1996). It is a density-based clustering algorithm because it finds the clusters starting from the estimated density distribution of the corresponding nodes. DBSCAN is one of the most common clustering algorithms and also one of the most cited in the scientific literature. The basic DBSCAN algorithm has been used as a base for many other developments.

Algorithms for CAD Tools VLSI Design 149

Fig. 15. Sample Circuit

Sample circuit bi-parted: Sub circuit 1 (A, B, C), total edges = 7; Sub circuit 2 (D, E, F), total edges = 10

| Cell | No. of edges | Bipartition |
|------|--------------|-------------|
| A    | 2            | 1           |
| B    | 2            | 1           |
| C    | 3            | 1           |
| D    | 3            | 0           |
| E    | 3            | 0           |
| F    | 4            | 0           |

Table 2. Bipartition Matrix

Fig. 16. Sample Bi-parted Circuit with data

Data representation: Sub circuit1 (A, B, C) 0010 0010 0011, Sub circuit2 (D, E, F) 0011 0011 0100
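Returning to DBSCAN: the density-based procedure of Section 6.3 can be sketched as a minimal implementation. This is an illustrative O(n²) version (the function name, parameters and toy points are assumptions, not taken from the text); production implementations use spatial indexing for the neighbourhood queries:

```python
import numpy as np

def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN sketch. Returns one label per point: cluster ids
    starting at 0, or -1 for noise."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Pairwise Euclidean distances (O(n^2); fine for a toy example)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)              # -1 = noise / not yet clustered
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue
        # i is a core point: grow a new cluster from it
        stack = [i]
        while stack:
            p = stack.pop()
            if visited[p]:
                continue
            visited[p] = True
            labels[p] = cluster
            if len(neighbors[p]) >= min_pts:      # only core points expand
                stack.extend(q for q in neighbors[p] if not visited[q])
        cluster += 1
    return labels

# Two dense blobs plus one isolated point (hypothetical coordinates)
pts = [[0, 0], [0, 0.3], [0.3, 0], [0.2, 0.2],
       [5, 5], [5, 5.2], [5.2, 5], [4.9, 5.1],
       [10, 0]]
print(dbscan(pts, eps=0.5, min_pts=3))   # two clusters, last point is noise (-1)
```

The key property for the partitioning context is that the number of clusters is not fixed in advance: it emerges from the density threshold (`eps`, `min_pts`), and sparse points are rejected as noise rather than forced into a cluster.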

The overall structure of the model is illustrated in Fig. 14, while Fig. 15 and Fig. 16 show a sample bipartitioned circuit with the related data set used. The feature extractor obtains a feature vector for each subcircuit, which is sent to the training or inference module. The SFAM (simplified fuzzy ARTMAP) (Carpenter, 1997) has two modules, that is, a training module and an inference module. The feature vectors of the training subcircuits and the categories to which they belong are specified to SFAM's training module. Once the training phase is complete, the vector represents the subcircuit with minimum interconnection. The test subcircuit pattern which is to be recognized with minimum interconnection is fed to the inference module. Classification of subcircuits is done by associating the feature vector with the top-down weight vectors (Carpenter et al., 1992; Caudell et al., 1994) in SFAM. The system can handle both symmetric and asymmetric circuits. In a symmetric pattern, only the distinct portion of the circuit is trained, whereas in an asymmetric pattern the (1/2n)th portion of the circuit is considered.

Fig. 14. Block diagram of recognition module for partitioning in VLSI Design
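One plausible reading of the data representation shown with Fig. 16 is that each cell's edge count from Table 2 is written as a fixed-width binary string and concatenated per subcircuit (e.g. counts 2, 2, 3 give 0010 0010 0011). A small sketch under that assumption; the function name and the 4-bit width are inferred from the sample data, not stated in the text:

```python
def subcircuit_feature_vector(edge_counts, bits=4):
    """Encode a subcircuit as the concatenation of the fixed-width binary
    edge counts of its cells, mirroring the sample data representation
    (assumed interpretation: 4-bit binary per cell)."""
    return " ".join(format(c, f"0{bits}b") for c in edge_counts)

# Table 2 data: sub circuit 1 = cells A, B, C; sub circuit 2 = cells D, E, F
print(subcircuit_feature_vector((2, 2, 3)))   # -> 0010 0010 0011
print(subcircuit_feature_vector((3, 3, 4)))   # -> 0011 0011 0100
```

These binary strings match the "Data representation" lines for both subcircuits, which is what such a feature extractor would pass to SFAM's training or inference module.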




This section deals with a range of partitioning methodological aspects aimed at dividing the circuit into subcircuits with minimum interconnections between them. The approach considers two clustering algorithms proposed by (Li & Behjat, 2006), the Nearest Neighbour (NN) and the Partitioning Around Medoids (PAM) clustering algorithms, for dividing the circuits into subcircuits. The experimental results show that the PAM clustering algorithm yields better subcircuits than Nearest Neighbour. The experimental results are compared using benchmark data provided by the MCNC standard cell placement benchmark netlists.

**7.1 Considerations in choosing the right algorithm**

Data mining algorithms have to be adapted to work on very large databases. Data reside on hard disks because they are too large to fit in main memory; therefore, algorithms have to make as few passes as possible over the data, since secondary-memory fetch cycles increase the computational time and thus reduce the run-time performance. Quadratic algorithms are too expensive: the execution time of the operations in such clustering algorithms is quadratic, so this becomes an important constraint in choosing an algorithm for the problem at hand. The aim in this work is to reduce the interconnections between the circuits with a minimum amount of error; hence prototype-based clustering is used. The attributes in the data set were less important, so a proximity matrix was created. Since both PAM and NN belong to partitional, prototype-based clustering, and since the intention was to obtain the partition with the minimum interconnections, these two algorithms were used.

**7.2 Implementation**

The implementation consists of three stages, namely data extraction, partitioning and results, using VHDL (VHSIC (Very High Speed Integrated Circuit) Hardware Description Language) as a tool. In data extraction, a VLSI circuit represented as a bipartite graph is considered. The bipartite graph considered for the approach is shown in Fig. 17.

Fig. 17. Bipartition Circuit
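The use of PAM over a precomputed proximity matrix, as described in Section 7.1, can be sketched as a minimal k-medoids routine. The build/swap structure follows the standard PAM outline, but the function names and the 6-cell distance values below are hypothetical, not the chapter's data:

```python
import numpy as np

def pam(dist, k, max_iter=100):
    """Minimal PAM (k-medoids) sketch over a precomputed proximity
    (distance) matrix. Returns (medoid indices, label per point)."""
    dist = np.asarray(dist, dtype=float)
    n = len(dist)
    # BUILD: greedily pick medoids that minimise the total distance
    medoids = [int(np.argmin(dist.sum(axis=1)))]
    while len(medoids) < k:
        best, best_cost = None, np.inf
        for c in range(n):
            if c in medoids:
                continue
            cost = dist[:, medoids + [c]].min(axis=1).sum()
            if cost < best_cost:
                best, best_cost = c, cost
        medoids.append(best)
    # SWAP: replace a medoid with a non-medoid while the cost improves
    for _ in range(max_iter):
        improved = False
        for i in range(k):
            current = dist[:, medoids].min(axis=1).sum()
            for c in range(n):
                if c in medoids:
                    continue
                trial = medoids[:i] + [c] + medoids[i + 1:]
                if dist[:, trial].min(axis=1).sum() < current:
                    medoids[i] = c
                    current = dist[:, medoids].min(axis=1).sum()
                    improved = True
        if not improved:
            break
    labels = dist[:, medoids].argmin(axis=1)
    return medoids, labels

# Toy symmetric proximity matrix for 6 cells (A..F): small distances
# within {A, B, C} and within {D, E, F}, large across (hypothetical values)
D = np.array([
    [0, 1, 1, 9, 9, 9],
    [1, 0, 1, 9, 9, 9],
    [1, 1, 0, 9, 9, 9],
    [9, 9, 9, 0, 1, 1],
    [9, 9, 9, 1, 0, 1],
    [9, 9, 9, 1, 1, 0],
], dtype=float)
medoids, labels = pam(D, k=2)
print(labels)   # cells A-C land in one group, D-F in the other
```

With k = 2 this yields a bipartition directly, and because PAM works purely on the proximity matrix it fits the setting above, where the raw attributes were less important than the pairwise relations between cells.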
