**3. Cognitive Computing**

68 Earthquake Engineering


manifestations -ground motions- at a site (epistemic uncertainty) that must be improved through more scientific seismic analyses. A strategic factor in seismic hazard analysis is the ground motion model, or attenuation relation. Attenuation relationships have been developed based on magnitude, distance and site category; however, there is a tendency to incorporate other parameters now known to be significant, such as the tectonic environment, the style of faulting and the effects of topography, deep basin edges and rupture directivity. These distinctions are recognized in North America, Japan and New Zealand [3-6], but ignored in most other regions of the world [7]. Although recorded data suggest that ground motions depend significantly on these aspects, their inclusion has not had a remarkable effect on prediction confidence, and the geotechnical earthquake engineer prefers basic, clear-cut approximations over those that demand a *blind* use of coefficients or an intricate determination of soil/fault conditions.

A key practice in current aseismic design is to develop design-spectrum-compatible time histories. This entails modifying a time history so that its response spectrum matches the target design spectrum within a prescribed tolerance. In such matching it is important to retain the phase characteristics of the selected ground motion time history; many of the techniques used to develop compatible motions do not retain the phase [8]. The response spectrum alone does not adequately characterize fault-specific ground motion. Near-fault ground motions must be characterized by a long-period pulse of strong motion of fairly brief duration, rather than by the stochastic process of long duration that characterizes more distant ground motions. Spectra compatible with these specific motions will not have these characteristics unless the basic motion being modified for compatibility already includes these effects. Spectrum-compatible motions may match the entire spectrum, but the problem lies in finding a "real" earthquake time series that matches the specific nature of the ground motion. For nonlinear analysis of structures, spectrum-compatible motions should also correspond to the particular energy input [9]; for this reason, designers should be cautious about using spectrum-compatible motions when estimating the displacements of embankment dams and earth structures under strong shaking, if the acceptable performance of these structures is specified by criteria based on tolerable displacements.
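To make the spectral-compatibility idea concrete, the sketch below computes the 5%-damped pseudo-acceleration response of a single-degree-of-freedom oscillator (Newmark average-acceleration integration) and then amplitude-scales a record so its ordinate hits a target at one period. This is only a minimal illustration under assumed data: the record is synthetic noise and the target value is arbitrary. Real matching procedures [8] adjust the record across many periods; plain amplitude scaling, as here, preserves phase trivially but matches a single ordinate only.

```python
import numpy as np

def sdof_psa(ag, dt, period, zeta=0.05):
    """Peak pseudo-spectral acceleration of a linear SDOF oscillator
    (relative-displacement equation u'' + 2*zeta*w*u' + w^2*u = -ag),
    integrated with the Newmark average-acceleration scheme."""
    w = 2.0 * np.pi / period
    u, v, a = 0.0, 0.0, -ag[0]          # at-rest initial conditions
    umax = 0.0
    denom = 1.0 + zeta * w * dt + (w * dt) ** 2 / 4.0
    for ag1 in ag[1:]:
        a1 = (-ag1 - 2.0 * zeta * w * (v + 0.5 * dt * a)
              - w ** 2 * (u + dt * v + 0.25 * dt ** 2 * a)) / denom
        u += dt * v + 0.25 * dt ** 2 * (a + a1)
        v += 0.5 * dt * (a + a1)
        a = a1
        umax = max(umax, abs(u))
    return w ** 2 * umax                # PSA = w^2 * max|u|

# Hypothetical record: 10 s of noise at dt = 0.01 s (illustrative only)
rng = np.random.default_rng(1)
dt = 0.01
ag = rng.standard_normal(1000)

# Scale so that PSA at T = 0.5 s equals an (arbitrary) target value;
# the oscillator is linear, so PSA scales linearly with the record.
target, T = 2.5, 0.5
factor = target / sdof_psa(ag, dt, T)
ag_scaled = factor * ag
print(abs(sdof_psa(ag_scaled, dt, T) - target))  # ~0, by linearity
```

The same routine evaluated over a grid of periods yields the full response spectrum of the scaled record, which is how compatibility with a target spectrum would be checked in practice.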

Another important seismic phenomenon is liquefaction. Liquefaction is associated with a significant loss of stiffness and strength in the shaken soil and consequent large ground deformation. Particularly damaging for engineering structures are cyclic ground movements during the period of shaking and excessive residual deformations such as settlements of the ground and lateral spreads. Ground surface disruption, including surface cracking, dislocation, ground distortion, slumping and permanent deformations, large settlements and lateral spreads, is commonly observed at liquefied sites. In sloping ground and in backfills behind retaining structures in waterfront areas, liquefaction often results in large permanent ground displacements in the down-slope direction or towards waterways (lateral spreads). Dams, embankments and sloping ground near riverbanks, where a certain shear strength is required for stability under gravity loads, are particularly prone to such failures. Clay soils may also suffer some loss of strength during shaking but are not subject to liquefaction.


Cognitive Computing (CC) as a discipline is, in a narrow sense, the application of computers to solve a given computational problem by imperative instructions; in a broad sense, it is a process that implements instructive intelligence by a system transferring a set of given information or instructions into expected behaviors. According to theories of cognitive informatics [16-18], computing technologies and systems may be classified, from the bottom up, into the categories of imperative, autonomic, and cognitive. Imperative computing is a traditional and passive technology based on stored-program controlled behaviors for data processing [19-24]. Autonomic computing comprises goal-driven and self-decision-driven technologies that do not rely on instructive and procedural information [25-28]. Cognitive computing comprises more intelligent technologies beyond imperative and autonomic computing, which embody major natural intelligence behaviors of the brain such as thinking, inference, learning, and perception.

Cognitive computing is an emerging paradigm of intelligent computing methodologies and systems, which implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain. This section presents a brief description of the theoretical framework and architectural techniques of cognitive computing beyond conventional imperative and autonomic computing technologies. Cognitive models are explored on the basis of the latest advances in applying computational intelligence. Applications of cognitive computing are described through the example of cognitive search engines, which demonstrate how machine and computational intelligence technologies can drive us toward autonomous knowledge processing.

### **3.1. Computational intelligence: Soft Computing technologies**

*Computational intelligence* is a synergistic integration of essentially three computing paradigms, viz. neural networks, fuzzy logic and evolutionary computation, entailing probabilistic reasoning (belief networks, genetic algorithms and chaotic systems) [29]. This synergism provides a framework for flexible information processing applications designed to operate in the real world and is commonly called *Soft Computing* (SC) [30]. Soft computing technologies are robust by design and operate by trading off precision for tractability. Since they can handle uncertainty with ease, they conform better to real-world situations and provide lower-cost solutions.

A Cognitive Look at Geotechnical Earthquake Engineering: Understanding the Multidimensionality of the Phenomena 71

**Figure 1.** Soft Computing Components. (The figure depicts Soft Computing, the base of computational intelligence with high MIQ, in contrast with Hard Computing, the base of classical Artificial Intelligence; its components are Fuzzy Logic as the kernel, together with Neural Networks, Probabilistic Reasoning, Genetic Algorithms, Chaos Theory, and their Hybrid Systems.)

Fuzzy logic is the leading constituent of Soft Computing, in which it plays a unique role: FL provides a methodology for computing [36]. It has been successfully applied in many industrial spheres, robotics, complex decision making and diagnosis, data compression, and many other areas. To design a system processor for handling knowledge represented in a linguistic or uncertain numerical form, we need a fuzzy model of the system. Fuzzy sets can be used as universal approximators, which is very important for modeling unknown objects. If an operator cannot state linguistically what kind of action he or she takes in a specific situation, it is quite useful to model his or her control actions using numerical data. However, fuzzy logic in its so-called *pure form* is not always useful for easily constructing intelligent systems. For example, when a designer does not have sufficient prior information (knowledge) about the system, developing an acceptable fuzzy rule base becomes impossible. As the complexity of the system increases, it becomes difficult to specify a correct set of rules and membership functions that adequately describe the behavior of the system. Fuzzy systems also have the disadvantage of not being able to extract additional knowledge from experience and to correct the fuzzy rules so as to improve the performance of the system.
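The rule-based computing just described can be sketched with a minimal two-rule Mamdani-style system: triangular membership functions, min-implication, max-aggregation, and centroid defuzzification. The variables (a notional shaking "intensity" driving a notional "risk" output) and all membership parameters are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], 1 at b (a < b < c)."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Output universe: notional "risk" in [0, 1]
y = np.linspace(0.0, 1.0, 501)
risk_low = tri(y, -0.01, 0.2, 0.5)     # fuzzy set "low risk"
risk_high = tri(y, 0.5, 0.8, 1.01)     # fuzzy set "high risk"

def infer(intensity):
    """Two Mamdani rules:
       IF intensity is low  THEN risk is low
       IF intensity is high THEN risk is high"""
    mu_low = tri(intensity, -0.01, 3.0, 7.0)    # membership in "low intensity"
    mu_high = tri(intensity, 4.0, 8.0, 12.0)    # membership in "high intensity"
    # min-implication per rule, max-aggregation across rules
    agg = np.maximum(np.minimum(mu_low, risk_low),
                     np.minimum(mu_high, risk_high))
    return float(np.sum(agg * y) / np.sum(agg))  # centroid defuzzification

print(infer(6.5))   # the "high" rule dominates, so risk comes out above 0.5
```

Note how both rules fire partially at intensity 6.5; the output is a graded compromise rather than a hard switch, which is the "tolerance to imprecision" listed for fuzzy sets in Table 1.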

Another important component of Soft Computing is neural networks. Neural networks (NN), viewed as parallel computational models, are fine-grained parallel implementations of nonlinear static or dynamic systems. A very important feature of these networks is their adaptive nature, where "learning by example" replaces traditional "programming" in problem solving. Another key feature is the intrinsic parallelism that allows fast computation. Neural networks are viable computational models for a wide variety of problems, including pattern classification, speech synthesis and recognition, curve fitting and approximation, image data compression, associative memory, and modeling and control of nonlinear unknown systems [42, 43]. NN are favorably distinguished by the efficiency of their computations and hardware implementations. Another advantage of NN is their generalization ability, that is, the ability to classify new patterns correctly. A significant disadvantage of NN is their poor interpretability: one of the main criticisms addressed to neural networks concerns their black-box nature [35].
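The "learning by example" idea can be sketched with a tiny fully connected network trained by gradient descent on the classic XOR problem. The architecture (2-4-1), learning rate and epoch count below are arbitrary illustrative choices, not anything prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR patterns: the classic non-linearly-separable toy problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.0], [1.0], [1.0], [0.0]])

# 2-4-1 network: tanh hidden layer, sigmoid output
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def forward(X, W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, y

_, y0 = forward(X, W1, b1, W2, b2)
mse0 = float(np.mean((y0 - t) ** 2))      # error before training

lr = 0.2
for _ in range(5000):                     # plain batch gradient descent
    h, y = forward(X, W1, b1, W2, b2)
    d_out = (y - t) * y * (1 - y)         # squared-error + sigmoid gradient
    d_hid = (d_out @ W2.T) * (1 - h ** 2) # backpropagate through tanh
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)

_, y1 = forward(X, W1, b1, W2, b2)
mse1 = float(np.mean((y1 - t) ** 2))      # error after training
print(mse0, mse1)                         # training reduces the error
```

The weights are never programmed by hand; they are adjusted from examples alone, which is exactly the adaptive behavior contrasted above with traditional programming.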

The three components of soft computing differ from one another in more than one way. Neural networks operate in a numeric framework and are well known for their learning and generalization capabilities. Fuzzy systems [31] operate in a linguistic framework, and their strength lies in their capability to handle linguistic information and perform approximate reasoning. Evolutionary computation techniques provide powerful search and optimization methodologies. All three facets of soft computing differ from one another in their time scales of operation and in the extent to which they embed *a priori* knowledge.

Figure 1 shows the general structure of Soft Computing technology. The following main components of SC are known by now: fuzzy logic (FL), neural networks (NN), probabilistic reasoning (PR), genetic algorithms (GA), and chaos theory (ChT) (Figure 1). In SC, FL is mainly concerned with imprecision and approximate reasoning, NN with learning, PR with uncertainty and propagation of belief, GA with global optimization and search, and ChT with nonlinear dynamics. Each of these computational paradigms (emerging reasoning technologies) provides complementary reasoning and searching methods for solving complex, real-world problems. By and large, FL, NN, PR, and GA are complementary rather than competitive [32-34]. The interrelations between the components of SC, shown in Figure 1, form the theoretical foundation of Hybrid Intelligent Systems. As noted by L. Zadeh: "… the term hybrid intelligent systems is gaining currency as a descriptor of systems in which FL, NC, and PR are used in combination. In my view, hybrid intelligent systems are the wave of the future" [35]. The use of Hybrid Intelligent Systems is leading to the development of numerous manufacturing systems, multimedia systems, intelligent robots, and trading systems, which exhibit a high level of MIQ (machine intelligence quotient).

#### *3.1.1. Comparative characteristics of SC tools*

The constituents of SC can be used independently (fuzzy computing, neural computing, evolutionary computing, etc.), but more often in combination [36-41]. Based on independent use of the constituents of Soft Computing, fuzzy technology, neural technology, chaos technology and others have recently been applied as emerging technologies to both industrial and non-industrial areas.


| Technology | Strengths | Weaknesses |
|---|---|---|
| Fuzzy Sets | Interpretability, transparency, plausibility, graduality, modeling, reasoning, tolerance to imprecision | Knowledge acquisition, learning |
| Artificial Neural Networks | Learning, adaptation, fault tolerance, curve fitting, generalization ability, approximation ability | Black-box interpretability |
| Evolutionary Computing, GA | Computational efficiency, global optimization | Coding, computational speed |
| Probabilistic Reasoning | Rigorous framework, well understood | Limitations of the axioms of Probability Theory, lack of complete knowledge, computational complexity |
| Chaotic Computing | Nonlinear dynamics simulation, discovering chaos in observed data (with noise), determining predictability, formulation of prediction strategies | Computational complexity, chaos identification complexity |

**Table 1.** Central characteristics of Soft Computing technologies
The combination of genetic algorithms with neural networks also yields promising results. One of the main problems in the development of artificial neural systems is the selection of a suitable learning method for tuning the parameters of a neural network (weights, thresholds, and structure). The best-known algorithm is "error back propagation". Unfortunately, back propagation presents some difficulties. First, the effectiveness of the learning depends considerably on the initial set of weights, which is generated randomly. Second, back propagation, like any other gradient-based method, does not avoid local minima. Third, if the learning rate is too low, finding the solution takes too much time; if, on the other hand, the learning rate is too high, it can generate oscillations around the desired point in the weight space. Fourth, back propagation requires the activation functions to be differentiable, a condition that does not hold for many types of neural networks. Genetic algorithms, used for solving many optimization problems when the "strong" methods fail to find an appropriate solution, can be successfully applied to learning in neural networks, because they are free of the above drawbacks.
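A sketch of that idea: a real-coded genetic algorithm (elitism, tournament selection, arithmetic crossover, Gaussian mutation) searches the weight space of a tiny 2-3-1 XOR network with no gradients, and hence no differentiability requirement on the activations. Population size, generation count and mutation scale are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 1.0, 1.0, 0.0])

def decode(g):
    """Unpack a flat 13-gene genome into 2-3-1 network parameters."""
    W1 = g[:6].reshape(2, 3); b1 = g[6:9]
    W2 = g[9:12].reshape(3, 1); b2 = g[12:]
    return W1, b1, W2, b2

def mse(g):
    W1, b1, W2, b2 = decode(g)
    h = np.tanh(X @ W1 + b1)
    y = np.tanh(h @ W2 + b2).ravel() * 0.5 + 0.5   # squash into (0, 1)
    return float(np.mean((y - t) ** 2))

pop = rng.normal(0, 1, (40, 13))                   # random initial genomes
best0 = min(mse(g) for g in pop)

for _ in range(150):
    fit = np.array([mse(g) for g in pop])
    new = [pop[fit.argmin()].copy()]               # elitism: keep the best
    while len(new) < len(pop):
        i, j = rng.integers(len(pop), size=2)      # tournament selection
        p1 = pop[i] if fit[i] < fit[j] else pop[j]
        i, j = rng.integers(len(pop), size=2)
        p2 = pop[i] if fit[i] < fit[j] else pop[j]
        w = rng.random()
        child = w * p1 + (1 - w) * p2              # arithmetic crossover
        child += rng.normal(0, 0.1, 13)            # Gaussian mutation
        new.append(child)
    pop = np.array(new)

best1 = min(mse(g) for g in pop)
print(best0, best1)   # the GA lowers the error without computing any gradient
```

Because the elite genome is copied unchanged into each new generation, the best error can never increase, so the search improves monotonically even though no learning rate or derivative is involved.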

The models of artificial neurons that use linear, threshold, sigmoidal and other transfer functions are effective for neural computing. It should be noted, however, that such models are very simplified; for example, the response of a biological axon is chaotic even if the input is periodic. In this respect a chaotic model of the neuron seems more adequate. A model of a chaotic neuron can be used as an element of chaotic neural networks, and even more adequate results can be obtained using fuzzy chaotic neural networks, which are closer to biological computation. Fuzzy systems with If-Then rules can model nonlinear dynamic systems and capture chaotic attractors easily and accurately. The combination of Fuzzy Logic and Chaos Theory gives us a useful tool for building a system's chaotic behavior into the rule structure. Identification of chaos allows us to determine prediction strategies. If we use a Neural Network Predictor to forecast the system's behavior, the parameters of the strange attractor (in particular its fractal dimension) tell us how much data are necessary to train the neural network. The combination of Neurocomputing and Chaotic computing technologies can be very helpful for prediction and control.
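The chaotic behavior invoked here is easy to exhibit with the simplest chaotic system, the logistic map x(n+1) = r·x(n)·(1 - x(n)) at r = 4 (an illustrative stand-in, not the chaotic neuron model itself): two trajectories that start 10⁻⁹ apart become macroscopically different within a few dozen iterations. This sensitive dependence is why attractor characteristics, rather than point forecasts, govern how much data a predictor needs.

```python
def logistic(x, r=4.0):
    # one step of the logistic map; the dynamics are chaotic for r = 4
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9          # nearly identical initial conditions
gap = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gap.append(abs(x - y))

# The tiny initial separation roughly doubles each step until it
# saturates at the size of the attractor itself.
print(max(gap[40:]))            # order-one separation: sensitive dependence
```

A chaotic neuron model would embed a map of this kind inside the neuron's activation dynamics; the point of the sketch is only the exponential divergence that makes long-horizon point prediction hopeless.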

Evolutionary Computing (EC) is a revolutionary approach to optimization. One part of EC, genetic algorithms, comprises algorithms for global optimization. Genetic algorithms (GAs) are based on the mechanisms of natural selection and genetics [44]. One advantage of genetic algorithms is that they effectively implement a parallel multi-criteria search. The mechanism of genetic algorithms is simple: simplicity of operations and powerful computational effect are their two main advantages. The disadvantages are convergence problems and the absence of a strong theoretical foundation. The requirement of coding the domain of the real variables into bit strings also seems to be a drawback of genetic algorithms, and it should be noted that their computational speed is low.
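The bit-string mechanics just described (selection, one-point crossover, bit-flip mutation) can be shown in a few lines on the standard "one-max" toy problem: maximize the number of 1-bits in a string. All sizes and rates are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
L, POP, GENS = 20, 30, 60          # string length, population, generations

def fitness(s):
    return int(s.sum())            # one-max: count the 1-bits

pop = rng.integers(0, 2, (POP, L)) # random initial bit strings
best0 = max(fitness(s) for s in pop)

for _ in range(GENS):
    fit = np.array([fitness(s) for s in pop])
    new = [pop[fit.argmax()].copy()]           # elitism: keep the best
    while len(new) < POP:
        i, j = rng.integers(POP, size=2)       # tournament selection
        p1 = pop[i] if fit[i] >= fit[j] else pop[j]
        i, j = rng.integers(POP, size=2)
        p2 = pop[i] if fit[i] >= fit[j] else pop[j]
        cut = rng.integers(1, L)               # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(L) < 0.02            # bit-flip mutation
        child = np.where(flip, 1 - child, child)
        new.append(child)
    pop = np.array(new)

best1 = max(fitness(s) for s in pop)
print(best0, best1)    # with elitism the best fitness never decreases
```

The bit-string encoding is exactly the drawback noted above for real-valued domains: a continuous variable must first be discretized into such strings before this machinery applies.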

Because PR and ChT are not exploited in this investigation, they are not explained further; the interested reader is referred to [41]. Table 1 presents the comparative characteristics of the components of Soft Computing. For each component of Soft Computing there is a specific class of problems where the use of the other components is inadequate.

#### *3.1.2. Intelligent Combinations of SC*

As shown above, the components of SC complement each other rather than compete. It becomes clear that FL, NC and GA are more effective when used in combination. The lack of interpretability of neural networks and the poor learning capability of fuzzy systems are similar problems that limit the application of these tools. Neurofuzzy systems are hybrid systems that try to solve this problem by combining the learning capability of connectionist models with the interpretability of fuzzy systems. As noted above, in a dynamic work environment, automatic knowledge-base correction in fuzzy systems becomes necessary. Artificial neural networks, on the other hand, are successfully used in problems connected with knowledge acquisition, using learning by examples with the required degree of precision.

Incorporating neural networks into fuzzy systems for fuzzification, construction of fuzzy rules, optimization and adaptation of the fuzzy knowledge base, and implementation of fuzzy reasoning is the essence of the Neurofuzzy approach.



also on the uniformity or regularity of the encountered soil deposits. Even the most detailed soil maps are not sufficient for predicting a specific soil property, because it changes from place to place, even within the same soil type. Consequently, interpolation techniques have been extensively exploited. The most commonly used methods are kriging and co-kriging, but for better estimations they require a great number of measurements for each soil type, which is generally impossible.
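To fix ideas about distance-based interpolation of borehole data, the sketch below uses inverse-distance weighting (IDW). IDW is a simpler scheme than the kriging methods named above (it needs no variogram), shown here only to illustrate how a point estimate is assembled from neighboring borings; the coordinates and property values are hypothetical.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate of a soil property at query
    points from scattered borehole measurements."""
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    out = []
    for q in np.atleast_2d(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                 # query coincides with a boring:
            out.append(z_known[d.argmin()])   # return the measurement itself
            continue
        w = 1.0 / d ** power              # nearer borings weigh more
        out.append(float(np.sum(w * z_known) / np.sum(w)))
    return np.array(out)

# Hypothetical borehole plan coordinates (km) and a measured property
xy = [[0, 0], [1, 0], [0, 1], [1, 1]]
z = [10.0, 12.0, 14.0, 20.0]

print(idw(xy, z, [[0, 0]]))      # exact at a measured location -> [10.]
print(idw(xy, z, [[0.5, 0.5]]))  # centre: equal weights -> mean = [14.]
```

Like kriging, IDW honors the data at the borings; unlike kriging, it carries no model of spatial correlation, which is precisely what a trained NN (below) or a fitted variogram would add.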

Given the high cost of collecting soil attribute data at many locations across a landscape, new interpolation methods must be tested in order to improve the estimation of soil properties. The integration of GIS and Soft Computing (SC) offers a potential mechanism to lower the cost of analysis of geotechnical information by reducing the amount of time spent understanding data. Applying GIS to large sites, where historical data can be organized into multiple databases for analytical and stratigraphic interpretation, leads to spatially and chronologically efficient methodologies for interpreting properties (soil exploration) and behaviors (measured in situ). GIS-SC modeling/simulation of natural systems represents a new methodology for building predictive models; in this investigation NN and GAs, nonparametric cognitive methods, are used to analyze physical, mechanical and geometrical parameters in a geographical context. This kind of spatial analysis can handle uncertain, vague and incomplete/redundant data when modeling intricate relationships between multiple variables. This means that a NN has no constraints on the spacing (minimum distance) between the drill holes used for building (training) the SC model. The NNs-GAs act as computerized architectures that can approximate nonlinear functions of several variables; this scheme represents the relations between the spatial patterns of the stratigraphy without restrictive assumptions or excessive geometrical and physical simplifications.

The geotechnical data requirements (geo-referenced properties) for an easy integration of the SC technologies are explained through an application example: a geo-referenced three-dimensional model of the soils underlying Mexico City. The classification/prediction criterion for this very complex urban area is established according to two variables: the cone penetration resistance *q<sub>c</sub>* (mechanical property) and the shear wave velocity *V<sub>s</sub>* (dynamic property). The expected result is a 3D model of the soils underlying the city area that could eventually be improved into a more complex and comprehensive model by adding other mechanical, physical or geometrical geo-referenced parameters.

Cone-tip penetration resistances and shear wave velocities have been measured along 16 boreholes spread throughout the clay deposits of Mexico City (Figure 2). This information was used as the set of examples: inputs (latitude, longitude and depth) → outputs (*q<sub>c</sub>*, *V<sub>s</sub>*). The analysis was carried out over an area of approximately 125 km<sup>2</sup> of Mexico City downtown. It is important to point out that 20% of these patterns (sample points with complete variable information) are not used in the training stage; they are presented later to test the generalization capabilities of the closed-system components (once the training is stopped).
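The 80/20 holdout described above can be sketched as follows. The arrays are synthetic stand-ins for the geo-referenced (latitude, longitude, depth) → (q<sub>c</sub>, V<sub>s</sub>) patterns; only the splitting mechanics are the point.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins for the geo-referenced patterns:
# inputs (latitude, longitude, depth) -> outputs (qc, Vs)
n = 100
inputs = rng.random((n, 3))
outputs = rng.random((n, 2))

# Hold out 20% of the patterns: they are never seen during training and
# are used afterwards to test the generalization of the trained model.
idx = rng.permutation(n)
n_train = int(0.8 * n)
train_idx, test_idx = idx[:n_train], idx[n_train:]

X_train, y_train = inputs[train_idx], outputs[train_idx]
X_test, y_test = inputs[test_idx], outputs[test_idx]

print(len(X_train), len(X_test))   # 80 20
```

Shuffling before splitting matters for spatial data such as borehole logs: consecutive samples come from the same boring and are strongly correlated, so an unshuffled split would overstate generalization.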


The cooperation between these formalisms gives a useful tool for modeling and reasoning under uncertainty in complicated real-world problems. Such cooperation is of particular importance for constructing perception-based intelligent information systems. We hope that the intelligent combinations mentioned here will develop further and that new ones will be proposed. These SC paradigms will form the basis for the creation and development of Computational Intelligence.
