**4. Cognitive models of ground motions**

The existence of numerous databases in the field of civil engineering, and in particular in geotechnical earthquake engineering, has opened new research lines through the introduction of analyses based on soft computing. Three methods are mainly applied in this emerging field: those based on Neural Networks (NN), those created using Fuzzy Sets (FS) theory, and those developed from Evolutionary Computation [45].

The SC hybrids used in this investigation are directed at tasks of prediction (classification and/or regression). The central objective is to obtain numerical and/or categorical values that mimic input-output conditions from experimentation and in situ measurements and then, through the recorded data and accumulated experience, predict future behaviors. The examples presented herein have been developed by an engineering committee that works to generate useful guidance for geotechnical practitioners in geotechnical seismic design. This effort could help to minimize the perceived significant and undesirable variability within geotechnical earthquake practice. After the most recent earthquake disasters, some urgency in producing the alternative guidelines was seen as necessary, with a desire to avoid a long and protracted process. To this end, a two-stage approach was suggested, with the first stage being a cognitive interpretation of well-known procedures with appropriate factors for geotechnical design, and a posterior step identifying the relevant philosophy for a new geotechnical seismic design.

#### **4.1. Spatial variation of soil dynamic properties**

The spatial variability of subsoil properties constitutes a major challenge in both the design and construction phases of most geo-engineering projects. Subsoil investigation is an imperative step in any civil engineering project. The purpose of an exploratory investigation is to infer accurate information about actual soil and rock conditions at the site. Soil exploration, testing, evaluation, and field observation are well-established and routine procedures that, if carried out conscientiously, will invariably lead to good engineering design and construction. It is impossible to determine the optimum spacing of borings before an investigation begins because the spacing depends not only on the type of structure but also on the uniformity or regularity of the encountered soil deposits. Even the most detailed soil maps are not efficient enough for predicting a specific soil property because it changes from place to place, even for the same soil type. Consequently, interpolation techniques have been extensively exploited. The most commonly used methods are kriging and co-kriging, but for better estimations they require a great number of measurements for each soil type, which is generally impossible.


Based on the high cost of collecting soil attribute data at many locations across a landscape, new interpolation methods must be tested in order to improve the estimation of soil properties. The integration of GIS and Soft Computing (SC) offers a potential mechanism to lower the cost of analysis of geotechnical information by reducing the amount of time spent understanding data. Applying GIS to large sites, where historical data can be organized to develop multiple databases for analytical and stratigraphic interpretation, leads to the establishment of efficient spatial/chronological methodologies for interpreting properties (soil exploration) and behaviors (measured in situ). GIS-SC modeling/simulation of natural systems represents a new methodology for building predictive models; in this investigation NNs and GAs, nonparametric cognitive methods, are used to analyze physical, mechanical and geometrical parameters in a geographical context. This kind of spatial analysis can handle uncertain, vague and incomplete/redundant data when modeling intricate relationships between multiple variables. This means that a NN has no constraints on the spacing (minimum distance) between the drill holes used for building (training) the SC model. The NN-GA scheme acts as a computerized architecture that can approximate nonlinear functions of several variables, representing the relations between the spatial patterns of the stratigraphy without restrictive assumptions or excessive geometrical and physical simplifications.

The geotechnical data requirements (geo-referenced properties) for an easy integration of the SC technologies are explained through an application example: a geo-referenced three-dimensional model of the soils underlying Mexico City. The classification/prediction criterion for this very complex urban area is established according to two variables: the cone penetration resistance *q<sub>c</sub>* (mechanical property) and the shear wave velocity *V<sub>s</sub>* (dynamic property). The expected result is a 3D model of the soils underlying the city area that could eventually be improved into a more complex and comprehensive model by adding other mechanical, physical or geometrical geo-referenced parameters.

Cone-tip penetration resistances and shear wave velocities have been measured along 16 boreholes spread throughout the clay deposits of Mexico City (Figure 2). This information was used as the set of examples: inputs (latitude, longitude and depth) → outputs (*q<sub>c</sub>* / *V<sub>s</sub>*). The analysis was carried out in an approximate area of 125 km<sup>2</sup> of Mexico City downtown. It is important to point out that 20% of these patterns (sample points and complete variables information) were not used in the training stage; they are presented for testing the generalization capabilities of the closed-system components (once the training is stopped).
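As a concrete illustration of this input-output mapping, the sketch below trains a small feed-forward net on (latitude, longitude, depth) → (*q<sub>c</sub>*, *V<sub>s</sub>*) patterns with 20% of the points held out, as described above. The library, hyperparameters and the synthetic placeholder data are illustrative assumptions, not the authors' actual pipeline (which couples the network with a genetic tuner, see [46]).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder stand-ins for the geo-referenced boring database:
# rows are sampling points along the borings, columns are
# (latitude, longitude, depth in m).
X = rng.uniform([19.25, -99.20, 0.0], [19.50, -99.05, 60.0], size=(500, 3))
y = np.column_stack([
    50 + 2.0 * X[:, 2] + rng.normal(0.0, 5.0, 500),    # qc, kg/cm2
    60 + 5.0 * X[:, 2] + rng.normal(0.0, 10.0, 500),   # Vs, m/s
])

# Hold out 20% of the patterns to test generalization, as in the text.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.20, random_state=0)

scaler = StandardScaler().fit(X_tr)
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
net.fit(scaler.transform(X_tr), y_tr)
print("held-out R^2:", net.score(scaler.transform(X_te), y_te))
```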

**Figure 2.** Mexico City Zonation (real and virtual borings: 1 Tlatelolco, 2 Alameda, 3 Plaza Córdoba, 4 Velódromo, 5 SCT, 6 CAF, 7 CDAO, 8 CUPJ, 9 Eugenia, 10 El Águila, 11 Línea B, 12 Av. 510, 13 Calle Urano, 14 Plaza Aragón, 15 Río Remedios, 16 Tláhuac, 17 5 de Febrero)

In the 3D-neurogenetic analysis, the functions {*q<sub>c</sub>* = *q<sub>c</sub>*(X, Y, Z)} / {*V<sub>s</sub>* = *V<sub>s</sub>*(X, Y, Z)} are to be approximated using the procedure outlined below:



1. Generate the database, including identification of the site [borings or stations] (X, Y: geographical coordinates; Z: depth; and a CODE: ID number), elevation reference (meters above sea level, m.a.s.l.), thickness of predetermined structures (layers), and additional information related to geotechnical zoning that could be useful for results interpretation.

2. Use the database to train an initial neural topology whose weights and layers are tuned by an evolutive algorithm (see [46] for details), until the minimum error between calculated and measured values, {*q<sub>c</sub>* = *f<sub>NN</sub>*(X, Y, Z)} / {*V<sub>s</sub>* = *f<sub>NN</sub>*(X, Y, Z)}, is achieved (Figure 3a). The generalization capabilities of the optimal 3D neural model are tested by presenting real work cases (information from borings not included in the training set) to the net. Figure 3b presents the comparison between the measured *q<sub>c</sub>* and *V<sub>s</sub>* values and the NN calculations for testing cases. Through the neurogenetic results for unseen situations we can conclude that the procedure works extremely well in identifying the general trend in material resistance (stiffness). The correlation between NN calculations and "real" values is over 0.9.


3. For visual environment requirements, a grid is constructed using raw information and neurogenetic estimations for defining the spatial variation of properties (Figure 4; a minimal grid-evaluation sketch follows this list). The 3D view of the studied zone represents an easier and more understandable engineering system. The 3D neurogenetic database also permits displaying property-contour lines for specific depths. Using the neurogenetic contour maps, the spatial distribution of the mechanical/dynamic variables can be visually appreciated. The 3D model is able to reflect the stratigraphical patterns (Figure 5), indicating that the proposed networks are effective in site characterization, with remarkable advantages compared with geostatistical approximations: they are easier to use, to understand, and to build graphical user interfaces upon. The confidence and practical advantages of the defined neurogenetic layers are evident. The precision of the predictions depends on neighborhood structure, grid size, and variance response; based on the results we can conclude that, even though the grid cells are not too small, the spatial correlation extends beyond the training neighborhood, although the highest confidence is obviously attained only within it.
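The sketch below illustrates the grid evaluation in step 3, reusing the `net` and `scaler` objects from the previous sketch; the grid extent and spacing are illustrative assumptions.

```python
import numpy as np

# Regular horizontal grid over the studied area; depth fixed at the
# contour plane of interest (grid spacing is an illustrative choice).
lats = np.linspace(19.25, 19.50, 60)
lons = np.linspace(-99.20, -99.05, 60)
depth = 15.0                                    # Z = 15 m contour plane

grid = np.array([[la, lo, depth] for la in lats for lo in lons])
pred = net.predict(scaler.transform(grid))      # columns: qc, Vs
vs_plane = pred[:, 1].reshape(len(lats), len(lons))
# vs_plane can now be passed to any contouring routine to draw the
# property-contour lines described in step 3.
```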

**Figure 3.** Neural estimations of mechanical and dynamic parameters

**Figure 4.** 3D Neural response

**Figure 5.** Stratigraphy sequence obtained using the 3D Neural estimations (*V<sub>s</sub>* estimations at sites CUPJ, Eugenia, Alameda and Av. 510, for depths Z = 5, 15, 30 and 45 m, across the Hill, Transition and Lake zones)

#### **4.2. Attenuation laws for rock site (outcropping motions)**

Source, path, and local site response are factors that should be considered in seismic hazard analyses when using attenuation relations. These relations, obtained from statistical regression on strong motion recordings, describe the ground motion produced by an earthquake of a specific magnitude occurring at a particular distance from the site. Because of the uncertainties inherent in the variables describing the source (e.g., magnitude, epicentral distance, focal depth and fault rupture dimension), the difficulty of defining broad categories to classify the site (e.g., rock or soil), and our lack of understanding regarding wave propagation processes and the ray path characteristics from source to site, the predictions from attenuation regression analyses are commonly inaccurate. As an effort to recognize these aspects, multiparametric attenuation relations have been proposed by several researchers [47-53]. However, most of these authors have concluded that the governing parameters are still source, ray path, and site conditions. In this section an empirical NN formulation that uses minimal information about magnitude, epicentral distance, and focal depth for subduction-zone earthquakes is developed to predict the peak ground acceleration (PGA) and spectral accelerations *S<sub>a</sub>* at a rock-like site in Mexico City.

The NN model was trained on existing information compiled in the Mexican strong motion database. The NN uses the earthquake moment magnitude *M<sub>w</sub>*, epicentral distance *D<sub>E</sub>*, and focal depth *D<sub>F</sub>* from hundreds of events recorded during Mexican subduction earthquakes (Figure 6) from 1964 to 2007. To test the predictive capabilities of the neuronal model, 186 records were excluded from the data set used in the learning phase. The epicentral distance *D<sub>E</sub>* is taken as the length from the point where fault rupture starts to the recording site, and the focal depth *D<sub>F</sub>* is not declared through mechanism classes; the NN should identify the event type through the crisp *D<sub>F</sub>* value coupled with the other input parameters [54, 47, 55]. The interval of *M<sub>w</sub>* goes approximately from 3 to 8.1, and the events were recorded at near-field (a few km) and far-field stations (about 690 km). The depth of the zone of energy release ranged from very shallow to about 360 km.

**Figure 6.** Earthquake characteristics (event summary: 80 events from 1964 to 2007; magnitudes from 3.9 to 8.1; focal depths from <3 to 360 km; epicentral distances from 112 to 690 km; epicentral coordinates from 13.98° to 18.74° latitude and 92.79° to 104.67° longitude)


Modeling of the database has been performed using the backpropagation learning algorithm. Horizontal (mutually orthogonal *PGA<sub>h1</sub>*, N-S component, and *PGA<sub>h2</sub>*, E-W component) and vertical (*PGA<sub>v</sub>*) components are included as outputs for the neural mapping. After trying many topologies, the best horizontal and vertical modules with quite acceptable approximations were the simpler alternatives (BP backpropagation, 2 hidden layers with 15 units or nodes each). The neuronal attenuation model for {*M<sub>w</sub>*, *D<sub>E</sub>*, *D<sub>F</sub>*} → {*PGA<sub>h1</sub>*, *PGA<sub>h2</sub>*, *PGA<sub>v</sub>*} was evaluated by performing testing analyses. The predictive capabilities of the NNs were verified by comparing the estimated PGAs to those induced by the 186 events excluded from the original database (data for the training stage). Figure 7 compares the computed PGAs during the training and testing stages to the measured values. The relative correlation factors (*R*<sup>2</sup> ≈ 0.97) obtained in the training phase indicate that the topologies selected as optimal behave consistently within the full range of intensities, distances and focal depths depicted by the patterns. Once the networks converge to the selected stop criterion, learning is finished and each of these black boxes becomes a nonlinear multidimensional functional. Following this procedure, 20 NNs are trained to evaluate *S<sub>a</sub>* at different response spectrum periods (from T = 0.1 s to T = 5.0 s with ΔT = 0.25 s). Forecasting of the spectral components is reliable enough for practical applications.
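A hedged sketch of this attenuation mapping with the reported topology (two hidden layers of 15 units each) follows; training on log-distance and log-PGA is our assumption for a better-behaved regression target, not a detail stated in the text.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_attenuation_net(M, pga):
    """M: (n, 3) array of [Mw, epicentral distance DE in km, focal depth
    DF in km]; pga: (n, 3) array of [PGA_h1, PGA_h2, PGA_v] in gals."""
    X = np.column_stack([M[:, 0], np.log10(M[:, 1]), M[:, 2]])
    net = MLPRegressor(hidden_layer_sizes=(15, 15), max_iter=5000,
                       random_state=0)
    net.fit(X, np.log10(pga))   # log targets: PGA spans orders of magnitude
    return net
```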


**Figure 7.** Some examples of measured and NN-estimated PGA values

In Figure 8, two case histories corresponding to large and medium size events are shown: the estimated values obtained for these events using the relationships proposed by Gómez, Ordaz & Tena [56], Youngs et al. [47], and Atkinson and Boore [55] (proposed for rock sites), and Crouse et al. [51] (proposed for stiff soil sites), together with the predictions obtained with the *PGA<sub>h1,h2</sub>* modules. It can be seen that the estimation obtained with Gómez, Ordaz & Tena [56] seems to underestimate the response for the large magnitude event; however, for the lower magnitude event it follows closely both the measured responses and the NN predictions. The Youngs et al. [47] attenuation relationship follows closely the overall trend but tends to fall sharply for long epicentral distances.

**Figure 8.** Attenuation laws comparisons


Furthermore, it should be stressed that, as can be seen in Figure 9, the neural attenuation model is capable of following the general behavior of the measured data expressed as spectra, which the traditional functional approaches are not able to reproduce. A neural sensitivity study of the input variables was conducted for the neuronal modules. The results are strictly valid only for the database utilized; nevertheless, after several sensitivity analyses conducted by changing the database composition, it was found that the following trend prevails: *M<sub>w</sub>* is the most relevant parameter, followed by *D<sub>E</sub>* coupled with *D<sub>F</sub>*. However, for near-site events the epicentral distance can become as relevant as the magnitude, particularly for the vertical component, and for minor earthquakes (low M) *D<sub>F</sub>* becomes very significant.
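One simple way to approximate such a sensitivity study is permutation importance: shuffle one input at a time and measure how much the fit degrades. The sketch below is a stand-in for the authors' (unspecified) neural sensitivity procedure; the function name and defaults are ours.

```python
from sklearn.inspection import permutation_importance

def rank_inputs(net, X_test, y_test, names=("Mw", "DE", "DF")):
    """Rank inputs by how much shuffling each one degrades the fit."""
    res = permutation_importance(net, X_test, y_test,
                                 n_repeats=30, random_state=0)
    order = res.importances_mean.argsort()[::-1]
    return [(names[i], float(res.importances_mean[i])) for i in order]
```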


**Figure 9.** Response spectra: NN-calculated vs. traditional functions (Joyner & Boore (1988) and Esteva (1967)) for the events of September 19, 1985, June 15, 1997, and December 22, 1997

Through the {*M<sub>w</sub>*, *D<sub>E</sub>*, *D<sub>F</sub>*} → {*PGA<sub>hi</sub>*, *S<sub>a</sub>*} mapping, this neuronal approach offers the flexibility to fit arbitrarily complex trends in magnitude and distance dependence, and to recognize and select among the tradeoffs present in fitting the observed parameters within the range of magnitudes and distances covered by the data. This approach seems to be a promising alternative for describing earthquake phenomena despite the limited observations and the qualitative knowledge of the geotechnical site conditions at the recording stations, which leads to reasoning about a partially defined behavior.
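A small usage sketch of the resulting functional: assembling a predicted response spectrum from the 20 per-period networks. The names and the feature layout (matching the earlier PGA sketch) are illustrative assumptions.

```python
import numpy as np

periods = np.arange(0.1, 5.0 + 1e-9, 0.25)      # the 20 spectral periods

def predict_spectrum(sa_nets, Mw, DE, DF):
    """sa_nets: the 20 fitted per-period regressors, in period order;
    each is assumed fitted directly to its spectral ordinate."""
    x = np.array([[Mw, np.log10(DE), DF]])
    return np.array([float(net.predict(x)[0]) for net in sa_nets])
```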

#### **4.3. Generation of artificial time series: Accelerograms application**

For nonlinear seismic response analysis, where superposition techniques do not apply, earthquake acceleration time histories are required as inputs. Virtually all seismic design codes and guidelines require scaling of selected ground motion time histories so that they match or exceed the controlling design spectrum within a period range of interest. Considerable variability in the characteristics of strong motions recorded under similar conditions may still require a characterization of future shaking in terms of an ensemble of accelerograms rather than in terms of just one or two "typical" records. This situation has thus created a need for the generation of synthetic (artificial) strong-motion time histories that simulate realistic ground motions from different points of view and/or with different degrees of sophistication. To provide the ground motions for analysis and design, various methods have been developed: i) frequency-domain methods, where the frequency content of recorded signals is manipulated [57-60], and ii) time-domain methods, where the amplitude of recorded ground motions is controlled [61, 62]. Regardless of the method, first one or more time histories are selected subjectively, and then scaling mechanisms for spectrum matching are applied. This is a trial-and-error procedure that can lead to artificial signals very far from real-earthquake time series.


In this investigation a Genetic Generator of Signals is presented. This genetic generator is a tool for finding the coefficients of a pre-specified functional form that fit a given sampling of values of the dependent variable associated with particular given values of the independent variable(s). When the genetic generator is applied to the construction of synthetic accelerograms, the proposed tool is capable of i) searching, under specific soil and seismic conditions (within thousands of earthquake records), and recommending a desired subset that best matches a target design spectrum, and ii) through processes that mimic mating, natural selection, and mutation, producing new generations of accelerograms until an optimum individual is obtained. The procedure is fast and reliable and results in time series that match any type of target spectrum with minimal tampering and minimal deviation from the characteristics of recorded earthquakes.

The objective of the genetic generator, when applied to synthetic earthquake construction, is to produce artificial signals compatible with specific design spectra. In this model, specific seismic characteristics (fault rupture, magnitude, distance, focal depth) and site characteristics (soil/rock) are the first set of inputs. They are included to take into consideration that a typical strong motion record consists of a variety of waves whose contribution depends on the earthquake source mechanism (wave path), and whose particular characteristics are influenced by the distance between the source and the site, some measure of the size of the earthquake, and the surrounding geology and site conditions; and that the design spectrum can be an envelope or integration of many expected ground motions that may occur within a certain period of time, or the result of a formulation that involves earthquake magnitude, distance and soil conditions. The second set of inputs consists of the target spectrum, the period range for the matching, lower- and upper-bound acceptable values for scaling the signal shape, and a collection of GA parameters (population size, number of generations, crossover ratio, and mutation ratio). The output is the most successful individual, with a chromosome array generated from a set of "real" parent accelerograms.

The algorithm (see Figure 10) starts with a set of solutions (each solution is called a chromosome). A solution is composed of thousands of components or genes (accelerations recorded at each time step), each one encoding a particular trait. The initial solutions (original population) are selected based on the seismic parameters at the site (defined previously by the user): fault mechanism, moment magnitude, epicentral distance, focal depth, geotechnical and geological site classification, and depth of sediments. If the user does not have a priori seismic/site knowledge, the genetic generator can select the initial population randomly (Figure 11). Once the model has found the seed accelerogram(s) or chromosome(s), the space of all feasible solutions can be called the accelerograms space (state space). Each point in this search space represents one feasible solution and can be "marked" by its value or fitness for the problem. Searching for a solution is then equivalent to searching for an extremum (minimum or maximum) in this space.


According to the individuals' fitness, expressed by the difference between the target design spectrum and the chromosome response spectrum, the problem is formulated as the minimization of an error function Z between the actual and the target spectrum over a certain period range. Solutions with the highest fitness are selected to form new solutions (offspring). During reproduction, recombination (or crossover) and mutation permit changing the genes (accelerations) inherited from the parents (earthquake signals) in such a way that the new chromosome (synthetic signal) retains the attributes of the older organisms that assure success. This is repeated until some user condition (for example, a number of populations or the improvement of the best solution) is satisfied (Figure 12).

**Figure 10.** Genetic Generator: flow diagram


**Figure 11.** Genetic Generator: working phase diagram


*GENES Algorithm*

1. **[Start]** Generate a random population, or select a specific population, of *n* chromosomes (suitable solutions for the problem).
2. **[Fitness]** Evaluate the fitness *f(x)* (difference between the actual and the design spectrum) of each chromosome *x* in the population.
3. **[New population]** Create a new population by repeating the following steps until the new population is complete:
   - **[Selection]** Select two parent chromosomes (two accelerograms) from the population according to their fitness (the better the fitness, the bigger the chance to be selected).
   - **[Crossover]** With a crossover probability, cross over the parents to form new offspring (children). If no crossover is performed, the offspring is an exact copy of the parents.
   - **[Mutation]** With a mutation probability, mutate the new offspring at each locus (position in the chromosome).
   - **[Accepting]** Place the new offspring in the new population.
4. **[Replace]** Use the newly generated population for a further run of the algorithm.
5. **[Test]** If the end condition is satisfied, **stop** and return the best solution in the current population.
6. **[Loop]** Go to step 2.
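The sketch below is a compact, self-contained rendering of this loop under stated assumptions: chromosomes are equal-length acceleration arrays, the fitness is the error Z between the candidate's response spectrum and the target spectrum over the matching periods, and the spectrum comes from a simplified central-difference SDOF solver with 5% damping. The function names and GA settings (one-point crossover, multiplicative mutation, truncation selection) are illustrative choices, not the authors' exact operators.

```python
import numpy as np

def response_spectrum(acc, dt, periods, zeta=0.05):
    """Pseudo-acceleration spectrum of an accelerogram, computed with a
    central-difference SDOF solver (stable for dt < T/pi)."""
    sa = np.empty(len(periods))
    for j, T in enumerate(periods):
        w = 2.0 * np.pi / T
        k, c = w * w, 2.0 * zeta * w              # unit mass
        u_prev = u = u_max = 0.0
        for a in acc:                              # u'' + c u' + k u = -a(t)
            u_next = (-a * dt * dt + (2.0 - k * dt * dt) * u
                      - (1.0 - c * dt / 2.0) * u_prev) / (1.0 + c * dt / 2.0)
            u_prev, u = u, u_next
            u_max = max(u_max, abs(u))
        sa[j] = w * w * u_max
    return sa

def fitness(acc, dt, periods, sa_target):
    """Error Z between candidate and target spectra (lower is fitter)."""
    return np.sum(np.abs(response_spectrum(acc, dt, periods) - sa_target))

def evolve(seeds, dt, periods, sa_target, n_gen=50, p_mut=0.01, seed=0):
    """seeds: (m, N) array of recorded accelerograms of equal length used
    as the initial population; returns the fittest synthetic signal."""
    rng = np.random.default_rng(seed)
    pop = np.array(seeds, dtype=float)
    for _ in range(n_gen):
        z = np.array([fitness(ind, dt, periods, sa_target) for ind in pop])
        parents = pop[np.argsort(z)[: max(2, len(pop) // 2)]]
        children = []
        while len(children) < len(pop):
            i, j = rng.integers(len(parents), size=2)
            cut = int(rng.integers(1, pop.shape[1]))   # one-point crossover
            child = np.concatenate([parents[i][:cut], parents[j][cut:]])
            mask = rng.random(child.size) < p_mut      # mutate a few genes
            child[mask] *= rng.uniform(0.8, 1.2, mask.sum())
            children.append(child)
        pop = np.array(children)
    z = np.array([fitness(ind, dt, periods, sa_target) for ind in pop])
    return pop[int(np.argmin(z))]
```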

**Figure 12.** Iteration process of the Genetic Generator

One of the advantages of the genetic approach is the possibility of modifying the image of the expected earthquake on line. While the genetic model is running, the user interface shows the chromosome per epoch and its response spectrum in the same window; if the duration, the interval of highest intensities, or the time step Δ*t* are not convenient for the user's interests, these values can be modified without retraining or changing the model structure.

Figure 13 shows three examples of signals recovered following this methodology. The examples illustrate the application of the genetic methodology to select any number of records to match a given target spectrum (only the most successful individual for each target is shown in the figure). Note the stability of the genetic algorithm in adapting itself to smooth, code-type, or scarped spectrum shapes. The procedure is fast and reliable, as the resulting records match the target spectrum with minimal deviation. The genetic procedure has been applied successfully to generate synthetic ground motions having different amplitudes, durations and combinations of moment magnitude and epicentral distance. Despite the variations in the target spectra, the genetic signals maintain the nonlinear and nonstationary characteristics of real earthquakes. An additional toolbox that will permit the use of advanced signal analysis instruments is still under development because, as has been demonstrated [63, 64], studying nonstationary signals through Fourier or response spectra is not convenient for all applications.


**Figure 13.** Some Generator results: accelerograms application (smooth, code and scarped target response spectra)

#### **4.4. Effects of local site conditions on ground motions**


Geotechnical and structural engineers must take into account two fundamental characteristics of earthquake shaking: 1) how ground shaking propagates through the Earth, especially near the surface (site effects), and 2) how buildings respond to this ground motion. Because neither characteristic is completely understood, the seismic phenomenon is still a challenging research area.

Site effects play a very important role in forecasting seismic ground responses because they may strongly amplify (or deamplify) seismic motions at the last moment, just before the waves reach the ground surface or the basement of man-made structures. For much of the history of seismological research, site effects have received much less attention than they should, with the exception of Japan, where they have been well recognized through pioneering work by Sezawa and Ishimoto as early as the 1930s [65]. The situation was drastically changed by the catastrophic disaster in Mexico City during the Michoacan, Mexico earthquake of 1985, in which strong amplification due to extremely soft clay layers caused many high-rise buildings to collapse despite their long distance from the source. The cause of the astounding intensity and long duration of shaking during this earthquake is not yet well resolved, even though considerable research has been conducted since then; however, there is no room for doubt that the primary cause of the large amplitude of strong motions in the soft soil (lakebed) zone relative to those in the hill zone is a site effect of these soft layers.

The traditional data-analysis methods used to study site effects are all based on linear and stationary assumptions. Unfortunately, in most soil systems, natural or man-made, the data are most likely to be both nonlinear and nonstationary. Discrepancies between calculated responses (using code site-amplification factors) and recent strong motion evidence point out that serious inaccuracies may be committed when analyzing amplification phenomena. The problem may be due partly to the lack of understanding regarding the fundamental causes of soil response, but it is also a consequence of the distorted quantification of soil amplification and the incomplete characterization of the nonlinearity-induced nonstationary features exposed in motion recordings [66]. The objective of this investigation is to illustrate a manner in which site effects can be dealt with for the case of Mexico City soils, making use of response spectra calculated from the motions recorded at different sites during extreme and minor events (see Figure 6). The variations in the spectral shapes, related to local site conditions, are used to feed a multilayer neural network that represents a very advantageous nonlinear amplification relation. The database is composed of recorded information from earthquakes affecting Mexico City, originated by different source mechanisms.

The most damaging shocks, however, are associated with the subduction of the Cocos Plate beneath the Continental Plate, off the Mexican Pacific Coast. Even though epicentral distances are rather large, these earthquakes have recurrently damaged structures and produced severe losses in Mexico City. The singular geotechnical environment that prevails in Mexico City is the single most important factor to be accounted for in explaining the huge amplification of seismic movements [67-70]. The soils in Mexico City were formed by the deposition into a lacustrine basin of air- and water-transported materials. From the viewpoint of geotechnical engineering, the relevant strata extend down to depths of 50 m to 80 m, approximately. The superficial layers formed the bed of a lake system that has been subjected to desiccation for the last 350 years. Three types of soils may be broadly distinguished: in Zone I, firm soils and rock-like materials prevail; in Zone III, very soft clay formations with large amounts of microorganisms, interbedded by thin seams of silty sands, fly ash and volcanic glass, are found; and in Zone II, which is a transition between Zones I and III, sequences of clay layers and coarse material strata are present (Figure 14).

**Figure 14.** Accelerographic stations used in this study (1 Buenos Aires, 2 C.U. Juárez, 3 CDAO, 4 SCT, 5 CUPJ, 6 Alameda, 7 Garibaldi, 8 Rodolfo Menendez, 9 Hospital Juárez, 10 Xochipilli, 11 Tlatelolco)

Due to space limitations, reference is made only to two seismic events: those of June 15, 1999 and October 9, 1995. This module was developed based on a previous study (see Section 4.2 of this chapter) in which the effects of the parameters *D<sub>E</sub>*, *D<sub>F</sub>* and *M<sub>w</sub>* on the ground motion attenuation from epicenter to site were found to be the most significant [71]. The recent disaster experience showed that the imprecision inherent in most variable measurements or estimations makes it crucial to consider subjectivity when evaluating and deriving numerical conclusions according to the behavior of the phenomena. The neuronal training process starts with the input variables *D<sub>E</sub>*, *D<sub>F</sub>* and *M<sub>w</sub>*. The output variables are *PGA<sub>h1</sub>* (peak ground acceleration, horizontal component 1) and *PGA<sub>h2</sub>* (peak ground acceleration, horizontal component 2) registered at a rock-like site in Zone I. The second training process is linked feed-forward with the previous module (PGA for the rock-like site): the new seismic inputs are the seismogenic zone and *PGA<sub>rock</sub>*, and the latitude and longitude coordinates are the geo-referenced position needed to draw the deposition variation within the basin. This neuro-training runs one step after the first training phase, until the minimum difference between *S<sub>a</sub>* and the neuronal calculations is attained. In Figure 15, some results from the training and testing modes are shown.

**Figure 15.** Neural estimations for PGA in Lake Zone sites

88 Earthquake Engineering

19.50

19.45

Latitude

19.40

19.35

19.30

**Figure 14.** Accelerographic stations used in this study

The subsoil of Mexico City is the most important factor to be accounted for in explaining the huge amplification of seismic movements [67-70]. The soils of Mexico City were formed by the deposition, into a lacustrine basin, of air- and water-transported materials. From the viewpoint of geotechnical engineering, the relevant strata extend down to depths of approximately 50 m to 80 m. The superficial layers formed the bed of a lake system that has been subjected to desiccation for the last 350 years. Three types of soils may be broadly distinguished: in Zone I (Hill Zone), firm soils and rock-like materials prevail; in Zone III (Lake Zone), very soft clay formations with large amounts of microorganisms, interbedded with thin seams of silty sands, fly ash and volcanic glass, are found; and in Zone II, a transition between Zones I and III, sequences of clay layers and coarse material strata are present (Figure 14).

**Figure 14.** Accelerographic stations used in this study (map of the Hill, Transition and Lake Zones; the stations are listed below)

| No. | Station | Latitude (°) | Longitude (°) |
| --- | --- | --- | --- |
| 1 | Buenos Aires | 19.410 | 99.145 |
| 2 | C.U. Juárez | 19.410 | 99.157 |
| 3 | CDAO | 19.373 | 99.098 |
| 4 | SCT | 19.393 | 99.147 |
| 5 | CUPJ | 19.410 | 99.157 |
| 6 | Alameda | 19.436 | 99.145 |
| 7 | Garibaldi | 19.439 | 99.140 |
| 8 | Rodolfo Menendez | 19.463 | 99.128 |
| 9 | Hospital Juárez | 19.425 | 99.130 |
| 10 | Xochipilli | 19.420 | 99.135 |
| 11 | Tlatelolco | 19.436 | 99.143 |

In Figure 15, some results from the training and testing modes are shown.

**Figure 15.** Neural estimations for PGA in Lake Zone sites (training and test results for the H1 and H2 components: measured vs. NN-estimated PGA, in cm/s²; results for selected sites)
This second NN represents the geo-referenced amplification ratio, taking into consideration the topographical, geotechnical and geographical conditions implicit in the recorded accelerograms. The results of the two NNs are summarized in Figure 16. These graphs show the predictive capabilities of the neural system, comparing the measured values with those obtained in the neural working phase. A good correspondence can be observed throughout the full distance and magnitude range of the seismogenic zones considered in this study, over the whole studied area (Lake Zone).
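To make the two-module scheme concrete, the following is a minimal sketch that chains two scikit-learn `MLPRegressor` stand-ins in the feed-forward manner described above; the array names, network sizes and randomly generated records are illustrative placeholders, not the chapter's data or architecture.

```python
# Sketch of the two-module, feed-forward-linked training scheme.
# Stage 1 maps (DE, DF, Mw) to rock-site PGA (H1, H2); stage 2 maps the
# stage-1 output plus (seismogenic zone, latitude, longitude) to the
# soft-site response. All records below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Illustrative training inputs: columns DE (km), DF (km), Mw
X_seis = rng.uniform([10, 5, 5.0], [400, 120, 8.1], size=(200, 3))
pga_rock = rng.uniform(1, 200, size=(200, 2))       # H1, H2 targets (cm/s^2)

stage1 = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
stage1.fit(X_seis, pga_rock)                        # first training phase

# Stage 2 runs one step after stage 1: its inputs include stage-1 output.
zone = rng.integers(1, 4, size=(200, 1))            # seismogenic zone label
lat_lon = rng.uniform([19.30, 99.09], [19.50, 99.16], size=(200, 2))
X_site = np.hstack([stage1.predict(X_seis), zone, lat_lon])
sa_site = rng.uniform(1, 500, size=200)             # recorded Sa target (gals)

stage2 = MLPRegressor(hidden_layer_sizes=(15,), max_iter=3000, random_state=0)
stage2.fit(X_site, sa_site)                         # geo-referenced amplification
```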

**Figure 16.** Spectral accelerations in some Lake-Zone sites: measured vs NN (sites: Tlatelolco, Xochipilli, CU Juárez and Garibaldi; events: June 15th, 1999, Mw 7.0, FD 69, and October 9th, 1995, Mw 7.5, FD 5, component H1; spectral acceleration in gals vs. period in s; curves: Outcropping, NN estimation and Measured)

#### **4.5. Liquefaction phenomena: potential assessment and lateral displacements estimation**

Over the past forty years, scientists have conducted extensive research and have proposed many methods to predict the occurrence of liquefaction. In the beginning, undrained cyclic loading laboratory tests were used to evaluate the liquefaction potential of a soil [72], but due to the difficulty of obtaining undisturbed samples of loose sandy soils, many researchers have preferred *in situ* tests [73]. In a semi-empirical approach, theoretical considerations and experimental findings provide the ability to make sense of the field observations, tying them together and thereby lending more confidence to the validity of the approach as it is used to interpolate or extrapolate to areas with insufficient field data to constrain a purely empirical solution. Empirical field-based procedures for determining liquefaction potential have two critical constituents: i) the analytical framework to organize past experiences, and ii) an appropriate *in situ* index to represent soil liquefaction characteristics. The original simplified procedure [74] for estimating earthquake-induced cyclic shear stresses continues to be an essential component of the analysis framework. The refinements to the various elements of this framework include improvements in the in situ index tests (e.g., SPT, CPT, BPT, *Vs*) and the compilation of liquefaction/no-liquefaction case histories.


The objective of the present study is to produce an empirical machine learning (ML) method for evaluating liquefaction potential. ML is a scientific discipline concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data, such as sensor data or databases. The data can be seen as examples that illustrate relations between observed variables. A major focus of ML research is to automatically learn to recognize complex patterns and make intelligent decisions based on data; the difficulty lies in the fact that the set of all possible behaviors, given all possible inputs, is too large to be covered by the set of observed examples (training data). Hence the learner must generalize from the given examples, so as to produce a useful output in new cases. In what follows, two ML tools, Neural Networks (NN) and Classification Trees (CT), are used to evaluate liquefaction potential and to identify the liquefaction control parameters, covering both earthquake and soil conditions. For each of these parameters, the emphasis has been on developing relations that capture the essential physics while being as simple as possible. The proposed cognitive environment permits an improved definition of i) the seismic loading, or cyclic stress ratio CSR, and ii) the *resistance* of the soil to the triggering of liquefaction, or cyclic resistance ratio CRR.

The factor of safety (FS) against the initiation of liquefaction of a soil under a given seismic loading is commonly described as the ratio of the cyclic resistance ratio (CRR), a measure of liquefaction resistance, over the cyclic stress ratio (CSR), a representation of the seismic loading that causes liquefaction; symbolically, $FS = CRR/CSR$. The reader is referred to Seed and Idriss [74], Youd et al. [75], and Idriss and Boulanger [76] for a historical perspective of this approach. The term $CSR = f\left(0.65, \sigma_{vo}, \sigma'_{vo}, a_{max}, r_d, MSF\right)$ is a function of the total vertical stress of the soil $\sigma_{vo}$ at the depth considered, the effective vertical stress $\sigma'_{vo}$, the peak horizontal ground surface acceleration $a_{max}$, a depth-dependent shear stress reduction factor $r_d$ (dimensionless), and a magnitude scaling factor $MSF$ (dimensionless). For CRR, different in situ resistance measurements and overburden correction factors are included in its determination; both terms operate depending on the geotechnical conditions. Details of the theory behind this topic are given in Idriss and Boulanger [76] and Youd et al. [75].
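As a worked reference for this bookkeeping, the sketch below uses the widespread simplified form $CSR = 0.65\,(a_{max}/g)\,(\sigma_{vo}/\sigma'_{vo})\,r_d$, with the MSF applied as a divisor and the common Liao-Whitman approximation for $r_d$; the numerical values are illustrative, and the exact calibration should be taken from [75] and [76].

```python
# Sketch of the conventional factor-of-safety bookkeeping (simplified
# Seed-Idriss form). r_d follows the common Liao-Whitman approximation;
# every number below is illustrative, not the chapter's calibration.

def r_d_liao_whitman(z_m):
    """Depth-dependent shear stress reduction factor (depth z in metres)."""
    return 1.0 - 0.00765 * z_m if z_m <= 9.15 else 1.174 - 0.0267 * z_m

def cyclic_stress_ratio(a_max_g, sigma_vo, sigma_vo_eff, r_d, msf=1.0):
    """CSR = 0.65 (a_max/g) (sigma_vo / sigma'_vo) r_d, scaled by the MSF."""
    return 0.65 * a_max_g * (sigma_vo / sigma_vo_eff) * r_d / msf

def factor_of_safety(crr, csr):
    """FS = CRR / CSR; FS < 1 flags the triggering of liquefaction."""
    return crr / csr

csr = cyclic_stress_ratio(a_max_g=0.16, sigma_vo=160.0,   # total stress, kPa
                          sigma_vo_eff=95.0,              # effective stress, kPa
                          r_d=r_d_liao_whitman(5.0))
print(round(factor_of_safety(crr=0.25, csr=csr), 2))
```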

Many correction/adjustment factors have been included in the conventional analytical frameworks to organize and interpret the historical data. The correction factors improve the consistency between the geotechnical/seismological parameters and the observed liquefaction behavior, but they are a consequence of a constrained analysis space: a 2D plot [CSR *vs.* CRR] in which regression formulas (simple equations) attempt to relate complicated nonlinear, multidimensional information. In this investigation the ML methods are applied to discover unknown, valid patterns and relationships between geotechnical, seismological and engineering descriptions, using the relevant available information on the liquefaction phenomena (expressed as empirical prior knowledge and/or input-output data). These ML techniques "work" and "produce" accurate predictions based on a few logical conditions, and they are not restricted to a mathematical/analytical environment. The ML techniques establish a *natural* connection between experimental and theoretical findings.

| Set | Input parameters | No. of patterns | Ref. |
| --- | --- | --- | --- |
| A | Z, ZNAF, H, Soil Class, *Vs* | 80 | Andrus and Stokoe, 1997; 2000 |
| B | Z, *qc*, Fs, σ0, σ0', *amax*, M | 21 | Juang et al., 1999 |
| C | Z, *qc*, Fs, σ0, σ0', *amax*, M | 242 | Juang, 2003 |
| D | D50, *amax*, σ0', σ0, M, Fs, *qc*, SPT, Z | 170 | Baziar, 2003 |
| E | M, σ0, σ0', *qc*, *amax* | 466 | Chern and Lee, 2009 |
| F | ZNAF, Z, H, σ0, σ0', Geomorphological units, Geological units, Site amplification, *amax* | 56 | Fatemi-Agdha et al., 1988 |
| | | **Total: 1035** | |

**Table 2.** Database for liquefaction analysis

Following the format of the simplified method pioneered by Seed and Idriss [74], in this investigation a nonlinear and adaptive *limit state* (a fuzzy boundary that separates liquefied cases from non-liquefied cases) is proposed (Figure 17). The database used in the present study was constructed using the information included in Table 2, compiled by Fatemi-Agdha et al. [77], Juang et al. [78], Juang [79], Baziar [80], and Chern and Lee [81]. The cases are derived from cone penetration tests (CPT) and shear wave velocity (*Vs*) measurements under different world seismic conditions (U.S., China, Taiwan, Romania, Canada and Japan). The soil types range from clean sand and silty sand to silt mixtures (sandy and clayey silt). Diverse geological and geomorphological characteristics are included. The reader is referred to the citations in Table 2 for details.

**Figure 17.** A schematic view of the nonlinear liquefaction boundary (seismic load vs. in situ soil strength, with the liquefied and non-liquefied zones separated by the fuzzy limit state)

The ML formulation uses geotechnical (*qc*, *Vs*, unit weight, soil type, total vertical stress, effective vertical stress), geometrical (layer thickness, water table depth, top layer depth) and seismological (magnitude, PGA) input parameters; the output variable is "Liquefaction?", which can take the values YES/NO (Figure 17). Once the NN was trained, 100% of the training cases were correctly evaluated, and when it was applied to "unseen" cases (separated for testing) fewer than 10% of the examples were misclassified. The CT shows a lower efficiency during training, with 85% of cases correctly predicted, but when the CT runs on the unseen patterns its capability is not diminished and it maintains the same proportion. From these findings it is concluded that the neuro system is capable of predicting the in situ measurements with a high degree of accuracy, but if improvement of knowledge is necessary, or if there are missing, vague or even contradictory values in the analyzed case, the CT is a better option.
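A minimal sketch of this NN/CT setup, assuming scikit-learn stand-ins, follows; the feature list mirrors the inputs named above, but the toy records and hyperparameters are invented for illustration and are not rows of the actual database.

```python
# Sketch of the NN/CT comparison described above, using scikit-learn
# stand-ins. Feature order mirrors the inputs named in the text; the toy
# records below are invented placeholders, not rows of the real database.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# columns: qc (MPa), Vs (m/s), sigma_v (kPa), sigma_v_eff (kPa),
#          layer_H (m), Z_w (m), Z_top (m), Mw, amax (g)
X = np.array([[2.0, 150.0, 90.0, 55.0, 2.5, 1.5, 3.0, 7.5, 0.45],
              [8.8, 210.0, 160.0, 98.0, 1.0, 4.8, 10.5, 6.4, 0.12],
              [4.7, 120.0, 80.0, 47.0, 3.8, 2.0, 2.6, 7.0, 0.30],
              [12.0, 260.0, 185.0, 120.0, 1.2, 5.5, 11.0, 6.0, 0.10]])
y = np.array([1, 0, 1, 0])                    # 1 = liquefied (YES), 0 = NO

nn = MLPClassifier(hidden_layer_sizes=(12,), max_iter=5000,
                   random_state=1).fit(X, y)
ct = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X, y)

case = [[5.0, 165.0, 100.0, 60.0, 2.0, 1.8, 3.5, 7.2, 0.25]]
print("NN:", nn.predict(case), " CT:", ct.predict(case))
```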



Figure 18 shows the pruned liquefaction trees (two: one runs on the *qc* values, the other on the *Vs* measurements) with YES/NO terminal nodes. In Figure 19, some examples of tree reading are presented. The trees incorporate soil-type dependence through the resistance values (*qc* and *Vs*) and the fines content, and it is not necessary to label the material as "sand" or "silt". The most general geometrical branches that split the behaviors are the water table depth and the layer thickness, but only when the soil description is based on *Vs*; when *qc* serves as the rigidity parameter these geometrical inputs are not explicitly exploited. This finding can be related to the nature of the measurement: the cone penetration value already contains the effect of the saturated material, while the shear wave velocities need this condition to be included explicitly. Without potentially confusing regression strategies, the liquefaction tree results can be seen as an indication of how effectively the ML model maps the assigned predictor variables to the response parameter. Using data from all regions and wide parameter ranges, the prediction capabilities of the neural network and the classification trees are superior to many other approximations used in common practice, but the most important remark is the generation of meaningful clues about the reliability of physical parameters, the measurement and calculation process, and practice recommendations.

**Figure 18.** Classification tree for liquefaction potential assessment (pruned *qc*-based and *Vs*-based trees; the split variables include *qc*, *Vs*, *amax*, magnitude *Mw*, *Rf*, σ'v, water table depth Z_w, top layer depth Z_TOP and layer thickness Layer_H, with YES/NO terminal nodes)
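Read as rules, each branch of such a pruned tree is a short chain of if-then conditions; the path below is a hypothetical sketch whose thresholds only echo the kind of splits plotted in Figure 18 (*qc* in MPa, *amax* in g), not the published tree.

```python
# One hypothetical path through a qc-based liquefaction tree, written as
# the if-then rule it encodes. Thresholds are illustrative placeholders.
def qc_tree_reads(qc, amax, rf):
    if qc <= 8.5:              # low cone resistance: triggering is possible
        if amax > 0.22:        # strong shaking drives the prediction
            return "YES"
        return "YES" if rf <= 0.3 else "NO"
    return "NO"                # stiff soil: no liquefaction predicted

print(qc_tree_reads(qc=2.0, amax=0.45, rf=0.2))   # -> YES
```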

**Figure 19.** CT classification examples

| Example | Mw | Depth (m) | Z_w (m) | Z_TOP (m) | Layer_H (m) | σ'v (kPa) | Rf (%) | Type of soil | qc (MPa) | Vs (m/s) | amax (g) | Liquefied? |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Heber Road | 6.60 | 4.00 | - | - | - | 56.00 | 2.80 | - | 2.0 | - | 0.8 | yes |
| Niigata City | 7.50 | - | 5.00 | 5.00 | 2.50 | 97.70 | - | silt | - | 163.00 | 0.16 | no |

The intricacy and nonlinearity of the phenomena, an inconsistent and contradictory database, and the many subjective interpretations of the observed behavior make SC an attractive alternative for the estimation of liquefaction-induced lateral spread. NEFLAS [82], NEuroFuzzy estimation of liquefaction-induced LAteral Spread, profits from the fuzzy and neural paradigms through an architecture that uses a fuzzy system to represent knowledge in an interpretable manner, and draws on the learning ability of a neural network to optimize its parameters. This blending constitutes an interpretable model that is capable of learning the problem-specific prior knowledge.

NEFLAS is based on the Takagi-Sugeno model structure and was constructed according to the information compiled by Bartlett and Youd [83] and extended later by Youd et al. [84]. The output considered in NEFLAS is the horizontal displacement due to liquefaction, dependent on the moment magnitude, the PGA, the nearest distance from the source (in kilometers), the free face ratio, the gradient of the surface topography or the slope of the liquefied layer base, the cumulative thickness of saturated cohesionless sediments with corrected number of blows (modified by overburden and by the energy delivered to the standard penetration probe, in this case 60%), the average fines content, and the mean grain size.

One of the most important advantages of NEFLAS is its capability of dealing with the imprecision inherent in geoseismic engineering when evaluating concepts and deriving conclusions. It is well known that engineers use words to classify qualities ("strong earthquake", "poorly graded soil" or "soft clay", for example), to predict and to validate "first principle" theories, to enumerate phenomena, to suggest new hypotheses and to point out the limits of knowledge. NEFLAS mimics this practice. Consider the technical quantity "magnitude" (earthquake input) depicted in Figure 20. The degree to which a crisp magnitude belongs to the LOW, MEDIUM or HIGH linguistic label is called the degree of membership. Based on the figure, the expression "the magnitude is LOW" would be true to the degree of 0.5 for an *Mw* of 5.7. Here, the degree of membership in a set becomes the degree of truth of a statement.
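A minimal sketch of how such linguistic labels can be encoded follows; the triangular membership functions are assumptions chosen only so that the LOW degree equals 0.5 at Mw = 5.7, as in the example above, and are not the functions used in NEFLAS.

```python
# Toy linguistic labels for earthquake magnitude. The breakpoints are
# invented for illustration; they merely reproduce the Figure 20 example
# in which Mw = 5.7 belongs to LOW with degree 0.5.
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def magnitude_labels(mw):
    return {"LOW": triangular(mw, 4.3, 5.0, 6.4),
            "MEDIUM": triangular(mw, 5.0, 6.4, 7.8),
            "HIGH": triangular(mw, 6.4, 7.8, 9.2)}

print(magnitude_labels(5.7))   # LOW -> 0.5, with partial MEDIUM membership
```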

On the other hand, human logic in engineering solutions generates sets of behavior rules, defined for particular cases (parametric conditions) and supported by numerical analysis. In the neurofuzzy methods the human concepts are re-defined through a flexible computational process (training), putting (empirical or analytical) knowledge into simple "if-then" relations (Figure 20). The fuzzy system uses 1) variables composing the antecedents (premises) of the implications; 2) membership functions of the fuzzy sets in the premises; and 3) parameters in the consequents, to find simpler solutions with less design time.
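A compact sketch of one first-order Takagi-Sugeno inference step, the rule structure on which NEFLAS builds, is given below; the two rules, their memberships and all consequent coefficients are invented placeholders, not the calibrated NEFLAS rule base.

```python
# Sketch of a first-order Takagi-Sugeno step: rule firing strengths
# weight linear consequents. Rules and coefficients are placeholders.
def mu_low(mw):                     # toy "magnitude is LOW" membership
    return max(0.0, min((mw - 4.3) / 0.7, (6.4 - mw) / 1.4))

def mu_medium(mw):                  # toy "magnitude is MEDIUM" membership
    return max(0.0, min((mw - 5.0) / 1.4, (7.8 - mw) / 1.4))

def ts_output(mw, pga, rules):
    """rules: (membership_fn, (p, q, r)); consequent y = p*mw + q*pga + r.
    Returns the firing-strength-weighted mean of the rule outputs."""
    pairs = [(mu(mw), p * mw + q * pga + r) for mu, (p, q, r) in rules]
    total = sum(w for w, _ in pairs)
    return sum(w * y for w, y in pairs) / total

rules = [(mu_low, (0.02, 0.5, -0.05)), (mu_medium, (0.15, 2.0, -0.60))]
print(round(ts_output(5.7, 0.2, rules), 3))   # toy lateral spread, metres
```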

**Figure 20.** Neurofuzzy estimation of lateral spread


NEFLAS considers the character of the earthquake and the topographical, regional and geological components that influence lateral spreading, and it works through three modules: Reg-NEFLAS, appropriate for predicting horizontal displacements in geographic regions where seismic hazards have been identified; Site-NEFLAS, proper for predictions of horizontal displacements in site-specific studies with minimal data on geotechnical conditions; and Geotech-NEFLAS, which allows more refined predictions of horizontal displacements when additional data are available from geotechnical soil borings. The NEFLAS performance on cases not included in the database (Figure 21.b and Figure 21.c), and its higher correlation values when compared with evaluations obtained from empirical procedures, permit the assertion that NEFLAS is a powerful tool, capable of predicting lateral spreads with a high degree of confidence.


**Figure 21.** NN estimations vs measured displacements for a) the whole data set, b) Niigata, Japan and c) San Francisco, USA cases

**5. Conclusions**

Based on the results of the studies discussed in this chapter, it is evident that cognitive techniques perform better than, or as well as, the conventional methods used for modeling complex and not well understood geotechnical earthquake problems. Cognitive tools are having an impact on many geotechnical and seismological operations, from predictive modeling to diagnosis and control.

The hybrid *soft* systems leverage the tolerance for imprecision, uncertainty and incompleteness that is intrinsic to the problems to be solved, and they generate tractable, low-cost, robust solutions to such problems. The synergy derived from these hybrid systems stems from the relative ease with which problem-domain knowledge can be translated into initial model structures whose parameters are further tuned by local or global search methods. These methods do not try to solve the same problem in parallel; they do it in a mutually complementary fashion. The push for low-cost solutions, combined with the need for intelligent tools, will result in the deployment of hybrid systems that efficiently integrate reasoning and search techniques.

Traditional earthquake geotechnical modeling, being physically based (or knowledge-driven), can be improved using soft technologies because the underlying systems will also be explained based on data (CC data-driven models). Through the applications depicted here it is sustained that cognitive tools are able to make abstractions and generalizations of the process, and that they can play a complementary role to physically based models.

**Author details**

Silvia Garcia

*Geotechnical Department, Institute of Engineering, National University of Mexico, Mexico*

**6. References**

[1] Finn WDL. State of the art of geotechnical earthquake engineering practice. Soil Dynamics and Earthquake Engineering 2000;20. Elsevier.

[2] Abrahamson NA, Youngs RR. A stable algorithm for regression analysis using the random effects model. Bull Seismol Soc Am 1992;82:505-10.

[3] Somerville PG, Sato T. Correlation of rise time with the style-of-faulting factor in strong ground motions. Seismol Res Lett 1998:153pp (abstract).

[4] Somerville PG, Greaves RL. Strong ground motions of the Kobe, Japan earthquake of January 17, 1995, and development of a model of forward rupture directivity applicable in California. Proc. Western Regional Tech. Seminar on Earthquake Eng. for Dams, Assoc. of State Dam Safety Officials, Sacramento, CA, April 11-12, 1996.

[5] Abrahamson NA, Silva WJ. Empirical duration relations for shallow crustal earthquakes. Written communication, 1997.
