**4.3. Remarks**

In spite of the scarcity of the log data, the proposed method achieved extremely good results. With abundant data, poorly calibrated wells could compromise the quality of the convergence, and the influence of the uncalibrated wells on the final model would be noticeable.

The influence of the wells is no longer visible because all well data are integrated in the full model.

In this case, both the synthetic seismic data and the acoustic impedance cube captured the main geologic features of these complex reservoirs, as is noticeable in the correlation coefficients between the seismic and the synthetic amplitudes. The quality of the seismic data plays a minor role, since the method avoids the imposition of artificial correlations, as happens in the standard methods.

Since the co-simulation of the impedances uses a local correlation coefficient, it is possible to compute the local uncertainty associated with the seismic acoustic impedances.

The uncertainty of the seismic acoustic impedance could be used to assess the uncertainty associated with the porosity model, as presented in [24].

Tests show that the final correlation coefficient varies by about 2% when the initial seed is modified, which means that other parameters, such as the number of layers and their size, can be optimized to produce better results. These results are more difficult to reach, however, as the geological and structural complexity of the model increases.

To handle different geological scenarios, such as channels or reservoirs with specific shapes, an adapted approach is proposed in the next part of this work.

136 New Technologies in the Oil and Gas Industry

**5. Multipoint statistics**

The objective of this work is to build a reservoir model with multiple alternatives, thereby assessing uncertainty about the reservoir, integrating information from different sources obtained at different resolutions:

- Well data, which are sparse but of high resolution, at the scale of a foot;
- Seismic data, which are exhaustive but of much lower resolution, at the scale of tens of feet in the vertical direction;
- Conceptual geological models, which can quantify reservoir heterogeneity from the layer scale to the basin scale.

This last point has been possible using a variogram in algorithms such as DSS and Co-DSS, previously described, which allow integration of well and seismic data using a pixel-based approach. Those variogram-based models are inadequate for integrating geological concepts, since the variogram is too limited in capturing complex geological heterogeneity: it is a two-point statistic that poorly reflects the geologist's prior conceptual vision of the reservoir architecture, e.g., sand channels as in [20].

Integration of geological information beyond two-point variogram reproduction becomes critical in order to quantify heterogeneity more accurately and to assess the uncertainty of the model description realistically.

A solution was initiated by practitioners from the oil industry in Norway, where Boolean object-based algorithms were introduced in the late 1980s to simulate random geometry, as in [25,26]. Parametric shapes, such as sinusoidal channels or ellipsoidal lenses, are placed in the simulated volume. Through an iterative process these shapes are changed, displaced or removed to fit the conditioning statistics and local data.

The simulated objects finally resemble the geologist's drawings or photographs of present-day deposits.

Once the enthusiasm passed, the limitations of object-based simulation algorithms became obvious: this iterative, perturbation-type algorithm for data conditioning did not converge in the presence of dense hard data and could not account for diverse data types, such as seismic used as soft data.

The inability to simulate continuous variables, together with the time and CPU demands in large 3D cases, adds to the drawbacks of the methodology.

Strebelle in [20], following the works of Srivastava in [27] and Caers in [28], proposed an alternative approach, multipoint statistics, that combines the easy conditioning of pixel-based algorithms with the ability to reproduce the "shapes" of object-based techniques, without relying on excessive CPU demand.

Multipoint statistics uses a training image instead of a variogram to account for geological information. The training image describes the geometrical facies patterns believed to represent the subsurface.

Training images do not need to carry any local information about the actual reservoir; they only reflect a prior geological/structural concept.

Basically, multipoint statistics consists of extracting patterns from the training image, and anchoring them to local data, i.e. well logs and seismic data. Several training images corresponding to alternative geological interpretations can be used to account for the uncertainty about the reservoir architecture.

Training images, like the variogram, can be a questionable subject when the model to be generated has few hard data or when no pre-conceptual model has been studied by geologists or geomodelers.

Using this new tool, a further objective is to illustrate the application of the multipoint statistical methodology to seismic inversion.

#### **5.1. Seismic inversion using multipoint statistics**

For the case study, the Stanford VI synthetic reservoir dataset described in [29] was used. This synthetic reservoir (figure 15) consists of meandering channels in a sand vs. mud system, and each facies was populated with rock impedance using sequential simulation.

To mimic a seismic inversion, this high-resolution rock impedance model was smoothed by a low-pass filter to obtain a typical acoustic impedance response and then convolved into an amplitude model that is considered the reference seismic amplitude.
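This forward-modeling chain can be sketched as follows. The moving-average smoothing length and the Ricker wavelet with its parameters are illustrative assumptions, not the chapter's actual settings:

```python
import numpy as np

def ricker(n=41, f=25.0, dt=0.002):
    # Ricker wavelet; frequency and sampling are assumed, not from the chapter
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def forward_model(impedance, smooth_len=9):
    # 1) low-pass filter the impedance trace-by-trace (moving average)
    kernel = np.ones(smooth_len) / smooth_len
    ai = np.apply_along_axis(lambda tr: np.convolve(tr, kernel, mode="same"),
                             -1, impedance)
    # 2) reflection coefficients from the smoothed acoustic impedance
    rc = (ai[..., 1:] - ai[..., :-1]) / (ai[..., 1:] + ai[..., :-1])
    # 3) convolve the reflectivity with the wavelet -> synthetic amplitude
    w = ricker()
    return np.apply_along_axis(lambda tr: np.convolve(tr, w, mode="same"), -1, rc)

# toy impedance cube: 10 x 10 traces, 100 samples each
amp = forward_model(np.random.default_rng(0).uniform(5e3, 9e3, (10, 10, 100)))
```

The output cube has one fewer vertical sample than the input, since reflectivity is defined at layer interfaces.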

**Figure 15.** Facies model (left), high-resolution rock impedance (middle left), smoothed rock impedance (middle right) and forward-modeled synthetic seismic amplitude (right)

#### **5.2. Probabilistic approach**

For comparison, the application of traditional probabilistic modeling to the Stanford VI acoustic impedance is presented, i.e. calibrating acoustic impedance into a 3D facies probability cube, and then using it as a soft data constraint for multiple-point geostatistical simulation.

Several techniques exist to calibrate one or more seismic attributes with well data, such as PCA, ANN, etc.; here a simple Bayesian approach (equation 7) is used, as documented in [30]. In a Bayesian method one uses the histogram of impedance for each facies, denoted as:

$$P\left(AI \mid \text{facies}_f\right), \; f = 1, 2, \ldots \tag{7}$$

Using Bayes' rule one can calculate the probability for each facies for given impedance values as (equation 8):

$$P\left(\text{facies}_f \mid AI\right) = \frac{P(AI \mid \text{facies}_f) \cdot P(\text{facies}_f)}{P(AI)} \tag{8}$$

where *P*(facies*f*) is the global proportion of facies *f* and *P*(*AI*) is derived from the histogram of the impedance values. The final result is a cube of probabilities for each facies type based on the acoustic impedance cube. In figure 16, one can see the result of applying this method to the case study data.
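A minimal sketch of this Bayesian calibration, assuming calibrated well samples of impedance and facies are available (function and variable names, and the binning, are illustrative):

```python
import numpy as np

def facies_probability(ai_cube, ai_samples, facies_samples, bins=30):
    """P(facies_f | AI) per grid node via Bayes' rule (equation 8).

    ai_samples / facies_samples: paired impedance and facies values from wells.
    """
    edges = np.histogram_bin_edges(ai_samples, bins=bins)
    idx = np.clip(np.digitize(ai_cube, edges) - 1, 0, bins - 1)
    probs = {}
    for f in np.unique(facies_samples):
        # per-facies impedance histogram -> likelihood P(AI | facies_f)
        lik, _ = np.histogram(ai_samples[facies_samples == f],
                              bins=edges, density=True)
        prior = np.mean(facies_samples == f)   # global proportion P(facies_f)
        probs[f] = lik[idx] * prior            # numerator of Bayes' rule
    total = sum(probs.values())                # P(AI), by normalization
    return {f: np.divide(p, total, out=np.zeros_like(p), where=total > 0)
            for f, p in probs.items()}
```

Normalizing by the sum over facies is equivalent to dividing by *P*(*AI*), so the per-node probabilities sum to one wherever the likelihoods are defined.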

This probability data can be used by different geostatistical algorithms to create a facies model. In our case the simulated facies are obtained with the multi-point method snesim, conditioned to the previously calculated probability cubes, to the training image and to the rotation and affinity cubes.


**Figure 16.** Result of the Bayesian approach for the calculation of the probability cubes for the case study, and the simulated facies model.

To verify our hypothesis that this facies model does not match the field amplitude, the facies model is forward simulated to a synthetic amplitude dataset. We assumed the ideal situation where an exact forward model was available. Figure 17 confirms that the forward modeled amplitude does not match the field data; in fact, the co-located correlation coefficient is 0.20.

**Figure 17.** "Real" seismic amplitude (left) *vs* forward modeled amplitude derived from the simulated facies model using a traditional probabilistic approach (right).

#### **5.3. Algorithm description using multipoint statistics**

For the multipoint statistics case, the seismic inversion algorithm has gone through some changes, mainly because the simulation algorithm does not create acoustic impedance models directly, but facies models, which can be populated with acoustic impedance values later.

So the method is initialized by simulating N facies models conditioned only to the wells and training images using snesim; in this case, additional channel azimuth and affinity constraints (as interpreted from seismic or geological understanding) are enforced (figure 18).

**Figure 18.** Initial simulations conditioned only to the training image and rotation/affinity.

The facies models are then populated with acoustic impedances and converted into synthetic seismograms, and the best "genes" from each of the images are chosen and kept (see figure 19). In the next step there is also a modification, since one cannot use the correlation values directly in the snesim simulator; a modification of the probability perturbation method as in [31] is used to create a probability for each facies.

This facies probability cube is then used as a soft constraint to generate the next set of N cubes. This is done using Journel's tau-model as in [10], to integrate probabilistic information from various sources. The process converges like the previous algorithm and can be summarized in the following steps:

i. Generate a set of initial 3D images of facies by using snesim.
ii. Populate the facies models with acoustic impedances.
iii. Create the synthetic seismogram of amplitudes by convolving the reflectivity, derived from the acoustic impedances, with a known wavelet.
iv. Evaluate the match between the synthetic seismograms, for the entire 3D image, and the real seismic by computing, for example, local correlation coefficients.
v. Rank the "best" images based on the match (e.g. the average value or a percentile of the correlation coefficients for the entire image). From it, select the best parts (the columns or the horizons with the best correlation coefficient) of each image, and compose one auxiliary image with the selected "best" parts for the next simulation step.
vi. Transform the cube with the best parts and the corresponding best values into a probability cube for each facies to be simulated.

As noted for the previous algorithm, in step vi), in the very first steps of the iterative process the probability data cannot have the spatial continuity pattern of the primary simulated facies, as it results from a composition of different parts of a set of simulated images. As the process continues, that soft secondary image (the probability cubes) tends to have the same spatial pattern as the images generated by co-simulation, i.e. the imposed training image altered by the affinity and azimuth.

#### **5.4. Co-generation of acoustic images with multipoint statistics**

The major change made for the adaptation to MPS is that the acoustic impedance values are estimated or populated in the simulated facies model. This can raise some reservations, but if one considers that the main objective is comparing the reflection coefficients between the real and the simulated seismic amplitude, those reflection coefficients are more accentuated at changes of terrain type, such as the boundary of a channel or crevasse. Inside the channels the rock type does not vary much, which is visible in the seismic profiles as low-amplitude seismic. So this difference in the algorithm can be considered a valid one.
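The iterative loop of steps i–vi can be sketched as follows. The callables passed in are placeholders for the chapter's components (snesim simulation, impedance population, convolution, and the probability transform of equation 10), so this is a structural sketch, not the actual implementation:

```python
import numpy as np

def local_correlation(a, b):
    # column-wise correlation over the vertical axis, broadcast back to cube shape
    am, bm = a - a.mean(-1, keepdims=True), b - b.mean(-1, keepdims=True)
    cc = (am * bm).sum(-1) / np.sqrt((am ** 2).sum(-1) * (bm ** 2).sum(-1))
    return np.repeat(cc[..., None], a.shape[-1], axis=-1)

def iterate_mps_inversion(n_iters, n_sims, simulate_facies, populate_ai,
                          forward_model, real_amp, to_probability):
    soft_prob = None
    for _ in range(n_iters):
        sims = [simulate_facies(soft_prob) for _ in range(n_sims)]   # i (snesim)
        ais = [populate_ai(f) for f in sims]                          # ii
        synth = [forward_model(ai) for ai in ais]                     # iii
        cc = np.stack([local_correlation(s, real_amp) for s in synth])  # iv
        best_idx = np.argmax(cc, axis=0)                              # v: best parts
        best_cc = np.take_along_axis(cc, best_idx[None], 0)[0]
        best_facies = np.take_along_axis(np.stack(sims), best_idx[None], 0)[0]
        soft_prob = to_probability(best_facies, best_cc)              # vi (eq. 10)
    return sims, best_cc
```

Because the local correlation is constant along each column, the per-voxel argmax in step v amounts to the column-by-column selection of "best genes" described in the text.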

The second difference is that, to co-simulate a facies model, one cannot use acoustic impedance values (a continuous variable) as soft data; so instead of choosing from the simulated acoustic models, the algorithm builds the Best Facies cube from the simulated facies models (figure 19).

**Figure 19.** Process of creating "Best Mismatch cube" and "Best Facies cube".

As in the methodology for acoustic impedance models, the process starts by analyzing each of the mismatch cubes created previously and, at each position of the cubes (x0), comparing which has the highest value of correlation.

Point set 1a and point set 1b are compared; of those, 1a has the higher value of correlation, so it is copied (1c) to the Best Mismatch cube.

At the same time, since the first simulation showed the better correlation coefficient (CC) with the real seismic, another cube is created with the corresponding facies values (1d).

The process moves on to the next set of values from the CC cubes, where the same comparison is made between sets 2a and 2b; in this case it is the second cube that has the higher value of CC (or lower mismatch), so these values are copied to the Best Mismatch cube (2c). The same is done for the facies cube of this simulation, copying the facies values to the Best Facies cube (2d). This process continues until all the N simulated models and all the values of the cubes have been analyzed.

In this way two new cubes are created: one with the best genes from each previously generated facies model (Best Facies) and another with the confidence factor of each part (Best Mismatch). As noted above, the "least mismatch cube" can be seen as a summary of the least mismatch over all N realizations. The "best facies cube" can be seen as a facies model, combined from all N facies models, that best matches the seismic data. However, the "best facies cube" does not have the same geological concept as the training image and may have various artifacts, since it is constructed by copying several sets from independently generated facies models.

The third difference is that these two new cubes cannot be used directly for the co-simulation of the new generation of facies models; instead, the "best facies cube" can be used indirectly to improve the existing N facies models.

In order to do this, one uses a modification of the probability perturbation method as in [24]. Caers' method was developed to solve non-linear inverse problems under a prior model constraint. It allows the conditioning of stochastic simulations to any type of non-linear data. The principle of this method relies on perturbing the probability models used to generate a chain of realizations that converge to match any type of data.

In this methodology the unknown pre-posterior probability, Prob(A*j* | C), is modeled using a single-parameter model in the following equation 9:

$$P\left(A_j \mid C\right) = P(I(u_j) = 1 \mid C) = (1 - r_C) \times i_B^{(0)}(u_j) + r_C \times P(A_j), \; j = 1, \ldots, N \tag{9}$$

where A is the unknown data, B is the well data and previously simulated facies indicators, C is the "best facies cube", $r_C$ does not depend on $u_j$ and lies in [0,1], and $i_B^{(0)}(u_j), \; j = 1, \ldots, N$ is an initial realization conditioned to the B-data only.

According to the methodology proposed in this work, equation 9 is adapted, with $r_C$ now representing the mismatch between the field and synthetic seismic, as summarized by the least-mismatch cube (correlation coefficient), and $i_B^{(0)}$ corresponding to the presence of the facies with the least mismatch. This adaptation leads to the following equation 10:


$$P\left(A_j \mid C\right) = \rho_C(u) \times I(u_j) + (1 - \rho_C(u)) \times P(A_j), \; j = 1, \ldots, N \tag{10}$$

where *ρ*C is the correlation coefficient, in [0,1], extracted from the "least mismatch cube", I(u*j*) is the sand indicator extracted from the "best facies cube" with j = 1,…,N facies, and P(A*j*) is the global proportion of the considered facies. This expression generates a "facies-probability cube" which is a mixture of the global proportion and the "best facies cube".
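Equation 10 can be written directly as a small function. The array names are illustrative, and `best_facies` is assumed to hold facies codes with 1 = sand:

```python
import numpy as np

def facies_probability_cube(best_facies, best_cc, global_prop):
    """Equation 10: blend the best-facies indicator with the global
    proportion, weighted by the local correlation of the least-mismatch cube."""
    rho = np.clip(best_cc, 0.0, 1.0)               # rho_C in [0, 1]
    indicator = (best_facies == 1).astype(float)   # sand indicator I(u_j)
    return rho * indicator + (1.0 - rho) * global_prop
```

Where the match is already good (ρC near 1) the next simulations are pinned to the best facies; where it is poor, they fall back to the global proportion.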

This facies probability cube is then used as a soft constraint to generate the next set of N cubes. This is done using Journel's tau-model to integrate probabilistic information from various sources (see equation 11 and figure 20):

$$\frac{x}{b} = \frac{c}{a} \tag{11}$$

$$\text{where: } x = \frac{1 - P(A \mid B, C)}{P(A \mid B, C)}, \; b = \frac{1 - P(A \mid B)}{P(A \mid B)}, \; c = \frac{1 - P(A \mid C)}{P(A \mid C)} \text{ and } a = \frac{1 - P(A)}{P(A)}$$

*a* is the information of the global facies proportion, *b* is the influence of the training image, and *c* is the conditioning of the soft probability cube.
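A sketch of this odds-ratio combination (the τ = 1, permanence-of-ratios form implied by equation 11; probabilities are assumed strictly between 0 and 1):

```python
def combine_tau(p_ab, p_ac, p_a):
    """Journel's tau-model with tau = 1: x/b = c/a, i.e. x = b*c/a,
    then back-transform the odds x to P(A | B, C)."""
    odds = lambda p: (1.0 - p) / p   # distance-to-event odds of a probability
    x = odds(p_ab) * odds(p_ac) / odds(p_a)
    return 1.0 / (1.0 + x)
```

For example, if both the training-image-conditioned and soft-cube-conditioned probabilities equal the prior, the combined probability is just the prior; when both sources raise the probability above the prior, the combination reinforces them.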

**Figure 20.** Next generation of simulations conditioned to the training image, rotation/affinity and probability cubes.

This iterative algorithm is run until the global mismatch between the synthetic seismic of the facies model and the field seismic data reaches an optimal minimum.

#### **5.5. Results**

To illustrate the method, 6 iterations were computed with N = 30 simulations on the Stanford VI dataset. Note that the availability of perfect geological information and a perfect forward model was assumed.

The algorithm converges as shown by a systematic increase in the global correlation coefficient between model and amplitude data (figure 21). The facies model with the highest correlation out of 30 models in the last iteration has a correlation of 0.88.

**Figure 21.** Convergence of the algorithm.

**Figure 22.** Comparison between the two methods

The final facies model (top left in figure 22) does not contain any artifacts, i.e. it reproduces well the geological continuity of the training image.

To check how well the seismic amplitude data are reproduced, the best facies model is forward modeled to synthetic seismic (see figure 22). Clearly the method matches the field seismic well, particularly when compared with the probabilistic approach.

In figure 23 some slices of the results are presented, where it is possible to see some similarities between the seismic response of the proposed approach and the reference data seismic amplitude.

**Figure 23.** Correlation slices


In summary, as stated previously, the local maximum correlation or local minimum mismatch depends on the parameters used to calculate the mismatches. The sensitivity to some of these parameters is analyzed below:

**Number of iterations**

The main optimization is obtained in the first iteration; after this, the process tends to stabilize (figure 24). Hence, without changing any of the other parameters, the number of iterations does not have a big influence on the optimization.

Figure 24 compares the values of correlation and average difference of the simulation model with the best result for different numbers N of simulated facies models. These values are presented only for the first and last iterations.

For the first iteration the difference is not very noticeable, but in the last iteration the correlation value is higher as the number of simulations increases.

**Number of facies simulations (N) per iteration**

This parameter can have a major influence on the optimization: with numerous simulations the process has a wide variety of possibilities to choose from, and can build a more precise best-facies cube, but if the simulations are very alike, the choice of N will not matter.

**Figure 24.** Influence of different number of iterations (1 to 6) and simulations (20, 30 and 40).

**Criterion of convergence**

The difference criterion is more precise than the correlation, since the correlation gives a single value for the similarity between two sets of data, while the difference is more sensitive to different patterns or to small variations in the patterns. But the correlations are needed to calculate the probabilities.

**Size of the columns**

In the previous test the cubes were divided into 4 layers with 20 values per column. Since the correlation has some sensitivity to the number of data used, the results also change. For a larger column the convergence will tend to be slower, since it is more difficult to match an entire column than to match piece-wise. In general we recommend that the number of layers be chosen according to the vertical resolution of the seismic data, i.e. for lower resolution fewer layers could be retained.
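The piece-wise matching can be sketched by splitting each trace into vertical segments and correlating segment by segment (the 4-layer split mirrors the test above; the helper name is illustrative):

```python
import numpy as np

def layer_correlations(synthetic, real, n_layers=4):
    """Split each trace into n_layers vertical segments and compute the
    correlation coefficient per column and per segment."""
    out = []
    for s_seg, r_seg in zip(np.array_split(synthetic, n_layers, axis=-1),
                            np.array_split(real, n_layers, axis=-1)):
        s = s_seg - s_seg.mean(-1, keepdims=True)
        r = r_seg - r_seg.mean(-1, keepdims=True)
        out.append((s * r).sum(-1)
                   / np.sqrt((s ** 2).sum(-1) * (r ** 2).sum(-1)))
    return np.stack(out, axis=-1)   # shape (..., n_layers)
```

Shorter segments are easier to match individually, which is why convergence slows as the column size grows.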

#### **6. Conclusions**

The proposed algorithm uses two different approaches to create a stochastic model of a reservoir property conditioned to both well and seismic data. Since these two data types are totally different, the process had to be an iterative one, using the maximum correlation or minimum mismatch as the objective function.

The first approach uses direct sequential co-simulation as the method of "transforming" 3D images in an iterative process. Hence it generates, at each iterative step, global acoustic impedance images with the same spatial pattern, without imposing an artificially good match in areas of low confidence in seismic quality.

This means that in those areas the final images will reflect high uncertainty. This uncertainty in the seismic acoustic impedance is used to assess the uncertainty associated with the porosity model, by co-simulating the porosity model using the generated final acoustic impedance model and its correlation coefficient cube as soft or secondary data.

Another example is the capacity to improve the seismic resolution, if the acoustic impedance model is created below the original resolution of the seismic. This is possible since the well data, which are mainly used to create the model, have a much higher resolution than the seismic.

Computation time is no longer a drawback, as the method has been implemented on a parallel computing platform [32].

This algorithm has been implemented and benchmark-tested with several real reservoirs, e.g. [33].

The multipoint approach, for its part, is a new application of the newest generation of stochastic image simulation and is a complement to the first approach.

The modifications made from the first approach to the second are not too complex, except for the population of the acoustic impedance cube, for which, in this example, a simple technique of populating the channels with previously simulated acoustic impedance values was used.

To overcome this shortcut, an algorithm by [34] simulates any variable within the channel, making this step more realistic.

The program still needs to be improved, the algorithm can be optimized for real-case applications, and the lack of training images that would extend it to all kinds of examples is still under research.
