**2. The technology behind the Seismic Migration**<sup>1</sup>

One of the first works related to the beginnings of Seismic Migration (SM) dates from 1914, when the Canadian inventor Reginald Fessenden built a sonic reflection apparatus to locate icebergs. The importance of this work is that, together with the patent entitled "Methods and Apparatus for Locating Ore Bodies" that he filed three years later (1917), it gave the first guidelines on the use of reflection and refraction methods to locate geological formations.

Later, in 1919, McCollum and J.C. Karcher made another important advance in this field: they *received a patent for determining the contour of subsurface strata that was inspired by their work on detection of the blast waves of artillery during World War I* [6]. However, it was only in 1924 that a group led by the German scientist Ludger Mintrop located the Orchard salt dome in Fort Bend County, Texas, the first such discovery made with the method [15].

In the following year (1925), as a consequence of Fessenden's 1917 patent, the Geophysical Research Corporation (GRC) created a department dedicated to the design and construction of new and improved seismographic instruments.

#### **2.1. Acceptance period**

During the next three years (1926-1928) GRC tested and adjusted its new tools, but at that time there was an air of skepticism due to the low reliability of the instruments. Although the method was tested and many experiments were performed, it was only between 1929 and 1932 that it was finally accepted by the oil companies.

Subsequently, the oil companies began to invest large sums of money to improve it and gain a technological advantage over their competitors. *As the validity of reflection seismics became more and more acceptable and its usage increased, doubts seemed to reenter the picture. Even though this newfangled method appeared to work, in many places it produced extremely poor seismic records. These were the so-called no-record areas* [6].

It was only in 1936 that Frank Rieber recognized that the cause of this behavior was steep folding, faulting, or synclinal and anticlinal responses [38], and he built an analog device to model the waves of different geological strata. This discovery was very important for the method, because it gave it the solidity it needed at that moment; but bad news arrived with the beginning of World War II, which stopped all the momentum of this process.

<sup>1</sup> The main ideas of this section have been taken from the work of J. Bee Bednar in his paper "A brief history of seismic migration".

#### **2.2. World War II (1939-1945)**


With World War II, all efforts were focused on building war machines. During this period, important developments were achieved that would later be very useful for the SM. Some of these developments, made by Herman Wold, Norbert Wiener, Claude Shannon and Norman Levinson at MIT, established the fundamentals of numerical analysis and finite differences, areas that would become very important for seismic imaging. For example, Shannon proposed a theorem for sampling analog signals and converting them into discrete signals, and then developed the mathematical framework for processing discrete signals, starting the digital revolution.
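As a small illustration of the sampling theorem (this sketch is not part of the original chapter; the signal composition, frequencies and sampling rate are arbitrary choices made for the example), the following Python/NumPy code samples a band-limited signal above twice its highest frequency, the condition the theorem requires for exact reconstruction:

```python
import numpy as np

# Illustrative values (assumptions): a band-limited signal whose highest
# frequency component is 60 Hz, sampled well above the Nyquist rate of 120 Hz.
f_max = 60.0            # highest frequency in the analog signal (Hz)
fs = 4 * f_max          # sampling rate (Hz), comfortably above 2 * f_max

t = np.arange(0.0, 1.0, 1.0 / fs)   # discrete sample times over one second
signal = np.sin(2 * np.pi * 20.0 * t) + 0.5 * np.sin(2 * np.pi * f_max * t)

# The discrete spectrum shows the original frequencies without aliasing.
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
print("Dominant frequencies (Hz):", freqs[spectrum > 0.1])
```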

On the other hand, between 1940 and 1956 the first generation of computers appeared, which used vacuum tubes (VT). The VT is an electronic device that can be used as an electrical switch to implement logical and arithmetic operations. Because VTs were large and consumed a great deal of energy, the first computers were huge and had high maintenance and operation costs. They used magnetic drums for memory, their language was machine language (the lowest-level programming language understood by computers), and information was fed into them through punched cards<sup>2</sup> [36].

One of the first computers, the **ENIAC**, was developed at the University of Pennsylvania in the early 1940s by John Mauchly and J. Presper Eckert [36]. This computer could perform 5000 additions and 300 multiplications per second (nowadays, a PC processor such as the Intel Core i7 980 XE can perform about 109 billion floating-point operations per second [27]). In the following years the **EDVAC** (1949) and the first commercial computer, the **UNIVAC** (1951), were developed [36].

One final important event of this period was the general recognition that subsurface geological layers are not completely flat (based on Rieber's work mentioned previously), which led the oil companies to develop the mathematical algorithms needed to correctly locate the position of the reflections and, in this way, strengthen the technique.

#### **2.3. Second generation (1956-1963): Transistors**

In this generation the VTs were replaced by transistors (invented in 1947). The transistor also works as an electrical switch, but it is smaller and consumes less energy than the VT (see Figure 1). This brought a great advantage to computers, making them smaller, faster, cheaper and more energy efficient.

Additionally, this generation of computers started to use a symbolic language called assembly (see Figure 2), and the first versions of high-level languages such as COBOL and FORTRAN were developed. This generation kept using the same input method (punched cards) but changed the storage technology from magnetic drums to magnetic cores.

On the other hand, during this period the SM received a great contribution from J.G. Hagedoorn's work "A process of seismic reflection interpretation" [18]. In this work, Hagedoorn introduced a "ruler-and-compass method for finding reflections as an envelope of equal traveltime curves" based on Christiaan Huygens' principle.
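To make the idea of the envelope construction concrete, the following minimal Python/NumPy sketch (not from the original chapter; the velocity, traveltimes and geometry are illustrative assumptions) computes, for each zero-offset trace, the semicircular locus of all subsurface points compatible with its recorded traveltime; the reflector is the envelope of these curves:

```python
import numpy as np

v = 2000.0                                  # assumed constant velocity (m/s)
trace_x = np.linspace(0.0, 2000.0, 21)      # zero-offset trace positions (m)
traveltimes = np.full_like(trace_x, 2.0 * 500.0 / v)   # picked times (s) for a flat reflector at 500 m

# For each trace the reflection point must lie on a semicircle of radius
# v*t/2 centred at the trace position (Huygens / Hagedoorn construction).
angles = np.linspace(0.0, np.pi, 181)
loci = [
    np.column_stack([x0 + (v * t / 2.0) * np.cos(angles),   # horizontal position (m)
                     (v * t / 2.0) * np.sin(angles)])        # depth, positive down (m)
    for x0, t in zip(trace_x, traveltimes)
]

# The envelope of all these curves (here the horizontal line z = 500 m)
# is the migrated reflector.
print("Deepest point of the first locus (m):", loci[0][:, 1].max())
```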

<sup>2</sup> A punched card is an input medium on which information is encoded as a pattern of punched holes.

**Figure 1.** In order from left to right: three vacuum tubes, one transistor, two integrated circuits and one integrated microprocessor.

**Figure 2.** Assembly language.

Another important aspect of this period was that computational technology began to be used in seismic data processing, such as the implementation made by Texas Instruments in 1959 on the TIAC computer. In the following year (1960), Alan Trorey of Chevron developed one of the first computerized methods based on the Kirchhoff approach, and in 1962 Harry Mayne obtained a patent on the CMP stack [30]; these two major contributions would later facilitate the full computational implementation of the SM.
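To illustrate the kind of computation a Kirchhoff-style method performs, here is a much simplified constant-velocity diffraction-stack sketch in Python/NumPy (not Trorey's implementation; the function name, geometry and parameters are illustrative assumptions). For every image point it sums the recorded amplitudes along the corresponding diffraction hyperbola:

```python
import numpy as np

def kirchhoff_migrate(data, dt, dx, v, nz, dz):
    """Simplified constant-velocity diffraction-stack (Kirchhoff) migration
    of a zero-offset section `data` with shape (n_traces, n_samples)."""
    n_traces, n_samples = data.shape
    image = np.zeros((n_traces, nz))
    trace_x = np.arange(n_traces) * dx
    for ix in range(n_traces):              # image position along the line
        for iz in range(1, nz):             # image depth
            z = iz * dz
            # two-way traveltime from every surface position to the point (x, z)
            t = 2.0 * np.sqrt(z**2 + (trace_x - trace_x[ix])**2) / v
            it = np.round(t / dt).astype(int)
            valid = it < n_samples
            # sum the recorded amplitudes along the diffraction hyperbola
            image[ix, iz] = data[np.nonzero(valid)[0], it[valid]].sum()
    return image

# Tiny synthetic example: a single point diffractor at x = 320 m, z = 300 m
dt, dx, dz, v = 0.004, 10.0, 10.0, 2000.0
data = np.zeros((64, 256))
xs = np.arange(64) * dx
t_diff = 2.0 * np.sqrt(300.0**2 + (xs - 320.0)**2) / v
data[np.arange(64), np.round(t_diff / dt).astype(int)] = 1.0
image = kirchhoff_migrate(data, dt, dx, v, nz=64, dz=dz)
print("Image maximum (trace, depth sample):",
      np.unravel_index(image.argmax(), image.shape))
```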

#### **2.4. Third generation (1964-1971): Integrated circuits**

In this generation the transistors were miniaturized and packaged into integrated circuits (ICs) (see Figure 1). Initially an IC could contain fewer than 100 transistors, but nowadays they can contain billions. With this change of technology, the new computers were faster, smaller, cheaper and more energy efficient than those of the second generation [36].

Additionally, the way in which the user fed information into the computers also changed, because punched cards were replaced by the monitor, the keyboard and interfaces based on operating systems (OS). The OS concept appeared for the first time, allowing this new generation of computers to execute more than one task at the same time.

On the other hand, the SM also made significant progress in this period. *In 1967, Sherwood completed the development of Continuous Automatic Migration (CAM) on an IBM accounting machine in San Francisco. The digital age may have been in its infancy, but there was no question that it was now running full blast* [6]. In 1970 and 1971, Jon Claerbout published two papers focused on the use of second-order hyperbolic partial differential equations to perform the imaging. Claerbout's work was largely centered on the use of finite differences, taking advantage of the numerical analysis created during World War II. This finite-difference work made it possible to run these developments on the computers of that time.
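As a rough illustration of the finite-difference approach (a minimal sketch, not Claerbout's actual scheme; the grid sizes, velocity model and source are arbitrary assumptions), the following Python/NumPy code time-steps the 2-D constant-density acoustic wave equation with second-order differences, the kind of kernel on which wave-equation migration is built:

```python
import numpy as np

# Illustrative grid and medium (assumed values)
nx, nz, nt = 200, 200, 500
dx = dz = 10.0                      # grid spacing (m)
dt = 0.001                          # time step (s), chosen to satisfy the CFL condition
v = np.full((nz, nx), 2000.0)       # constant velocity model (m/s)

p_prev = np.zeros((nz, nx))         # wavefield at t - dt
p_curr = np.zeros((nz, nx))         # wavefield at t
src_z, src_x = 100, 100             # source position (grid indices)

for it in range(nt):
    # second-order centred approximation of the Laplacian (interior points only)
    lap = np.zeros_like(p_curr)
    lap[1:-1, 1:-1] = (
        (p_curr[1:-1, 2:] - 2 * p_curr[1:-1, 1:-1] + p_curr[1:-1, :-2]) / dx**2 +
        (p_curr[2:, 1:-1] - 2 * p_curr[1:-1, 1:-1] + p_curr[:-2, 1:-1]) / dz**2
    )
    # explicit second-order time stepping of  p_tt = v^2 * laplacian(p)
    p_next = 2 * p_curr - p_prev + (v * dt) ** 2 * lap
    # simple Ricker-like source injection (illustrative 15 Hz wavelet)
    t = it * dt
    arg = (np.pi * 15.0 * (t - 0.1)) ** 2
    p_next[src_z, src_x] += (1 - 2 * arg) * np.exp(-arg)
    p_prev, p_curr = p_curr, p_next

print("Peak wavefield amplitude after", nt, "steps:", np.abs(p_curr).max())
```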

#### **2.5. Fourth generation (1971-Present): Microprocessors**

The microprocessor is an IC that works as a data processing unit, controlling the calculations. It can be seen as the brain of the computer [36].

This generation is notable because every 24 months the number of transistors inside an IC doubled, in accordance with Moore's Law; Gordon Moore predicted this growth rate in 1975 [32]. This allowed computers with high processing capacity to be developed very quickly (see Table 1). In 1981 IBM introduced its first personal computer for home use, in 1984 Apple introduced the Macintosh, and in 1985 the first Internet domain (symbolics.com) was registered. Over the following years computers kept shrinking in size and cost and, taking advantage of the Internet, they spread into many areas.


**Table 1.** Summary of the microprocessors evolution.
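As a small numerical illustration of this doubling rule (a sketch with an assumed starting point, not data taken from Table 1), the code below projects a transistor count forward with a 24-month doubling period:

```python
# Moore's Law as a doubling rule: N(t) = N0 * 2 ** ((t - t0) / T),
# with a doubling period T of 24 months (2 years).
def transistors(year, n0=2300, year0=1971, doubling_years=2.0):
    """Projected transistor count, starting from roughly the Intel 4004
    (about 2,300 transistors in 1971 -- an illustrative reference point)."""
    return n0 * 2 ** ((year - year0) / doubling_years)

for year in (1971, 1981, 1991, 2001):
    print(year, f"{transistors(year):,.0f}")
```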

Moore's Law remained in force approximately until 2000, when power dissipation problems caused by the number of transistors inside a single chip appeared. This effect is known as the Power Wall (PW) and it continues to hold back single-processor capacity.

From that moment, new ideas began to surface to avoid this problem. Fabricating processors with more than one data processing unit (see Figure 3), units known as cores, seemed to be a solution, but new problems arose again: the increased cost, power consumption and design complexity of the new processors showed that this was not a complete solution. This is the reason why, even today, processors fabricated with this approach only have 2, 4, 6, 8 or, at best, 16 cores [5].

In Figure 4 we can see the Moore's Law trend until 2000 and, after this date, the emergence of multi-core processors.

#### **2.6. HPC's birth**


A new challenge for computing researchers was the implementation of computationally expensive algorithms. For this purpose it was necessary to design new strategies for optimal implementations on the most powerful computers. That group of implementation strategies on supercomputers became known as High Performance Computing (HPC).

HPC was born at Control Data Corporation (CDC) in 1960 with Seymour Cray, who in 1964 launched the first supercomputer, the CDC 6600 [24]. Later, in 1972, Cray left CDC to create his own company and, four years later, in 1976, he developed the Cray-1, which ran at 80 MHz and became the most famous supercomputer in history.

**Figure 3.** Intel multi-core processor die. Taken from [8]

**Figure 4.** CPU, GPU and FPGA transistor counts versus year (from the Intel 4004 and Motorola 6800 up to the Intel i7, NV-series GPUs and Virtex-series FPGAs).

After that, HPC kept working with clusters of computers using the best cores (processors) of each year. Each cluster was composed of a front-end station and compute nodes interconnected through Ethernet (see Figure 5). In that way, the evolution of HPC focused on increasing the number of nodes per cluster, improving the interconnection network and developing new software tools that take advantage of the cluster. This strategy began to be used by other companies around the world, in systems such as Fujitsu's Numerical Wind Tunnel, the Hitachi SR2201 and the Intel Paragon, among others, giving very good results.

**Figure 5.** General diagram of a cluster
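To give a flavor of how work is distributed across the nodes of such a cluster (a minimal sketch, assuming the mpi4py bindings and a launcher such as mpirun are available; the workload itself is a placeholder invented for the example), the following code splits a range of trace indices among the available processes and gathers the partial results on the front-end process:

```python
# Run with, for example:  mpirun -n 4 python cluster_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # identifier of this process (0 = front end)
size = comm.Get_size()          # total number of processes in the cluster job

n_traces = 1000                 # illustrative workload: 1000 traces to process
my_traces = range(rank, n_traces, size)     # round-robin split among processes

# Placeholder "processing": each node just counts the traces assigned to it
partial = len(list(my_traces))

# Gather the partial results on the front-end process (rank 0)
totals = comm.gather(partial, root=0)
if rank == 0:
    print("Traces processed per node:", totals, "total:", sum(totals))
```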

#### **2.7. SM consolidation**


On the other hand, the SM also received a great contribution from Jon Claerbout, who in 1973 formed the Stanford Exploration Project (SEP). The SEP, together with the GAG group from MIT, built the bases for many consortia that would be formed in the future, such as the Center for Wave Phenomena at the Colorado School of Mines. In 1978, R. H. Stolt presented a different approach to the SM, which was called "Migration by Fourier Transform" [41]. Compared with Claerbout's approach, the Stolt migration is faster and can handle formation dips of up to 90 degrees but is limited to constant velocities, whereas the Claerbout migration is insensitive to velocity variations but can only handle formation dips of up to 15 degrees.
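As an illustration of the frequency-domain idea behind Stolt's method, here is a minimal constant-velocity f-k migration sketch in Python/NumPy (not the original implementation; the function name, the exploding-reflector scaling and the linear interpolation are simplifying assumptions). It maps each temporal frequency of the data spectrum onto a vertical wavenumber of the image spectrum:

```python
import numpy as np

def stolt_migration(section, dt, dx, v):
    """Constant-velocity Stolt (f-k) migration of a zero-offset section.
    `section` has shape (nt, nx); the returned image has the same shape,
    with a depth axis sampled at dz = (v/2) * dt.  A minimal sketch."""
    nt, nx = section.shape
    c = v / 2.0                                   # exploding-reflector velocity
    dz = c * dt                                   # depth sampling of the image

    spec = np.fft.fft2(section)                   # P(omega, kx)
    omega = 2 * np.pi * np.fft.fftfreq(nt, d=dt)  # temporal angular frequency
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)     # horizontal wavenumber
    kz = 2 * np.pi * np.fft.fftfreq(nt, d=dz)     # vertical wavenumber of the image

    order = np.argsort(omega)                     # sort omega so np.interp can be used
    omega_sorted = omega[order]

    image_spec = np.zeros_like(spec)
    for ix in range(nx):
        col = spec[order, ix]
        # frequency that maps onto each output kz:  omega = sign(kz)*c*sqrt(kx^2 + kz^2)
        omega_map = np.sign(kz) * c * np.sqrt(kx[ix] ** 2 + kz ** 2)
        re = np.interp(omega_map, omega_sorted, col.real, left=0.0, right=0.0)
        im = np.interp(omega_map, omega_sorted, col.imag, left=0.0, right=0.0)
        # Jacobian of the change of variables from omega to kz
        jac = c * np.abs(kz) / np.maximum(np.sqrt(kx[ix] ** 2 + kz ** 2), 1e-12)
        image_spec[:, ix] = jac * (re + 1j * im)

    return np.real(np.fft.ifft2(image_spec))
```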

In 1983, three papers about a new two-way solution to migration were published at the same time; these works were authored by Whitmore; McMechan; and Baysal, Sherwood and Kosloff. Additionally, in 1987 Bleistein published an important paper on Kirchhoff migration, using a vastly improved treatment of seismic amplitudes and phases. From that moment, the SM could be done in space-time (x, t), frequency-space (f, x), wavenumber-time (k, t), or almost any combination of these domains.

Once the mathematical principles of the SM were developed, subsequent efforts focused on proposing efficient algorithms. Table 2 summarizes some of these implementations [6].

Perhaps one of the most important software developments for seismic data processing between 1989 and 2004 was carried out by Jack Cohen and John Stockwell at the Center for Wave Phenomena (CWP) of the Colorado School of Mines. This tool, called Seismic Unix (SU), has been one of the most widely used and downloaded packages worldwide.

Moreover, the combination of computer clusters with efficient algorithms quickly began to bear fruit, allowing pre-stack depth migrations to run in reasonable times. An important aspect to note is that almost all of the algorithms we use today were developed during that time and have been implemented as the technology has allowed.



**Table 2.** Development of efficient algorithms.

Over the years, seismic imaging became more and more demanding on the technology, because increasingly accurate subsurface images were needed. This led to the use of more complex mathematical attributes. Additionally, it became necessary to process the large volumes of data resulting from 3D and 4D seismic surveys and the use of multicomponent geophones. All of this, combined with the technological limitations of computers since 2000, made HPC developers begin to seek new technological alternatives.

These alternatives should provide a solution to the three barriers currently faced by computers (the Power Wall, the Memory Wall and the ILP Wall; these three barriers will be explained in detail below). For that reason, attention was focused on two emerging technologies that promised to address the three barriers better than conventional processors [14]: GPUs and FPGAs.

The following sections explain how these two technologies have been facing the three barriers, in order to map out the future of HPC in the area of seismic data processing.
