**2**

## **High-Quality Seamless Panoramic Images**

Jaechoon Chon, Jimmy Wang, Tom Slankard and John Ristevski *Earthmine Inc., Berkeley, USA* 

#### **1. Introduction**

28 Special Applications of Photogrammetry

Silva, D. C; Candeias, A. L. B. (2008). Restoration of Non Uniform Illumination in Color Aerial Photographs. *Boletim de Ciências Geodésicas*. Curitiba: Ufpr. V.14, N.3. Slater, Philip N. (1980). Remote Sensing, Optics and Optical Systems. Massachusetts:

Slater, Philip N. (Ed). (1983). Photographic Systems for Remote Sensing (Chap 6). In: Manual of Remote Sensing 2nd Edition. V.1. American Society of Photogrametry. Tuominen, Sakari; Pekkarinen, Anssi. (2004). Local Radiometric Correction of Digital Aerial

Wu, Xiaoliang; Campbell, Norm. (2004). A Colour-Balancing Method and Its Applications.

Wyszecki, G; Stiles (1982)., W.S. Color Science Concepts and Methods, Quantitative data and

Photographs for Multi Source Forest Inventory. *Remote Sensing of Environment*. N.

*In: The 12th Australasian Remote Sensing and Photogrammetric Conference Proceedings*.

Addison-Wesley Publishing. Il. 575 P..

Fremantle, Western Australia, 2004.

Formulae. 2th edition. John Wiley & Sons.

89, P. 72–82.

Image mosaicing is the creation of a larger image by stitching together smaller images. Image mosaicing has many different applications, such as satellite imagery mosaics (Chon et al., 2010,) the creation of virtual reality environments (Szeliski, 1996; Chon et al., 2007; Brown and Lowe, 2007,) medical image mosaics (Chou et al., 1997,) and video compression (Irani et al., 1995; Irani and Anandan, 1998; Kumar et al., 1995; Lee et al., 1997; Teodosio and Bender, 1993).

Image mosaicing has four steps: 1) estimating relative pose among smaller images to project on a plane or specific defined surfaces, 2) projecting those images on the surface, 3) correcting photometric parameters among projecting images, and 4) blending overlapping images.

The first step has two categories: 2D plane images (For example, scanned maps in sections and orthoimages) based methods and perspective image based methods. Orthoimages are generated by correcting each image pixel from a perspective image with a help of a Digital Elevation Model (DEM,) produced by a stereo camera system, a laser/radar aerial system or obtained of photogrammetric works. To align the images with the DEM, we need to estimate absolute orientation with ground control points (GCP.) In the case of perspective images, we need to estimate relative pose using homography, affine transformation, colinear conditions, and coplanner condition with feature correspondences. (Hartley and Zisserman & Books, 2003; McGlone et al. & Books, 2004; Szeliski & Books, 2011)

The second step projects all images onto a specific defined surface, such as 2D plane, cylinder (Chen, 1995; Szeliski, 1996), sphere (Coorg and Teller, 2000; Szeliski and Shum, 1997), and multiple projection planes (Chon et al., 2007.) In the case of cylinder and sphere, we assume that images are captured by fixing the position of a camera and rotating it at the position. And all images are projecting onto a cylinder or a sphere surface.

In the third step, we have to balance photometric parameters among projected images because exposure affects the photometric parameters of each image. At the last step, we have to create one image from multiple overlapping images.

Ideally each sample (pixel) along a ray would have the same intensity in every image that it intersects, but in reality this is not the case. Even after gain compensation some image edges are still visible due to a number of un-modeled effects, such as vignetting (the decrease of intensity towards the edge of the image,) parallax effects due to unwanted motion of the optical center, registration errors due to an incorrect or approximate camera model, radial distortion, and so on.

High-Quality Seamless Panoramic Images 31

In this chapter, we introduce luminance balancing using gamma correction, Dijkstra's algorithm for seam-line detection, our proposed fast color blending algorithm on seam-lines, and color balance and saturation implemented on EARTHMINE Inc.'s mobile mapping

A major challenge in merging multiple shots into a single panorama is that each individual image experiences its own exposure setting, even if each camera (when there are multiple cameras) is color calibrated. A common scenario that may cause such discrepancy is when the shots are taken, some cameras face towards the light source (e.g. the sun), and others face away from the light source. Due to the different exposure to the physical light source, some images will be brighter than others. Merging these images together inevitably causes a

To overcome this problem, we describe an algorithm that detects the differences in the luminance and chrominance in each image and attempts to equalize such differences. First, we assume that there exist overlapping regions between neighboring images. For the set of images, we assume there exist a single correction factor per channel (one luminance channel that governs image intensity and two chrominance channels that governs the color of the image) per image such that the luminance/chrominance of the overlapping areas match up. The ideal approach would be to find corresponding pixels between the overlapping areas and find the optimal correction factors using these correspondences. Specifically, we can

<sup>2</sup>

*I I* (2.1)

arg min *n np m mp <sup>p</sup>*

Fig. 2.1. An illustration of a subset of the correspondences between two images in the

where *Inp* and *Imp* are the *pth* corresponding channel value for image *In* and *Im* respectively; *γn* and *γm* are the correction factor for image *n* and *m.* A simple illustration is shown in

system (MMS) that generates 3D panoramic images as shown in Fig 1.

**2. Color balance** 

figure 2.1.

overlapped region.

visible seam at the merging points.

write the optimization process in the following form

Because of this, a good blending strategy is important (Brown and Lowe, 2007; Goldman, 2011; Kim and Pollefeys, 2008; Zomet et al., 2006.) In particular, applying seam-line detection algorithms before applying blending algorithms is an effective strategy when using image data that has parallax effects, because a seam-line detected by optimal path finding algorithms passes equal-depth pixels or similar colors pixels overlapping images.

c) 4 stereo fish-eye images

d) Image and 3D range panoramas

e) Applications of ArcGIS and AutoCAD Map3D

Fig. 1. EARTHMINE Inc.'s 3D mobile mapping systems, 3D panorama generated from eight fish-eye images captured using the system, and its applications.

In this chapter, we introduce luminance balancing using gamma correction, Dijkstra's algorithm for seam-line detection, our proposed fast color blending algorithm on seam-lines, and color balance and saturation implemented on EARTHMINE Inc.'s mobile mapping system (MMS) that generates 3D panoramic images as shown in Fig 1.

#### **2. Color balance**

30 Special Applications of Photogrammetry

Because of this, a good blending strategy is important (Brown and Lowe, 2007; Goldman, 2011; Kim and Pollefeys, 2008; Zomet et al., 2006.) In particular, applying seam-line detection algorithms before applying blending algorithms is an effective strategy when using image data that has parallax effects, because a seam-line detected by optimal path finding algorithms passes equal-depth pixels or similar colors pixels overlapping images.

a) Vehicle system b) Quad cycle system

c) 4 stereo fish-eye images

d) Image and 3D range panoramas

e) Applications of ArcGIS and AutoCAD Map3D Fig. 1. EARTHMINE Inc.'s 3D mobile mapping systems, 3D panorama generated from eight

fish-eye images captured using the system, and its applications.

A major challenge in merging multiple shots into a single panorama is that each individual image experiences its own exposure setting, even if each camera (when there are multiple cameras) is color calibrated. A common scenario that may cause such discrepancy is when the shots are taken, some cameras face towards the light source (e.g. the sun), and others face away from the light source. Due to the different exposure to the physical light source, some images will be brighter than others. Merging these images together inevitably causes a visible seam at the merging points.

To overcome this problem, we describe an algorithm that detects the differences in the luminance and chrominance in each image and attempts to equalize such differences. First, we assume that there exist overlapping regions between neighboring images. For the set of images, we assume there exist a single correction factor per channel (one luminance channel that governs image intensity and two chrominance channels that governs the color of the image) per image such that the luminance/chrominance of the overlapping areas match up.

The ideal approach would be to find corresponding pixels between the overlapping areas and find the optimal correction factors using these correspondences. Specifically, we can write the optimization process in the following form

$$\arg\min \left[ \sum\_{p} \left( \gamma\_n I\_{np} - \gamma\_m I\_{mp} \right)^2 \right] \tag{2.1}$$

where *Inp* and *Imp* are the *pth* corresponding channel value for image *In* and *Im* respectively; *γn* and *γm* are the correction factor for image *n* and *m.* A simple illustration is shown in figure 2.1.

Fig. 2.1. An illustration of a subset of the correspondences between two images in the overlapped region.

High-Quality Seamless Panoramic Images 33

We used a correction factor of 2.2 according to common practice (Poynton 2002). Xiong and Pulli (2009) also used a very similar objective. The final objective over all the images is thus

*d n np m mp pn pm*

1 1

*dk k k k p k*

1 1 arg min 1 1

arg min [] [ ] 1 *l l*

*EI EI*

In the above equation, we assume that there are only two overlapping images (left and right) for a given image. While this is often the typical scenario, it is straightforward to generalize to any arbitrary amount of images. This objective function thus can be optimized

To create a seamless panorama from a multiple stereo fish-eye camera system, with a system like the EARTHMINE Inc, is necessary an optimal seam-line detection algorithm (see Figs.

Dijkstra's algorithm to find an optimal path within a cost space (Bellman, 1957; Dijkstra, 1959.) A seam-line can be detect by using the Dijkstra's algorithm with a cost map built by a cross correlation between two overlapping images (Milgram, 1975, 1977; Davis, 1998; Efros

a) b) Fig. 3.1. Chon et al. (2010) proposed seam-line algorithm explained with the waterway on the terrain structure. a) High water level creates many possible paths between two points A

Chon et al. (2010) proposed a novel algorithm for selecting a seam-line that at first identifies a subarea within two overlapping images such that in the subarea, at least a seam line exists and the maximal mismatch score in this subarea is minimized. Let the degree of mismatching of pixels between two images on the overlapped region form a cost field, which can be expressed in a 3D plot. If a threshold is set with all cost values under the

and B. b) Lowered water level makes the maximum cost smaller.

*P P I I*

*P P*

, , 0

*k*

where *P* is the number of overlapping pixels.

1

with any of-the-shelf optimization package.

,

*n m*

**3. Seam line detection** 

1(c) and 1(d).)

and Freeman, 2001.)

 2 2.2 2.2 2 2

(2.4)

2 2

(2.5)

However, finding pixel correspondences between pairs of images is non-trivial and most currently available algorithms such as SIFT (Lowe, 2004) are very computationally expensive. For applications that have limited processing power (e.g. image stitching on mobile platforms) or real time processing requirements, such approach is inadequate.

Brown and Lowe (2007) and Xiong and Pulli (2010) pointed out that because luminance/chrominance is a global entity, instead of using pixel correspondences, we can approximate them using the mean luminance / chrominance value between the overlapping region. Eq(2.1) thus becomes

$$\arg\min \left[ \left( \gamma\_n E[I\_n] - \gamma\_m E[I\_m] \right)^2 \right] \tag{2.2}$$

where E[In] and E[Im] denotes the average luminance / chrominance values of the overlapped image regions from image n and m respectively.

Note that there is a trivial solution to Eq(2.2), where when the γs are set to 0, the objective function is at its minimum. To prevent this, we add a regularization term in the form of 1 γ, such that this term is non-zero when γ is 0. In addition, this term prevents the luminance / chrominance from changing too much, which is often desired. The objective function thus becomes

$$\underset{\gamma\_{n},\gamma\_{m}}{\arg\min} \left\lfloor \left\| \alpha\_{d} \left( \gamma\_{n} \boldsymbol{E}[I\_{n}] - \gamma\_{m} \boldsymbol{E}[I\_{m}] \right)^{2} + \alpha\_{p} \left( 1 - \gamma\_{n} \right)^{2} + \alpha\_{p} \left( 1 - \gamma\_{m} \right)^{2} \right\rfloor \tag{2.3}$$

where *wd* and *wp* are weights for the two terms.

The human visual system perceives brightness according to the Weber-Fechner law where the perceived brightness is not a linear function of the physical brightness but rather follows a log function (red line in figure 2.1; blue line indicates a linear response).

Fig. 2.2. An illustration of Weber-Fechner law of brightness perception.

To account for this effect, we linearize the luminance channel with a power function such that the brightness perception curve is linear (some cameras have this linearization in the post-processing step. If so this step can be omitted). The objective function for the luminance channel becomes

However, finding pixel correspondences between pairs of images is non-trivial and most currently available algorithms such as SIFT (Lowe, 2004) are very computationally expensive. For applications that have limited processing power (e.g. image stitching on mobile platforms) or real time processing requirements, such approach is inadequate.

Brown and Lowe (2007) and Xiong and Pulli (2010) pointed out that because luminance/chrominance is a global entity, instead of using pixel correspondences, we can approximate them using the mean luminance / chrominance value between the overlapping

where E[In] and E[Im] denotes the average luminance / chrominance values of the

Note that there is a trivial solution to Eq(2.2), where when the γs are set to 0, the objective function is at its minimum. To prevent this, we add a regularization term in the form of 1 γ, such that this term is non-zero when γ is 0. In addition, this term prevents the luminance / chrominance from changing too much, which is often desired. The objective function thus

arg min [] [ ] 1 1

a log function (red line in figure 2.1; blue line indicates a linear response).

Fig. 2.2. An illustration of Weber-Fechner law of brightness perception.

overlapped image regions from image n and m respectively.

 <sup>2</sup> arg min [ ] [ ] *nn mm EI EI*

22 2

*dn n m m p n p m EI EI*

The human visual system perceives brightness according to the Weber-Fechner law where the perceived brightness is not a linear function of the physical brightness but rather follows

To account for this effect, we linearize the luminance channel with a power function such that the brightness perception curve is linear (some cameras have this linearization in the post-processing step. If so this step can be omitted). The objective function for the luminance

(2.2)

(2.3)

region. Eq(2.1) thus becomes

,

*n m*

where *wd* and *wp* are weights for the two terms.

becomes

channel becomes

$$\arg\min\_{\gamma\_{n},\gamma\_{n}} \left| \alpha\_{d} \left( \gamma\_{n} \frac{1}{P} \sum\_{p} (I\_{np})^{2.2} - \gamma\_{m} \frac{1}{P} \sum\_{p} (I\_{mp})^{2.2} \right)^{2} + \alpha\_{p} \left( 1 - \gamma\_{n} \right)^{2} + \alpha\_{p} \left( 1 - \gamma\_{m} \right)^{2} \right| \tag{2.4}$$

where *P* is the number of overlapping pixels.

We used a correction factor of 2.2 according to common practice (Poynton 2002). Xiong and Pulli (2009) also used a very similar objective. The final objective over all the images is thus

$$\underset{\boldsymbol{\gamma}\_{1},\cdots,\boldsymbol{\gamma}\_{k}}{\arg\min} \left[ \sum\_{k=0}^{k} \left( \alpha\_{d} \left( \gamma\_{k} \boldsymbol{E} [\boldsymbol{I}\_{k}^{l}] - \gamma\_{k-1} \boldsymbol{E} [\boldsymbol{I}\_{k-1}^{l}] \right)^{2} + \alpha\_{p} \left( 1 - \gamma\_{k} \right)^{2} \right) \right] \tag{2.5}$$

In the above equation, we assume that there are only two overlapping images (left and right) for a given image. While this is often the typical scenario, it is straightforward to generalize to any arbitrary amount of images. This objective function thus can be optimized with any of-the-shelf optimization package.

#### **3. Seam line detection**

To create a seamless panorama from a multiple stereo fish-eye camera system, with a system like the EARTHMINE Inc, is necessary an optimal seam-line detection algorithm (see Figs. 1(c) and 1(d).)

Dijkstra's algorithm to find an optimal path within a cost space (Bellman, 1957; Dijkstra, 1959.) A seam-line can be detect by using the Dijkstra's algorithm with a cost map built by a cross correlation between two overlapping images (Milgram, 1975, 1977; Davis, 1998; Efros and Freeman, 2001.)

Fig. 3.1. Chon et al. (2010) proposed seam-line algorithm explained with the waterway on the terrain structure. a) High water level creates many possible paths between two points A and B. b) Lowered water level makes the maximum cost smaller.

Chon et al. (2010) proposed a novel algorithm for selecting a seam-line that at first identifies a subarea within two overlapping images such that in the subarea, at least a seam line exists and the maximal mismatch score in this subarea is minimized. Let the degree of mismatching of pixels between two images on the overlapped region form a cost field, which can be expressed in a 3D plot. If a threshold is set with all cost values under the

High-Quality Seamless Panoramic Images 35

We adopts Dijkstra's algorithm to find an optimal seam-line on a cost space built by using the normalized cross correlation (*NCC*) between two overlapping images for pixel (*u,v*) (Lewis, 1995). The *NCC* has been widely using image matching for stereo matching, feature

/2 /2

/2 /2

*i=u w j=v h*

*uv I* are averages of each image in a 5*×*5 window.

The cost value approaches 0.0 for similar pixel points and 1.0 for dissimilar pixels.

*I(i, j) I I (i, j) I*

*uv uv*

(3.2)

(3.3)

*uv uv*

2 . 0

*I(i, j) I I (i, j) I*

/2 /2 /2 /2 <sup>2</sup> <sup>2</sup>

*u+w v+h ' '*

*u+w v+h u+w v+h ' '*

/2 /2 /2 /2

*i=u w j=v h i=u w j=v h*

The *NCC* between two images for pixel (u; v) is computed using the 5*×*5 subimages as in Eq. (3.2). Note that *NCC* has a range of [-1.0, 1.0]. A cost (degree of mismatch) at pixel *(u, v)*,

cos , 1.0 - , / . *t u v NCC u v*

Dijkstra's algorithm is a global optimization technique that determines the optimal path on the cost space by taking the local minimum operation at each node. To apply to optimal seam line searching, each pixel in an overlapping area is associated with a node, which has 8

0 inf inf

inf

inf inf inf

Fig. 3.3. The cost *uv,kl d* between two nodes (*u,v*) and (*k,l*) built by using *NCC*.

goal node

*uv kl d* ,

(*k,l*) (*u,v*)

**3.1 Cost space** 

tracking, etc.

where *uv I* and *'*

*cost(u, v)*, is defined as

**3.2 Dijkstra's algorithm** 

neighboring nodes, with four in diagonal directions.

starting node

inf

*NCC(u,v) =*

threshold filled with ``water'', then the 3D plot will look like the ones in Fig. 4, where Fig. 3.1(a) uses a larger threshold. The plots show some ``water ways'' between points A and B, and the threshold defines the water level. Fig. 3.1(b) shows that the allowed path is not near the shortest one. The technique then further applies Dijkstra's algorithm to find an optimal path within the restricted subspace. In this optimization phase, a cost conversion is applied to make a higher cost (the mismatch score) larger. This enables the search to find a possibly longer seam-line with less highly mismatched pairs.

Kerscher (2001) proposed a method called the "twin snake algorithm" to detect seam-lines. The algorithm starts with two initial vertical lines as control points of the snakes on the overlapping images. The snakes have two energy terms which are internal and external energies in general (Kass et al., 1987; Leymarie and Levine, 1993; Tiilikainen, 2007; Williams and Shah, 1990). The sum of the mismatching values on those lines and the relationship between neighbor control points are called *internal energy* and *external energy*, respectively. The twin snake algorithm builds an energy function \* *E*snake from four terms: the internal term *E*int , the photometric term *E*pho , and the external force *E*ext . The energy is calculated for each vertex *v*(*s*) and integrated over the whole length of the snake:

$$E\_{\rm snake}^{\*} = \underset{\rm top\atop\partial}{\left(} E\_{\rm snake} \left( \upsilon(\text{s}) \right) ds = \underset{\rm top\atop\partial}{\left( \left( E\_{\rm int} \left( \upsilon(\text{s}) \right) + E\_{\rm pho} \left( \upsilon(\text{s}) \right) + E\_{\rm ext} \left( \upsilon(\text{s}) \right) \right) \right)} ds \tag{3.1}$$

The internal energy *E*int tries to preserve a smooth shape for the curve. Photometric energy *E*pho usually evaluates edge strength or similar measures in the examined image and tries to pull the snake to salient image features. External energy *E*ext can be introduced by user interaction and are responsible for globally controlling and guiding the snake evolution (Kerscher, 2001.) The curve with minimum energy as shown in Fig. 3.2 is determined to be the optimal seam line.

Fig. 3.2. Initial vertical lines as control points (dotted line) and a detected seam line (curved line).

This algorithm cannot completely overcome the local minima problem, and it requires a high computation load. Even though Chon et al. (2010) avoid the local minima problem, it requires a high computation cost as well because of finding the best threshold.

#### **3.1 Cost space**

34 Special Applications of Photogrammetry

threshold filled with ``water'', then the 3D plot will look like the ones in Fig. 4, where Fig. 3.1(a) uses a larger threshold. The plots show some ``water ways'' between points A and B, and the threshold defines the water level. Fig. 3.1(b) shows that the allowed path is not near the shortest one. The technique then further applies Dijkstra's algorithm to find an optimal path within the restricted subspace. In this optimization phase, a cost conversion is applied to make a higher cost (the mismatch score) larger. This enables the search to find a possibly

Kerscher (2001) proposed a method called the "twin snake algorithm" to detect seam-lines. The algorithm starts with two initial vertical lines as control points of the snakes on the overlapping images. The snakes have two energy terms which are internal and external energies in general (Kass et al., 1987; Leymarie and Levine, 1993; Tiilikainen, 2007; Williams and Shah, 1990). The sum of the mismatching values on those lines and the relationship between neighbor control points are called *internal energy* and *external energy*, respectively. The twin snake algorithm builds an energy function \* *E*snake from four terms: the internal term *E*int , the photometric term *E*pho , and the external force *E*ext . The energy is calculated

*E = E (v(s))ds = E (v(s))+ E (v(s))+ E (v(s)) ds* (3.1)

longer seam-line with less highly mismatched pairs.

for each vertex *v*(*s*) and integrated over the whole length of the snake:

snake snake int pho ext

Fig. 3.2. Initial vertical lines as control points (dotted line) and a detected seam line

requires a high computation cost as well because of finding the best threshold.

This algorithm cannot completely overcome the local minima problem, and it requires a high computation load. Even though Chon et al. (2010) avoid the local minima problem, it

The internal energy *E*int tries to preserve a smooth shape for the curve. Photometric energy *E*pho usually evaluates edge strength or similar measures in the examined image and tries to pull the snake to salient image features. External energy *E*ext can be introduced by user interaction and are responsible for globally controlling and guiding the snake evolution (Kerscher, 2001.) The curve with minimum energy as shown in Fig. 3.2 is determined to be

1 1

0 0

\*

the optimal seam line.

(curved line).

We adopts Dijkstra's algorithm to find an optimal seam-line on a cost space built by using the normalized cross correlation (*NCC*) between two overlapping images for pixel (*u,v*) (Lewis, 1995). The *NCC* has been widely using image matching for stereo matching, feature tracking, etc.

$$\text{NCC}(u,v) = \frac{\sum\_{i=u-w/2}^{u+w/2} \sum\_{j=v-h/2}^{v+h/2} \left( I(i,j) - I\_{uv} \right) \left( I'(i,j) - I\_{uv}' \right)}{\sqrt{\sum\_{i=u-w/2}^{u+w/2} \sum\_{j=v-h/2}^{v+h/2} \left( I(i,j) - I\_{uv} \right)^2 \sum\_{i=u-w/2}^{v+w/2} \sum\_{j=v-h/2}^{v+h/2} \left( I'(i,j) - I\_{uv}' \right)^2}} \tag{3.2}$$

where *uv I* and *' uv I* are averages of each image in a 5*×*5 window.

The *NCC* between two images for pixel (u; v) is computed using the 5*×*5 subimages as in Eq. (3.2). Note that *NCC* has a range of [-1.0, 1.0]. A cost (degree of mismatch) at pixel *(u, v)*, *cost(u, v)*, is defined as

$$\text{cost}(u, v) = \left\{1.0 \text{ - NCC}(u, v)\right\} / \text{ } \mathcal{Z} \mathcal{O}.\tag{5.3}$$

The cost value approaches 0.0 for similar pixel points and 1.0 for dissimilar pixels.

#### **3.2 Dijkstra's algorithm**

Dijkstra's algorithm is a global optimization technique that determines the optimal path on the cost space by taking the local minimum operation at each node. To apply to optimal seam line searching, each pixel in an overlapping area is associated with a node, which has 8 neighboring nodes, with four in diagonal directions.

Fig. 3.3. The cost *uv,kl d* between two nodes (*u,v*) and (*k,l*) built by using *NCC*.

High-Quality Seamless Panoramic Images 37

Fig. 3.5. Four seam-lines detected on four overlapping images in a panorama; Blue and red

Even though color transitions among all overlapping images are smoothed, the color matching only provides an approximate match. Because color differences among the images corrected by using the color matching method are not enough, Xiong and Pulli (2010) proposed an effective blending method with fast processing speed and high blending

The method calculates the color differences between each *p* and all points *D(p)* on each of a seam lines *mc* and then interpolate *D(p)* at a pixel *q* of the blending image (see Fig. 4.1(a)).

1

Where *i* is an index of the seam points consisting of a seam-line *mc* and weights

Additionally, they proposed the creation of a color difference distribution process to enforce color consistency for 360-degree panorama. They attenuate the color of the pixel *q* in the

1

*<sup>x</sup> C(q) = C(q)+ D(q)*

where *x* and *<sup>b</sup> x* are the horizontal distances between *q* and the seam point *p* of the seam line

*ms* and between *p* and the end of the blending area, respectively, in Fig. 4.1(b).

*b*

*x* 

*i=*

*i i*

are the inverse coordinate distance to the boundary pixels.

*D(q) = w (q)D(p )* (4.1)

(4.2)

*n*

Finally, the color value *C(q)* at pixel *q* are changed as *C(q)= C(q)* + *D(p)*.

points are starting and goal nodes, respectively.

**4. Fast color blending on seam-lines** 

quality.

1

*j=*

*i n*

1 /

*<sup>p</sup> <sup>q</sup> w (q) =*

1 /

*i*

*j*

*p q*

blending area on the current scan line with

Let the node at which we are starting be called the initial node. Let (*u,v*) specify a node and (*k,l*) be a neighboring node of node (*u,v*). NBR(*u,v*) indicates the set of neighboring nodes of node (*u,v*). Let the cost *uv,kl d* be a path cost between two nodes (*u,v*) and (*k,l*). Let the global

minimum cost *Di*(*u,v*) be a accumulated cost from the starting node to (*u,v*). Dijkstra's algorithm will assign some initial costs to avoid some areas and will try to improve them step by step.

Assign to every node a tentative cost *Di*: set it to zero for the starting node and to infinity for all other nodes.

1. Mark all nodes as unvisited. Set the starting node as current. For current node (*u,v*), consider all its unvisited neighbors and calculate their tentative cost *Di*.

$$\text{Di}(\mathfrak{u}, \mathfrak{v}) = \min \left\{ d\_{uv, \text{kl}} + D\_i(\mathfrak{k}, l); (\mathfrak{k}, l) \in \text{NBR}(\mathfrak{u}, \mathfrak{v}) \right\} \tag{3.4}$$

For example, if current node (*u,v*) has cost of 6, and a path cost *uv,kl d* is 2, the cost to node (*k,l*) through node (*u,v*) will be 6+2=8. If this cost is less than the previously recorded cost, overwrite the cost. All unvisited neighbors are added to an unvisited set.


Fig. 3.4. Finding the path with the minimum cost from the goal to staring nodes.

Fig. 3.5 shows four seam-lines detected on four overlapping images using Dijkstra's algorithm on the cost spaces. Each overlapping image is generated from fish-eye images as shown in Fig. 1(c) using a sphere projection method (Coorg and Teller, 2000; Genner, 2006; Kim et al., 2004; Szeliski and Shum, 1997; Yakimovsky and Cunningham, 1978.) The size of each overlapping image is about 30 % of a panoramic image.

Let the node at which we are starting be called the initial node. Let (*u,v*) specify a node and (*k,l*) be a neighboring node of node (*u,v*). NBR(*u,v*) indicates the set of neighboring nodes of node (*u,v*). Let the cost *uv,kl d* be a path cost between two nodes (*u,v*) and (*k,l*). Let the global minimum cost *Di*(*u,v*) be a accumulated cost from the starting node to (*u,v*). Dijkstra's algorithm will assign some initial costs to avoid some areas and will try to improve them

Assign to every node a tentative cost *Di*: set it to zero for the starting node and to infinity for

1. Mark all nodes as unvisited. Set the starting node as current. For current node (*u,v*),

For example, if current node (*u,v*) has cost of 6, and a path cost *uv,kl d* is 2, the cost to node (*k,l*) through node (*u,v*) will be 6+2=8. If this cost is less than the previously recorded cost, overwrite the cost. All unvisited neighbors are added to an unvisited set. 2. When we are done considering all neighbors of the current node, mark it as visited. A visited node will not be checked ever again; its cost recorded now is final and minimal.

*Di(u,v) = d + D (k,l);(k,l) NBR(u,v)* min *uv,kl i* (3.4)

consider all its unvisited neighbors and calculate their tentative cost *Di*.

3. The next current node will be the node with the lowest cost in the unvisited set.

between the starting and goal nodes.

starting node

20

90

each overlapping image is about 30 % of a panoramic image.

80

10

80

7

Fig. 3.4. Finding the path with the minimum cost from the goal to staring nodes.

110

40

4. If all nodes have been visited, finish and then find the path with the minimum cost

0 30 40

10

5

90 <sup>50</sup> <sup>46</sup>

70

<sup>60</sup> <sup>30</sup>

55 43

20 75 70

82 85 90

Fig. 3.5 shows four seam-lines detected on four overlapping images using Dijkstra's algorithm on the cost spaces. Each overlapping image is generated from fish-eye images as shown in Fig. 1(c) using a sphere projection method (Coorg and Teller, 2000; Genner, 2006; Kim et al., 2004; Szeliski and Shum, 1997; Yakimovsky and Cunningham, 1978.) The size of

3 5

goal node

step by step.

all other nodes.

Fig. 3.5. Four seam-lines detected on four overlapping images in a panorama; Blue and red points are starting and goal nodes, respectively.

#### **4. Fast color blending on seam-lines**

Even though color transitions among all overlapping images are smoothed, the color matching only provides an approximate match. Because color differences among the images corrected by using the color matching method are not enough, Xiong and Pulli (2010) proposed an effective blending method with fast processing speed and high blending quality.

The method calculates the color differences between each *p* and all points *D(p)* on each of a seam lines *mc* and then interpolate *D(p)* at a pixel *q* of the blending image (see Fig. 4.1(a)). Finally, the color value *C(q)* at pixel *q* are changed as *C(q)= C(q)* + *D(p)*.

$$D(q) = \sum\_{i=1}^{n} w\_i(q) D(p\_i) \tag{4.1}$$

Where *i* is an index of the seam points consisting of a seam-line *mc* and weights 1 1 / 1 / *i i n j j= <sup>p</sup> <sup>q</sup> w (q) = p q* are the inverse coordinate distance to the boundary pixels.

Additionally, they proposed the creation of a color difference distribution process to enforce color consistency for 360-degree panorama. They attenuate the color of the pixel *q* in the blending area on the current scan line with

$$\mathbf{C}(q) = \mathbf{C}(q) + \left(\mathbf{1} - \frac{\mathbf{x}}{\mathbf{x}\_b}\right) \mathbf{D}(q) \tag{4.2}$$

where *x* and *<sup>b</sup> x* are the horizontal distances between *q* and the seam point *p* of the seam line *ms* and between *p* and the end of the blending area, respectively, in Fig. 4.1(b).

High-Quality Seamless Panoramic Images 39

To evaluate whether seam points are poorly aligned or well aligned, we first build a histogram that accumulates the standard deviation in the horizontal direction, shown in Fig 4.2(c) as the thick green curve, and then find the peak point '\*' in the thick green. A dotted red horizontal line in Fig. 4.2(c) chosen by the peak point is a reference standard deviation. We add a margin into the reference standard deviation to calculate a threshold, a red

a) A seam-line

c) Standard deviation and histogram Fig. 4.2. The color differences and standard deviation of seam points; '\*' is the most peak point in a histogram and the dotted and solid horizontal lines are the reference standard

We cannot remove the color differences in a dotted circle in Fig. 4.2(b) using only this single threshold, because the standard deviation of the color differences of seam points in the circle

b) Color differences

horizontal line in Fig. 4.2(c).

sky area

**\***

deviation and threshold, respectively, in c).

peak

Fig. 4.1. Image blending on a seam line and color difference distribution for 360-degree panoramic images (Xing and Pulli, 2010).

#### **4.1 The proposed method**

If overlapping images are well aligned, Xiong and Pulli (2010) proposed method works perfectly. However, poor image alignment due to parallax, registration errors, and radial distortions can lead to poor blending results in general.

To cope with this problem, we propose filtered color differences using multiple major color differences detected on a seam-line. When the seam points are well aligned, the difference between the neighboring points will be smoothly changed. Because seam points with misalignment cause sudden changes, we simply filter out these seam points and keep good aligned seam points. And we then replace the misaligned seam points with interpolated color differences using the ends of two neighbor seam points on a seam-line. We detect smoothly changed color differences using a median filter and the standard deviation of the changes.

#### **4.2 Detection of good aligned seam points**

Fig. 4.2(a) shows an overlapping image that is located at the rightmost side of the panoramic image shown in Fig. 3.5. Fig. 4.2(b) shows a graph of color differences corresponding to the seam points of the seam line shown in Fig 4.2(a). If the seam points are well aligned, the differences will be smooth like the differences for seam points in the sky area.

Before applying Eq (4.2), we have to remove the color differences of poorly aligned seam points. To detect smooth changing parts, we use a threshold based on the standard deviation of the color differences. The graph in Fig 4.2(c) depicts the standard deviations of the color differences for each channel. The standard deviation of poorly aligned seam points is higher than that of the sky and road.

a)

b)

If overlapping images are well aligned, Xiong and Pulli (2010) proposed method works perfectly. However, poor image alignment due to parallax, registration errors, and radial

To cope with this problem, we propose filtered color differences using multiple major color differences detected on a seam-line. When the seam points are well aligned, the difference between the neighboring points will be smoothly changed. Because seam points with misalignment cause sudden changes, we simply filter out these seam points and keep good aligned seam points. And we then replace the misaligned seam points with interpolated color differences using the ends of two neighbor seam points on a seam-line. We detect smoothly changed color differences using a median filter and the standard deviation of the changes.

Fig. 4.2(a) shows an overlapping image that is located at the rightmost side of the panoramic image shown in Fig. 3.5. Fig. 4.2(b) shows a graph of color differences corresponding to the seam points of the seam line shown in Fig 4.2(a). If the seam points are well aligned, the

Before applying Eq (4.2), we have to remove the color differences of poorly aligned seam points. To detect smooth changing parts, we use a threshold based on the standard deviation of the color differences. The graph in Fig 4.2(c) depicts the standard deviations of the color differences for each channel. The standard deviation of poorly aligned seam points

differences will be smooth like the differences for seam points in the sky area.

Fig. 4.1. Image blending on a seam line and color difference distribution for 360-degree

panoramic images (Xing and Pulli, 2010).

distortions can lead to poor blending results in general.

**4.2 Detection of good aligned seam points** 

is higher than that of the sky and road.

**4.1 The proposed method** 

To evaluate whether seam points are poorly aligned or well aligned, we first build a histogram that accumulates the standard deviation in the horizontal direction, shown in Fig 4.2(c) as the thick green curve, and then find the peak point '\*' in the thick green. A dotted red horizontal line in Fig. 4.2(c) chosen by the peak point is a reference standard deviation. We add a margin into the reference standard deviation to calculate a threshold, a red horizontal line in Fig. 4.2(c).

Fig. 4.2. The color differences and standard deviation of seam points; '\*' is the most peak point in a histogram and the dotted and solid horizontal lines are the reference standard deviation and threshold, respectively, in c).

We cannot remove the color differences in a dotted circle in Fig. 4.2(b) using only this single threshold, because the standard deviation of the color differences of seam points in the circle

High-Quality Seamless Panoramic Images 41

a) A single peak point

b) Two peak points

a)

b)

Fig. 4.5. Comparison between a) Xiong and Pulli (2010) method and b) The extended

method.

Fig. 4.4. A panorama re-created by our extended seam-line blending algorithm with

multiple peak points chosen in a histogram of color differences

is low like the reference standard deviation. To remove the color differences of those seam points on the seam-line, we use the same process to build the second histogram using the color differences of the remaining seam points, which are determined by using the first histogram built by using the standard deviation.

After detecting a peak point on the second histogram as a reference of color difference, two thresholds are calculated by adding and subtracting a margin. In our experiments, the margin is set as 15. If a single reference is only applied, bad blending will occur as shown in Fig 4.3.

Fig. 4.3. Our extended algorithm with a single peak point

To suppress this problem, we choose multiple peak points in the histogram. If a peak point is including over 20% of the number of all seam points on the seam-line, the peak point becomes one of the references of color difference. Figs. 4.4(a) and 4.4(b) show color blending results by using one and two peak points chosen by our algorithm with each second histogram, respectively.

As the final step, we have to replace the color differences of seam points detected as poorly aligned seam points during the previous step. A line or curve is built by using both two ends of neighbor seam points from well-aligned seam points.

Figs. 4.5(a) and 4.5(b) show panoramic images created by using method of Xiong and Pulli (2010) and our extended method, respectively. When comparing the area in the dotted circle in Fig. 4.5(a), our extended method made a better result then that by Xiong and Pulli's method. However, our extended method did not perform as well within the dotted square in Fig. 4.5(b). This is originated from big differences among the values of the red, green, and blue channels of one or two end points of the remained seam points as reference color differences. If we change color differences into gray differences, this phenomenon will be disappear. However, subtle color differences among RGB channels could not recovered. We apply a median filter with a couple of seam points around the two end seam points to remove the incorrect differences. Fig. 4.6(a) shows an image including the phenomenon that is an overlapping image in the dotted square in Fig. 4.5(b) and a graph of color differences. Figs. 4.6(b) and 4.6(c) show a graph of filtered color differences and a final result using the filtered color differences, respectively.

is low like the reference standard deviation. To remove the color differences of those seam points on the seam-line, we use the same process to build the second histogram using the color differences of the remaining seam points, which are determined by using the first

After detecting a peak point on the second histogram as a reference of color difference, two thresholds are calculated by adding and subtracting a margin. In our experiments, the margin is set as 15. If a single reference is only applied, bad blending will occur as shown in

To suppress this problem, we choose multiple peak points in the histogram. If a peak point is including over 20% of the number of all seam points on the seam-line, the peak point becomes one of the references of color difference. Figs. 4.4(a) and 4.4(b) show color blending results by using one and two peak points chosen by our algorithm with each second

As the final step, we have to replace the color differences of seam points detected as poorly aligned seam points during the previous step. A line or curve is built by using both two

Figs. 4.5(a) and 4.5(b) show panoramic images created by using method of Xiong and Pulli (2010) and our extended method, respectively. When comparing the area in the dotted circle in Fig. 4.5(a), our extended method made a better result then that by Xiong and Pulli's method. However, our extended method did not perform as well within the dotted square in Fig. 4.5(b). This is originated from big differences among the values of the red, green, and blue channels of one or two end points of the remained seam points as reference color differences. If we change color differences into gray differences, this phenomenon will be disappear. However, subtle color differences among RGB channels could not recovered. We apply a median filter with a couple of seam points around the two end seam points to remove the incorrect differences. Fig. 4.6(a) shows an image including the phenomenon that is an overlapping image in the dotted square in Fig. 4.5(b) and a graph of color differences. Figs. 4.6(b) and 4.6(c) show a graph of filtered color differences and a final result using the

histogram built by using the standard deviation.

Fig. 4.3. Our extended algorithm with a single peak point

ends of neighbor seam points from well-aligned seam points.

Fig 4.3.

histogram, respectively.

filtered color differences, respectively.

a) A single peak point

b) Two peak points

Fig. 4.4. A panorama re-created by our extended seam-line blending algorithm with multiple peak points chosen in a histogram of color differences

a)

b)

Fig. 4.5. Comparison between a) Xiong and Pulli (2010) method and b) The extended method.

High-Quality Seamless Panoramic Images 43

To make the final panorama more aesthetically pleasing, we apply additional processing to increase contrast, color balance, and color saturation. The additional processing also has the added benefit of making the collection process more tolerant to different lighting

We achieve color/white balance by using existing auto-white balance algorithm (employed in many image-editing applications, such as Adobe Photoshop). This algorithm computes histograms for each color channel (red, green, and blue) and then in each channel "discards" the 0.5% darkest and 0.5% lightest pixel colors from the histogram and then stretches the resulting histogram to [0, 255] effectively making all pixels that fell in the discarded ranges

The above process has two effects. First, it increases the apparent contrast of the image. Second, it can cause a noticeable hue shift because the algorithm operates on the color channels individually. This is desirable for panoramas taken outdoors in natural light because it makes overly "cool" scenes appear warmer and vice versa--providing a good overall aesthetic improvement robustly under varying weather conditions. On the other hand, when the algorithm processes scenes with significantly dark scenes (such as panoramas in tunnels or those taken at night under artificial lighting) the hue shift will be much more pronounced and often produces very poor results. Thus, this color balance algorithm is only applied to images that is taken under decent lightly conditions. Figure 5

a) Only our extended method

**5. Color balance and saturation** 

"black" and "white" respectively.

illustrates the effect of this algorithm.

conditions.

c) Final panoramic image

Fig. 4.6. Median filter to remove incorrect interpolated color differences and a final panoramic image.

a) Linear interpolated color differences

b) Color differences by median filter

c) Final panoramic image

Fig. 4.6. Median filter to remove incorrect interpolated color differences and a final

panoramic image.

## **5. Color balance and saturation**

To make the final panorama more aesthetically pleasing, we apply additional processing to increase contrast, color balance, and color saturation. The additional processing also has the added benefit of making the collection process more tolerant to different lighting conditions.

We achieve color/white balance by using existing auto-white balance algorithm (employed in many image-editing applications, such as Adobe Photoshop). This algorithm computes histograms for each color channel (red, green, and blue) and then in each channel "discards" the 0.5% darkest and 0.5% lightest pixel colors from the histogram and then stretches the resulting histogram to [0, 255] effectively making all pixels that fell in the discarded ranges "black" and "white" respectively.

The above process has two effects. First, it increases the apparent contrast of the image. Second, it can cause a noticeable hue shift because the algorithm operates on the color channels individually. This is desirable for panoramas taken outdoors in natural light because it makes overly "cool" scenes appear warmer and vice versa--providing a good overall aesthetic improvement robustly under varying weather conditions. On the other hand, when the algorithm processes scenes with significantly dark scenes (such as panoramas in tunnels or those taken at night under artificial lighting) the hue shift will be much more pronounced and often produces very poor results. Thus, this color balance algorithm is only applied to images that is taken under decent lightly conditions. Figure 5 illustrates the effect of this algorithm.

a) Only our extended method

High-Quality Seamless Panoramic Images 45

Chen, S. E., (1995). QuickTime VR-an image-based approach to virtual environment

Chon, J., Fuse, T., Shimizu, E., Shibasaki, R., (2007). Three-dimensional image mo saicking

Chon, J., Kim, H., and Lin, C.S., (2010). Seam-line determination for image mosaicking: A

Chou, J.S., Qian, J., Wu, Z., Schramm, H., (1997). Automatic mosaic and display from a

Coorg, S. and Teller, S., (2000). Sphereical Mosaics with Quaternions and Dense Correlation.

Dijkstra, E.W., (1959). A note on two problems in connexion with graphs. Numerische

Efros, A., Freeman, W., (2001). Image quilting for texture synthesis and transfer. In:

Genner, D.B., (2006). Generalized camera calibration including Fish-eye lenses, Int. J.

Goldman, D. B. (2011). Vignette and exposure calibration and compensation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(12):2276–2288. Hartley, R. and Zisserman, A., (2003). Multiple View Geometry. 2nd Edition, Cambridge

Irani, M. and Anandan, P., (1998). Video Indexing based on mosaic representations.

Irani, M., Hsu, S., and Anandan, P. (1995). Video compression using mosaic representations.

Kass, M., Witkin ,A., and Terzopoulos, D., 1987. Snakes: Active contour models. *International* 

Kerschner, M., (2001). Seam-line detection in colour orthoimage mosaicking by use of twin snakes. ISPRS Journal of Photogrammetry and Remote Sensing, 56 (1):53-64. Kim, S. J. and Pollefeys, M., (2008). Robust Radiometric Calibration and Vignetting

Kim, W.S., Steinke, R.C., Steele, R.D., Ansar, A.I., (2004). Camera Calibration and Stereo

Kumar, R., Anandan, P., Irani, M.,Bergen, J., and Hanna, K. (1995). Representation of scenes

Lee, M.-C., ge Chen, W., lung Bruce Lin, C., Gu, C., Markoc, T., Zabinsky, S. I., and Szeliski,

model. IEEE Transactions on Circuits and Video Technology, 7(1):130-145.

Lewis, J. P., (1995). Fast Normalized Cross-Correlation. Vision Interface*,* pp. 120-123.

Correction. IEEE Transactions on Pattern Analysis and Machine Intelligence,

from collections of images. In IEEE Workshop on Representaions of Visual Scenes,

R. (1997). A layered video object coding system using sprite and affine motion

Journal of Photogrammetry and Remote Sensing, 65(1): 86-92.

International Journal of Computer Vision, 37(3):259-273.

Angeles.

3034:1077-1087.

Mathematik, 1: 269-271.

Techniques. pp. 341-346.

University Press.

30(4):562-576.

Computer Vision, 68(3): 239-266.

Proceedings of the IEEE, 86(5): 905-921.

*Journal of Computer Vision.* 1(4): 321-331.

pp. 10-17, Cambridge, Massachusetts.

Signal Processing: Image Communication, 7:529-552.

Vision Technology Validation Report. JPL D-27015.

(Cybernetics) 37 (4):771-783.

naviation. In ACM SIGGRAPH 1995 Conference Proceedings, pp. 29-38, Los

using multiple projection planes for 3-D visualization of roadside standing buildings. IEEE Transactions on Systems, Man and Cybernetics, Part B

technique minimizing the maximum local mismatch and the global cos., ISPRS

sequence of peripheral angiographic images. Proceedings SPIE Medical Imaging,

Proceedings International Conference on Computer Graphics and Interactive

b) With more Color balance and saturation

Fig. 5. Example panoramas before a) and after b) color balance and saturation adjustment.

## **6. Conclusion**

In this chapter we introduced a novel panorama-stitching algorithm. We described each stage of the algorithmic pipeline in detail, namely a channel-correction algorithm that normalizes luminance and chrominance of the projected images prior stitching; a seam-line finding algorithm that finds the optimal transition between two overlapping images; a color blending algorithm that blends the seam-lines; and finally, a color enhancement algorithm that adjusts the overall contrast, color, and saturation of the panoramic image.

In the algorithm, we particularly address the shortcomings of previous approaches, where poor results are often observed from the color-blending algorithm due to poorly aligned points from the individual images on the seam-lines. Our proposed algorithm removes outliers on the seam-lines due to misalignment by using histograms, color differences, and standard deviation of the color differences. The outliers are replaced by linearly interpolated points using the color differences of neighbor well-aligned seam points.

We have tested the proposed algorithm on millions of outdoor panoramas and it has been proven that the algorithm is robust under most lighting and weather conditions. The algorithm described in this paper is not only bounded to Earthmine's collection system but any system that generates panorama images.

#### **7. References**

Bellman, R., (1957). Dynamic Programming. Princeton Univ. Press.

Brown, M. and Lowe, D., (2007). Automatic Panoramic Image Stitching using Invariant Features. International Journal of Computer Vision, 74(1):59-73.

b) With more Color balance and saturation Fig. 5. Example panoramas before a) and after b) color balance and saturation adjustment.

In this chapter we introduced a novel panorama-stitching algorithm. We described each stage of the algorithmic pipeline in detail, namely a channel-correction algorithm that normalizes luminance and chrominance of the projected images prior stitching; a seam-line finding algorithm that finds the optimal transition between two overlapping images; a color blending algorithm that blends the seam-lines; and finally, a color enhancement algorithm

In the algorithm, we particularly address the shortcomings of previous approaches, where poor results are often observed from the color-blending algorithm due to poorly aligned points from the individual images on the seam-lines. Our proposed algorithm removes outliers on the seam-lines due to misalignment by using histograms, color differences, and standard deviation of the color differences. The outliers are replaced by linearly interpolated

We have tested the proposed algorithm on millions of outdoor panoramas and it has been proven that the algorithm is robust under most lighting and weather conditions. The algorithm described in this paper is not only bounded to Earthmine's collection system but

Brown, M. and Lowe, D., (2007). Automatic Panoramic Image Stitching using Invariant

Features. International Journal of Computer Vision, 74(1):59-73.

that adjusts the overall contrast, color, and saturation of the panoramic image.

points using the color differences of neighbor well-aligned seam points.

Bellman, R., (1957). Dynamic Programming. Princeton Univ. Press.

any system that generates panorama images.

**6. Conclusion** 

**7. References** 


**3** 

*Spain* 

*University of Salamanca* 

**Assessment of Stereoscopic Precision –** 

**Film to Digital Photogrammetric Cameras** 

*2Regional Development Institute, Albacete, University of Castilla-La Mancha* 

*1High Polytechnic School of Avila, Department of Cartographic and Land Engineering,* 

In the generation of the three more important photogrammetric products, digital terrain models (DTM), orthophotos and maps derived from compilation, issues such as direct georeferencing, managing a high volume of data and automatic measurements (matching), are really important. However, within the photogrammetric workflow still exists tasks that remain manual or that require user interaction. The generation of cartography through restitution is one of the tasks that require an intense user interaction despite the great advances in the sector. On the other hand, the constant emergence of new large and medium format digital sensors and their incorporation into large photogrammetric projects has prompted different stakeholders demand and thus a greater need to the knowledge of the precision, correctness and reliability of these sensors, especially when most existing photogrammetric software do not allow a detailed analysis of the results. That is why nowadays is still relevant to consider the photogrammetric precision reached by an operator measuring on a digital image and compare it with that achieved by measuring on a scanned film image, considering always that both dataset are provided with similar input conditions

This chapter aims to address this issue in detail through a study of manual stereoscopic precision measured on original digital image and digitalized film images. After this introduction, Section 2 will address a comprehensive bibliographic review of major studies in this line, from those made in the field of analogical photogrammetry to the modern largeformat digital cameras. Section 3 will describe in detail the main materials and methods used in this study. Section 4 will focus on showing experimental results obtained in three different study cases with a discussion of them. The last section we will highlight the main

Photogrammetric Community has always tested new tools and methods with the aim of guaranteeing that the results achieved are equal to or better than traditional ones. In this

(pixel size, measurement device, expertise of the operator, etc.)

concluding remarks from this study.

**2. State of the art** 

**1. Introduction** 

Benjamín Arias-Pérez1, Diego González-Aguilera1, Javier Gómez-Lahoz1 and David Hernández-López2


## **Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras**

Benjamín Arias-Pérez1, Diego González-Aguilera1, Javier Gómez-Lahoz1 and David Hernández-López2 *1High Polytechnic School of Avila, Department of Cartographic and Land Engineering, University of Salamanca 2Regional Development Institute, Albacete, University of Castilla-La Mancha Spain* 

#### **1. Introduction**

46 Special Applications of Photogrammetry

Leymarie, F. and Levine, M.D., 1993. Tracking deformable objects in the plane using an

Lowe, D.G., (2004). Distinctive Image Features from Scale-Invariant Keypoints. International

McGlone, C., E. Mikhail and J. Bethel, (2004). Manual of Photogrammetry, 5th Edition,

Milgram, D.L., (1975). Computer methods for creating photomosaics. IEEE Transactions on

Milgram, D.L., (1977). Adaptive techniques for photomosaicking. IEEE Transactions on

Poynton, C. ,(2002). Digital Video and HD: Algorithms and Interfaces. Morgan Kaufmann;

Tepdosio, L. and Bender, W., (1993). Salient video stills: Content and context preserved. In

Szeliski, R., (1996). Video mosaics for virtual environments. Computer Graphics

Szeliski, R. and Shum, H.-Y., (1997). Creating full view panoramic image mosaics and

Tiilikainen, N.P., (2007). A Comparative Study of Active Contour Snakes. Copenhagen

Williams, D.J. and Shah, M., (1990). A fast algorithm for active contours. *CVGIP: Image* 

Xiong, Y. and Pulli, K., (2010). Color Matching for High-Quality Panoramic Images on Mobile Phones. IEEE Transactions on Consumer Electronics, 56(4), 2592-2600. Yakimovsky, Y. and Cunningham, R.T., (1978). a system for extracting three-dimensional

Zomet, A., Levin, A., Peleg, S., and Weiss, Y., (2006). Seamless Image Stitching by

texture-mapped models. In ACM SIGGRAPH 1997 Conference Proceedings, pp.

measurements from a stereo pair of TV cameras, Computer Graphics and Image

Minimizing False Edges. *IEEE Transactions on Pattern Analysis and Machine* 

15(6): 617-634.

ASPRS.

Journal of Computer Vision, 60(2):91-110.

ACM Multimedia 93, pp. 39-46, Anaheim, California

Szeliski, R., (2011). Computer Vision: Algorithms and Applications. Springer.

Computers, C-24(11):1113-1119.

Computers, C-26(11):1175-1180.

1st edition (December 2002)

Applications 16 (3): 22-30.

251-258, Los Angeles.

University, Denmark.

Processing, 7, 195-210.

*Intelligence.* 15(4): 969-977.

*Understanding.* 55(1): 14-26.

active contour model. *IEEE Transactions on Pattern Analysis and Machine Intelligence.*

In the generation of the three more important photogrammetric products, digital terrain models (DTM), orthophotos and maps derived from compilation, issues such as direct georeferencing, managing a high volume of data and automatic measurements (matching), are really important. However, within the photogrammetric workflow still exists tasks that remain manual or that require user interaction. The generation of cartography through restitution is one of the tasks that require an intense user interaction despite the great advances in the sector. On the other hand, the constant emergence of new large and medium format digital sensors and their incorporation into large photogrammetric projects has prompted different stakeholders demand and thus a greater need to the knowledge of the precision, correctness and reliability of these sensors, especially when most existing photogrammetric software do not allow a detailed analysis of the results. That is why nowadays is still relevant to consider the photogrammetric precision reached by an operator measuring on a digital image and compare it with that achieved by measuring on a scanned film image, considering always that both dataset are provided with similar input conditions (pixel size, measurement device, expertise of the operator, etc.)

This chapter aims to address this issue in detail through a study of manual stereoscopic precision measured on original digital image and digitalized film images. After this introduction, Section 2 will address a comprehensive bibliographic review of major studies in this line, from those made in the field of analogical photogrammetry to the modern largeformat digital cameras. Section 3 will describe in detail the main materials and methods used in this study. Section 4 will focus on showing experimental results obtained in three different study cases with a discussion of them. The last section we will highlight the main concluding remarks from this study.

#### **2. State of the art**

Photogrammetric Community has always tested new tools and methods with the aim of guaranteeing that the results achieved are equal to or better than traditional ones. In this

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 49

etc. The results show that the smallest ratio *b/h* for DMC camera is compensated with higher precision in stereo measurements, reaching comparable values in all components (*X,Y,H*), and for the two flights. Subsequently, other specific tests were performed to determine the altimetric stereoscopic precision. Points were measured in several contiguous stereo models near to the so-called Von Gruber points. The results showed that the precision in *Z* is worse in the case of digital camera which can be due to the topography, or because that overlap

More recently, Arias and Gomez-Lahoz (2009) conducted an empirical study of stereoscopic precision. Finally, Spreckels et al. (2010) reported the results obtained in the project DGPF "Evaluation of Digital Photogrammetric Camera Systems", within the working group "stereoplotting". Multiple cameras were used in this project: Film Camera Zeiss RMK Top 15; large format digital cameras UltraCamX Vexcel Imaging and Intergraph / ZI DMC, and the combination of four medium format cameras Digi-CAM Quattro IGI. The main outline

This digital sensor is based on a multi-cone matrix, so that four sensors can provide a large format CCD (7.000 x 4.000-pixel, 12 micron pixel size), which capture the image at the same time (synchronous operation) (Fig. 1). Panchromatic cones are slightly inclined, so they have a small common area, which is then used to generate the so-called virtual image size of 13.824 x 7.680 pixels (height x width). The color information is obtained from four CCDs with a smaller size, but that capture the entire scene. A whole high-resolution color image

Fig. 1. Left: images of the four cones (solid line), and the virtual image (dashed line).

The UltraCamD camera design is based on the use of 9 sensors CCDs (each of 4.000 x 2.700 pixels) with pixel size of 9 microns (Fig. 2). Each cone has the same field, but the CCDs are

of the project show a precision better than 0,9 pixel in *XY* and 1,4 pixel in *Z*.

can be obtained automatically using pan-sharpening method.

areas are different in digital and film images.

**3. Materials and methods** 

Right: DMC digital camera.

**3.1.2 UltraCam** 

**3.1.1 DMC**

**3.1 Photogrammetric sensors: Digital** 

sense, one of the first tests performed was to check the precision of the film cameras that finally replaced the plate cameras (Grifoni, 1949). From these studies important conclusions were derived confirming the superiority of the plate cameras in terms of precision. However, for small and medium scales, film cameras were fully reliable. Later, Lehmann (1955) in the framework of the Organisation Europeenne d'Etudes Photgrammetriques Experimental OEEPE investigates the precision of restitution based on several factors such as the field of view, photographic material (film or plate), the method of measuring, the user, the type of instrument (plotter, etc.). To develop this work, Switzerland offered a test field located in the Rhine valley near Oberriet, which covers an area of 1,5 x1,5 km with 600 control marks with planimetric and altimetric coordinates. Furthermore, in an area of 4x4 km points were spaced every 500 m. This trend was followed by a total of 7 schools in different countries (1 in Switzerland, Austria, Italy and the Netherlands and three in Germany), using different restitution instruments. Some of the centers that performed the measurements published their reports including aspects such as: times of measurement, methods of operation, problems encountered and their solutions, and even the measurements made by the operators (Gotthardt, 1955; Brucklacher, 1955, and Förstner, 1955, Commission C., Ablauf der Messung OEEPE Deroulment Zeitlicher chronologique des observations, 1955), while the results were discussed in later publications (Gotthardt, 1958, and Stickler, 1959; Stoch, 1961). At the same time as the OEEPE began its work, the International Society of Photogrammetry, ISP, showed their concerns about the restitution of cadastral maps (Härry, 1954), land consolidation (Härry, 1955), establishing plans for urban areas (Dubuisson, 1955) and small scale mapping (Blachut, 1955). In 1975, the analysis of planimetric and altimetric precision on the restitution was revived again but this time through the angular field factor. Stark (1975) used a total of 4 cameras with varying focal lengths and taking images at different flight altitudes. A total of 23 sets of points distributed regularly around the stereoscopic model were measured and analyzed for each stereoscopic model. The study followed that the altimetric mean error decreases continuously as the image angle increases, while the planimetric mean error is practically independent of this angle. On the other hand, another aspect that has provided a particular interest from the International Photogrammetric Community is the comparison of stereoscopic and monoscopic measurements (O´Connor, 1967; Karara, 1967; Trinder, 1986). To this end, manual stereoscopic measurements involving human operators were also developed in some tests to determine the stereoscopic accuracy achieved by restitution operators (Zorn, 1965, Krakau, 1970).

With the advent of large format digital cameras in 2000, studies comparing the analogdigital technology have become inevitable. Dörstel (2003) analyzed the precision of largeformat digital camera DMC using four flights at different heights while preserving the ratio of base/height (*b/h*). Dörstel performed 10 measurements of each point and use different types of points that allow him to contrast the empirical and theoretical precision. Alamús et al. (2005) contrasted the ground coordinates (measured with GPS) with those obtained by stereoscopic measurement, and making these measurements with film camera, RC30 (*b/h*=0,6), and digital, DMC (*b/h*=0,3). Use 11 points in a flight with a GSD of 0,08 m (Amposta block), and 21 points on another flight of 0,5 m (Caro block). It provides data on how many times are measured points, or if they are homogeneously distributed in the model, or classified in some way, how many operators are involved in the measurements, etc. The results show that the smallest ratio *b/h* for DMC camera is compensated with higher precision in stereo measurements, reaching comparable values in all components (*X,Y,H*), and for the two flights. Subsequently, other specific tests were performed to determine the altimetric stereoscopic precision. Points were measured in several contiguous stereo models near to the so-called Von Gruber points. The results showed that the precision in *Z* is worse in the case of digital camera which can be due to the topography, or because that overlap areas are different in digital and film images.

More recently, Arias and Gomez-Lahoz (2009) conducted an empirical study of stereoscopic precision. Finally, Spreckels et al. (2010) reported the results obtained in the project DGPF "Evaluation of Digital Photogrammetric Camera Systems", within the working group "stereoplotting". Multiple cameras were used in this project: Film Camera Zeiss RMK Top 15; large format digital cameras UltraCamX Vexcel Imaging and Intergraph / ZI DMC, and the combination of four medium format cameras Digi-CAM Quattro IGI. The main outline of the project show a precision better than 0,9 pixel in *XY* and 1,4 pixel in *Z*.

## **3. Materials and methods**

#### **3.1 Photogrammetric sensors: Digital**

#### **3.1.1 DMC**

48 Special Applications of Photogrammetry

sense, one of the first tests performed was to check the precision of the film cameras that finally replaced the plate cameras (Grifoni, 1949). From these studies important conclusions were derived confirming the superiority of the plate cameras in terms of precision. However, for small and medium scales, film cameras were fully reliable. Later, Lehmann (1955) in the framework of the Organisation Europeenne d'Etudes Photgrammetriques Experimental OEEPE investigates the precision of restitution based on several factors such as the field of view, photographic material (film or plate), the method of measuring, the user, the type of instrument (plotter, etc.). To develop this work, Switzerland offered a test field located in the Rhine valley near Oberriet, which covers an area of 1,5 x1,5 km with 600 control marks with planimetric and altimetric coordinates. Furthermore, in an area of 4x4 km points were spaced every 500 m. This trend was followed by a total of 7 schools in different countries (1 in Switzerland, Austria, Italy and the Netherlands and three in Germany), using different restitution instruments. Some of the centers that performed the measurements published their reports including aspects such as: times of measurement, methods of operation, problems encountered and their solutions, and even the measurements made by the operators (Gotthardt, 1955; Brucklacher, 1955, and Förstner, 1955, Commission C., Ablauf der Messung OEEPE Deroulment Zeitlicher chronologique des observations, 1955), while the results were discussed in later publications (Gotthardt, 1958, and Stickler, 1959; Stoch, 1961). At the same time as the OEEPE began its work, the International Society of Photogrammetry, ISP, showed their concerns about the restitution of cadastral maps (Härry, 1954), land consolidation (Härry, 1955), establishing plans for urban areas (Dubuisson, 1955) and small scale mapping (Blachut, 1955). In 1975, the analysis of planimetric and altimetric precision on the restitution was revived again but this time through the angular field factor. Stark (1975) used a total of 4 cameras with varying focal lengths and taking images at different flight altitudes. A total of 23 sets of points distributed regularly around the stereoscopic model were measured and analyzed for each stereoscopic model. The study followed that the altimetric mean error decreases continuously as the image angle increases, while the planimetric mean error is practically independent of this angle. On the other hand, another aspect that has provided a particular interest from the International Photogrammetric Community is the comparison of stereoscopic and monoscopic measurements (O´Connor, 1967; Karara, 1967; Trinder, 1986). To this end, manual stereoscopic measurements involving human operators were also developed in some tests to determine the stereoscopic accuracy achieved by restitution operators (Zorn,

With the advent of large format digital cameras in 2000, studies comparing the analogdigital technology have become inevitable. Dörstel (2003) analyzed the precision of largeformat digital camera DMC using four flights at different heights while preserving the ratio of base/height (*b/h*). Dörstel performed 10 measurements of each point and use different types of points that allow him to contrast the empirical and theoretical precision. Alamús et al. (2005) contrasted the ground coordinates (measured with GPS) with those obtained by stereoscopic measurement, and making these measurements with film camera, RC30 (*b/h*=0,6), and digital, DMC (*b/h*=0,3). Use 11 points in a flight with a GSD of 0,08 m (Amposta block), and 21 points on another flight of 0,5 m (Caro block). It provides data on how many times are measured points, or if they are homogeneously distributed in the model, or classified in some way, how many operators are involved in the measurements,

1965, Krakau, 1970).

This digital sensor is based on a multi-cone matrix, so that four sensors can provide a large format CCD (7.000 x 4.000-pixel, 12 micron pixel size), which capture the image at the same time (synchronous operation) (Fig. 1). Panchromatic cones are slightly inclined, so they have a small common area, which is then used to generate the so-called virtual image size of 13.824 x 7.680 pixels (height x width). The color information is obtained from four CCDs with a smaller size, but that capture the entire scene. A whole high-resolution color image can be obtained automatically using pan-sharpening method.

Fig. 1. Left: images of the four cones (solid line), and the virtual image (dashed line). Right: DMC digital camera.

#### **3.1.2 UltraCam**

The UltraCamD camera design is based on the use of 9 sensors CCDs (each of 4.000 x 2.700 pixels) with pixel size of 9 microns (Fig. 2). Each cone has the same field, but the CCDs are

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 51

The following precision analysis have been established in this study: *XY* precision, *Z*  precision, relationship between planimetric and altimetric precision, comparison of means, analysis of agreement, personal equation and relative relief. Measurements were taken with the analytical plotter Leica SD2000 and the photogrammetric digital workstation Digi3D.

The theoretical *XY* precision is directly proportional to the scale of the image, *mb*, and the

The precision of the measure on the image plane, *σi* usually ± 6µm (Kraus, 1993) can be expressed in terms of pixel size, *px*, as a fraction (*1/k*). This value *k* can be considered as an

> *i xy b px px σ σ <sup>m</sup>*

Moreover, the product of pixel size for image scale provides the pixel size in the ground,

Thus, the precision observed in *XY* can be expressed as a fraction of the *GSD*. Once the empirical planimetric standard deviation, *SXY*, is obtained, the empirical measurement precision of the image, *Si* is get. From *Si* the value of *k* can be computed which is a good

*xy i b i*

It is important to note that *σ* expresses the theoretical precision while *S* expresses the

The theoretical precision in *Z*, *σZ*, depends on the precision of measurement of the horizontal parallax, *σPx*, the image scale, *mb*, and the ratio height/base, *H/B* (Kraus, 1993; Schiewe, 1995):

*H*

*z Px b*

*σ σ m \**

*S Sm S*

*px px S k k S*

*i*

From this expression it follows that the higher *k*, the better precision.

empirical standard deviation which is determined from measurements.

*b xy GSD GSD px m <sup>σ</sup>*

*k*

*xy*

*S*

*m*

*b*

*i*

*xy i b m* (1)

*k k* (2)

(3)

*<sup>B</sup>* (5)

(4)

**3.3 Methods** 

**3.3.1 Theoretical precision in planimetry** 

indicator of measurement precision in the image.

measurement precision of the image, *σi*:

*GSD (Ground Sample Distance)*:

value of comparison between cameras.

**3.3.2 Theoretical precision in altimetry** 

arranged in various positions within the focal plane. The idea is that not all cones are exposed to the same time, but at the same place (operation syntopic). A cone acts as a master cone, and that defines the image coordinate system. The other images are paired as secondary parts in this main frame defined by the master cone. The final image has a single central perspective and has a size of 11.500 x 7.500 pixels.

Fig. 2. Left: 9 CCDs that form the complete image of the camera UltraCam. Right: camera UltraCamD.

#### **3.2 Photogrammetric sensors: Film**

In this case the camera used was the Leica RC30 camera, widely used in aerial photogrammetry industry. It allows two settings: 15/4 UAG-S with a focal length of 15 cm (the one used in the measurements), and 30/4 NAT-S with a focal length of 30 cm. In both cases, the format corresponds to a film width of 240 mm. But due to the intrinsic characteristics of these cameras (fiducial marks, marginal information) the effective width is smaller.

Fig. 3. Left: diagram of Leica RC30 with 15/4 UGS (in PAV30 mount). Right: Leica RC 30 camera (in PAV30 mount) and the NSF3-E Navigation Sight.

The scanner used to convert the film to digital format was Vexcel UltraScan 5000.

#### **3.3 Methods**

50 Special Applications of Photogrammetry

arranged in various positions within the focal plane. The idea is that not all cones are exposed to the same time, but at the same place (operation syntopic). A cone acts as a master cone, and that defines the image coordinate system. The other images are paired as secondary parts in this main frame defined by the master cone. The final image has a single

central perspective and has a size of 11.500 x 7.500 pixels.

Right: camera UltraCamD.

**3.2 Photogrammetric sensors: Film** 

Fig. 2. Left: 9 CCDs that form the complete image of the camera UltraCam.

In this case the camera used was the Leica RC30 camera, widely used in aerial photogrammetry industry. It allows two settings: 15/4 UAG-S with a focal length of 15 cm (the one used in the measurements), and 30/4 NAT-S with a focal length of 30 cm. In both cases, the format corresponds to a film width of 240 mm. But due to the intrinsic characteristics of

these cameras (fiducial marks, marginal information) the effective width is smaller.

Fig. 3. Left: diagram of Leica RC30 with 15/4 UGS (in PAV30 mount). Right: Leica RC 30

The scanner used to convert the film to digital format was Vexcel UltraScan 5000.

camera (in PAV30 mount) and the NSF3-E Navigation Sight.

The following precision analysis have been established in this study: *XY* precision, *Z*  precision, relationship between planimetric and altimetric precision, comparison of means, analysis of agreement, personal equation and relative relief. Measurements were taken with the analytical plotter Leica SD2000 and the photogrammetric digital workstation Digi3D.

#### **3.3.1 Theoretical precision in planimetry**

The theoretical *XY* precision is directly proportional to the scale of the image, *mb*, and the measurement precision of the image, *σi*:

$$
\sigma\_{xy} = \sigma\_i \ast m\_b \tag{1}
$$

The precision of the measure on the image plane, *σi* usually ± 6µm (Kraus, 1993) can be expressed in terms of pixel size, *px*, as a fraction (*1/k*). This value *k* can be considered as an indicator of measurement precision in the image.

$$
\sigma\_i = \frac{p\mathbf{x}}{k} \Longrightarrow \sigma\_{xy} = \frac{p\mathbf{x}}{k} \ast m\_b \tag{2}
$$

Moreover, the product of pixel size for image scale provides the pixel size in the ground, *GSD (Ground Sample Distance)*:

$$\text{GSD} = p \text{x} \ast m\_b \Rightarrow \sigma\_{xy} = \frac{\text{GSD}}{k} \tag{3}$$

Thus, the precision observed in *XY* can be expressed as a fraction of the *GSD*. Once the empirical planimetric standard deviation, *SXY*, is obtained, the empirical measurement precision of the image, *Si* is get. From *Si* the value of *k* can be computed which is a good value of comparison between cameras.

$$\begin{aligned} S\_{xy} = S\_i \* m\_b \Longrightarrow S\_i = \frac{S\_{xy}}{m\_b} \\ S\_i = \frac{p\chi}{k} \Longrightarrow k = \frac{p\chi}{S\_i} \end{aligned} \tag{4}$$

From this expression it follows that the higher *k*, the better precision.

It is important to note that *σ* expresses the theoretical precision while *S* expresses the empirical standard deviation which is determined from measurements.

#### **3.3.2 Theoretical precision in altimetry**

The theoretical precision in *Z*, *σZ*, depends on the precision of measurement of the horizontal parallax, *σPx*, the image scale, *mb*, and the ratio height/base, *H/B* (Kraus, 1993; Schiewe, 1995):

$$
\sigma\_z = \sigma\_{p\_{\mathcal{X}}} \ast m\_b \ast \frac{H}{B} \tag{5}
$$

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 53

*z D DD*

The ratios c/b are known for a camera, having determined their longitudinal overlap. At first, it is assumed that *k*, an indicator of precision is the same for both cameras. Then, it is determined the empirical *Z* precision of a digital camera (*SZD*), the empirical *Z* precision of

> *z z D D*

Since *c/b* ratios are known for the two cameras, *k* which marks the measurement precision in the image plane is different for the two cameras (the higher *k*, the better precision). The

 Theoretical ratio greater than the empirical one. As a result, the precision achieved in digital camera *Z* is greater than expected. Therefore, it is assumed that *kD* is greater than *kA*. This means that, somehow, the quality of the digital camera is better than can be

> *z z D D*

 Theoretical ratio less than the empirical one. It is the opposite of the previous case, so the *Z* precision achieved in digital camera is smaller than expected. As a result the

*s*

*s*

 *z z D D*

 *z z D D*

*z z A A*

There are no significant differences between the theoretical and the empirical ratio.

In the case of significant differences between the ratio of theoretical and empirical precision, a significant change in the quality of stereoscopic measurement is obtained, since the geometric basis of the ratio *c/b* is indisputable. Therefore, only *k* is an indicator of measurement precision in the image plane. If the measurements are made under the same conditions, the differences can be attributed not to the measurement in the image but the image quality itself. By and large, if significant or important differences are obtained between the empirical and theoretical ratio, a significant difference in image quality could

*z z A A*

*σ s*

*σ s*

*σ s*

*z z A A*

*z z A A*

*σ s*

*σ s*

*σ k b kb σ GSD c 1 c*

*GSD c 1 c*

*k b kb*

*A A*

*D A*

*D A*

*D A*

*D A*

*k k*

*k k*

(13)

(14)

*<sup>σ</sup> <sup>s</sup>* (15)

*k k*

(12)

*k k*

(11)

 

a film camera (*SZA*) and their ratio:

following cases can be obtained:

expected theoretically.

be the reason.

*z A*

 

 

quality of the digital camera is worse than the film camera.

There is no difference in the stereoscopic precision.

 

 

The measurement precision of the horizontal parallax can be replaced by the measurement precision in the image plane, *σi*. The ratio height/base can be replaced by the ratio focal/base (*c/b*), then:

$$
\sigma\_x = \sigma\_i \ast m\_b \ast \frac{c}{b} \tag{6}
$$

The precision of the measure in the image plane, *σi*, can be expressed in terms of pixel size as a fraction of it. In this case, it is assigned a value of *1/k*:

$$\begin{aligned} \sigma\_i &= \frac{p\chi}{k} \\ \sigma\_z &= \frac{p\chi}{k} \* m\_b \* \frac{c}{b} \end{aligned} \tag{7}$$

Moreover, the product of pixel size for image scale provides the pixel size in the ground, *GSD*:

$$\begin{aligned} \mathbf{GSD} &= p \mathbf{x} \ast m\_b \\ \sigma\_z &= \frac{\mathbf{GSD}}{k} \ast \frac{\mathbf{c}}{b} \end{aligned} \tag{8}$$

As can be seen, precision in *Z* can also be expressed in terms of the *GSD*, the focal length and photobase. This is a function of longitudinal overlap, *RL*, together with the image width:

$$b = (1 - R\_L) \* width\tag{9}$$

The value *c/b* affects proportionally the *Z* precision, so that the higher the value of this ratio less precision in *Z*.


Table 1. Ratios *c/b* or various photogrammetric aerial cameras, calculated for a longitudinal overlap of 60%.

A comparison of details leads to the study of the ratio of precisions with two different cameras (*D*: Digital, UltraCamD or DMC; *A*: Analog-Film):

$$\frac{\left(\sigma\_z\right)\_D}{\left(\sigma\_z\right)\_A} \tag{10}$$

The comparison must be made by measurements from similar flights, which have the same *GSD*:

The measurement precision of the horizontal parallax can be replaced by the measurement precision in the image plane, *σi*. The ratio height/base can be replaced by the ratio

\* *zib*

The precision of the measure in the image plane, *σi*, can be expressed in terms of pixel size as

*z b*

Moreover, the product of pixel size for image scale provides the pixel size in the ground,

*GSD px m GSD c <sup>σ</sup> \* k b*

As can be seen, precision in *Z* can also be expressed in terms of the *GSD*, the focal length and photobase. This is a function of longitudinal overlap, *RL*, together with the image width:

The value *c/b* affects proportionally the *Z* precision, so that the higher the value of this ratio

Film 150 220 88 1,704 DMC 120 95 38 3,158 UltraCamD 100 67,5 27 3,704 UltraCamX 100 68,4 27,36 3,655

Table 1. Ratios *c/b* or various photogrammetric aerial cameras, calculated for a longitudinal

A comparison of details leads to the study of the ratio of precisions with two different

 *z D z A*

The comparison must be made by measurements from similar flights, which have the same

 

cameras (*D*: Digital, UltraCamD or DMC; *A*: Analog-Film):

*b*

*i*

*z*

*Camera c (mm) Width (mm) b (RL = 60%)*

*px <sup>σ</sup> k px c σ m \* k b*

a fraction of it. In this case, it is assigned a value of *1/k*:

*c m b*

(6)

(8)

(1 ) *<sup>L</sup> b R width* (9)

(7)

*(mm) c/b* 

(10)

focal/base (*c/b*), then:

*GSD*:

less precision in *Z*.

overlap of 60%.

*GSD*:

$$\frac{\left(\sigma\_z\right)\_D}{\left(\sigma\_z\right)\_A} = \frac{\left(\frac{\overline{SSD}}{k} \* \frac{c}{b}\right)\_D}{\left(\frac{\overline{SSD}}{k} \* \frac{c}{b}\right)\_A} = \frac{\left(\frac{1}{k} \* \frac{c}{b}\right)\_D}{\left(\frac{1}{k} \* \frac{c}{b}\right)\_A} \tag{11}$$

The ratios c/b are known for a camera, having determined their longitudinal overlap. At first, it is assumed that *k*, an indicator of precision is the same for both cameras. Then, it is determined the empirical *Z* precision of a digital camera (*SZD*), the empirical *Z* precision of a film camera (*SZA*) and their ratio:

$$\frac{\left(\sigma\_z\right)\_D}{\left(\sigma\_z\right)\_A} \neq \frac{\left(s\_z\right)\_D}{\left(s\_z\right)\_A} \Rightarrow k\_D \neq k\_A \tag{12}$$

Since *c/b* ratios are known for the two cameras, *k* which marks the measurement precision in the image plane is different for the two cameras (the higher *k*, the better precision). The following cases can be obtained:

 Theoretical ratio greater than the empirical one. As a result, the precision achieved in digital camera *Z* is greater than expected. Therefore, it is assumed that *kD* is greater than *kA*. This means that, somehow, the quality of the digital camera is better than can be expected theoretically.

$$\frac{\left(\sigma\_z\right)\_D}{\left(\sigma\_z\right)\_A} > \frac{\left(s\_z\right)\_D}{\left(s\_z\right)\_A} \Rightarrow k\_D > k\_A \tag{13}$$

 Theoretical ratio less than the empirical one. It is the opposite of the previous case, so the *Z* precision achieved in digital camera is smaller than expected. As a result the quality of the digital camera is worse than the film camera.

$$\frac{\left(\sigma\_z\right)\_D}{\left(\sigma\_z\right)\_A} < \frac{\left(s\_z\right)\_D}{\left(s\_z\right)\_A} \Rightarrow k\_D < k\_A \tag{14}$$

 There are no significant differences between the theoretical and the empirical ratio. There is no difference in the stereoscopic precision.

$$\frac{\left(\sigma\_z\right)\_D}{\left(\sigma\_z\right)\_A} \approx \frac{\left(s\_z\right)\_D}{\left(s\_z\right)\_A} \Rightarrow k\_D \approx k\_A \tag{15}$$

In the case of significant differences between the ratio of theoretical and empirical precision, a significant change in the quality of stereoscopic measurement is obtained, since the geometric basis of the ratio *c/b* is indisputable. Therefore, only *k* is an indicator of measurement precision in the image plane. If the measurements are made under the same conditions, the differences can be attributed not to the measurement in the image but the image quality itself. By and large, if significant or important differences are obtained between the empirical and theoretical ratio, a significant difference in image quality could be the reason.

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 55

Coefficient, *ICC* (Fleiss, 1986). Among the various *ICC* estimators, two of them are used in this paper: *ICCC* to measure consistency, and *ICCA* to measure absolute agreement (Doménech, 2005). Both *ICCC* and *r* share the fact that are unable to discriminate a constant difference between two sets of observations, but *ICCC* is sensitive to proportional differences and *r*, no. The *ICCA* senses any difference between sets as an inconsistency, independently if this difference is constant, proportional or any other. The lower the value of *ICCA*, the larger

Considering *i* subjects and *j* values for these subjects, in order to calculate the *ICC*, the total variation (*SST*) of the *i* \**j* observations must be decomposed in three terms: the variation due to subjects (*SSS*), the variation due to evaluators (*SSE*) and the residual

> *T S E R*

*i (MSS MSR) ICC*

*MSS (j 1) MSR*

*MSS MSR ICC*

*df i\*j 1 df i 1 df j 1 df (i-1)\*(j 1)*

 

Afterwards, the mean values (*MS*) are computed dividing the sum of squares (SS) by their corresponding degrees of freedom. The *ICCA* and the *ICCC* are calculated according to the

In Figure 4 four different cases are outlined, showing a perfect linear relationship (*r=1*). In the top left, a case total agreement is showed, which is obtained when the valuations A and B are identical, and therefore the two evaluators values are 1. In the upper right, a case of constant disagreement is depicted, in which the consistency is 1 whereas the total agreement decreases. At the bottom, are showed the cases of proportional disagreement (left) and proportional and constant disagreement (right), where the difference between evaluators

While consistency is behaving as an index of additivity the correlation coefficient is shown as an index of linearity. *ICCC* and *r* both have in common a lack of sensitivity to contain a constant difference between two sets of observations, but differ in that *ICCC* is sensitive to differences of proportional representation. The *ICCA* provides any difference between measures with disagreement, whether they are of constant rate, proportional or otherwise.

*i MSS j MSE (i j i j) MSR*

*SST SSS SSE SSR* (18)

(19)

(20)

the disagreement is.

variation (*SSR*):

following expressions:

can be observed.

With the following degrees of freedom (*df*):

*A*

*C*

The lower the value of *ICCA*, the more disagreement exists.

#### **3.3.3 Relationship between planimetric and altimetric precision**

The ratio between the planimetric and altimetric precisions obtained, *SXY/SZ*, compared with the ratio *B/H*, expresses the variation between the planimetric and altimetric precisions. Since, theoretically, this ratio is unity:

$$\frac{\begin{aligned} \text{S}\_{xy} & \text{} \\ \frac{\text{S}\_{z}}{\sigma\_{xy}} &= \frac{\text{S}\_{xy} \cdot \text{m}\_{b}}{(\sigma\_{i} \, \, ^\ast m\_{b}) \Big\langle \sigma\_{i} \, ^\ast m\_{b} \, ^\ast \text{H} \, / \, \text{B} \rangle} = \frac{\text{S}\_{xy}}{\text{B} \not\, ^\ast \text{H}} = 1 \end{aligned} \tag{16}$$

Therefore, if the value for this ratio is less than one, this would indicate that planimetric precision is better than the altimetric precision. Otherwise (greater than one), this camera would show worse results in altimetry than planimetry.

#### **3.3.4 Comparison of the averages of within-subject measures**

Since we want to establish whether there are differences between the cameras, the right thing is to compare the results based on an operator individually. Keep in mind that the variability between operators may be greater than the variability between the cameras, each operator must be studied independently. In fact, just applying a simple hypothesis test corresponding to the homogeneity of variances, it is possible to observe that there are no significant differences between cameras while there are differences between operators. So, it would be wrong to use the *t* comparison test for the assessment of the precision of the two cameras, since there are not different operators, but the same operator measures with two different cameras. Therefore, we apply the t test comparison of the averages of withinsubject measures, which should follow a normal distribution with mean zero. The null hypothesis, *H0*, establishes that the different cameras do not affect the precision obtained. The alternative hypothesis, *Ha*, establishes that the different cameras modify the precision, and thus the mean difference is not zero.

$$\begin{aligned} \, \_0H\_0: \mu d &= 0\\ \, \_0H\_a: \mu d &\neq 0 \end{aligned} \tag{17}$$

The statistical test is constructed around the null hypothesis. It consists in comparing the difference average with the theoretical average, which is zero and represents no change. If calculating the difference average, the value obtained in the sample is not zero, the null hypothesis is rejected. That is, if there are differences between the observed and the null hypothesis, it is accepted that there are differences between cameras. Considering that the sample is large (n> 30) it is assumed that the dataset follows a normal distribution.

#### **3.3.5 Analysis of agreement**

The Pearson correlation coefficient, *r*, is usually applied to assess the concordance between the results obtained with different instruments. But this strategy would be incorrect in the present case since this coefficient renders the intensity of the linear association between two measures, and not the degree of agreement between them (Bland & Altman, 1986). A more correct strategy to measure the concordance is to calculate the Intraclass Correlation Coefficient, *ICC* (Fleiss, 1986). Among the various *ICC* estimators, two of them are used in this paper: *ICCC* to measure consistency, and *ICCA* to measure absolute agreement (Doménech, 2005). Both *ICCC* and *r* share the fact that are unable to discriminate a constant difference between two sets of observations, but *ICCC* is sensitive to proportional differences and *r*, no. The *ICCA* senses any difference between sets as an inconsistency, independently if this difference is constant, proportional or any other. The lower the value of *ICCA*, the larger the disagreement is.

Considering *i* subjects and *j* values for these subjects, in order to calculate the *ICC*, the total variation (*SST*) of the *i* \**j* observations must be decomposed in three terms: the variation due to subjects (*SSS*), the variation due to evaluators (*SSE*) and the residual variation (*SSR*):

$$SST = SSS + SSE + SSR\tag{18}$$

With the following degrees of freedom (*df*):

54 Special Applications of Photogrammetry

The ratio between the planimetric and altimetric precisions obtained, *SXY/SZ*, compared with the ratio *B/H*, expresses the variation between the planimetric and altimetric precisions.

> *xy xy xy z zz*

Therefore, if the value for this ratio is less than one, this would indicate that planimetric precision is better than the altimetric precision. Otherwise (greater than one), this camera

Since we want to establish whether there are differences between the cameras, the right thing is to compare the results based on an operator individually. Keep in mind that the variability between operators may be greater than the variability between the cameras, each operator must be studied independently. In fact, just applying a simple hypothesis test corresponding to the homogeneity of variances, it is possible to observe that there are no significant differences between cameras while there are differences between operators. So, it would be wrong to use the *t* comparison test for the assessment of the precision of the two cameras, since there are not different operators, but the same operator measures with two different cameras. Therefore, we apply the t test comparison of the averages of withinsubject measures, which should follow a normal distribution with mean zero. The null hypothesis, *H0*, establishes that the different cameras do not affect the precision obtained. The alternative hypothesis, *Ha*, establishes that the different cameras modify the precision,

> <sup>0</sup> : 0 : 0 *<sup>a</sup> H μd H μd*

The statistical test is constructed around the null hypothesis. It consists in comparing the difference average with the theoretical average, which is zero and represents no change. If calculating the difference average, the value obtained in the sample is not zero, the null hypothesis is rejected. That is, if there are differences between the observed and the null hypothesis, it is accepted that there are differences between cameras. Considering that the

The Pearson correlation coefficient, *r*, is usually applied to assess the concordance between the results obtained with different instruments. But this strategy would be incorrect in the present case since this coefficient renders the intensity of the linear association between two measures, and not the degree of agreement between them (Bland & Altman, 1986). A more correct strategy to measure the concordance is to calculate the Intraclass Correlation

sample is large (n> 30) it is assumed that the dataset follows a normal distribution.

(17)

*SSS*

( \* \* /)

*m HB H*

(16)

*SSS <sup>1</sup> m B*

**3.3.3 Relationship between planimetric and altimetric precision** 

(\* )

*i b z*

*xy i b*

**3.3.4 Comparison of the averages of within-subject measures** 

would show worse results in altimetry than planimetry.

Since, theoretically, this ratio is unity:

and thus the mean difference is not zero.

**3.3.5 Analysis of agreement** 

$$\begin{aligned} df\_{\,\,\,T} &= \,\, ^\*j - 1\\ df\_S &= i - 1\\ df\_E &= j - 1\\ df\_R &= (i - 1)\*(j - 1) \end{aligned} \tag{19}$$

Afterwards, the mean values (*MS*) are computed dividing the sum of squares (SS) by their corresponding degrees of freedom. The *ICCA* and the *ICCC* are calculated according to the following expressions:

$$\begin{aligned}ICC\_A &= \frac{i \ast (MSS - MSR)}{i \ast MSS + j \ast MSE + (i \ast j - i - j) \ast MSR} \\\\ICC\_C &= \frac{MSS - MSR}{MSS + (j - 1) \ast MSR} \end{aligned} \tag{20}$$

In Figure 4 four different cases are outlined, showing a perfect linear relationship (*r=1*). In the top left, a case total agreement is showed, which is obtained when the valuations A and B are identical, and therefore the two evaluators values are 1. In the upper right, a case of constant disagreement is depicted, in which the consistency is 1 whereas the total agreement decreases. At the bottom, are showed the cases of proportional disagreement (left) and proportional and constant disagreement (right), where the difference between evaluators can be observed.

While consistency is behaving as an index of additivity the correlation coefficient is shown as an index of linearity. *ICCC* and *r* both have in common a lack of sensitivity to contain a constant difference between two sets of observations, but differ in that *ICCC* is sensitive to differences of proportional representation. The *ICCA* provides any difference between measures with disagreement, whether they are of constant rate, proportional or otherwise. The lower the value of *ICCA*, the more disagreement exists.

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 57

where *i* is the operator, *A* is the average of all measurements of all operators to a certain

Logically, only those measures which imply the same operators observing the same points

The relief range of an area, *∆H*, can be defined as the difference between the maximum and

While the relative relieve is the ratio between the relief of an area and flight height, *H*, from

*<sup>H</sup> Relative relieve*

This relative relief indicates a degree of three-dimensional enhancement. The higher the relative relieve is the better is perceived the relief, but it also implies a greater variability in the positioning, and if the operator does not make good measurements, this would explain a larger standard deviation in a series of repeated measures. However, too large relative relieve should get worse the altimetric positioning due to effects of perspective, i.e., there must be an optimum value for which a greater precision is achieved in the altimetric positioning. Anyway, for the flights used in this work, it can be remarked that the higher the

Considering two different flight heights for the two cameras, *HA≠HD*, for the same study area, *∆HA=∆HD*, the relationship between the relieves on the two flights is simplified to the

*D*

Then the ratio of heights of flight gives the variation of relative relieve between the two

 *HD/HA > 1*: the relieve is noticed worse with the digital camera than with the film camera. This is the situation that occurs in most cases, since for the same *GSD*, *HD>HA*. *HD/HA < 1*: the relieve is noticed better with the digital camera than with the film camera.

This section shows and analyzes the experimental results obtained with three different flights establishing a comparison between the stereoscopic precision obtained with film and

*H*

*A D*

*H H H H H* 

*A*

*H*

point, and *ai* is the average of all measurements at that point made by the operator *i*.

with film and digital camera will be presented and analyzed.

relative relieve, the better the stereoscopic positioning.

relationship between the two flight altitudes:

cameras. The following values can be obtained:

**4. Experimental results** 

*HD/HA = 1*: the relieve is noticed equal for both cameras.

**3.4 Relative relieves** 

which the images were taken:

minimum height:

*Eq A a i i* (21)

*H Zmax Zmin* (22)

(23)

(24)

Fig. 4. Consistency, total agreement and linear correlation coefficient (Doménech, 2005).

#### **3.3.6 Personal equation**

As is known, errors can be classified as instrumental, natural or personal (Wolf & Ghilani, 1997):


Aerial photogrammetry has always coped with the latter type of error in terms of personal equation, assuming this error as a systematic trend of each operator.

Stereoscopic measurements in the personal equation are used to compare film and digital cameras. It is expected that an operator who carries out the measures under the real situation of the point with film camera, do the same with digital camera. Even the relationship between personal equations of film and digital camera should reflect the theoretical ratio (*b/h*) between the two cameras.

A value for the "real" situation of a point can be calculated as the average of all the observations made by all operators at that point. The personal equation of each operator, *Eqi*, can be obtained by the difference of his personal measure with the global measure of all operators (real situation):

$$Eq\_i = A - a\_i \tag{21}$$

where *i* is the operator, *A* is the average of all measurements of all operators to a certain point, and *ai* is the average of all measurements at that point made by the operator *i*.

Logically, only those measures which imply the same operators observing the same points with film and digital camera will be presented and analyzed.

#### **3.4 Relative relieves**

56 Special Applications of Photogrammetry

Fig. 4. Consistency, total agreement and linear correlation coefficient (Doménech, 2005).

As is known, errors can be classified as instrumental, natural or personal (Wolf &

Instrumental errors: caused by imperfections in the construction or adjustment of the

Personal errors: due to human limitations. The size of this error depends on personal

Aerial photogrammetry has always coped with the latter type of error in terms of personal

Stereoscopic measurements in the personal equation are used to compare film and digital cameras. It is expected that an operator who carries out the measures under the real situation of the point with film camera, do the same with digital camera. Even the relationship between personal equations of film and digital camera should reflect the

A value for the "real" situation of a point can be calculated as the average of all the observations made by all operators at that point. The personal equation of each operator, *Eqi*, can be obtained by the difference of his personal measure with the global measure of all

Natural errors: caused by the variation of environmental conditions.

equation, assuming this error as a systematic trend of each operator.

**3.3.6 Personal equation** 

instruments.

operators (real situation):

ability and skill of each operator.

theoretical ratio (*b/h*) between the two cameras.

Ghilani, 1997):

The relief range of an area, *∆H*, can be defined as the difference between the maximum and minimum height:

$$
\Delta \mathbf{H} = \mathbf{Z}m\mathbf{a}\mathbf{x} - \mathbf{Z}m\mathbf{u} \tag{22}
$$

While the relative relieve is the ratio between the relief of an area and flight height, *H*, from which the images were taken:

$$\text{Relative relative} = \frac{\Delta H}{H} \tag{23}$$

This relative relief indicates a degree of three-dimensional enhancement. The higher the relative relieve is the better is perceived the relief, but it also implies a greater variability in the positioning, and if the operator does not make good measurements, this would explain a larger standard deviation in a series of repeated measures. However, too large relative relieve should get worse the altimetric positioning due to effects of perspective, i.e., there must be an optimum value for which a greater precision is achieved in the altimetric positioning. Anyway, for the flights used in this work, it can be remarked that the higher the relative relieve, the better the stereoscopic positioning.

Considering two different flight heights for the two cameras, *HA≠HD*, for the same study area, *∆HA=∆HD*, the relationship between the relieves on the two flights is simplified to the relationship between the two flight altitudes:

$$\frac{\left(\Delta H\bigvee\_{A}H\_{A}\right)}{\left(\Delta H\bigvee\_{D}H\_{D}\right)} = \frac{H\_{D}}{H\_{A}}\tag{24}$$

Then the ratio of heights of flight gives the variation of relative relieve between the two cameras. The following values can be obtained:


#### **4. Experimental results**

This section shows and analyzes the experimental results obtained with three different flights establishing a comparison between the stereoscopic precision obtained with film and

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 59

this way, 6 measurements per point and operator have been obtained for a total of 13 operators from public and private companies. These stereoplotter operators are daily engaged in purely stereoscopic photogrammetric procedures (stereoplotting, editing DTMs) and have an experience that ranges from 10 years (high) to 5 years (medium) and to 1-2 years (low). In any case the minimum experience to achieve significant results has been

Due to the modular composition of large format digital cameras (DMC and UltraCamD) it has been considered relevant to perform a geometric analysis based on the distribution of the points across the stereoscopic area. Consequently, the points have been distributed in nine zones of the stereoscopic model. On each of these nine zones, at least one of the three following types of points has been measured: well defined points on the terrain; easy urban

Given the modular structure of the images coming from large-format digital cameras, a geometric analysis has been performed based on the location of points within the stereoscopic model. Therefore, the points shall be distributed in nine areas of the stereoscopic model, to analyze the influence of the position of point in the model. In each

1 2 3

4 5 6

7 8 9

In the stereoscopic model obtained with the film camera, Leica RC30 (LD\_AE), in Laguna de Duero, 46 points have been observed. These measurements were made by five different operators in two sets of three cycles each. The five operators were distributed as follows: 2

points (roofs) and difficult urban points (ground points close to buildings).

For the different cases of study three types of points have been considered:

Well-defined urban points (roofs and curbs, both above and below).

one of these areas at least one point of each type will be chosen (Fig. 5).

Difficult urban points (ground points close to buildings).

Fig. 5. Numbering of areas within the stereoscopic model.

**4.3 Case study 1: Large-scale flight Laguna de Duero (LD)** 

with high experience, 1 with medium experience and 2 with low experience:

considered to be one year.

**4.2.2 Types of points** 

Well-defined terrain points.

**4.2.3 Area of the stereoscopic model** 

digital cameras. Aspects such as: type of measurement, operator expertise, point type and area of the stereoscopic model, have been considered.

#### **4.1 Flights**

The flights used to undertake the stereoscopic measurements performed by operators are:


Table 2 collects all these data.


Table 2. Flight used for manual stereoscopic measurements, where *mb* indicates the scale denominator of the photographs; *px*, the pixel size; *GSD*, the pixel projection over the ground; *B*, the base (ground); *H*, the flight height; *B/H*, the base-height ratio; and *c*, the focal length of the camera.

#### **4.2 Hypothesis**

#### **4.2.1 Measurements**

For every point and operator, the standard deviation in *XY*, *SXY*, has been obtained and also the standard deviation in *Z*, *SZ*. These are parameters to be analyzed and considered to express the precision of the stereoscopic (both planimeric and altimetric) measures, as expressed by Hallert (1959) on repeated direct measurements of unknown quantities.

It is not aimed to asses the global precision of a photogrammetric product but the precision related to the stereoscopic model (Kraus, 1993). The point stereoscopic measurements have been done in this order: first point, second point and so on until the last point to complete one cycle. No point has been observed *n* times in a consecutive fashion. To achieve *n* measurements of the same point, the cycle has been repeated *n* times. Each operator has realized 3 cycles at the beginning of the day and 3 cycles again at midday, to avoid tiredness in his performance, the repeatability in measurements and the so called learning effect. In this way, 6 measurements per point and operator have been obtained for a total of 13 operators from public and private companies. These stereoplotter operators are daily engaged in purely stereoscopic photogrammetric procedures (stereoplotting, editing DTMs) and have an experience that ranges from 10 years (high) to 5 years (medium) and to 1-2 years (low). In any case the minimum experience to achieve significant results has been considered to be one year.

Due to the modular composition of large format digital cameras (DMC and UltraCamD) it has been considered relevant to perform a geometric analysis based on the distribution of the points across the stereoscopic area. Consequently, the points have been distributed in nine zones of the stereoscopic model. On each of these nine zones, at least one of the three following types of points has been measured: well defined points on the terrain; easy urban points (roofs) and difficult urban points (ground points close to buildings).

#### **4.2.2 Types of points**

58 Special Applications of Photogrammetry

digital cameras. Aspects such as: type of measurement, operator expertise, point type and

The flights used to undertake the stereoscopic measurements performed by operators are:

 "Laguna de Duero" (LD): this large scale flight play an essential role since establishes comparison between film camera (LD\_AE) and digital camera (LD\_D). In addition, this flight was observed with the original negatives (LD\_AA) in an analytical

"Mansilla de las Mulas" (MM): this large-scale flight was recorded only with digital

"Arauzo" (AR): this small-scale flight performs as a comparison between film camera

LD\_AE Leica RC30 5.000 20 0,100 450 767 0,587 153,42 LD\_D UltraCamD 8.333 9 0,075 225 845 0,266 101,40 MM\_D UltraCamD 11.111 9 0,100 300 1,125 0,267 101,40 AR\_AE Leica RC30 30.000 15 0,450 2,686 4,600 0,583 153,42 AR\_D UltraCamD 55.555 9 0,500 1,500 5,633 0,267 101,40

Table 2. Flight used for manual stereoscopic measurements, where *mb* indicates the scale denominator of the photographs; *px*, the pixel size; *GSD*, the pixel projection over the ground; *B*, the base (ground); *H*, the flight height; *B/H*, the base-height ratio; and *c*, the focal

For every point and operator, the standard deviation in *XY*, *SXY*, has been obtained and also the standard deviation in *Z*, *SZ*. These are parameters to be analyzed and considered to express the precision of the stereoscopic (both planimeric and altimetric) measures, as

It is not aimed to asses the global precision of a photogrammetric product but the precision related to the stereoscopic model (Kraus, 1993). The point stereoscopic measurements have been done in this order: first point, second point and so on until the last point to complete one cycle. No point has been observed *n* times in a consecutive fashion. To achieve *n* measurements of the same point, the cycle has been repeated *n* times. Each operator has realized 3 cycles at the beginning of the day and 3 cycles again at midday, to avoid tiredness in his performance, the repeatability in measurements and the so called learning effect. In

expressed by Hallert (1959) on repeated direct measurements of unknown quantities.

GSD (m)

B (m) H

(m) B/H <sup>c</sup>

(mm)

px (m)

area of the stereoscopic model, have been considered.

photogrammetric station, Leica SD2000.

(AR\_AE) and digital camera (AR\_D).

**4.1 Flights** 

camera (MM\_D).

Table 2 collects all these data.

length of the camera.

**4.2.1 Measurements** 

**4.2 Hypothesis** 

Flight Camera mb

For the different cases of study three types of points have been considered:


#### **4.2.3 Area of the stereoscopic model**

Given the modular structure of the images coming from large-format digital cameras, a geometric analysis has been performed based on the location of points within the stereoscopic model. Therefore, the points shall be distributed in nine areas of the stereoscopic model, to analyze the influence of the position of point in the model. In each one of these areas at least one point of each type will be chosen (Fig. 5).


Fig. 5. Numbering of areas within the stereoscopic model.

#### **4.3 Case study 1: Large-scale flight Laguna de Duero (LD)**

In the stereoscopic model obtained with the film camera, Leica RC30 (LD\_AE), in Laguna de Duero, 46 points have been observed. These measurements were made by five different operators in two sets of three cycles each. The five operators were distributed as follows: 2 with high experience, 1 with medium experience and 2 with low experience:

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 61

*D AE*

This indicates that the digital flight observes the relieve a 10% flatter than the film flight.

Using the comparison of the averages of within-subject measures introduced in the previous section, Table 5 outlines the comparison of averages for the standard deviation in XY.

1 0,587 -3;4 0,771 2 -0,348 -5;4 0,880 3 -0,565 -3;2 0,694 4 -0,652 -6;5 0,813 5 -0,522 -6;5 0,850

In view of the differences in *SXY* between the flights LD\_D and *LD\_AE* (operators 1 to 5), one

 The differences averages are less than a millimeter for all operators and, except for the first one, they are negative, which means that the *SXY* of LD\_D is larger than the *SXY* of

 The 95% confidence intervals of the differences averages provide interesting ranges. All of them are, approximately, symmetric intervals centred on zero. This is consistent with

The following table (Table 6) shows the data for comparison of averages for the standard

1 17,761 11; 24 0,000 2 28,739 10; 47 0,003 3 18,304 7; 28 0,001 4 29,652 19; 39 0,000 5 23,587 13; 33 0,000

Table 6. Comparison of averages for *SZ*: LD\_D vs. LD\_AE. Confidence level of 95%.

*Confidence interval* 

*(lower; upper) (mm) p-value* 

There are no significant differences for any operator, for a significance level of 5%.

Table 5. Comparison of averages for *SXY*: LD\_D vs. LD\_AE. Confidence level of 95%.

*<sup>H</sup> 0,90 <sup>H</sup>*

(26)

*(lower; upper) (mm) p-value* 

 

*H*

*H*

This value may explain the difference of 6% previously observed.

*Operator Average differences (mm) Confidence interval* 

the fact that there are no really differences between the cameras.

**4.3.3 Average differences for** *SZ***: LD\_D vs. LD\_AE** 

*(mm)* 

*Operator Average differences* 

**4.3.2 Average differences for the** *SXY***: LD\_D vs. LD\_AE** 

can make the following observations:

LD\_AE*.* 

deviation in *Z*.


Table 3. Precision obtained with the flight LD\_AE.

These same operators observed the same points in digital images for the Laguna de Duero flight using the UltraCamD digital camera (LD\_D):


Table 4. Precision obtained with the flight LD\_D.

#### **4.3.1 Film-digital flight comparison: LD\_D vs. LD\_AE**

The ratio of *SXY* for the two flights is unity (0,016/0,016), so there are significant differences in planimetry. The ratio of *Z* precision is determined empirically as 1,719 (0,055/0,032). While the ratio given theoretically is:

$$\frac{\left(\sigma\_z\right)\_{LD\\_D}}{\left(\sigma\_z\right)\_{LD\\_AE}} = \frac{\left(\frac{0.075m}{k} \* \frac{101.4mm}{27mm}\right)\_{LD\\_D}}{\left(\frac{0.100m}{k} \* \frac{153.42mm}{88mm}\right)\_{LD\\_AE}} = 1,616\*\frac{k\_{LD\\_AE}}{k\_{LD\\_D}}\tag{25}$$

Comparing the observed with the theoretical, it results *kLD\_AE* = 1,06 \* k*LD\_D* (6%). It concludes that there are only slight variations in the precision with digital camera than with the film camera.

Are these empirical dataset concluding? Yes, very concluding, since they encompass 5 operators, with different experience and measuring the same points with film and digital camera. In addition, these points are well distributed throughout the model and represent all types of points.

What might be due this slight difference? The relative relieve is the ratio between the relieve of an area and the flight altitude from which the images were taken (*∆H/H*). The relative ratio between relieves indicates the variation between the relative relieve between both cameras:

$$
\frac{\left(\Delta H\Big/\_{H}\Big)\_{D}}{\left(\Delta H\Big/\_{H}\Big)\_{AE}}=0.90\tag{26}
$$

This indicates that the digital flight observes the relieve a 10% flatter than the film flight. This value may explain the difference of 6% previously observed.

#### **4.3.2 Average differences for the** *SXY***: LD\_D vs. LD\_AE**

60 Special Applications of Photogrammetry

These same operators observed the same points in digital images for the Laguna de Duero

The ratio of *SXY* for the two flights is unity (0,016/0,016), so there are significant differences in planimetry. The ratio of *Z* precision is determined empirically as 1,719 (0,055/0,032).

> *z LD\_D LD\_D LD\_AE z LD\_AE LD\_D*

Comparing the observed with the theoretical, it results *kLD\_AE* = 1,06 \* k*LD\_D* (6%). It concludes that there are only slight variations in the precision with digital camera than with the film

Are these empirical dataset concluding? Yes, very concluding, since they encompass 5 operators, with different experience and measuring the same points with film and digital camera. In addition, these points are well distributed throughout the model and represent

What might be due this slight difference? The relative relieve is the ratio between the relieve of an area and the flight altitude from which the images were taken (*∆H/H*). The relative ratio between relieves indicates the variation between the relative relieve between both cameras:

*0,075m 101,4mm σ k 27mm k*

*σ 0,100m 153,42mm k k 88mm*

*LD\_AE*

*1,616*

(25)

*Operator SXY (m) SZ (m) Number of observations*  1 0,014 0,042 274 2 0,018 0,070 274 3 0,011 0,048 275 4 0,019 0,063 276 5 0,017 0,053 273 Average 0,016 0,055 Total 1.372

Table 3. Precision obtained with the flight LD\_AE.

flight using the UltraCamD digital camera (LD\_D):

Table 4. Precision obtained with the flight LD\_D.

While the ratio given theoretically is:

camera.

all types of points.

 

**4.3.1 Film-digital flight comparison: LD\_D vs. LD\_AE** 

*Operator SXY (m) SZ (m) Number of observations*  1 0,013 0,025 274 2 0,018 0,042 276 3 0,012 0,030 268 4 0,020 0,033 276 5 0,018 0,030 275 Average 0,016 0,032 Total 1.369

> Using the comparison of the averages of within-subject measures introduced in the previous section, Table 5 outlines the comparison of averages for the standard deviation in XY.


Table 5. Comparison of averages for *SXY*: LD\_D vs. LD\_AE. Confidence level of 95%.

In view of the differences in *SXY* between the flights LD\_D and *LD\_AE* (operators 1 to 5), one can make the following observations:


### **4.3.3 Average differences for** *SZ***: LD\_D vs. LD\_AE**

The following table (Table 6) shows the data for comparison of averages for the standard deviation in *Z*.


Table 6. Comparison of averages for *SZ*: LD\_D vs. LD\_AE. Confidence level of 95%.

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 63

These average values per operator in the personal equation shows that the behavior of an operator in film measures is transmitted to digital measures, reflecting the same sign and a

In order to analyze in detail the personal equation and its variation between cameras, the test of average differences has been applied. The following table (Table 9) summarizes the

1 -16 -32;1 0,059 2 29 -3;60 0,075 3 -66 -83;-49 0,000 4 64 46;82 0,000 5 -12 -25;2 0,086

Table 9. Test results: comparison of averages differences for the personal equation of the operators who performed observations on flights LD\_AE and LD\_D. Confidence interval

In view of the hypothesis testing for differences in personal equation, the following

 The averages differences range from -66 mm for the Operator 3 to 64 mm for the Operator 4. In three cases (operators 1, 3 and 5), the average difference is negative, but this does not mean that the personal equation is greater for LD\_D than for LD\_AE, but this value is because the personal equation for these operators is negative in both cases

 The confidence intervals at 95% for the averages differences for the operators 1, 2 and 5 contain the zero value, but they are markedly asymmetric, while the intervals of the

 In line with the comments in the previous section, there are no significant differences for a significance level of 5% for the operators 1, 2 and 5, but they show very low values, which suggests that there are differences. For the other two operators it can be

Two more additional considerations should be remarked. The first consideration is based on the location of the points where the measures are performed. In order to get the same ground sample distance (*GSD*) with film and digital camera, it is necessary to use several digital camera models that cover the same area recorded by a film camera. The second consideration is related with the type of point. Especially in the case of difficult points distributed on the bottom of the buildings, the precision will get worse in Z for digital

(note the data of personal equation for operator in Table 9).

said clearly that there are differences between cameras.

*Confidence interval* 

*(lower; upper) (mm) p-value* 

ratio (LD\_D/LD\_AE) between 4,1 and 1,5.

results of this comparison of paired samples.

*Operator Average differences* 

observations can be pointed out:

operators 3 and 4 do not include zero.

**4.3.6 Other considerations: LD\_D vs. LD\_AE** 

95%.

camera.

*(mm)* 

In view of the differences in *SZ* between the flights LD\_D and LD\_AE (operators 1 to 5), one can make the following observations:


#### **4.3.4 Analysis of agreement: LD\_D vs. LD\_AE**

As in the previous case this comparison element can be applied only on the flights that were observed by the same operators. The results are collected in Table 7.


Table 7. LD\_D vs. LD\_AE: *ICCC*: consistency; *ICCA*: total agreement; *r*: linear correlation coefficient, p: p-value: contrast significance of the differences average. Confidence level of 95%.

In case of the data from flights LD\_D vs. LD\_AE, the relationship between the values of *SXY* shows a good agreement, consistent and linear, although there is a slight constant discrepancy. The relationship between the values *SZ* shows a low agreement but consistent and linear. This indicates that in the relationship between the values of *SZ* there is a proportionate and constant inconsistency

#### **4.3.5 Personal equation: LD\_D vs. LD\_AE**

The personal equation of each operator *Z* has been calculated for each point, as explained above (expression 21), from the observations performed in LD\_D and LD\_AE. The average data are shown in Table 8:


Table 8. Average values of the personal equation for the operators who performed the measurements for LD\_AE and LD\_D.

Those values in the right column (Table 8) which differ from the unit mean that an operator behaves differently with the two cameras.

In view of the differences in *SZ* between the flights LD\_D and LD\_AE (operators 1 to 5), one

 The differences averages range from 17,761mm for the operator 1 to 29,652 mm for the operator 4. For all operators, the differences averages are positive, which means that the

As in the previous case this comparison element can be applied only on the flights that were

 SXY SZ *ICCC* 0,835 (p=0,019) 0,854 (p=0,015) *ICCA* 0,735 (p=0,019) 0,288 (p=0,015) *r* 0,893 (p=0,021) 0,988 (p=0,001)

Table 7. LD\_D vs. LD\_AE: *ICCC*: consistency; *ICCA*: total agreement; *r*: linear correlation coefficient, p: p-value: contrast significance of the differences average. Confidence level of 95%.

In case of the data from flights LD\_D vs. LD\_AE, the relationship between the values of *SXY* shows a good agreement, consistent and linear, although there is a slight constant discrepancy. The relationship between the values *SZ* shows a low agreement but consistent and linear. This indicates that in the relationship between the values of *SZ* there is a

The personal equation of each operator *Z* has been calculated for each point, as explained above (expression 21), from the observations performed in LD\_D and LD\_AE. The average

*Operator LD\_D (mm) LD\_AE (mm) LD\_D/LD\_AE*  1 -32 -16 2,0 2 84 55 1,5 3 -109 -43 2,5 4 86 21 4,1 5 -29 -18 1,6

Table 8. Average values of the personal equation for the operators who performed the

Those values in the right column (Table 8) which differ from the unit mean that an operator

 None of the 95% confidence intervals of differences means contains zero. There are significant differences in all operators, for a significance level of 5%.

observed by the same operators. The results are collected in Table 7.

can make the following observations:

*SZ* of LD\_D is larger than the *SZ* of LD\_AE.

**4.3.4 Analysis of agreement: LD\_D vs. LD\_AE** 

proportionate and constant inconsistency

measurements for LD\_AE and LD\_D.

behaves differently with the two cameras.

data are shown in Table 8:

**4.3.5 Personal equation: LD\_D vs. LD\_AE** 

These average values per operator in the personal equation shows that the behavior of an operator in film measures is transmitted to digital measures, reflecting the same sign and a ratio (LD\_D/LD\_AE) between 4,1 and 1,5.

In order to analyze in detail the personal equation and its variation between cameras, the test of average differences has been applied. The following table (Table 9) summarizes the results of this comparison of paired samples.


Table 9. Test results: comparison of averages differences for the personal equation of the operators who performed observations on flights LD\_AE and LD\_D. Confidence interval 95%.

In view of the hypothesis testing for differences in personal equation, the following observations can be pointed out:


#### **4.3.6 Other considerations: LD\_D vs. LD\_AE**

Two more additional considerations should be remarked. The first consideration is based on the location of the points where the measures are performed. In order to get the same ground sample distance (*GSD*) with film and digital camera, it is necessary to use several digital camera models that cover the same area recorded by a film camera. The second consideration is related with the type of point. Especially in the case of difficult points distributed on the bottom of the buildings, the precision will get worse in Z for digital camera.

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 65

The *XY* precision ratio shows a decrease of 14% (0,016/0,014) with the image scanned at 20

This may indicate that a pixel scan size of 17,5 µm (20/1,14) would have been optimal. The results come to confirm that with a scanned resolution of 20 µm the *Z* precision is about 28% (0,032/0,025) worse than with the original images. According to this data and considering that the scan size is the main factor, the optimal size would have been of 15,6 µm (20/1,28).

These data seem to indicate that more precision could have been obtained if the images had been scanned at 15 µm. With this, the *GSD* (0,10 m) would have been the same for the film

Note that the measure instrument is completely different (including the stereoscopic system). In addition, measures were made with different operators. Probably, those differences encountered during the experiments could be related with these variables.

If we work with global average values for both flights, a comparison can be established, even knowing the differences between measurements (different instrument and different operators). The *XY* precision ratio is observed to be about 14% (0,016/0,014) more accurate for film flight. The theoretical precision in Z, considering the same *GSD* for both flights, is of

Due to initial requirements for the selection of points and the type of points, it is not clear the precision related with those flights executed with low height. For this purpose, several measures were performed for a digital flight, MM\_D, using a different collection of points and with a 0,10 m *GSD*. In this case, only well-defined terrain points were observed, distributing three points along the nine areas of the stereoscopic model. The operators were four: 2 with

Comparing LD\_AE vs. MM\_D, the difference for *XY* precision reaches the 13% (0,018/0,016), when the *GSD* is the same. The ratio for *Z* precision is determined empirically as 1,594 (0,051/0,032), while the ratio determined theoretically, having the same *GSD*, is 2,155. As a result, the digital flight MM\_D is a 35% better than the film flight LD\_AE (*kMM*\_D

*Operator Sxy (m) Sz (m) Number of observations*  1 0,016 0,053 162 2 0,018 0,046 162 3 0,022 0,062 162 4 0,016 0,044 162 Average 0,018 0,051 Total 808

(In this case, the theoretical reason *σZ* is 1 since it is the same camera).

2,155, while the empirical precision is 2,200 (0,055/0,025), almost the same.

high experience, 1with medium experience and 1 with low experience (Table 12).

**4.4 Case study 2: Large-scale flight Mansilla de las Mulas (MM)** 

= 1.35 \* *kLD*\_AE) considering a pixel size of 20 microns.

Table 12. Precision obtained with the flight MM\_D.

µm.

and digital camera.

**4.3.8 Comparison: LD\_D vs. LD\_AA** 


Table 10. Precision obtained with flights LD\_D and LD\_AE, with different types of points: *T*: well-defined terrain points; *U\_S*: Well-defined urban points; *U\_C*: Difficult urban points.

These two considerations suggest that the measures will not be performed equally for both cameras.

What would it happen if the scanning resolution for film images were 15 µm instead of 20 µm? In this case the pixel size on the ground for the film camera would be 0,075 m. A value similar to the digital flight, so that the theoretical ratio of altimetric precision would be as follows:

$$\frac{\left(\sigma\_z\right)\_{ID\\_D}}{\left(\sigma\_z\right)\_{ID\\_AA}} = \frac{\left(\frac{0.075m}{k} \* \frac{101.4mm}{27mm}\right)\_{LD\\_D}}{\left(\frac{0.075m}{k} \* \frac{153.42mm}{88mm}\right)\_{LD\\_AA}} = 2.155 \* \frac{k\_{ID\\_AA}}{k\_{ID\\_D}}\tag{27}$$

This value of 2,155 is very different (about 33%) from its theoretical, 1,616. Nevertheless, if these same 5 operators perform their measures with a 15 µm image, do the empirical precision would be 33% better? Several authors believe that the optimal size is 15 µm scan (Boniface, 1996). The next section addresses this question.

Note that for changing the *GSD* size (15 to 20 µm) with film camera is not necessary to change any parameters of the proposed flight, but simply the scanning pixel size. Neither the focal length or the frame size or resolution of the film, etc.

#### **4.3.7 Comparison of: LD\_AE vs. LD\_AA**

In this case, manual measures were performed over the same points considering the original images, using a Leica SD2000 analytical plotter with analog vision system, but with other operators in the restitution. This is the flight LD\_AA.


Table 11. Precision obtained with the flight LD\_AA.

The *XY* precision ratio shows a decrease of 14% (0,016/0,014) with the image scanned at 20 µm.

This may indicate that a pixel scan size of 17,5 µm (20/1,14) would have been optimal. The results come to confirm that with a scanned resolution of 20 µm the *Z* precision is about 28% (0,032/0,025) worse than with the original images. According to this data and considering that the scan size is the main factor, the optimal size would have been of 15,6 µm (20/1,28). (In this case, the theoretical reason *σZ* is 1 since it is the same camera).

These data seem to indicate that more precision could have been obtained if the images had been scanned at 15 µm. With this, the *GSD* (0,10 m) would have been the same for the film and digital camera.

Note that the measure instrument is completely different (including the stereoscopic system). In addition, measures were made with different operators. Probably, those differences encountered during the experiments could be related with these variables.

### **4.3.8 Comparison: LD\_D vs. LD\_AA**

64 Special Applications of Photogrammetry

*SXY\_LD\_D (m)* 

*T* 0,014 0,017 0,029 0,052 *U\_S* 0,015 0,014 0,033 0,050 *U\_C* 0,024 0,017 0,042 0,074

Table 10. Precision obtained with flights LD\_D and LD\_AE, with different types of points: *T*: well-defined terrain points; *U\_S*: Well-defined urban points; *U\_C*: Difficult urban points.

These two considerations suggest that the measures will not be performed equally for both

What would it happen if the scanning resolution for film images were 15 µm instead of 20 µm? In this case the pixel size on the ground for the film camera would be 0,075 m. A value similar to the digital flight, so that the theoretical ratio of altimetric precision would be as

> *z LD\_D LD\_D LD\_AA z LD\_AA LD\_D*

This value of 2,155 is very different (about 33%) from its theoretical, 1,616. Nevertheless, if these same 5 operators perform their measures with a 15 µm image, do the empirical precision would be 33% better? Several authors believe that the optimal size is 15 µm scan

Note that for changing the *GSD* size (15 to 20 µm) with film camera is not necessary to change any parameters of the proposed flight, but simply the scanning pixel size. Neither

In this case, manual measures were performed over the same points considering the original images, using a Leica SD2000 analytical plotter with analog vision system, but with other

1 0,012 0,021 275 2 0,019 0,031 269 3 0,012 0,020 276 4 0,011 0,027 274 Summary 0,014 0,025 1094

*Operator SXY (m) SZ (m) Number of observations* 

*0,075m 101,4mm σ k 27mm k*

*σ 0,075m 153,42mm k k 88mm*

*LD\_AA*

*2,155*

*SZ\_LD\_AE (m)* 

*SZ\_LD\_D (m)* 

(27)

*Type of point SXY\_LD\_AE* 

 

**4.3.7 Comparison of: LD\_AE vs. LD\_AA** 

operators in the restitution. This is the flight LD\_AA.

Table 11. Precision obtained with the flight LD\_AA.

(Boniface, 1996). The next section addresses this question.

the focal length or the frame size or resolution of the film, etc.

cameras.

follows:

*(m)* 

If we work with global average values for both flights, a comparison can be established, even knowing the differences between measurements (different instrument and different operators). The *XY* precision ratio is observed to be about 14% (0,016/0,014) more accurate for film flight. The theoretical precision in Z, considering the same *GSD* for both flights, is of 2,155, while the empirical precision is 2,200 (0,055/0,025), almost the same.

#### **4.4 Case study 2: Large-scale flight Mansilla de las Mulas (MM)**

Due to initial requirements for the selection of points and the type of points, it is not clear the precision related with those flights executed with low height. For this purpose, several measures were performed for a digital flight, MM\_D, using a different collection of points and with a 0,10 m *GSD*. In this case, only well-defined terrain points were observed, distributing three points along the nine areas of the stereoscopic model. The operators were four: 2 with high experience, 1with medium experience and 1 with low experience (Table 12).

Comparing LD\_AE vs. MM\_D, the difference for *XY* precision reaches the 13% (0,018/0,016), when the *GSD* is the same. The ratio for *Z* precision is determined empirically as 1,594 (0,051/0,032), while the ratio determined theoretically, having the same *GSD*, is 2,155. As a result, the digital flight MM\_D is a 35% better than the film flight LD\_AE (*kMM*\_D = 1.35 \* *kLD*\_AE) considering a pixel size of 20 microns.


Table 12. Precision obtained with the flight MM\_D.

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 67

The digital flight AR\_D was observed by these same four operators, which measured the

Comparing the precision obtained, it can see that the planimetric ratio *SXY* (0,116/0,094) is 23% worse with the digital camera, even when the difference between *GSD* is only 11% (0,500/0,450). It is important to point out the uncertainty in the point definition given the rustic area. The ratio of *Z* precision is determined empirically as 1,669 (0,247/0,148), while the ratio if *Z* precision determined theoretically is 2,393. This implies that *kAR\_D* = 1,43 \*

> *z AR\_D AR\_D AR\_AE z AR\_AE AR\_D*

It concludes that the precision in *Z* with the digital camera is better (43%) than the precision in *Z* determined with the film camera. The variation of relative relieve between flights is 22%. The lower relative relieve, the lower range of variability in *Z* positioning, so that the standard deviation will be less. Also, the difference in relative relieve must be considered in relation to the different focal distances. The longer the focal length the flatter is the relieve. Note that one important factor which could be minimizing the effect of the ratio *B/H* is the

*0,500m 101,4mm σ k 27mm k*

The following table (Table 16) collects representative data for each flight:

*σ 0,450m 153,42mm k k 88mm*

*AR\_AE*

*2,393*

(28)

*Operator SXY (m) SZ (m) Number of summary*  1 0,124 0,228 162 2 0,086 0,221 161 3 0,145 0,311 162 4 0,110 0,229 162 Summary 0,116 0,247 647

Table 14. Precision obtained with the film flight AR\_AE.

Table 15. Precision obtained with the digital flight AR\_D.

 

increase of flight height.

**4.6.1 Comparison between film and digital flight: AR\_AE vs. AR\_D** 

same points.

*kAR*\_AE:

*Operator SXY (m) SZ (m) Number of observations*  1 0,093 0,132 162 2 0,093 0,125 162 3 0,104 0,214 161 4 0,084 0,119 162 Summary 0,094 0,148 647

Comparing LD\_AA vs. MM\_D, the difference in *XY* precision is 28% (0,018/0,014), while the *Z* empirical precision is 2,040 (0,051/0,025), pretty much the same than the Z theoretical precision, 2,155.

#### **4.5 Comparison between large-scale flights**

The following table (Table 13) shows a series of data representative of each flight which allow us to make comparisons between flights.


Table 13. Data coming from the large-scale flights analyzed.

In the table 13: *Si*: image measures precision in micrometers, *px*: pixel size in microns; *k*: precision indicator; *SXY/GSD*: ratio between planimetric precision and *GSD*, *SZ/H*: ratio between altimetric precision and flight height; (*SXY/SZ)/*(*B/H*) quotient between the planimetric and height empirical standard deviations ratio and the planimetric and height theoretical precision ratios. Note that it has been assumed that the *px* of LD\_AA, the film flight observed with the analytical stereoplotter, is 15 m.

The values of *Si* and *px* are not comparable, whereas the indicators of precision, *k* are comparable. It must be emphasized the equality between MM\_D and LD\_AA. The minor value for the digital camera flight LD\_D could be related with the selection and type of points, while best values reached for film camera flight LD\_AE, may be due to the pixel size of 20 μm. The ratios *SXY/GSD* are similar for all flights. The ratios *SZ/H* for film camera flights are around 3-4\*10-5 (0,00003), while for the two digital flights are around 6\*10-5.

For both film flights the quotient *(SXY/SZ)/(B/H)* indicates that the planimetric precision is better than the altimetric precision, whereas for the digital flights the quotient expresses the opposite.

#### **4.6 Case study 3: Small-scale flight Arauzo**

A case of study was performed in Arauzo (Spain), using a small-scale and combining film camera (AR\_AE) and digital camera (AR\_D). In particular, for the film flight, AR\_AE, 3 points were measured along the 9 areas of the stereoscopic model (a total of 27 points). The points were observed by four different operators in 2 sets of 3 cycles each. The measured points were well-defined terrain points, since the workspace was rural. The experience of the operators was distributed as follows: 2 with high experience, 1 with medium experience and 1 with low experience. Table 14 outlines the main results.


Table 14. Precision obtained with the film flight AR\_AE.

66 Special Applications of Photogrammetry

Comparing LD\_AA vs. MM\_D, the difference in *XY* precision is 28% (0,018/0,014), while the *Z* empirical precision is 2,040 (0,051/0,025), pretty much the same than the Z theoretical

The following table (Table 13) shows a series of data representative of each flight which

*m)* 3,20 2,80 1,92 1,62

*m)* 20 15 9 9 *K* 6,25 5,37 4,69 5,56 *SXY/GSD* 0,16 0,19 0,21 0,18 *SZ/H (x10-5)* 4,17 3,26 6,53 6,03 *(SXY/SZ)/(B/H)* 0,85 0,95 1,09 1,32

In the table 13: *Si*: image measures precision in micrometers, *px*: pixel size in microns; *k*: precision indicator; *SXY/GSD*: ratio between planimetric precision and *GSD*, *SZ/H*: ratio between altimetric precision and flight height; (*SXY/SZ)/*(*B/H*) quotient between the planimetric and height empirical standard deviations ratio and the planimetric and height theoretical precision ratios. Note that it has been assumed that the *px* of LD\_AA, the film

The values of *Si* and *px* are not comparable, whereas the indicators of precision, *k* are comparable. It must be emphasized the equality between MM\_D and LD\_AA. The minor value for the digital camera flight LD\_D could be related with the selection and type of points, while best values reached for film camera flight LD\_AE, may be due to the pixel size of 20 μm. The ratios *SXY/GSD* are similar for all flights. The ratios *SZ/H* for film camera flights are around 3-4\*10-5 (0,00003), while for the two digital flights are around 6\*10-5.

For both film flights the quotient *(SXY/SZ)/(B/H)* indicates that the planimetric precision is better than the altimetric precision, whereas for the digital flights the quotient expresses the

A case of study was performed in Arauzo (Spain), using a small-scale and combining film camera (AR\_AE) and digital camera (AR\_D). In particular, for the film flight, AR\_AE, 3 points were measured along the 9 areas of the stereoscopic model (a total of 27 points). The points were observed by four different operators in 2 sets of 3 cycles each. The measured points were well-defined terrain points, since the workspace was rural. The experience of the operators was distributed as follows: 2 with high experience, 1 with medium experience

*LD\_AA RC30-A* 

*LD\_D ULC-D*  *MM\_D ULC-D* 

precision, 2,155.

*Si (*

*px (*

opposite.

**4.5 Comparison between large-scale flights** 

allow us to make comparisons between flights.

*LD\_AE RC30-D* 

Table 13. Data coming from the large-scale flights analyzed.

flight observed with the analytical stereoplotter, is 15 m.

**4.6 Case study 3: Small-scale flight Arauzo** 

and 1 with low experience. Table 14 outlines the main results.

The digital flight AR\_D was observed by these same four operators, which measured the same points.


Table 15. Precision obtained with the digital flight AR\_D.

#### **4.6.1 Comparison between film and digital flight: AR\_AE vs. AR\_D**

Comparing the precision obtained, it can see that the planimetric ratio *SXY* (0,116/0,094) is 23% worse with the digital camera, even when the difference between *GSD* is only 11% (0,500/0,450). It is important to point out the uncertainty in the point definition given the rustic area. The ratio of *Z* precision is determined empirically as 1,669 (0,247/0,148), while the ratio if *Z* precision determined theoretically is 2,393. This implies that *kAR\_D* = 1,43 \* *kAR*\_AE:

$$\frac{\left(\sigma\_z\right)\_{AR\\_D}}{\left(\sigma\_z\right)\_{AR\\_AE}} = \frac{\left(\frac{0.500m}{k} \* \frac{101.4mm}{27mm}\right)\_{AR\\_D}}{\left(\frac{0.450m}{k} \* \frac{153.42mm}{88mm}\right)\_{AR\\_AE}} = 2.393 \* \frac{k\_{AR\\_AE}}{k\_{AR\\_D}}\tag{28}$$

It concludes that the precision in *Z* with the digital camera is better (43%) than the precision in *Z* determined with the film camera. The variation of relative relieve between flights is 22%. The lower relative relieve, the lower range of variability in *Z* positioning, so that the standard deviation will be less. Also, the difference in relative relieve must be considered in relation to the different focal distances. The longer the focal length the flatter is the relieve.

Note that one important factor which could be minimizing the effect of the ratio *B/H* is the increase of flight height.

The following table (Table 16) collects representative data for each flight:

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 69

 For all operators, the average difference is positive, this implies that *Sz\_R\_D*>*Sz\_ AR\_AE*. In addition, the average differences are between 95,481 mm and 110,111 mm. None of the confidence intervals of 95% for the average difference contains zero. All

The total agreement, the consistency and the correlation coefficient are depicted in the

 *SXY SZ ICCC* 0,620 (p=0,132) 0,940 (p=0,009) *ICCA* 0,347 (p=0,132) 0,295 (p=0,009) *r* 0,658 (p=0,171) 0,986 (p=0,007)

Table 19. AR\_D vs. AR\_AE: *ICCC*: consistency; *ICCA*: total agreement; *r*: linear correlation

Note that *SXY* values are not significant for a confidence level of 95%. On the other hand, the relationship between *SZ* shows a low total agreement but following a consistent and linear

The following table (Table 20) shows the personal equation results for each operator and

The values provided by personal equation do not show a clear trend. Even their heterogeneity might indicate that the data are unreliable. In particular, for the operator 1 and 3 the sign is changed, indicating that the altimetric positioning is made above the terrain for digital flights and bellow the terrain for film flights. However, it is important to remark that the average altimetric position of a point is obtained from the average measures of all operators, so that the change of sign should be interpreted as an increase in the personal

*Operator AR\_D (mm) AR\_AE (mm) AR\_D/AR\_AE*  1 90 -20 -4,5 2 -32 -64 0,5 3 -279 12 -23,5 4 222 72 3,1 Table 20. Personal equation for the operators who performed the measurements in flights:

According to the hypothesis testing, the following observations can be made for *Sz*:

There are significant differences in all operators, for a significance level of 5%.

ranges implies that *Sz\_AR\_D*>*Sz\_AR\_AE*.

**4.6.4 Analysis of agreement: AR\_D vs. AR\_AE** 

coefficient. Confidence level of 95%.

**4.6.5 Personal equation: AR\_D vs. AR\_AE** 

equation difference between film and digital camera.

based on the observations performed for flights: AR\_AE and AR\_D.

Table 19.

trend.

AR\_D and AR\_AE.


Table 16. Data from the four flights analyzed large-scale.

The ratio *SXY/GSD* is not very different for both flights (0,21 and 0,23), as well as the altimetric ratio *SZ/H* (3,22\*10-5 and 4,38\*10-5). The quotient (*SXY/SZ)/(B/H*) show for both flights that the planimetric precision is lower than the altimetric precision, even though in the case of digital camera is much more pronounced this difference.

## **4.6.2 Average differences for** *SXY***: AR\_D vs. AR\_AE**

The following table (Table 17) shows the average differences for *SXY*:


Table 17. Comparison of averages for *SXY*: AR\_D vs. AR\_AE. Confidence level of 95%.

The following observations can be made for *SXY* according to the hypothesis testing:


#### **4.6.3 Average differences for** *SZ***: AR\_D vs. AR\_AE**

The following table (Table 18) shows the average differences for SZ.


Table 18. Comparison of averages for *SZ*: AR\_D vs. AR\_AE. Confidence level of 95%.

According to the hypothesis testing, the following observations can be made for *Sz*:


#### **4.6.4 Analysis of agreement: AR\_D vs. AR\_AE**

68 Special Applications of Photogrammetry

*m)* 3,13 2,09

*m)* 15 9 *K* 4,79 4,31 *Sxy/GSD* 0,21 0,23 *Sz/H(x10-5)* 3,22 4,38 *(Sxy/Sz)/(B/H)* 1,09 1,76

The ratio *SXY/GSD* is not very different for both flights (0,21 and 0,23), as well as the altimetric ratio *SZ/H* (3,22\*10-5 and 4,38\*10-5). The quotient (*SXY/SZ)/(B/H*) show for both flights that the planimetric precision is lower than the altimetric precision, even though in

*Confidence interval*

*Si (*

*Px (*

*Operator*

Table 16. Data from the four flights analyzed large-scale.

**4.6.2 Average differences for** *SXY***: AR\_D vs. AR\_AE** 

*Average diferences*

**4.6.3 Average differences for** *SZ***: AR\_D vs. AR\_AE** 

The following table (Table 18) shows the average differences for SZ.

*Operator Average differences (mm) Confidence interval*

*(mm)*

performance is opposite.

clear asymmetry.

the case of digital camera is much more pronounced this difference.

The following table (Table 17) shows the average differences for *SXY*:

1 30,444 -23; 84 0,255 2 -7,000 -32; 18 0,574 3 40,778 -4; 85 0,074 4 26,222 -27; 79 0,322

Table 17. Comparison of averages for *SXY*: AR\_D vs. AR\_AE. Confidence level of 95%.

The following observations can be made for *SXY* according to the hypothesis testing:

 For the operators 1, 3 and 4 the average difference is positive, this implies that for these operators *Sxy\_AR\_D*> *Sxy\_AR\_AE*. This is not the case for the operator 2 since its

The confidence interval 95% for the average differences containing zero, but show a

1 95,630 61;129 0,000 2 95,481 48; 142 0,000 3 97,037 33;161 0,004 4 110,111 75;145 0,000

Table 18. Comparison of averages for *SZ*: AR\_D vs. AR\_AE. Confidence level of 95%.

For a significance level of 5%, no significant differences were found for any operator.

*AR\_A AR\_D*

*(lower; upper) (mm) p-value* 

*(lower; upper) (mm) p-value* 

The total agreement, the consistency and the correlation coefficient are depicted in the Table 19.


Table 19. AR\_D vs. AR\_AE: *ICCC*: consistency; *ICCA*: total agreement; *r*: linear correlation coefficient. Confidence level of 95%.

Note that *SXY* values are not significant for a confidence level of 95%. On the other hand, the relationship between *SZ* shows a low total agreement but following a consistent and linear trend.

#### **4.6.5 Personal equation: AR\_D vs. AR\_AE**

The following table (Table 20) shows the personal equation results for each operator and based on the observations performed for flights: AR\_AE and AR\_D.

The values provided by personal equation do not show a clear trend. Even their heterogeneity might indicate that the data are unreliable. In particular, for the operator 1 and 3 the sign is changed, indicating that the altimetric positioning is made above the terrain for digital flights and bellow the terrain for film flights. However, it is important to remark that the average altimetric position of a point is obtained from the average measures of all operators, so that the change of sign should be interpreted as an increase in the personal equation difference between film and digital camera.


Table 20. Personal equation for the operators who performed the measurements in flights: AR\_D and AR\_AE.

Assessment of Stereoscopic Precision – Film to Digital Photogrammetric Cameras 71

*Earth Imaging for Geospatial Information*, Hannover (Germany), May 2005. Arias, B. & Gómez-Lahoz, J. (2009). Testing the stereoscopic precision of a large-format

Blachut, T.J. (1955). General report of commission IV Subcommission IV/3. Small scale mapping. *Photogrammetria*, Vol. 12 (1955), pp 220-229, ISSN 0031-8663. Bland, M. & Altman, D.G. (1986). Statistical methods for assessing agreement between two

Commission C., O.E.E.P.E. (1955). Zeitlicher Ablauf der Messungen Deroulment

Dörstel, C. (2003). DMC - Practical experiences and Photogrammetric System Performance, *Proceedings of Photogrammetric Week*, pp 59-65, Stuttgart (Germany), 2003. Doménech, J.M. (2005) *Medida del cambio: Análisis de diseños con medidas intrasujeto. Medida de* 

Dubuisson, B. (1955) Rapport général de la commission IV Sous-Commission. IV/2

Fleiss, J.L. (1986). The design and analysis of clinical experiments. John Wiley & Sons, ISBN

Förstner. (1955). O.E.E.P.E., Commission C Rapport sur la restitution effectuée dans l'Institut

d'Oberriet. *Photogrammetria*, , Vol. 12 (1955), pp 163-171, ISSN 0031-8663. Gotthardt, E. (1958). Report on the First Results of Photogrammetric Test Performed near

Grifoni, B. (1949) On different grade of precision attained in plotting using glass plate and film cameras. *Photogrammetria*, , Vol. 6 (1949), pp 55-58, ISSN 0031-8663. Härry, H. (1954). La situation actuelle dans la mensuration cadastrale photogrammétrique.

Härry, H. (1955). Rapport général de la Commission IV Sous-Commission IV/1. Application

parcellaires. *Photogrammetria*, , Vol. 12 (1955), pp 203-207, ISSN 0031-8663. Karara, H.M. (1967). Mono versus stereo analytical photogrammetry—theoretical

*Photogrammetria*, , Vol. 11 (1954), pp 45-50, ISSN 0031-8663.

170, ISSN 1477-9730.

EEUU.

0031-8663.

España.

208-219, ISSN 0031-8663.

141-148, ISSN 0031-8663 .

1967), pp 99-113, ISSN 0031-8663.

978-0471820475, New York, EEUU.

process of the ICC digital camera. *Proceedings of ISPRS Workshop on High-Resolution* 

digital camera. *The Photogrammetric Record*, Vol. 24, Issue 126 (June 2009), pp 157–

methods of clinical measurement. *Lancet*, Vol. i (1986), pp 307-310, ISSN 0140-6736. Boniface, P. R. (1996). State-of-the-art in SoftCopy Photogrammetry, in: *Digital* 

*Photogrammetry: An Addendum to the Manual Photogrammetry*, C. Greve, American Society of Photogrametry and Remote Sensing, ISBN 978-1570830372, Bethesda,

chronologique des observations. *Photogrammetria*, Vol. 12 (1955), pp 198-199, ISSN

*la Concordancia. Fundamentos de diseño y estadística, 14*. Signo, ISBN 9788480493017,

Etablissement des plans des territoires urbains. *Photogrammetria*, , Vol. 2 (1955), pp

für Angewandte Geodäsie, Francfort sur le Main Terrain d'essai d'Oberriet les vols Nr 1 et 3 (groupe I). *Photogrammetria*, , Vol. 12 (1955), pp 183-190, ISSN 0031-8663. Gotthardt, E. (1955). O.E.E.P.E., Commission C Compte rendu de la restitution à la

Technischen Hochschule, Stuttgart, des vols d'essai du gruope I du terrain

"Oberriet" by Commission C of the O.E.E.P.E. *Photogrammetria*, , Vol. 15 (1958), pp

de la photogrammétrie aux mensurations cadastrales et aux remaniements

considerations and experimental results. *Photogrammetria*, Vol. 22, Issue 3 (March


The following table (Table 21) summarizes the results of this comparison of differences for personal equations for each operator and for each point.

Table 21. Test results comparing average differences for the personal equation of the operators who made observations on flights AR\_D and AR\_AE. Confidence level of 95%.

For a significance level of 5%, differences between cameras are found for the operators 1, 3 and 4, whereas no differences are found for the operator 2.

## **5. Conclusions**

In this chapter, an assessment of stereoscopic precision for large-format digital cameras has been studied. In all the flights, the influence of a set of variables on the precision in *XY* and in *Z* has been analyzed. These variables are: the distribution of points in the model; the operators and their experience; and the type of points.

In particular, in every case the planimetric ratio *SXY*/*GSD* indicate that the planimetric error is similar for both flights. Besides this, it is supposed that the flight height is independent, even for large flight heights. The altimetric error seems to indicate that the main difference comes form the flight height and in minor level from the relation *b/h*. No significant differences are observed in the ratio *SZ/H* for all flights analyzed.

In conclusion, the approach of relating digital and film flights through the *GSD* is right on the planimetric side. However, the altimetric precision must be analyzed carefully in order to determine if it is possible to maintain traditional flight heights. In this sense, the indicators of precision *k*, as a fraction of the pixel of the image, are a good comparator.

The values for the quotient *(SXY/SZ)/(B/H)* show that those film flights provide slightly better results for planimetry than for altimetry, while in the case of digital camera just the opposite occurs, having worse planimetric precision.

The empirical conclusion that seems to be reached is that the negative impact on altimetric precision caused by the lower ratio *B/H* for digital camera flights is lower when the flight height increases.

In summary, the main conclusion to be drawn on the stereo manual measurements is that the planimetric accuracy is the same for both cameras: film and digital. However, altimetric precision does not provide the same results, being the ratio *b/h* the main cause of the difference between both cameras.

#### **6. References**

Alamús, R., Kornus, W., Palà, V., Pérez, F., Arbiol, R., Bonet, R., Costa, J., Hernández, J., Marimon, J., Ortiz, M.A., Palma, E., Pla, M., Racero, S., Talaya, J. (2005). Validation

The following table (Table 21) summarizes the results of this comparison of differences for

1 110 49;110 0,001 2 31 -22;85 0,236 3 -291 -386;-196 0,000 4 150 62;237 0,002

For a significance level of 5%, differences between cameras are found for the operators 1, 3

In this chapter, an assessment of stereoscopic precision for large-format digital cameras has been studied. In all the flights, the influence of a set of variables on the precision in *XY* and in *Z* has been analyzed. These variables are: the distribution of points in the model; the

In particular, in every case the planimetric ratio *SXY*/*GSD* indicate that the planimetric error is similar for both flights. Besides this, it is supposed that the flight height is independent, even for large flight heights. The altimetric error seems to indicate that the main difference comes form the flight height and in minor level from the relation *b/h*. No significant

In conclusion, the approach of relating digital and film flights through the *GSD* is right on the planimetric side. However, the altimetric precision must be analyzed carefully in order to determine if it is possible to maintain traditional flight heights. In this sense, the indicators of precision *k*, as a fraction of the pixel of the image, are a good comparator.

The values for the quotient *(SXY/SZ)/(B/H)* show that those film flights provide slightly better results for planimetry than for altimetry, while in the case of digital camera just the opposite

The empirical conclusion that seems to be reached is that the negative impact on altimetric precision caused by the lower ratio *B/H* for digital camera flights is lower when the flight

In summary, the main conclusion to be drawn on the stereo manual measurements is that the planimetric accuracy is the same for both cameras: film and digital. However, altimetric precision does not provide the same results, being the ratio *b/h* the main cause of the

Alamús, R., Kornus, W., Palà, V., Pérez, F., Arbiol, R., Bonet, R., Costa, J., Hernández, J.,

Marimon, J., Ortiz, M.A., Palma, E., Pla, M., Racero, S., Talaya, J. (2005). Validation

Table 21. Test results comparing average differences for the personal equation of the operators who made observations on flights AR\_D and AR\_AE. Confidence level of 95%.

*(lower; upper) (mm)* p-value

personal equations for each operator and for each point.

*Operator Average (mm) Confidence interval*

and 4, whereas no differences are found for the operator 2.

operators and their experience; and the type of points.

occurs, having worse planimetric precision.

difference between both cameras.

differences are observed in the ratio *SZ/H* for all flights analyzed.

**5. Conclusions** 

height increases.

**6. References** 

process of the ICC digital camera. *Proceedings of ISPRS Workshop on High-Resolution Earth Imaging for Geospatial Information*, Hannover (Germany), May 2005.


**4** 

**Application of a** 

*3ReStl Designers, Inc., Maryland* 

*1,2South Korea* 

*3USA* 

**Photogrammetric System for** 

*1School of Civil and Environmental Engineering,* 

*Urban Design and Study,Chung-Ang University, Seoul 2Green Technology Institute, Chung-Ang University, Seoul* 

Junggeun Han1,\*, Kikwon Hong2 and Sanghun Kim3

**Monitoring Civil Engineering Structures** 

Several traditional measuring apparatus are used to check the stability of civil engineering structures and maintenance of them. The measured results are applied for the deformation and stability analysis of civil engineering structures. Currently, precision and micro measuring instruments are used for stability evaluations of civil engineering structures. Furthermore, the measuring apparatus have been changed from manual systems to automatic systems. For example, total station, one of the traditional and manual measuring methods, is transferred to digital photogrammetry with high technology development. Especially, the movement of target points is able to be measured in real-time automatically because it can be obtained 3-dimensional coordinates by digital photogrammetry. The use of automatic measuring methods has been researched in several different industries (Hannah, 1989; Lee et al., 2006). The applications of digital photogrammetry are increased in various civil engineering structures (Han et al., 2001; Han & Song, 2003; Han et al., 2007, 2008). It shows that the automatic and high-tech measuring system likeVisual Monitoring System (VMS) based on digital photogrammetry is able to apply to the stability evaluations of large

Most of recent measurement methods are based on image process method even though in some case, Global Positioning System (GPS) is used like the measurement of the deformation on the surface of large civil engineering structures (Stewart & Tsakiri, 2001). Also, automatic measuring method begins to use for stability evaluations of civil engineering structures; for examples, there are slope failure prediction (Han et al., 2001; Han & Song, 2003), displacement measurement (Bae, 2000; Kang et. al, 1995), monitoring of dams(Park et. al 2001; Han et. al, 2005). However, the current measuring systems like total station based on manual measuring are almost not possible to measure the movements of

**1. Introduction** 

civil engineering structures.

Corresponding Author

 \*

