Utilization of Unmanned Aerial Vehicle for Accurate 3D Imaging

Yoichi Kunii
In: Advanced Remote Sensing Technology for Synthetic Aperture Radar Applications, Tsunami…
DOI: http://dx.doi.org/10.5772/intechopen.82626

## Abstract

In order to acquire geographical data by aerial photogrammetry, many images must be taken from an aerial vehicle and then processed with the structure-from-motion (SfM) technique. Multiple neighboring images with a high rate of overlapping are required for high-accuracy measurement. In the event of natural disasters, UAV operation may involve risk and should be minimized; therefore, an easy and convenient method of operating UAVs is needed. Some applications of UAVs combined with other devices have been reported; however, it is difficult to prepare a number of such devices in an emergency. We investigated the most suitable conditions for image acquisition by UAV: several altitudes and overlapping rates were tested, and the accuracies of the resulting 3D measurements were confirmed. Furthermore, we developed a new camera calibration and measurement method that requires only a few images taken in a simple UAV flight; the UAV in this method is flown vertically, and the images are taken at different altitudes. As a result, the planimetric and height accuracies were 0.093 and 0.166 m, respectively. These values were of higher accuracy than the results of the usual SfM software.

Keywords: UAV, 3D measurement, camera calibration, overlapping, accuracy

## 1. Introduction

The demand for unmanned aerial vehicles (UAVs) is increasing as they find applications in various fields. For example, more accurate geographical data can be acquired by using UAVs than by conventional aerial photogrammetry [1]. UAVs can take high-resolution images because they are able to fly at low altitudes [2]. In addition, UAVs can be used for observation of natural disasters [3, 4] or for surveying construction sites [5, 6]. Such applications need rapid and low-cost surveying, and UAVs are well suited for that purpose [7]. When applying this method to public surveys, the manual published by the Geographical Survey Institute of the Ministry of Land, Infrastructure and Transport in Japan prescribes an overlap ratio of 80% or more between consecutive images. Therefore, even for a narrow target area, several dozen images must be taken. Photogrammetry software equipped with SfM (structure from motion), which is now mainstream, supports such large numbers of images. However, because imaging in this way requires skill and labor to operate the UAV, there is a concern in terms of cost, such as the need for a dedicated operator, when the method is applied at various construction sites.

Therefore, with the aim of minimizing the labor of imaging while ensuring adequate measurement accuracy when applying UAV surveying to landscaping spaces, we verified the measurement accuracy with respect to changes in ground altitude and the number of images taken. In addition, we carried out 3D modeling of an urban plaza and hilly terrain by using the obtained results. Furthermore, we developed a new camera calibration and measurement method that requires only a few images taken in a simple UAV flight; the UAV in this method is flown vertically, and the images are taken at different altitudes. We compared the measurement accuracy of the proposed method against the SfM method and evaluated its performance by checking the accuracy.

## 2. Background of UAV photogrammetry

The UAV was developed for military purposes in the United States from the 1950s and evolved into small unmanned reconnaissance aircraft around 1970 with the progress of electronic guidance technology. Utilization of UAVs in Japan started spreading with their use for spraying pesticides from the late 1990s; they are now applied to information gathering and surveying at various sites, and their use in media and entertainment is expanding. Among these applications, aerial photogrammetry is of particular importance. Normal aerial photogrammetry is carried out by a manned aircraft that images the ground from an altitude of several hundred to several thousand meters, mainly to create topographic maps. On the other hand, since the ground altitude of a UAV is as low as several tens of meters to 100 m, it is possible to create more detailed topographic maps than with a manned aircraft. Also, since a UAV is inexpensive, maneuverable, and easy to operate compared with a manned aircraft, it demonstrates superior ability in capturing terrain during emergencies such as disasters. Furthermore, it is expected to be a tool for improving the efficiency of surveying in earthworks and concrete works. For such applications, evaluation of the measurement accuracy of UAV photogrammetry is required.

## 3. Acquisition of images for evaluation

### 3.1 UAV device and test site

The images for checking the accuracy were taken at a UAV test site in Kanagawa, Japan, managed by the Japan Society for Photogrammetry and Remote Sensing. Figure 1 shows the entrance to the UAV test site. There are 76 circular ground marks with coordinates in the Japanese national coordinate system in the test area of about 5000 m<sup>2</sup>, as shown in Figure 2. The center coordinates of the ground marks were determined by a ground survey of the whole site with a total station. This allowed the given coordinates to be compared with the results of the UAV photogrammetry to check the accuracy of the photogrammetry.


Figure 3 shows the UAV "DJI Inspire 1" which was used for taking the images. The camera "FC350" on the Inspire 1 has 4000 × 2250 pixels and a 4 mm focal length.
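From the camera specifications above, a rough ground sample distance (GSD) at each flight altitude can be sketched. The sensor width is not given in the text; the value below assumes a 1/2.3-type sensor (~6.17 mm wide), so the results are estimates only.

```python
# Rough ground-sample-distance (GSD) estimate for the FC350 camera.
# Only the 4000 x 2250 pixel resolution and 4 mm focal length come from
# the text; the 1/2.3-type sensor width (~6.17 mm) is an assumption.

FOCAL_MM = 4.0          # focal length from the text
IMAGE_WIDTH_PX = 4000   # image width from the text
SENSOR_WIDTH_MM = 6.17  # assumed 1/2.3-type sensor width

def gsd_cm(altitude_m: float) -> float:
    """Ground sample distance in cm per pixel at a given altitude."""
    pixel_mm = SENSOR_WIDTH_MM / IMAGE_WIDTH_PX
    return altitude_m * 100.0 * pixel_mm / FOCAL_MM

for h in (40, 60, 80):
    print(f"{h} m -> {gsd_cm(h):.2f} cm/px")
```

Under this assumption the GSD roughly doubles from the 40 m flights (~1.5 cm/px) to the 80 m flights (~3.1 cm/px), which is one way to interpret the accuracy differences reported in Section 4.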


Figure 1. Entrance to UAV test site.


Figure 2. 76 ground marks in the test site.

### 3.2 Altitude and overlapping rate

The images for accuracy verification were taken at the test site on 23 October 2016. The altitude of the UAV was set at three levels: 40, 60, and 80 m. The images at each altitude were taken so that the overlap ratio was 90% and the sidelap ratio was 60%. As a result, the number of images acquired was 135 at 40 m, 57 at 60 m, and 26 at 80 m above ground level. Figure 4 shows sample images taken at each altitude.



Figure 3. DJI Inspire 1.

Figure 4. Sample images at each altitude. (a) 40 m, (b) 60 m, and (c) 80 m.

## 4. Verification of measurement accuracy

### 4.1 Details of the verification

3D surveying of each ground mark was carried out by photogrammetry, and the accuracy was verified using the images of the test site obtained as described above. In addition to verification at each ground altitude, verification was performed using images with overlap rates of 50, 60, 70, 80, and 90%, obtained by thinning the captured image sequences. As a result, the number of images used in the verification ranged from 135 for the combination of 40 m ground altitude and 90% overlap, which was the largest set, to 6 for the combination of 80 m and 50%. Among the 76 ground marks at the test site, 9 points (Nos. 13, 17, 25, 33, 42, 47, 70, 75, and 76) were set as control points. The other 67 points were set as verification points, and the accuracy of the 3D coordinates obtained for these verification points was evaluated. For the accuracy verification, Agisoft PhotoScan Professional (hereinafter PhotoScan), a general photogrammetry software package with SfM, was used.
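The thinning of the 90%-overlap sequences into lower overlap rates can be reasoned about as follows, assuming evenly spaced exposures along a flight line (an assumption; the text does not state the exact thinning procedure).

```python
# Sketch of how thinning a 90%-overlap image sequence yields lower overlap
# rates, assuming evenly spaced exposures along a flight line. Keeping every
# k-th image scales the baseline by k, so the forward overlap becomes
# 1 - k * (1 - 0.9).

def thinned_overlap(base_overlap: float, step: int) -> float:
    """Forward overlap after keeping every `step`-th image."""
    return 1.0 - step * (1.0 - base_overlap)

def thin(images, step):
    """Keep every `step`-th image of an ordered sequence."""
    return images[::step]

for step in (1, 2, 3, 4, 5):
    print(f"step {step}: overlap {thinned_overlap(0.9, step):.0%}")
```

With steps 1 through 5, this gives 90, 80, 70, 60, and 50% overlap, which matches the rates used in the verification.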

### 4.2 Results of the verification


In order to verify the accuracy for each condition, root mean square errors (RMSEs) were calculated with the following equation.

$$\sigma_0 = \pm \sqrt{\frac{v_1^2 + v_2^2 + v_3^2 + \cdots + v_n^2}{n-1}} = \pm \sqrt{\frac{[v^2]}{n-1}} \tag{1}$$

where $\sigma_0$: RMSE; $v$: residual error; $n$: number of the data.
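Eq. (1) can be implemented directly; a minimal sketch, using the $n-1$ divisor given in the text:

```python
# Direct implementation of Eq. (1): RMSE of the residuals between the
# total-station coordinates (taken as truth) and the UAV photogrammetry
# results, with the n - 1 divisor used in the text.

import math

def rmse(residuals):
    """Eq. (1): sigma_0 = sqrt(sum(v_i^2) / (n - 1))."""
    n = len(residuals)
    if n < 2:
        raise ValueError("need at least two residuals")
    return math.sqrt(sum(v * v for v in residuals) / (n - 1))

# Hypothetical residuals (m), for illustration only:
print(rmse([0.03, -0.05, 0.04, -0.02]))
```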

Table 1 shows the results of the accuracy verification carried out as described above. The measurement accuracy in the table was calculated as the standard deviation of the differences between the ground control survey value at each verification point (taken as the true value) and the UAV photogrammetry value (taken as the measured value). The results confirm that an accuracy of about ±0.05 m is obtained at every ground altitude and overlap ratio. According to the precision standard for earthmoving specified by the Ministry of Land, Infrastructure and Transport, results within ±0.1 m can be applied to construction surveying and rock surveying, and all of the present results satisfy this value. Results within ±0.05 m can also be applied to shape measurement; this value is satisfied at every ground altitude when the overlap rate is 90%, whereas at 80% or less it is satisfied mostly at a ground altitude of 40 m. In theory, photogrammetric accuracy improves as the ground altitude decreases and the overlap rate increases, and the obtained results accord with this.
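The comparison against the MLIT tolerances cited above can be expressed as a small helper; the category labels are illustrative, not official terminology.

```python
# Classify a measurement accuracy against the MLIT earthwork tolerances
# cited in the text: within +/-0.1 m for construction and rock surveying,
# within +/-0.05 m for shape measurement. Label strings are illustrative.

def classify_accuracy(rmse_m: float) -> str:
    if rmse_m <= 0.05:
        return "shape measurement (within 0.05 m)"
    if rmse_m <= 0.10:
        return "construction/rock surveying (within 0.1 m)"
    return "outside the cited tolerances"

print(classify_accuracy(0.04))
print(classify_accuracy(0.08))
```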

In addition, this verification confirmed that adequate results can be obtained at any of the tested ground altitudes and overlap rates. In other words, although the flight paths of a UAV may be limited in landscaping spaces, which span diverse natural and urban environments, this verification shows that flights can be planned according to the local situation.

Table 1. Result of accuracy verification.


The above results suggest the usefulness of surveying landscaping spaces with a UAV; cases of 3D modeling by UAV conducted at two survey sites are therefore shown below.

## 5. Examples of application

### 5.1 Measurement for plaza

First, as an application of the UAV to open space in an urban environment, we carried out 3D modeling of Yurinoki Plaza at the Tokyo University of Agriculture Setagaya Campus (Setagaya, Tokyo), as shown in Figure 5. In Yurinoki Plaza, several trees were planted in a lawn-covered space of about 6000 m<sup>2</sup>. In addition, there are buildings such as research buildings around the open space, and these buildings were also included in the 3D modeling. A total of 431 images of Yurinoki Plaza were taken by the UAV. Of these, 87 images were taken with the camera pointed vertically toward the ground, with an overlap rate of 80%, from an altitude of 20 m above the ground; for the other 344 images the camera was aimed horizontally. Figure 6 shows samples of the taken images.


Figure 5. Yurinoki Plaza.

Figure 6. Image from UAV.


The images taken as described above were processed by PhotoScan, and 3D point cloud data of each feature captured in the images were generated. Next, a dense point cloud was generated from the obtained point cloud. Compared with the initial point cloud, the dense point cloud is composed of far more points; viewed from a distance it appears to carry texture, but because it is ultimately a set of discrete points, holes in the surface become conspicuous at close range. Figure 15 shows a target point indicated by the dense point cloud. Finally, texture mapping was performed on the dense point cloud, and a 3D model was generated as shown in Figure 7. In addition, it was confirmed that the area of the plaza calculated from the created 3D model was 6358.4 m<sup>2</sup>, almost the same as the area (6357.7 m<sup>2</sup>) obtained by ground survey with the total station. This plaza closed in 2017, and a new research building scheduled for completion in 2020 is being built at the site. In other words, since this work acquired 3D data before the open space closed, the results are expected to serve as a record of changes on the campus.

### 5.2 Measurement for mountain area

As an application of the UAV to natural space, 3D modeling was carried out on a hilly area of about 200,000 m<sup>2</sup> between Tenjinzawa and Hatayazawa in Matsuda town, Kanagawa Prefecture, as shown in Figure 8. This area is the place where Matsuda Castle was built in the late Heian era (twelfth century), and it is currently managed by Matsuda Town as the Matsuda Castle Ruins. The Tomei Expressway passes the southern end of the slope; the slope was excavated when this expressway was constructed, and part of the Matsuda Castle site was lost. In this research, the images of the Matsuda Castle site were taken by UAV on May 19, 2016, with an 80% overlap ratio from an altitude of 70 m. As a result, the number of images taken was 949.


Figure 7. 3D model of Yurinoki Plaza. (a) Vertical view, (b) Bird's-eye view.

Figure 9. 3D modeling for Matsuda Castle Ruins and excavation area.

The images taken as described above were processed by PhotoScan. The difference in height from near the top of Matsuda Castle to the Tomei Expressway, obtained from the created 3D model, is about 66.3 m, almost equal to the value (65.5 m) obtained from the Geographical Survey Institute. The created 3D model also includes the parts excavated during the road construction mentioned above; therefore, the terrain before excavation, clarified by the excavation survey, was reproduced as shown in Figure 9 to complement the current 3D model. As a result, the area excavated for the road construction becomes visibly apparent, which is expected to be useful for the preservation and management of the remains in the future.

## 6. Development of new photogrammetric method

In the methods described in the preceding sections, many images must be taken from an aerial vehicle that moves horizontally at a fixed altitude [8]. The images are then processed with the SfM technique [9]. Multiple neighboring images with a high rate of overlapping must be obtained for high-accuracy measurement [10], which requires labor and cost. In the event of natural disasters, UAV operation may involve risk [11] and should be minimized. Therefore, an easy and convenient method of operating UAVs is strongly needed. Some applications of UAVs combined with other devices have been reported [12]; however, it would be difficult to prepare a number of such devices in an emergency.

In this research, we developed a method that limits the movement of the UAV to the vertical direction only, performing aerial photogrammetry from a small number of images taken vertically at different ground altitudes, without using ground control points. In addition, to evaluate the performance of the developed method, its surveying accuracy was compared with that of general photogrammetry software.

### 6.1 Acquisition of images for evaluation

The images for checking the accuracy were again taken at the UAV test site, and the DJI Inspire 1 was also used for taking the images.



Figure 8. Matsuda Castle Ruins.

where,

DOI: http://dx.doi.org/10.5772/intechopen.82626

L : Given distance mð Þ

Utilization of Unmanned Aerial Vehicle for Accurate 3D Imaging

Bz1, Bz2, … and Bz<sup>5</sup> could be calculated by the following equation.

Hi : Altitude of pictures approximate ð Þ ð Þ m

Bzi ¼ Hi � H<sup>1</sup> ð Þ i ¼ 1; 2; 3; ⋯

where,

6.2.2 Relative orientation

Figure 11.

157

Positional relation of vertical images.

Hi : Altitude of pictures approximate ð Þ ð Þ m

Therefore, the relative distance between the lowest principal point and the other

Bzi : Distance between principal points approximate ð Þ ð Þ m

The relative orientation is to obtain relative took points and postures with respect to a plurality of took images. Generally, relative orientation is often performed only between two images; however, in this study, based on the image No.1 in Figure 11 as a reference, relative orientation with respect to the other images after image 2. We decided to do all at the same time. In other words, it is assumed that image 1 is taken with no inclination at the origin of the relative coordinates, and the relative point and rotation angle at the time of taking after the image 2 are obtained at the same time. Furthermore, with respect to mutual orientation in this research, the interior orientation parameter of the camera is also set as an unknown quantity as a parameter common to each image, and the orientation is performed at the same time. Figure 12 shows a coplanar condition that focuses on only No. 1 and 5 images. The principal points of these two images and a common

(3)

li : Given distance on sensors mð Þ f : Focal length approximate ð Þ ð Þ m

Figure 10. Given distance between 2 points.

Since the method developed in this research eliminates the ground control point, the obtained 3D coordinates are local coordinates based on arbitrary origin and coordinate axes. Also, to calculate 3D coordinates by this method, it is necessary to give only the distance between two arbitrary points as a known quantity. In this research, we decided to treat the distance (14.831 m) between the airspace signs No. 27 and 35 as shown in Figure 10 as a known amount.

#### 6.2 Theory of the proposed method

In this research, photogrammetry is carried out using multiple vertical images taken from a UAV and common corresponding points acquired in each image. In the general photogrammetric procedure, orientation processing (camera calibration) is first performed to obtain the exterior orientation parameters, such as the camera position and attitude at the time of shooting, and the interior orientation parameters, such as the focal length and lens distortion coefficients; 3D measurement of the target points is then carried out. The method developed in this research, in contrast, obtains the optimal solution of each parameter while advancing the camera calibration and the 3D measurement simultaneously. The details of the method are described below for each procedure.

#### 6.2.1 Estimation of relative distance

We estimate the relative positional relationship of the principal points of the vertical images taken from the UAV. Figure 11 schematically shows the camera position for each image; the images are numbered 1, 2, … in ascending order of ground altitude. First, the approximate ground altitude of each image is calculated. Let the distance between the two chosen points be the known quantity L, let l1, l2, … be the lengths of L measured on the sensor in each image, and let f be the approximate focal length of the camera; the approximate imaging altitudes H1, H2, … of the images are then obtained by the following equation.

$$H\_i = \frac{L}{l\_i} f \quad (i = 1, 2, 3, \cdots) \tag{2}$$

Utilization of Unmanned Aerial Vehicle for Accurate 3D Imaging DOI: http://dx.doi.org/10.5772/intechopen.82626

where, Hi: altitude of image i (approximate) (m); L: given distance (m); li: given distance on the sensor (m); f: focal length (approximate) (m).

Therefore, the relative distances Bz1, Bz2, …, Bz5 between the lowest principal point and the other principal points can be calculated by the following equation.

$$B\_{zi} = H\_i - H\_1 \quad (i = 1, 2, 3, \cdots) \tag{3}$$

where, Bzi: distance between principal points (approximate) (m); Hi: altitude of image i (approximate) (m).
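As a sketch of Eqs. (2) and (3), the approximate altitudes and relative distances can be computed as follows. The focal length and the on-sensor lengths of L are assumed, illustrative values, not data from the chapter.

```python
# Sketch of Eqs. (2)-(3). L is the known ground distance from Figure 10;
# the focal length f and the on-sensor lengths l_i of L are assumed values.
L = 14.831  # given distance between targets No. 27 and 35 (m)
f = 0.0088  # focal length (m), assumed
l = [0.001865, 0.001742, 0.001634, 0.001539, 0.001454]  # on-sensor lengths (m), assumed

# Eq. (2): approximate imaging altitude of each image
H = [L / l_i * f for l_i in l]

# Eq. (3): relative distance of each principal point above the lowest image
Bz = [H_i - H[0] for H_i in H]

for i, (H_i, Bz_i) in enumerate(zip(H, Bz), start=1):
    print(f"image {i}: H = {H_i:.2f} m, Bz = {Bz_i:.2f} m")
```

With these assumed values the altitudes come out in the 70–90 m range used in the flight experiment, and Bz increases monotonically from 0 for image 1.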


#### 6.2.2 Relative orientation


Advanced Remote Sensing Technology for Synthetic Aperture Radar Applications,Tsunami…


The relative orientation obtains the relative camera positions and attitudes of the captured images. Generally, relative orientation is performed only between two images; in this study, however, relative orientation of all the images after No. 2 with respect to image No. 1 in Figure 11 is performed simultaneously. That is, image 1 is assumed to have been taken with no inclination at the origin of the relative coordinate system, and the relative positions and rotation angles of images 2 onward are obtained at the same time. Furthermore, in the relative orientation of this research, the interior orientation parameters of the camera are also set as unknowns common to all images and are estimated in the same adjustment. Figure 12 shows the coplanarity condition for images No. 1 and No. 5 only. The principal points of these two images and a common point P lie on a single plane, the epipolar plane.

Figure 11. Positional relation of vertical images.


Figure 12. Coplanarity condition of two vertical images.

These two principal points and the common point P thus define the epipolar plane. Hereinafter, the details of the present method are described based on this figure.

Let the principal points of the two images be O1(0, 0, 0) and O5(Bx, By, Bz), and let the image points of P be p1(x1, y1) and p5(x5, y5). The relationship between the two images is then expressed by the following coplanarity equation.

$$\begin{vmatrix} B\_x & B\_y & B\_z \\ X\_1 & Y\_1 & Z\_1 \\ X\_5 & Y\_5 & Z\_5 \end{vmatrix} = 0 \tag{4}$$

where,

$$\begin{pmatrix} X\_1 \\ Y\_1 \\ Z\_1 \end{pmatrix} = \begin{pmatrix} x\_1 \\ y\_1 \\ -f \end{pmatrix}, \quad \begin{pmatrix} X\_5 \\ Y\_5 \\ Z\_5 \end{pmatrix} = \mathbf{R} \begin{pmatrix} x\_5 \\ y\_5 \\ -f \end{pmatrix} + \begin{pmatrix} B\_x \\ B\_y \\ B\_z \end{pmatrix}$$

$$\mathbf{R} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & -\sin\omega \\ 0 & \sin\omega & \cos\omega \end{pmatrix} \begin{pmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{pmatrix} \begin{pmatrix} \cos\kappa & -\sin\kappa & 0 \\ \sin\kappa & \cos\kappa & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

ω, φ, κ: rotation angles of image No. 5; f: focal length.
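Numerically, the coplanarity condition of Eq. (4) is the determinant of the matrix formed by the baseline and the two image rays. The sketch below, with an assumed baseline and image points, evaluates that determinant; it is zero exactly when the two rays and the baseline are coplanar.

```python
import numpy as np

def rotation(omega, phi, kappa):
    """Rotation matrix R of Eq. (4), the product R_omega R_phi R_kappa."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    r_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    r_phi = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    r_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return r_omega @ r_phi @ r_kappa

def coplanarity_residual(B, p1, p5, f, omega, phi, kappa):
    """Determinant of Eq. (4); zero when the baseline and both rays are coplanar."""
    v1 = np.array([p1[0], p1[1], -f])
    v5 = rotation(omega, phi, kappa) @ np.array([p5[0], p5[1], -f]) + B
    return np.linalg.det(np.vstack([B, v1, v5]))

# Illustrative case: horizontal baseline, no rotation. Rays with equal
# y-coordinates on the sensor then satisfy the coplanarity condition.
B = np.array([1.0, 0.0, 0.0])
res = coplanarity_residual(B, (0.02, 0.01), (0.01, 0.01), 0.0088, 0.0, 0.0, 0.0)
```

In the orientation itself, this residual is what the least-squares adjustment drives toward zero for every pair of corresponding points.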

The relative distance Bzi obtained by Eq. (3) is substituted for Bz in Eq. (4). That is, whereas Bx is normally treated as the fixed value in the coplanarity condition, here the relative orientation is performed with Bz as the known quantity. Furthermore, since the interior orientation parameters common to all images are also treated as unknowns in this orientation, they must be taken into account in the image coordinates of the image points p1 and p5. That is, with the principal point position (xp, yp), the scale factors a1, a2, a3, and a4, the radial lens distortion coefficients k1, k2, and k3, and the tangential distortion coefficients p1 and p2, the coordinates (xi, yi) (i = 1, 5) in Eq. (4) are obtained by converting the pixel coordinates (ui, vi) (i = 1, 5) observed in each image with the following equation.

$$\begin{aligned} x\_i &= x'\_i + x'\_i\left(k\_1 r^2 + k\_2 r^4 + k\_3 r^6\right) + p\_1\left(r^2 + 2{x'\_i}^2\right) + 2p\_2 x'\_i y'\_i \\ y\_i &= y'\_i + y'\_i\left(k\_1 r^2 + k\_2 r^4 + k\_3 r^6\right) + p\_2\left(r^2 + 2{y'\_i}^2\right) + 2p\_1 x'\_i y'\_i \end{aligned} \tag{5}$$

where, k1, k2, k3: coefficients of radial distortion; p1, p2: coefficients of tangential distortion; $r = \sqrt{{x'\_i}^2 + {y'\_i}^2}$; $u\_i = x\_p + a\_1 x'\_i + a\_2 y'\_i$; $v\_i = y\_p + a\_3 x'\_i + a\_4 y'\_i$; x′i, y′i: measurement point (mm); ui, vi: measurement point (pixel); xp, yp: principal point (pixel); a1, a2, a3, a4: scale factors.
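The conversion of Eq. (5) can be sketched as follows: the observed pixel coordinates are first mapped back through the affine relation (xp, yp, a1–a4) and the distortion terms are then applied. All parameter values in the example are assumed, not calibration results from the chapter.

```python
def pixel_to_image(u, v, xp, yp, a1, a2, a3, a4, k1, k2, k3, p1, p2):
    """Sketch of Eq. (5): convert observed pixel coordinates (u, v) into
    the image coordinates (x, y) used in the coplanarity condition."""
    # Invert the affine relation u = xp + a1*x' + a2*y', v = yp + a3*x' + a4*y'
    det = a1 * a4 - a2 * a3
    du, dv = u - xp, v - yp
    x1 = (a4 * du - a2 * dv) / det   # x'_i (mm)
    y1 = (-a3 * du + a1 * dv) / det  # y'_i (mm)
    # Radial and tangential distortion terms of Eq. (5)
    r2 = x1 * x1 + y1 * y1
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    x = x1 + x1 * radial + p1 * (r2 + 2 * x1 * x1) + 2 * p2 * x1 * y1
    y = y1 + y1 * radial + p2 * (r2 + 2 * y1 * y1) + 2 * p1 * x1 * y1
    return x, y

# With zero distortion and an assumed square pixel pitch of 645.16 px/mm,
# a point one millimeter right of the principal point maps to x = 1 mm.
x, y = pixel_to_image(2645.16, 2000.0, 2000.0, 2000.0,
                      645.16, 0.0, 0.0, 645.16,
                      0.0, 0.0, 0.0, 0.0, 0.0)
```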

By sequentially deriving the coplanarity equations for each pair based on image 1, the parameters shown in Table 2 become the unknowns of this relative orientation. Each set of corresponding points observed across the images yields one coplanarity equation, so enough corresponding points must be acquired that the number of coplanarity equations exceeds the number of unknowns. For example, with five images the number of unknowns is 10 + 5 × (5 − 1) = 30; if 8 or more corresponding points are obtained, the number of coplanarity equations becomes 8 × (5 − 1) = 32 or more, and a solution can be obtained.


#### Table 2.

Unknown parameters of relative orientation.
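The counting argument above can be written out directly. The split into 10 shared interior orientation parameters and 5 exterior unknowns per additional image (Bz being known from Eq. (3)) is inferred from the text and Table 2.

```python
def relative_orientation_counts(n_images, n_points):
    """Unknowns vs. coplanarity equations in the simultaneous relative
    orientation sketched above. Assumes 10 interior orientation
    parameters shared by all images (Table 2) and 5 exterior unknowns
    per image after the first, since Bz is treated as known."""
    unknowns = 10 + 5 * (n_images - 1)
    # one coplanarity equation per corresponding point per pair with image 1
    equations = n_points * (n_images - 1)
    return unknowns, equations

# The example from the text: 5 images and 8 corresponding points.
u, e = relative_orientation_counts(5, 8)
print(u, e, e >= u)
```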


#### 6.2.3 Calculation of 3D actual coordinates

Since the relative orientation parameters and the interior orientation parameters of all the images were obtained by the above processing, the 3D relative coordinates of each measurement point are calculated here under the collinearity condition. The collinearity condition requires that the ground point (X, Y, Z), the image point (x, y) on the sensor, and the principal point (X0, Y0, Z0) lie on a straight line; it is expressed by the following collinearity equations.


$$\begin{aligned} x &= -f \frac{a\_{11}(X - X\_0) + a\_{12}(Y - Y\_0) + a\_{13}(Z - Z\_0)}{a\_{31}(X - X\_0) + a\_{32}(Y - Y\_0) + a\_{33}(Z - Z\_0)} \\ y &= -f \frac{a\_{21}(X - X\_0) + a\_{22}(Y - Y\_0) + a\_{23}(Z - Z\_0)}{a\_{31}(X - X\_0) + a\_{32}(Y - Y\_0) + a\_{33}(Z - Z\_0)} \end{aligned} \tag{6}$$

where, (X, Y, Z): ground point; (X0, Y0, Z0): principal point (camera position); a11, …, a33: elements of the rotation matrix composed of the rotation angles ω, φ, κ of each image (the same product form as R in Eq. (4)); f: focal length.


That is, two collinearity equations are obtained for each image per measurement point, so if there are two or more images, 2 × 2 = 4 or more collinearity equations are available for the three unknown coordinates, and they can be solved. As a result, 3D relative coordinates are obtained for all measurement points.
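A minimal sketch of the collinearity projection of Eq. (6) follows; the camera pose and point are illustrative values, and the rotation matrix is taken as the identity (a vertical, non-rotated image).

```python
import numpy as np

def project(point, cam, A, f):
    """Collinearity equations (Eq. (6)): image coordinates (x, y) of a
    ground point seen from a camera at `cam` with rotation matrix A
    (elements a_ij) and focal length f."""
    d = A @ (np.asarray(point, dtype=float) - np.asarray(cam, dtype=float))
    return -f * d[0] / d[2], -f * d[1] / d[2]

# Vertical, non-rotated camera 80 m above the origin (illustrative values);
# a point 8 m off-axis lands 8/80 of the focal length from the image center.
A = np.eye(3)
x, y = project((8.0, 0.0, 0.0), (0.0, 0.0, 80.0), A, 0.0088)
```

Each observed image point contributes one such (x, y) pair, which is exactly the "two equations per image per point" counted above.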

Further, all the obtained 3D relative coordinates are converted to real-scale coordinates using the length given as the known quantity, as shown in Figure 10. That is, the 3D relative coordinates of all measurement points are multiplied by the ratio of the actual length to the corresponding length in the relative model. When converting to real-scale coordinates, the coordinate origin and coordinate axes must also be set.
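The scale conversion described above amounts to multiplying every relative coordinate by one common factor, the ratio of the known distance to the same distance in the relative model. The model coordinates below are assumed, illustrative values.

```python
import math

# Known real distance between targets No. 27 and 35 (Figure 10).
known_distance = 14.831  # m

# Relative (model-scale) coordinates of the two targets; assumed values.
p27_model = (0.0, 0.0, 0.0)
p35_model = (1.0, 0.5, 0.1)

# One scale factor converts the whole relative model to real scale.
scale = known_distance / math.dist(p27_model, p35_model)
p35_real = tuple(scale * c for c in p35_model)
```

After scaling, the distance between the two targets in the model equals the known 14.831 m by construction.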

#### 6.2.4 Absolute orientation

Since real-scale 3D coordinates of all measurement points were obtained by the above processing, the interior orientation parameters common to all images and the exterior orientation parameters of each image are now determined by absolute orientation. In other words, in this orientation, all collinearity equations are derived with all measurement points whose 3D coordinates have been obtained serving as ground reference points, and the interior orientation parameters shown in Table 2 and all exterior orientation parameters (the camera position and attitude angles of each image) are obtained simultaneously. This completes the absolute orientation of each image.

#### 6.2.5 Final orientation

The orientation parameters of every camera and the absolute 3D coordinates of every measurement point were acquired by the procedure described above. However, errors in the estimated absolute 3D coordinates are possible because the conversion from the relative coordinates relies on only one given distance. Therefore, as the final stage of the measurement process, all orientation parameters and all measurement point coordinates are treated as unknowns and a final, simultaneous orientation is carried out.

#### 6.3 Checking accuracy


In order to evaluate the performance of the proposed method, images were taken at the UAV test site and the measurement accuracy was verified. The images were taken by the UAV in the vertical direction from the center of the test site, every 5 m over a ground-altitude range of approximately 70–90 m, giving 5 photos in total, as shown in Figure 13. The 3D coordinates of 39 anti-aircraft signs visible in all 5 photos were then calculated by the developed method, and the accuracy was verified from the residuals against the known coordinates. As shown in Figure 14, the origin was set at anti-aircraft sign No. 27, the X axis in the direction of No. 35, the plane formed by these two points and a third point as the XY plane, and the direction perpendicular to the XY plane as the Z axis. Although the proposed orientation method can, in principle, acquire the 3D coordinates of the anti-aircraft signs other than the origin using only 2 images, with so few images the number of observation equations barely exceeds the number of unknowns, and the convergence of the least-squares calculation becomes unstable. In trials with the images taken in this research, it was difficult to obtain a stable convergent solution with 4 or fewer images, so all 5 images were used. Table 3 shows the final orientation results for the 5 photos. Since the ground altitude in the table is an approximate value obtained by single-point positioning with the GPS mounted on the UAV, it differs by several meters from the Z coordinate of the orientation result.
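The local coordinate system described above (origin at No. 27, X axis toward No. 35, Z axis normal to the plane of three targets) can be sketched as follows; the input coordinates are illustrative, not the chapter's data.

```python
import numpy as np

def local_frame(origin, x_point, third_point):
    """Build the local frame: rows of the returned matrix are the local
    X, Y, Z axes as unit vectors in world coordinates."""
    x = x_point - origin
    x = x / np.linalg.norm(x)
    n = np.cross(x, third_point - origin)  # normal of the XY plane
    z = n / np.linalg.norm(n)
    y = np.cross(z, x)                     # completes a right-handed frame
    return np.vstack([x, y, z])

def to_local(frame, origin, point):
    """Express a world point in the local coordinate system."""
    return frame @ (point - origin)

# Illustrative targets: No. 27 at the origin, No. 35 defining the X axis,
# and a third (non-collinear) target fixing the XY plane.
p27 = np.array([0.0, 0.0, 0.0])
p35 = np.array([3.0, 4.0, 0.0])
p3rd = np.array([1.0, 5.0, 0.0])
F = local_frame(p27, p35, p3rd)
```

By construction, No. 35 lands on the positive X axis of the local system at its true distance from the origin.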

As shown in Table 4, the accuracy verification showed that the root mean square error was within 0.200 m for both plane and height. Applied to surveying at an earthmoving site, a 3D point group with a position accuracy of 0.20 m is considered applicable to partial payment measurement, so the method was recognized as applicable as a simple method for earthwork.

Meanwhile, for comparison, the measurement accuracy was also calculated with general photogrammetry software, PhotoScan. The 5 images shown in Figure 13 were imported into PhotoScan as well.

Figure 13. Vertical images for checking accuracy. (a) 70 m, (b) 75 m, (c) 80 m, (d) 85 m, and (e) 90 m.



#### Figure 14. Local coordinate system.


#### Table 3. Results of final orientation.



The 3D coordinates and measurement accuracy of the anti-aircraft signs were then calculated with PhotoScan. Two patterns were tried: one using only No. 27, 35, and 62 as reference points, and one using all 39 points as reference points. In addition, a comparison was made with shooting by the general photogrammetric method: with the ground altitude held constant at approximately 70 m, the UAV flew parallel courses, and a total of 57 images covering the entire test site with an overlap rate of 80% were taken and imported into PhotoScan. The 3D coordinates and measurement accuracy for this case were also calculated, using 9 reference points, the same number as in standard photogrammetry. As an index for evaluating each measurement accuracy obtained above, the standard accuracy generally used in photogrammetry was calculated by the following equation [13].

$$\sigma\_x = \sigma\_y = \frac{H}{f}\sigma\_p, \quad \sigma\_z = \sqrt{2}\,\frac{H}{f}\,\frac{H}{B}\,\sigma\_p \tag{7}$$

where, σx, σy, σz: standard error for each axis (m); H: altitude (m); f: focal length (m); B: base line (m); σp: pointing accuracy (m).

Since five vertical images are used in this study, H in Eq. (7) was set to the average ground altitude after orientation of the 5 images (83.944 m), and B to the distance between the two most distant images (20.137 m). For the pointing accuracy σp, one pixel was used, as in general photogrammetry, converted into a length on the camera sensor.
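Substituting the values reported here into Eq. (7) gives the standard accuracy directly. The focal length and pixel pitch below are assumed values for a small UAV camera, not figures stated in this section.

```python
import math

# Standard accuracy of Eq. (7) with the values reported in the text.
H = 83.944        # mean ground altitude after orientation (m)
B = 20.137        # distance between the two most distant images (m)
f = 0.0088        # focal length (m), assumed
sigma_p = 4.4e-6  # pointing accuracy: one pixel on the sensor (m), assumed

sigma_xy = H / f * sigma_p                            # plane standard accuracy
sigma_z = math.sqrt(2) * (H / f) * (H / B) * sigma_p  # height standard accuracy

print(f"plane: {sigma_xy:.3f} m, height: {sigma_z:.3f} m")
```

Under these assumptions the plane standard accuracy is a few centimeters while the height standard accuracy is several times larger, which is consistent with the comparison discussed below.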

As a result, when only the 5 vertical images were used, the plane accuracy was lower than the standard accuracy for both the proposed method and PhotoScan; for the height accuracy, however, only the proposed method exceeded the standard accuracy. In other words, it was confirmed that the proposed method can obtain accuracy equivalent to that of ordinary photogrammetry, especially in the height direction, even though the imaging method is simple and no ground reference points are required. On the other hand, when the images taken by general parallel imaging were processed by PhotoScan, the accuracy was high enough to be applicable to volume control of earthworks. From these results, the shooting method for the UAV should be selected according to the situation; nevertheless, the proposed method is useful for grasping the site situation easily and in a short time.

#### 6.4 Consideration of the results

Figure 15 shows the distribution of residuals of the X and Y coordinates at the 39 verification points, drawn as arrows. Within a range of about 10–20 m from the origin, the residuals at most verification points are within ±0.04 m, which is equivalent to the standard accuracy; however, at points No. 45, 50, 51, 55, 61, and 66, the residuals are around ±0.2 m, and the accuracy clearly deteriorates. The verification points are distributed over a relatively wide area, about 40 m in the X direction and 50–60 m in the Y direction. In other words, it is presumed that the lower accuracy at the verification points whose Y coordinates are far from the origin is caused by the decrease in Y coordinate accuracy in the proposed method. From these results, it is considered preferable to set the origin as close to the measurement object as possible when applying this method in the field.

Figure 14. Local coordinate system.

Figure 15. Error distribution of the proposed method.

Table 3. Results of final orientation.

Table 4. Results of checking accuracy.
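The screening of verification points described in this consideration can be sketched as follows; the coordinates and residuals below are hypothetical illustrations, not the measured values, with 0.04 m taken as a threshold equivalent to the standard accuracy:

```python
import math

def flag_large_residuals(points, threshold=0.04):
    """Return (id, distance-from-origin, residual) for points whose
    horizontal residual magnitude exceeds the threshold.

    points: dict of id -> (x, y, dx, dy), coordinates and residuals in
    metres, in a local system whose origin is near the measurement object.
    """
    flagged = []
    for pid, (x, y, dx, dy) in points.items():
        residual = math.hypot(dx, dy)
        distance = math.hypot(x, y)  # distance from the local origin
        if residual > threshold:
            flagged.append((pid, round(distance, 1), round(residual, 3)))
    return flagged

# Hypothetical residuals illustrating the observed trend: accuracy
# degrades for points whose Y coordinate is far from the origin.
sample = {
    27: (12.0, 15.0, 0.01, 0.02),
    35: (18.0, 22.0, 0.02, 0.03),
    45: (25.0, 48.0, 0.05, 0.18),
    61: (30.0, 55.0, 0.04, 0.20),
}
print(flag_large_residuals(sample))
```

Plotting the flagged residuals against the distance from the origin makes the dependence on origin placement visible, which supports the recommendation to set the origin close to the measurement object.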

On the other hand, in order to confirm the utility of 3D measurement by UAV, the measurement accuracy of this result was compared with the measurement accuracies reported for satellite images [14] and aerial images [15]. The RMSEs of measurement using satellite images were 0.3 to 1.0 m, and those using aerial images were 0.1 to 0.5 m; these results depended on the number of GCPs used. Therefore, it can be said that the UAV enables accurate measurement within a limited area.
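The RMSE figures quoted for such comparisons are computed in the usual way from check-point residuals; a minimal sketch with hypothetical residual values (not the measured ones):

```python
import math

def rmse(errors):
    """Root-mean-square error of check-point residuals (m)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical check-point residuals (m) for two data sources.
uav_errors = [0.08, -0.11, 0.09, -0.07]
aerial_errors = [0.25, -0.30, 0.18, -0.22]

print(f"UAV:    {rmse(uav_errors):.3f} m")
print(f"aerial: {rmse(aerial_errors):.3f} m")
```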

### 7. Conclusions

In this research, we developed a method for performing aerial photogrammetry from images taken vertically by a UAV, without using ground reference points. In addition, in order to evaluate the performance of the developed method, its surveying accuracy was compared with that of general photogrammetry software. Since the developed method uses only a small number of vertically taken images, the imaging effort is presumed to be reduced compared with the usual method. Also, since ground reference points are unnecessary, no preparation for imaging is required.

On the other hand, the accuracy was verified by comparison with a ground survey by total station. Although the accuracy is inferior to that obtained with the general imaging method and software, it was confirmed that measurement with practical accuracy is possible: the general method achieves about 0.040 m, whereas the proposed method achieves about 0.100 m. Also, since the shooting method simply flies the UAV vertically and takes several images, the time and labor involved in shooting can be drastically reduced. From these facts, the developed method is expected to be used for surveying current conditions at earthmoving sites and for grasping the damage situation at the time of a disaster.

As a future task, we need to consider means for further improving the accuracy. In particular, since the accuracy of this method was confirmed to decrease at points far from the origin, it is desirable to stabilize the accuracy with respect to the position of the measurement point. Specific countermeasures include verifying the optimum number of photos according to the situation, verifying the optimum altitude difference between photos, and using GNSS (GPS) positioning information during UAV flight. In this study, 3D coordinates were obtained as local coordinates without using ground reference points; however, further discussion is needed on a method for efficiently obtaining global coordinates such as planar rectangular coordinates.

## Author details


Yoichi Kunii Department of Landscape Architecture Science, Tokyo University of Agriculture, Tokyo, Japan

\*Address all correspondence to: y3kunii@nodai.ac.jp

DOI: http://dx.doi.org/10.5772/intechopen.82626

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## References

[1] Valavanis K, Vachtsevanos G, editors. Handbook of Unmanned Aerial Vehicles. Netherlands: Springer; 2015. DOI: 10.1007/978-90-481-9707-1

[2] Beaudoin L, Avanthey L, Gademer A, Roux M, Rudant J. Dedicated payloads for low altitude remote sensing in natural environments. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XL-3/W3. La Grande Motte, France; 2015. pp. 405-410

[3] Galarreta J, Kerle N, Gerke M. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning. Natural Hazards and Earth System Sciences. 2015;15:1087-1101. DOI: 10.5194/nhess-15-1087-2015

[4] Li M, Li D, Fan D. A study on automatic UAV image mosaic method for paroxysmal disaster. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXIX-B6. Melbourne, Australia; 2012. pp. 123-128

[5] Barazzetti L, Brumana R, Oreni D, Previtali M, Roncoroni F. True-orthophoto generation from UAV images: Implementation of a combined photogrammetric and computer vision approach. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. II-5. Riva del Garda, Italy; 2014. pp. 77-81

[6] Feifei X, Zongjian L, Dezhu G, Hua L. Study on construction of 3D building based on UAV images. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXIX-B1. Melbourne, Australia; 2012. pp. 469-473

[7] Tanzi T, Chandra M, Isnard J, Camara D, Sebastien O, Harivelo F. Towards "drone-borne" disaster management: Future application scenarios. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. III-8. Prague, Czech Republic; 2016. pp. 181-189

[8] Amrullah C, Suwardhi D, Meilano I. Product accuracy effect of oblique and vertical non-metric digital camera utilization in UAV-photogrammetry to determine fault plane. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XLI-B1. Prague, Czech Republic; 2016. pp. 41-48

[9] Westoby MJ, Brasington J, Glasser NF, Hambrey MJ, Reynolds JM. 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology. 2012;179:300-314

[10] Bagheri O, Ghodsian M, Saadatseresht M. Reach scale application of UAV + SfM method in shallow rivers hyperspatial bathymetry. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XL-1/W5. Kish Island, Iran; 2015. pp. 77-81

[11] Longhitano G, Quintanilha J. Rapid acquisition of environmental information after accidents with hazardous cargo through remote sensing by UAV. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XL-1/W1. Hannover, Germany; 2013. pp. 201-205

[12] Persad R, Armenakis C. Co-registration of DSMs generated by UAV and terrestrial laser scanning systems. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XLI-B1. Prague, Czech Republic; 2016. pp. 985-990

[13] Yanagi H, Chikatsu H. Performance evaluation of 3D modeling software for UAV photogrammetry. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XLI-B5. Prague, Czech Republic; 2016. pp. 147-152

[14] Rupnik E, Deseilligny MP, Delorme A, Klinger Y. Refined satellite image orientation in the free open-source photogrammetric tools Apero/Micmac. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. III-1. Prague, Czech Republic; 2016. pp. 83-90. DOI: 10.5194/isprsannals-III-1-83-2016

[15] Jung J, Bang K, Sohn G, Armenakis C. Matching aerial images to 3D building models based on context-based geometric hashing. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. III-1. Prague, Czech Republic; 2016. pp. 17-23. DOI: 10.5194/isprsannals-III-1-17-2016


**Advanced Remote Sensing Technology for Synthetic Aperture Radar Applications, Tsunami Disasters, and Infrastructure**

*Edited by Maged Marghany*

Advances in space technology have created novel means of observing and monitoring the Earth from space. Most Earth-observation remote sensing work relies on conventional image processing algorithms or classic edge detection tools. Nevertheless, these techniques do not exploit modern physics, applied mathematics, signal communication, remote sensing data, and innovative space technologies. This book provides readers with methods for monitoring coastal environments, disaster areas, and infrastructure from space with advanced remote sensing technology, bridging the gaps between modern space technology, image processing algorithms, mathematical models, and the critical issues of coastal and infrastructure investigation.

Published in London, UK © 2019 IntechOpen © vvvita / iStock