### *3.1.2 Remote sensing sensors potential terrain swath coverage in disaster events (RSTSCp)*

In emergency scenarios, estimating the remote sensing sensors' potential terrain swath coverage in nadir and off-nadir angles (RSTSCp), implemented on the satellite platform through roll maneuvers, is an effective and reliable operational strategy. It forecasts, for diverse disaster events, the terrain swath width expected to be scanned by the remote sensing sensors in future satellite passes, using different sensor view angles over the terrain or areas to be covered in a planned mission. It is therefore an important strategy in disaster management, because it makes it possible to predict and plan in advance which terrain extensions affected by a disaster can be explored by the satellite sensors. Fundamentally, three mathematical approaches can be used to calculate the RSTSCp: the oblique spherical triangle method, the spherical method using intersecting lines, and the planar surface projection method [5]. The oblique spherical triangle method, based on the earth model illustrated in **Figure 4**, is the method selected here to predict the RSTSCp, because it is the most reliable and accurate of the three for this operational calculation.

The oblique spherical triangle method rests on a simple geometric construction: a straight line is projected from the remote sensing satellite in orbit down to the earth's surface, creating at the intersection point between the projected line and the surface an angle designated with the letter *f*, as shown in **Figure 4**. The non-included angle, designated *α* and subtended at the earth's center, corresponds to the remote sensing sensor's instantaneous field of view (IFOV) and represents the smallest solid angle subtended by the sensor opening from a specific height in orbit at a given instant over the earth's surface. Generally speaking, the IFOV defines the area on the ground viewed by the sensor at a given instant of time, that is, the ground dimension of each pixel over the scanned surface. The triangle in **Figure 4** is completed by imaginary lines representing the remote sensing satellite range or height (*h*) in orbit and the earth radius (*re*), together with the boresight angle or sensor FOV (*s*) at the satellite, so that *α*, *f*, and *s* altogether form a triangle [6]. As a result, applying the law of sines to the triangle formed in **Figure 4** makes it feasible to calculate the remote sensing sensor potential terrain swath coverage in nadir and off-nadir angles (RSTSCp). The mathematical formulation using the law of sines to estimate the *RSTSCp* is discussed next.

#### **Figure 4.**

*Oblique spherical triangle method to predict the remote sensing sensor potential terrain swath coverage in nadir angle and off-nadir angle (RSTSCp).*

Since the three angles (*α*, *f*, *s*) described in **Figure 4** must sum to 180°, *f* = 180° − *α* − *s*; solving for *α* through the law of sines gives Eq. (2):

$$\alpha = \sin^{-1}\left(\frac{\sin(s) \cdot (r_e + h)}{r_e}\right) - s \tag{2}$$

where α is the non-included angle (IFOV); *s* is the boresight angle (FOV); *re* is the radius of the earth; and *h* is the satellite height.

The remote sensing sensor potential terrain swath coverage in nadir and off-nadir angle (RSTSCp) is then computed with Eq. (3):

$$\text{RSTSC}_p = \left(\frac{\alpha}{2\pi}\right) \cdot r_e \tag{3}$$


where *RSTSCp* is the remote sensing sensor potential terrain swath coverage; α is the non-included angle (IFOV); and *re* is the radius of the earth.

For instance, to demonstrate the previous mathematical formulation for the high-resolution camera (HRC) of the Remote Sensing Satellite-2, a field of view angle after a roll maneuver in orbit of FOV (*s*) = 17° is considered (12 degrees under the maximum FOV reached by this camera through the roll maneuver strategy), together with an average range or height in orbit of 645 km and an earth radius of 6378.137 km. Taking as a reference the triangle illustrated in **Figure 4**, which geometrically describes the oblique spherical triangle method, Eq. (2) yields for this camera an IFOV *α* = 1.78°, and Eq. (3) then gives a potential terrain swath coverage off-nadir (RSTSCp) of 1807.81 km. This result shows that the HRC of the Remote Sensing Satellite-2, in successive passes over different adjacent orbits enabled by the roll maneuver strategy, has the capacity to cover an extension of 1807.81 km of land over a defined territory.
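As a cross-check, Eqs. (2) and (3) are straightforward to script. Below is a minimal Python sketch assuming only the values quoted in this example; the helper `rstsc_p` is hypothetical, and Eq. (3) is applied exactly as written above, with *α* expressed in degrees.

```python
import math

def rstsc_p(fov_deg: float, h_km: float, re_km: float = 6378.137):
    """Oblique spherical triangle method, Eqs. (2)-(3). Hypothetical helper;
    Eq. (3) is applied as written in the text, with alpha in degrees."""
    s = math.radians(fov_deg)  # boresight angle (FOV)
    # Eq. (2): law of sines on the satellite / earth-center / target triangle
    alpha_deg = math.degrees(math.asin(math.sin(s) * (re_km + h_km) / re_km)) - fov_deg
    # Eq. (3): potential terrain swath coverage
    swath_km = (alpha_deg / (2.0 * math.pi)) * re_km
    return alpha_deg, swath_km

alpha, swath = rstsc_p(fov_deg=17.0, h_km=645.0)
print(f"IFOV alpha = {alpha:.2f} deg")  # ~1.78 deg
print(f"RSTSC_p    = {swath:.1f} km")   # ~1807 km, as in the worked example
```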

Therefore, given that the maximum swath coverage of the high-resolution camera (HRC) at 29° off-nadir (the maximum off-nadir angle) is 709 km (information specified in **Table 2**) and the potential terrain swath coverage off-nadir (RSTSCp) calculated from Eq. (3) is 1807.81 km, a period of 1807.81/709 ≈ 2.5 days is estimated for successive passes of the Remote Sensing Satellite-2 over different adjacent orbits, with the HRC at a FOV (*s*) of 17°, to cover the terrain extension obtained from the RSTSCp calculation. **Figure 5** shows the Remote Sensing Satellite-2 HRC potential view capacity with a maximum field of view of +29°, achieved through the roll maneuver, covering a territory of 916,445 km<sup>2</sup> in consecutive passes.

**Figure 5.**


*Remote Sensing Satellite-2 high-resolution camera (HRC) potential view capacity with field of view at +29°.*


In summary, predicting the remote sensing sensor potential terrain swath coverage in nadir and off-nadir angles (RSTSCp) is an operational procedure useful for planning image collection opportunities over the diverse areas that must be scanned immediately after disaster events, or over zones facing imminent hazard situations. More accurate RSTSCp results can be obtained in real operation by using satellite ranging data measured periodically from the ephemerides predictions, information provided by the operational software packages installed in the remote sensing satellites' ground control stations; the satellite flight height in orbit influences the sensors' field of view (FOV) performance, which in turn affects the sensor swath coverage on the explored surface and the resolution of the captured images. At the same time, besides the operational procedures implemented to manage the remote sensing satellites' roll maneuvers in orbit, whose aim is to change the cameras' field of view (FOV) angles to enhance coverage and revisit capability over the distinct areas affected by disaster events, other important technical aspects mentioned earlier in this chapter, related to the cameras' spatial resolution, must also be considered to improve the remote sensing satellites' operational performance inside the emergency communications network. Sensor parameters such as the pixel size at nadir and off-nadir angles and the sensor dwell time for an along-track scan are therefore estimated in the following sections, as part of the strategies proposed to accomplish better coverage and image capture over the areas required in the course of emergency response in disasters.

### *3.1.3 Remote sensing sensors pixels size estimation at nadir and off-nadir angles for disaster management*

The images captured by a remote sensing sensor have a particular structure: a format integrated by a matrix of organized rows and columns of cells (pixels), denominated altogether raster imagery. In this sense, one pixel constitutes the smallest physical point sampled in a raster image, and the pixel size represents the smallest point size on the surface captured by the sensor as a function of the sensor's instantaneous field of view (IFOV). The sensor pixel resolution is affected by changes in the sensor scan angle due to the roll maneuver strategy, among other operational aspects, which causes variations in the pixel dimensions: pixels become increasingly distorted away from nadir as the view zenith angle increases. For this reason, the remote sensing sensor resolution looks distorted in both the along-track and across-track directions at the extreme edges of the scanned surface [7].

The image pixel size is thus an important sensor performance characteristic that must be estimated whenever the sensor scan angle is changed through satellite roll maneuvers to increase the potential swath coverage off-nadir over a specific extension of terrain in a previously planned region. Pixel size estimation at nadir and off-nadir angles in disaster events is a useful method to define how much the sensor resolution can vary, through the pixel spatial size variation along-track and across-track. It also helps to define the relation between the sensor resolution variation and the different scan angles or FOV, as well as the influence of each FOV angle on the resolution of the images captured over the terrain during the diverse roll maneuvers required in orbit in case of emergency. The geometrical characterization of the remote sensing pixel size at nadir and off-nadir angles is described in **Figure 6**, which graphically represents the sensor FOV angle changes and their influence on the pixel size variation over the ground resolution cells.

**Figure 6.**
*Pixels size geometrical characterization in nadir and off-nadir angles.*

In particular, the Remote Sensing Satellite-1 and Remote Sensing Satellite-2, the satellite platforms considered to integrate the emergency communications network proposed in this chapter, are designed with cameras whose resolution is adequate to observe the geometry of diverse targets and the characteristics of the phenomena associated with disaster events. The resolution of the sensors belonging to these platforms is represented by the ground sampling distance (GSD), with a defined spatial size for each pixel as a function of the sensor pointing angle or field of view (FOV) at nadir or off-nadir. The camera resolution characteristics and respective spatial pixel sizes are as follows. The Remote Sensing Satellite-1 payload comprises two (02) PAN and multispectral cameras (PMC), designed with PAN and MS detectors that operate simultaneously during image capture; the panchromatic (PAN) sensor has a ground sampling distance (GSD) in nadir ≤ 2.5 m with a pixel spatial size ≤ 6.25 m<sup>2</sup>; in multispectral (MS) function, the sensor has a GSD in nadir ≤ 10 m with a pixel spatial size ≤ 100 m<sup>2</sup>; this satellite platform is also designed with two (02) wide swath multispectral cameras (WMC), which operate in four (04) spectral bands with a GSD in nadir ≤ 16 m and a pixel spatial size ≤ 256 m<sup>2</sup>.

On the other hand, the Remote Sensing Satellite-2 has one (01) high-resolution camera (HRC) with optical sensors that produce panchromatic (PAN) and multispectral (MS) data simultaneously. In panchromatic (PAN) operation, this sensor has a ground sampling distance (GSD) in nadir ≤ 1 m with a pixel spatial size ≤ 1 m<sup>2</sup>, and in multispectral (MS) operation, a GSD in nadir ≤ 4 m with a pixel spatial size ≤ 16 m<sup>2</sup>. Likewise, on this satellite platform, the shortwave infrared (SWIR) sensor has a GSD in nadir ≤ 30 m with a pixel spatial size ≤ 900 m<sup>2</sup>, and the long wave infrared (LWIR) sensor has a GSD in nadir ≤ 60 m with a pixel spatial size ≤ 3600 m<sup>2</sup>. Overall, the resolution performance of the cameras on the satellite platforms that integrate the emergency communications network is a critical aspect that must be managed accurately, with the aim of optimizing the resolution of the captured images depending on the type of disaster event.
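For mission-planning scripts, the payload characteristics listed in the two preceding paragraphs can be kept in a small lookup structure. The sketch below merely restates those GSD and pixel-size figures as a Python dictionary; the structure, key names, and helper function are illustrative assumptions, not part of any mission software.

```python
# Nadir GSD (m) and pixel spatial size (m^2) restated from the text.
# Key names and the helper function are illustrative assumptions.
PAYLOAD_SPECS = {
    "Remote Sensing Satellite-1": {
        "PMC PAN": {"gsd_m": 2.5,  "pixel_m2": 6.25},
        "PMC MS":  {"gsd_m": 10.0, "pixel_m2": 100.0},
        "WMC":     {"gsd_m": 16.0, "pixel_m2": 256.0},
    },
    "Remote Sensing Satellite-2": {
        "HRC PAN": {"gsd_m": 1.0,  "pixel_m2": 1.0},
        "HRC MS":  {"gsd_m": 4.0,  "pixel_m2": 16.0},
        "SWIR":    {"gsd_m": 30.0, "pixel_m2": 900.0},
        "LWIR":    {"gsd_m": 60.0, "pixel_m2": 3600.0},
    },
}

def sensors_meeting(max_gsd_m: float):
    """List (satellite, sensor) pairs whose nadir GSD meets a resolution requirement."""
    return [(sat, sen)
            for sat, sensors in PAYLOAD_SPECS.items()
            for sen, spec in sensors.items()
            if spec["gsd_m"] <= max_gsd_m]

print(sensors_meeting(5.0))  # sensors usable when a GSD of 5 m or better is required
```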

The pixel size at nadir and off-nadir angles can be estimated with the following three-step mathematical formulation.

Step 1: Eq. (4) estimates the sensor field of view (FOV) swath width.

$$\text{SFOV}_{sw} = 2 \cdot h \cdot \tan\left(\frac{\beta}{2}\right) \tag{4}$$

where *SFOVsw* = sensor field of view (FOV) swath width; *h* = satellite height; tan = tangent; and β = sensor field of view (FOV).

Step 2: Using Eq. (5), the sensor effective resolution is computed.

$$\text{SE}_r = \frac{\text{SFOV}_{sw}}{\text{SP}_n} \tag{5}$$

where *SEr* = sensor effective resolution; *SFOVsw* = sensor field of view (FOV) swath width; and *SPn* = sensor pixels number.

Step 3: Finally, solving Eq. (6), the pixel size captured by the sensor is estimated.

$$\text{SP}_{se} = \left(\text{SE}_r\right)^2 \tag{6}$$

where *SPse* = sensor pixels size estimation; and *SEr* = sensor effective resolution.

To illustrate the mathematical approach formulated above for estimating the pixel size at nadir and off-nadir angles, the wide swath multispectral camera (WMC) installed in the payload of the Remote Sensing Satellite-1 is taken as an example. The WMC is a medium-resolution push broom sensor with time delay integration (TDI) and the capability to observe, in the visible range, a field of view (FOV) = 16.44° at nadir and a maximum field of view (FOV) = 31° off-nadir achieved through the roll maneuver in orbit. For this example, an average altitude or height of 650 km is assumed for the Remote Sensing Satellite-1 in orbit. First, Eq. (4) gives a WMC field of view (FOV) swath width at nadir ≤ 187.796 km; then, given that this sensor has 12,000 pixels of 6.5 μm size, Eq. (5) yields a sensor effective resolution *SEr* ≤ 187,796/12,000 = 15.64 m at nadir, and Eq. (6) estimates the pixel size at nadir for this sensor as *SPse* ≤ 245 m<sup>2</sup>. In the same way, at the WMC maximum off-nadir pointing angle (31°), Eq. (4) gives a field of view (FOV) swath width ≤ 360.521 km; with the same 12,000 pixels of 6.5 μm size, Eq. (5) yields a sensor effective resolution *SEr* ≤ 360,521/12,000 = 30 m, and Eq. (6) computes a pixel size at this off-nadir pointing angle of *SPse* ≤ 902 m<sup>2</sup>. Analyzing these results, it is easy to deduce that the ground area represented by each pixel at the nadir pointing angle has a better resolution than the pixels at off-nadir pointing angles; this is because the spatial resolution, and hence the pixel spatial size, varies from the image center to the swath edge. These technical aspects must be considered in maneuver situations in which the sensors' pointing angles have to be changed to manage diverse disaster events in the shortest possible time.
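The three steps are easy to verify numerically. The following minimal Python sketch, with the hypothetical helper `pixel_size` and the WMC values quoted in this example, reproduces the nadir and off-nadir results:

```python
import math

def pixel_size(fov_deg: float, h_km: float, n_pixels: int):
    """Eqs. (4)-(6): FOV swath width, effective resolution, and pixel size.
    Hypothetical helper; the values below follow the WMC example in the text."""
    swath_m = 2.0 * (h_km * 1000.0) * math.tan(math.radians(fov_deg) / 2.0)  # Eq. (4)
    ser_m = swath_m / n_pixels                                               # Eq. (5)
    spse_m2 = ser_m ** 2                                                     # Eq. (6)
    return swath_m, ser_m, spse_m2

for fov in (16.44, 31.0):  # WMC FOV at nadir and at maximum off-nadir
    sw, ser, px = pixel_size(fov, h_km=650.0, n_pixels=12_000)
    print(f"FOV {fov:5.2f} deg: swath ~{sw / 1000:.1f} km, "
          f"resolution ~{ser:.2f} m, pixel ~{px:.0f} m^2")
# ~187.8 km / 15.65 m / 245 m^2 at nadir; ~360.5 km / 30.04 m / 903 m^2 at 31 deg
```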

### *3.1.4 Remote sensing sensors dwell time estimation in disaster events*

At present, there are principally two (02) types of passive sensor technologies for the optical cameras frequently used in remote sensing satellite applications for image scanning and collection over the earth's surface: whisk broom scanning sensors and push broom scanning sensors. Whisk broom sensors, also known as across-track scanners, use a mirror to scan across the satellite's ground track, reflecting the captured light into a single detector that collects the image pixels one at a time as the mirror moves back and forth [8]. In this type of sensor, the mechanism used to move the mirror makes the technology vulnerable to rapid degradation as a function of the working hours to which the mechanism is subjected; it is also an expensive technology, since it demands a special design of the moving mechanism parts. **Figure 7** describes the whisk broom scanning working principle, in which the remote sensing satellite camera sweeps in a direction perpendicular to the satellite flight path.

**Figure 7.**
*Whisk broom sensors technology scanning principle.*

Likewise, the whisk broom sensors have the following operating characteristics: each line over the earth's surface is scanned from one side of the sensor to the other by a rotating mirror, while the satellite platform moves forward over the surface. Successive scans of the mirror build up a two-dimensional image of the earth's surface, and by means of a bank of internal detectors in the camera, each one sensitive to a specific range of wavelengths, the energy for each spectral band is detected and measured; after the energy is captured by each detector as an electrical signal, it is transformed into digital data and stored on the remote sensing satellite. In whisk broom scanning, the IFOV and the satellite height in orbit define the sensor's spatial resolution, whereas the image swath is a function of the mirror sweep, represented by the sensor's angular field of view, an angle measured in degrees and used to record the pixels of the image scan lines. All whisk broom sensor data are collected on the land surface within an arc below the satellite, usually of around 90–120°.

On the other hand, push broom scanning is also referred to as along-track scanning; the sensor used here is a linear array of detectors arranged perpendicular to the flight direction of the satellite, covering all the pixels in the along-track dimension at the same time. As the spacecraft flies forward, the image is collected one line at a time, with all pixels in a line measured simultaneously [9, 10]. It is important to highlight that push broom sensors have a drawback in their varying detector sensitivity: if the detectors are not perfectly calibrated, stripes can appear in the acquired data. **Figure 8** shows the push broom scanning working principle.

**Figure 8.**
*Push broom sensors technology scanning principle.*

The push broom sensors' working principle is as follows: these optical sensors are designed with a linear matrix of detectors situated at the focal plane of the image, behind a lens system, which is pushed along-track in the direction of the satellite flight track projection over the scanned surface; the detector matrix movement is similar to the bristles of a broom being pushed along a floor. During this displacement, each detector captures or measures the energy of every land resolution cell on an individual basis; after the energy has been detected, it is sampled electronically and stored digitally on the satellite platform. The push broom sensor's spatial resolution is determined by the size of its instantaneous field of view (IFOV) angle. Also, each spectral band or channel is measured by an independent linear matrix; these linear matrixes normally consist of numerous charge-coupled devices (CCDs) positioned end to end.

A push broom sensor receives a stronger signal than a whisk broom scanner because it looks at each pixel area for longer; this provides a much longer detector dwell time on each surface pixel than an across-track scanner, allowing much higher sensitivity and a narrower bandwidth of observation, an operating characteristic that improves the radiometric resolution. Generally speaking, the sensor dwell time is the amount of time the scanner has to collect photons from a ground resolution cell. The dwell time depends on factors such as the satellite speed, the width of the scan line, the time per scan line, and the time per pixel. It is therefore a sensor performance parameter that must be estimated whenever the remote sensing satellite sensors' view angle is changed through roll maneuvers to scan disaster-affected areas from different scan angles, due to its impact on the sensors' radiometric resolution.

Since remote sensing satellites with push broom sensors are the platforms proposed to integrate the planned emergency communications network, the mathematical approach applicable to calculate the dwell time for a push broom along-track scan is specified in Eq. (7):

$$\text{DT}_{ats} = \frac{\text{GR}_{ce}}{\text{Sat}_v} \tag{7}$$

where *DTats* = dwell time for along-track scan; *GRce* = ground resolution cell; and *Satv =* satellite orbital velocity.

From the above mathematical approach, considering the Remote Sensing Satellite-2 high-resolution camera (HRC) specifications in the multispectral (MS) band, with a ground resolution cell of ≤4 m ∙ ≤4 m (information specified in Section 3.1.3) and a Remote Sensing Satellite-2 mean orbit velocity of 7.8 km/s, Eq. (7) gives the dwell time for the HRC along-track scan: *DTats* = (≤4 m per cell)/(7.8 km/s) ≈ ≤0.51 ms per cell. This is the average time projected for this camera to collect photons from a ground resolution cell over the earth's surface. This technical specification must be taken into consideration when maneuvering the remote sensing satellite in orbit to change the camera scanning angles, in order to know the camera's photon acquisition time on each ground resolution cell for each satellite pass over a specific disaster-affected area at the different scanning angles, since this operating characteristic influences the radiometric resolution. To refine the dwell time calculation for the along-track scan, it is recommended to use the satellite ephemerides data to obtain the projected speed in orbit, since the satellite speed is not constant and varies with the satellite's position along the orbit, a phenomenon that impacts the dwell time estimation.
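A quick numerical check of Eq. (7), assuming the HRC values used above; the helper `dwell_time_ms` is illustrative only:

```python
def dwell_time_ms(ground_cell_m: float, sat_velocity_km_s: float) -> float:
    """Eq. (7): along-track dwell time per ground resolution cell, in milliseconds.
    Illustrative helper; values below follow the HRC MS example in the text."""
    return ground_cell_m / (sat_velocity_km_s * 1000.0) * 1000.0

print(f"{dwell_time_ms(4.0, 7.8):.2f} ms per cell")  # ~0.51 ms, as in the text
```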

### **3.2 Operational procedure to manage the remote sensing sensors spectral resolution in disaster events**

The electromagnetic spectrum comprises a range of different wavelengths or spectral energies divided into regions defined as bands, and each object or target on the ground responds with a spectral reflectance inside this spectrum, that is, it has a spectral signature. In this context, the remote sensing sensors' spectral resolution describes the ability of these sensors to discriminate or capture wavelength intervals of the electromagnetic spectrum: the finer the spectral resolution, the narrower the wavelength range of a particular channel or band resolved by the sensor.

For instance, panchromatic sensors are designed with a single-channel detector and the capacity to capture or resolve spectral data over a broad wavelength range of the visible electromagnetic spectrum. These sensors therefore resolve only black-and-white spectral data, measuring physical properties as the apparent brightness of the targets; spectral information related to the colors of the targets is not captured in the panchromatic band. Furthermore, there are multispectral sensors designed with multichannel detectors to capture spectral data in different narrow wavelength bands inside a defined spectral range, resolving multilayer images that contain both the brightness and the spectral color information of the captured targets. On the other hand, hyperspectral sensors can collect 50 or more narrow bands. Multispectral bandwidths are quite large, generally from 50 to 400 nm, frequently covering an entire color (for example, the whole red portion), while hyperspectral sensors measure the radiance or reflectance of an object in many narrow bands, often 5 to 10 nm wide.


From this point of view, remote sensing sensors offer different spectral resolutions; for instance, a panchromatic band for medium spectral resolution with a center wavelength located at 0.675 μm; a panchromatic band for high spectral resolution with a center wavelength at 0.65 μm; a multispectral band with center wavelengths at B1/blue 0.485 μm, B2/green 0.555 μm, B3/red 0.66 μm, and B4/NIR 0.83 μm; and also infrared spectral bands, with short-wave infrared (SWIR) wavelengths covering 0.9 ± 0.05 μm ~ 1.1 ± 0.05 μm, 1.18 ± 0.05 μm ~ 1.3 ± 0.05 μm, and 1.55 ± 0.05 μm ~ 1.7 ± 0.05 μm, and long wave infrared (LWIR) wavelengths in the ranges 10.3 ± 0.1 μm ~ 11.3 ± 0.1 μm and 11.5 ± 0.1 μm ~ 12.5 ± 0.1 μm [11, 12].

Regularly, remote sensing sensors are designed with a specific purpose focused on the applications of their spectral bands, whose objective is to collect different types of images by taking advantage of the electromagnetic spectrum and its incidence angle on the earth's surface. These operating characteristics allow establishing the appropriate exploitation or application of each sensor since, as mentioned before, each target and ground characteristic presents a particular spectral signature, a spectral response to the different wavelengths of the electromagnetic spectrum. This reflectance behavior provides the sensors with adequate spectral information to discriminate the different details of the measured targets. In this regard, given the importance of spectral resolution in disaster events, considering the diverse phenomena with specific features that may occur, a methodology is proposed inside the emergency communications network to manage the remote sensing sensors' spectral resolution capabilities, in order to optimize and achieve a proper performance of each spectral resolution band in disaster events.

The methodology is based on the implementation of operational technical strategies such as: the design and management of databases that store the image pixels together with their spectral derivation, with the aim of creating and tagging their spectral signatures inside the sensors' field of view; the formulation of technical criteria to manage the wavelength specifications handled by each sensor with reference to the spectral features of the targets to be captured; and the implementation of a technical procedure for real-time spectral data analysis, intended to discriminate and evaluate the diverse scene colors that can potentially appear in the images, based on the design of a library containing the known spectral signatures of previously studied targets. **Table 3** provides an overview of the potential applications of the remote sensing sensors' spectral resolutions in the multispectral (MS) and infrared (IR) bands, considering diverse disaster scenarios.


| Spectral band | Remote sensing sensors potential spectral applications in disaster management |
|---|---|
| Multispectral (MS) | For monitoring and assessment: deforestation scenarios, water mass courses, fuel leaks or oil spill limits, ice block coverage, terrain geological patterns, wildfire threats and spread, droughts, vegetation classes, coastal characteristics evolution, bathymetric trends, sediment-laden waters behavior, landslides, floods, urban damage differentiation and recreation, epidemic diseases behavior, emissions of diverse gases and aerosol components, among other polluting elements. |
| Infrared (IR) | For monitoring and assessment: volcano eruptions and their associated events, moisture content of soil and vegetation, earthquake damage magnitude, surface thermal trends, hotspots, lava lake formation, gas emission and propagation, land desertification and deforestation evolution, coastal erosion development, wildfire progress, damage in fire scenarios through smoke observation, climate behavior, and flood scenario behavior. |

**Table 3.**
*Remote sensing sensors potential spectral applications in disaster management.*

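To make the proposed strategy concrete, the sketch below illustrates the kind of spectral-signature library lookup the methodology describes: a measured pixel spectrum is tagged with the nearest stored signature by a simple distance criterion. The band names follow the MS channels mentioned earlier, but the signature values are placeholders, not real reflectance data:

```python
import math

# Placeholder spectral-signature library (band -> reflectance). The band names
# follow the MS channels named in the text; the values are illustrative only.
SIGNATURE_LIBRARY = {
    "water":      {"B1": 0.06, "B2": 0.05, "B3": 0.03, "B4": 0.01},
    "vegetation": {"B1": 0.04, "B2": 0.08, "B3": 0.05, "B4": 0.45},
    "bare_soil":  {"B1": 0.12, "B2": 0.16, "B3": 0.21, "B4": 0.30},
}

def classify_pixel(pixel: dict) -> str:
    """Tag a pixel with the nearest library signature (Euclidean distance)."""
    def distance(signature: dict) -> float:
        return math.sqrt(sum((pixel[b] - signature[b]) ** 2 for b in signature))
    return min(SIGNATURE_LIBRARY, key=lambda name: distance(SIGNATURE_LIBRARY[name]))

# A pixel over a flooded area should match "water" most closely.
print(classify_pixel({"B1": 0.07, "B2": 0.06, "B3": 0.04, "B4": 0.02}))  # -> water
```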
