Advanced Digital Image Analysis of Remotely Sensed Data Using JavaScript API and Google Earth Engine

Written By

Anwarelsadat Elmahal and Eltaib Ganwa

Submitted: 31 January 2024 Reviewed: 01 February 2024 Published: 09 May 2024

DOI: 10.5772/intechopen.1004501

From the Edited Volume

Revolutionizing Earth Observation - New Technologies and Insights [Working Title]

Edited by Dr. Rifaat Abdalla

Abstract

Earth observation satellites (EOS) have been continuously observing and monitoring the globe and its physical, chemical, and biological characteristics since the late 1950s. EOS have been providing an enormous quantity of spatial data, and the global satellite data market is predicted to expand exponentially. Processing such vast and unique data in an efficient manner is necessary. Google Earth Engine (GEE) is an example of a cloud-based geospatial platform that has revolutionized the way users store, process, analyze, and visualize spatial data. This chapter presents an in-depth exploration of the fundamentals of digital image processing, specifically focusing on remotely sensed data, employing cutting-edge techniques to process, analyze, and visualize such data through GEE and its JavaScript API. It highlights advanced techniques and includes two case studies that demonstrate practical applications in preprocessing and in the detailed analysis of land use and land cover (LULC) changes.

Keywords

  • image preprocessing
  • image analysis
  • GEE
  • JavaScript API
  • LULCC

1. Introduction

The first artificial satellite, Sputnik 1, was launched by the Soviet Union on October 4, 1957 [1, 2]. Since then, the number of space missions has increased, and Earth observation satellites (EOS) have been continuously observing and monitoring the globe and its physical, chemical, and biological characteristics [3, 4]. Satellite images have offered, and continue to provide, significant and reliable sources of information because of their synoptic view, repetitive global coverage, and map-like format [5]. Common digital image processing techniques for information extraction have become inefficient as the number of satellites has increased significantly, along with the ongoing advances in spatial, spectral, and temporal data acquisition [5]. Therefore, we require new techniques to analyze remotely sensed data for a variety of reasons, including (i) improving data processing accuracy and efficiency; (ii) extracting new information with greater accuracy; and (iii) making data more accessible and usable [6, 7, 8]. Advances in the internet and technology have contributed to novel approaches to processing remotely sensed data in a variety of ways, including (i) increased data accessibility; (ii) increased computational power; and, most importantly, (iii) the development of new algorithms [9, 10]. As a result, many online platforms for processing remotely sensed data have been created, such as Microsoft Planetary Computer, Earth on AWS, EOS Data Hub, and Google Earth Engine (GEE). Among these, GEE is a leading platform: it is more mature, has more features, and is better suited for advanced processing tasks than the alternatives [9]. GEE leverages Google’s multi-petabyte data catalog for analyzing environmental data at the planetary scale [9]. As a spatial cloud computing platform, GEE is categorized as PaaS, platform as a service [11].

In comparison to traditional on-premises deployment [12, 13], PaaS offers several advantages, such as (i) scalability, which eliminates the need to manage the underlying infrastructure; (ii) agility, which facilitates the quick and easy development and deployment of applications; and (iii) cost savings, because you only pay for the resources that you use. GEE is a powerful tool for digital image processing, and its code editor provides a structured environment for writing and debugging code using either native or non-native APIs. Users can access Google Earth Engine (GEE) via the JavaScript API, Python, and rgee, as outlined in reference [11]. Nonetheless, this chapter will focus specifically on a detailed exploration of the JavaScript API.

The chapter introduces advanced digital image processing for remotely sensed data using the computational power of Google Earth Engine (GEE) and its native JavaScript (JS) API. The reader will also be introduced to the conceptual approaches in the areas of digital imaging systems, preprocessing, and visualization of images. Finally, two practical case studies offer users an interactive experience in learning how to effectively use GEE and the JS API. These hands-on exercises encompass a range of techniques, such as preprocessing, visualization, and image classification. Moreover, case study 2 provides an in-depth analysis of image classification, focusing on key measures, such as overall accuracy and accuracy per class, offering comprehensive insights into the application of these processes in practical scenarios.


2. Basics of satellite imagery

Satellite imagery, enhanced by sophisticated sensors and instruments, captures electromagnetic radiation (EMR) that is either emitted or reflected from the Earth’s surface, including the oceans and the atmosphere [14]. This includes the use of multispectral and hyperspectral images, across various wavelengths. Such technology provides detailed and nuanced information about the Earth’s diverse features, including landforms, water bodies, and atmospheric conditions. The collected data provide an essential understanding of the planet’s physical and environmental characteristics, covering a wide range of natural phenomena.

2.1 Remote sensing mechanisms

Based on their sensing mechanisms, satellites can be categorized into passive and active satellites [15]. Passive satellites operate by detecting, receiving, and registering EMR that is either reflected or emitted from the Earth [16]. These satellites are outfitted with sensors capable of detecting different types of electromagnetic radiation (EMR), including visible and near-infrared, enabling them to produce clear and detailed images of the Earth’s surface. Some examples of passive remote sensing satellites are the Landsat series, MODIS satellites, the Sentinel-2 mission, and the advanced very high resolution radiometer (AVHRR). In contrast, active sensors like the synthetic aperture radar (SAR) on Sentinel-1 emit their own energy and analyze the reflected signal [16, 17]. This technology excels in terrain mapping and environmental monitoring under any weather and environmental conditions.

2.2 Image acquisition

Satellite remote sensing involves collecting electromagnetic radiation from the Earth or other celestial bodies using satellite sensors; the recorded signals are then converted into digital images. This process is characterized by several essential steps [15], which may include (i) the use of onboard sensors to capture the EMR; (ii) the transmission of this data to ground stations via high-frequency radio waves; and (iii) the processing of this data, which includes correcting or reducing errors and distortions and enhancing their overall quality for further in-depth analysis. Each stage is crucial, utilizing advanced technology and methods to guarantee the final images’ precision and utility.


3. Digital image processing

Typically, data obtained through remote sensing is susceptible to both random and systematic errors, which can introduce noise and bias, thereby affecting the precision of the measurements [14, 16]. For numerous tasks related to images, such as computer vision, image analysis, and machine learning, image preprocessing serves as a vital initial step. In the context of remote sensing, digital image processing encompasses the use of computer algorithms for the manipulation and analysis of digital images. It includes tasks such as enhancing, restoring, and extracting valuable information from images, which is essential for a wide range of applications.

3.1 Challenges in satellite image processing

Satellite image processing entails overcoming various challenges, including atmospheric distortions, the need for significant storage and computational resources, and the impact of satellite movement on image consistency. Advanced correction techniques, precise calibration, and sophisticated analysis are essential to address these issues and unlock the full potential of satellite imagery for diverse applications.

3.2 Digital image processing steps

The main operations of remote sensing digital image processing and analysis could be categorized into three groups: (i) data preparation and enhancement, (ii) information extraction and analysis, and (iii) data integration and presentation. Refining and reducing errors in data obtained from remote sensing is essential for improving the accuracy and reliability of the data. Such enhancements help users across various fields to make better-informed choices.

3.2.1 Data preparation and enhancement

3.2.1.1 Image rectification

Image distortions in remotely sensed data can result from sensor orientation, Earth’s curvature, atmospheric conditions, and satellite dynamics [18]. The process of image rectification or preprocessing addresses both systematic and random errors acquired during data collection. Systematic errors, being consistent, can be corrected, whereas random errors are unpredictable and vary with each measurement [19, 20]. The necessity for error correction is determined by the error’s significance, its impact on the analysis, the risk of introducing new errors, and the error pattern’s consistency [14, 15, 16, 21]. The rectification process includes three key steps: (1) image preprocessing to correct sensor orientation and adjust for Earth’s curvature and satellite dynamics; (2) geometric transformation and resampling to mathematically transform and interpolate pixel values for a specific map projection; and (3) radiometric and quality control to offset atmospheric effects and ensure image quality through radiometric calibration. These steps ensure that the rectified images are accurate and of high quality for remote sensing analyses.

3.2.1.2 Image registration

Image registration in remote sensing is a comprehensive process that aligns multiple satellite or aerial images of the same area taken at different times or angles and integrates these images with other data types, such as vector maps [22]. This integration provides an accurate and consistent representation of ground features, enhancing spatial accuracy and context by combining imagery with GIS maps. This approach is crucial for applications where accurate geographic information is key. The process involves feature detection, matching control points, applying a transformation model, and resampling the images with quality assurance measures.

Resampling methods adjust pixel values during image transformations, with the main types being nearest neighbor, bilinear interpolation, and bicubic interpolation. These methods strike a balance between image quality and computational efficiency, which is crucial for image scaling and geometric corrections.
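As a brief illustration, the hedged GEE sketch below requests bilinear or bicubic resampling before a reprojection ('image' is assumed to be a previously loaded ee.Image, and the CRS and scale are illustrative); GEE defaults to nearest neighbor when no resampling method is set.

// A minimal resampling sketch (assumes 'image' is an ee.Image already loaded).
// GEE resamples with nearest neighbor by default; 'bilinear' or 'bicubic'
// can be requested explicitly before a reprojection.
var bilinear = image.resample('bilinear').reproject({crs: 'EPSG:32636', scale: 30});
var bicubic = image.resample('bicubic').reproject({crs: 'EPSG:32636', scale: 30});
Map.addLayer(bilinear, {}, 'Bilinear resampling');
Map.addLayer(bicubic, {}, 'Bicubic resampling');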

Orthorectification corrects geometric distortions in aerial or satellite imagery, aligning the image to a standard map projection while accounting for Earth’s curvature, camera tilt, and landscape topography [15, 16]. This ensures each pixel is accurately aligned to its true geographic position.

3.2.1.3 Atmospheric correction

The objective of atmospheric correction in satellite and aerial imagery processing is to remove the distortions caused by the Earth’s atmosphere [15]. Atmospheric correction is essential in remote sensing to ensure data accurately reflects the Earth’s surface by adjusting for light scattering and absorption by atmospheric particles and gases. This step strengthens the accuracy and reliability of the data, critical for environmental monitoring and LULC mapping. Methods to mitigate atmospheric distortions include dark object subtraction, atmospheric modeling, and radiative transfer models [14, 15, 16, 23]. This provides users with the flexibility to either apply these corrections or select images with minimal distortion for their work.
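As a simple illustration of dark object subtraction, the hedged sketch below estimates per-band dark values as the minimum reflectance within a region and subtracts them; 'image' and 'roi' are assumed to be defined earlier, and this is only one crude variant of the method.

// A minimal dark object subtraction (DOS) sketch.
// Assumes 'image' is a multiband ee.Image and 'roi' an ee.Geometry.
var bands = ['SR_B2', 'SR_B3', 'SR_B4'];
var subset = image.select(bands);
// Estimate the "dark object" value of each band as its minimum within the ROI.
var dark = subset.reduceRegion({
  reducer: ee.Reducer.min(),
  geometry: roi,
  scale: 30,
  maxPixels: 1e9
});
// Subtract the per-band dark values; dark.values(bands) preserves band order.
var corrected = subset.subtract(ee.Image.constant(dark.values(bands)));
Map.addLayer(corrected, {}, 'DOS-corrected');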

3.2.1.4 Noise reduction

Noise reduction in image processing is a technique used to minimize unwanted random variations, or “noise,” in digital images. This noise, often seen as grainy specks, can degrade image quality. Noise removal could be considered as an enhancement procedure [16]. Noise reduction algorithms aim to smooth out these imperfections while preserving important details. Though essential in image processing, excessive noise reduction may cause a loss of detail, making it important to strike a balance.

3.2.1.5 Speckle filtering

Speckle filtering is a noise reduction technique used in radar imagery, especially for synthetic aperture radar (SAR) data, to mitigate the granular “speckle” noise. This noise, appearing as a salt-and-pepper pattern, hinders image clarity. Speckle filtering algorithms focus on reducing this noise while preserving essential image details [24]. It is crucial for applications such as environmental monitoring and agricultural mapping, where clear radar imagery is essential. However, careful application is needed to avoid losing important features in the process of noise reduction.
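A hedged GEE sketch of basic speckle reduction on Sentinel-1 data using a focal median filter follows ('roi' is assumed to be defined; the date range and filter radius are illustrative, and more specialized filters such as Lee or Frost exist in the literature).

// A simple speckle-reduction sketch on Sentinel-1 SAR backscatter (VV, in dB).
var s1 = ee.ImageCollection('COPERNICUS/S1_GRD')
  .filterBounds(roi)                       // 'roi' assumed to be defined
  .filterDate('2023-01-01', '2023-02-01')  // illustrative date range
  .first()
  .select('VV');
// Apply a focal median within a 50 m circular neighborhood.
var despeckled = s1.focalMedian(50, 'circle', 'meters');
Map.addLayer(s1, {min: -25, max: 0}, 'Raw VV');
Map.addLayer(despeckled, {min: -25, max: 0}, 'Despeckled VV');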

3.2.1.6 Gaussian smoothing

Gaussian smoothing is an image processing technique that uses a Gaussian filter to blur an image and reduce noise [18]. This method averages pixel values, giving more weight to central pixels, to create a subtle, natural-looking blur. It is commonly used as a preprocessing step to minimize noise before tasks like edge detection, helping to avoid false edges. However, it is important to apply this technique carefully to prevent excessive blurring and loss of important image details.
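A minimal GEE sketch of Gaussian smoothing follows, assuming 'image' is a previously loaded ee.Image; the kernel radius and sigma are illustrative.

// Gaussian smoothing by convolution with a Gaussian kernel.
var gaussianKernel = ee.Kernel.gaussian({radius: 3, sigma: 2, units: 'pixels'});
var smoothed = image.convolve(gaussianKernel);
Map.addLayer(smoothed, {}, 'Gaussian smoothed');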


4. Image enhancement techniques

In the context of remote sensing, “image enhancement” refers to the process of improving the visual appearance of an image or making certain features of an image more discernible for analysis [15, 16]. This process is crucial in remote sensing as it helps in better interpreting and analyzing the data collected from satellite or aerial imagery. Image enhancement techniques are typically grouped into two main categories: spatial domain and frequency domain [16].

  1. Spatial domain techniques, which could be further categorized into (i) contrast enhancement, (ii) spatial filtering, (iii) histogram equalization, and (iv) edge filtering.

  2. Frequency domain techniques, which could also be categorized into (i) Fourier Transform and (ii) frequency filtering.

4.1 Spatial domain techniques

These techniques operate directly on the pixels of the image. They are primarily concerned with modifying the pixel values in an image to achieve the desired enhancement.

4.1.1 Contrast enhancement

Contrast enhancement refers to the modification of an image’s tonal distribution. This process entails systematically adjusting pixel brightness values to either expand or compress the intensity spectrum. Such manipulation augments the visual distinction of different features within the image, thereby facilitating more effective analysis and interpretation for remote sensing applications.
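As a concrete illustration, the following hedged GEE sketch applies a 2nd–98th percentile linear stretch to a single band; 'image' and 'roi' are assumed to be defined earlier, and the band name follows the Landsat surface reflectance convention.

// A minimal linear contrast stretch sketch (assumes 'image' and 'roi' exist).
var band = image.select('SR_B4');
// Compute the 2nd and 98th percentiles of the band within the ROI.
var pct = band.reduceRegion({
  reducer: ee.Reducer.percentile([2, 98]),
  geometry: roi,
  scale: 30,
  maxPixels: 1e9
});
// Percentile reducers name their outputs '<band>_p<percentile>'.
Map.addLayer(band, {
  min: ee.Number(pct.get('SR_B4_p2')).getInfo(),
  max: ee.Number(pct.get('SR_B4_p98')).getInfo()
}, 'Contrast stretched');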

4.1.2 Spatial filtering

Spatial filtering in digital image processing is a technique used to manipulate or enhance an image by altering the spatial frequency components. It involves applying filters, typically in the form of a kernel or matrix, over an image to perform operations such as smoothing, sharpening, and edge detection [25]. Smoothing filters reduce noise and detail, while sharpening filters enhance edges and fine details. Edge detection filters identify and outline significant transitions in intensity. Spatial filtering is pivotal in various applications, including enhancing image features for analysis, preprocessing for further image-processing tasks, and improving the visual quality of images.
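A short hedged sketch of spatial filtering by convolution in GEE follows; 'image' is assumed to be a previously loaded ee.Image, and the kernel size is illustrative.

// Smoothing: convolve with a normalized square (boxcar) kernel.
var boxcar = ee.Kernel.square({radius: 2, units: 'pixels', normalize: true});
var lowPass = image.convolve(boxcar);
// Sharpening: an unsharp-mask style operation that adds back the
// high-frequency residual (original minus smoothed).
var sharpened = image.add(image.subtract(lowPass));
Map.addLayer(lowPass, {}, 'Smoothed');
Map.addLayer(sharpened, {}, 'Sharpened');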

4.1.3 Histogram equalization

Histogram equalization is a method in image processing that adjusts the contrast of an image. It works by effectively spreading out the most frequent intensity values, thus stretching the range of intensity levels in the image’s histogram. This results in an image with a more uniform distribution of intensities, enhancing the overall contrast [18]. Histogram equalization is particularly useful in images with backgrounds and foregrounds that are both bright or dark, making it easier to identify features and details in such images. Histogram equalization can sometimes overly enhance noise, obscure details in very light or dark areas, and may not be suitable for every kind of imagery.
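To make the mechanics concrete, here is a minimal plain-JavaScript sketch of histogram equalization for an array of 8-bit grayscale values; it is a didactic illustration in the style of Box 1, not tied to any particular imaging library.

// Minimal histogram equalization for 8-bit grayscale pixel values (0-255).
function equalizeHistogram(pixels) {
  // Build the intensity histogram.
  const hist = new Array(256).fill(0);
  pixels.forEach(p => hist[p]++);
  // Build the cumulative distribution function (CDF).
  const cdf = new Array(256);
  let cum = 0;
  for (let i = 0; i < 256; i++) { cum += hist[i]; cdf[i] = cum; }
  const cdfMin = cdf.find(v => v > 0); // first non-zero CDF value
  const n = pixels.length;
  // Map each pixel through the normalized CDF to spread out intensities.
  return pixels.map(p => Math.round(((cdf[p] - cdfMin) / (n - cdfMin)) * 255));
}
// Example: a narrow, low-contrast range is stretched to the full 0-255 range.
console.log(equalizeHistogram([100, 100, 101, 102, 103]));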

4.1.4 Edge filtering

The objective of edge filtering is to detect and enhance edges within an image. Edges in an image are significant local changes in intensity, typically occurring at the boundaries of different objects or features [26]. Edge filtering is accomplished by applying various algorithms or operators, which work by highlighting intensity transitions [27]. These filters operate by convolving a kernel or filter matrix over the image. This process calculates a new value for each pixel by combining it with its neighboring pixel values, effectively emphasizing areas of high-intensity change. The result is an image where the edges are more pronounced, making it easier to identify shapes, boundaries, and structural details in the image [27].

Edge filtering is crucial in various applications, including image analysis, computer vision, pattern recognition, and machine learning, as it aids in object detection, segmentation, and feature extraction. In remote sensing, edge filtering is crucial for many applications, to list a few: precision agriculture, environmental monitoring, urban planning, road mapping, geological studies, and archeological exploration.
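In GEE, one readily available edge detector is the built-in Canny algorithm; the sketch below is illustrative, with 'image' assumed to be defined and the threshold and sigma values chosen arbitrarily.

// A hedged edge-detection sketch using GEE's built-in Canny detector.
var gray = image.select('SR_B5'); // a single band, e.g., near-infrared
var edges = ee.Algorithms.CannyEdgeDetector({
  image: gray,
  threshold: 0.1, // illustrative gradient threshold
  sigma: 1        // Gaussian pre-smoothing
});
Map.addLayer(edges.updateMask(edges), {palette: ['white']}, 'Canny edges');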

4.2 Frequency domain techniques

The frequency domain, crucial in signal processing and related fields, focuses on analyzing signals or mathematical functions by their frequency components rather than time-based behavior. This may involve transforming an image into its sinusoidal frequency components, typically using mathematical methods like the Fourier Transform [22]. In this domain, different frequencies represent various image features, with low frequencies indicating gradual intensity changes and high frequencies highlighting rapid changes such as edges and details. This approach is advantageous for tasks such as filtering, image compression, and frequency-based image enhancement, as it allows more efficient and effective processing.

4.2.1 Fourier Transform

The Fourier Transform (FT) is a mathematical transform widely used in signal processing and physics. It converts a signal from its original domain, often time or space, into a representation in the frequency domain (see Eq. (1)). The Fourier Transform decomposes a function (signal) into its constituent frequencies. It is based on the principle that any signal can be represented as a sum of simple sinusoids (sine and cosine functions) with varying amplitudes and frequencies. FT can be categorized into (i) continuous, which is used for continuous signals and transforms a time-domain signal into a continuous spectrum of frequencies, and (ii) discrete, which applies to discrete signals and is commonly used in digital signal processing; it computes the spectrum of a finite set of samples. The FT is defined in Eq. (1), and Box 1 shows how to perform an FT analysis using JS.

// Step 1: Import the FFT library.
// This line imports the FFT (Fast Fourier Transform) library 'fft.js'.
// The FFT algorithm is used for computing the Discrete Fourier Transform (DFT) and its inverse.
const FFT = require('fft.js');
// Step 2: Define an example function to transform
// This function 'exampleFunction' is a sample function for which we want to perform the Fourier //Transform.
// Here, it's defined as a sine function. Replace this with any function you want to analyze.
function exampleFunction(t) {
return Math.sin(t);}
// Step 3: Set the number of samples
// 'n' represents the number of samples. For FFT, it's best to use a power of 2 for efficiency.
// Here, 1024 samples are used.
const n = 1024;
// Step 4: Create an FFT instance
// This line creates a new FFT instance 'f' with 'n' points.
const f = new FFT(n);
// Step 5: Sample the function and prepare the input array
// This step involves creating an array of 'n' samples by evaluating the 'exampleFunction'
// at equally spaced intervals over 2π. This array 'input' represents the signal in the time domain.
const input = new Array(n).fill(0).map((_, idx) => exampleFunction(2 * Math.PI * idx / n));
// Step 6: Perform the FFT. Here, the FFT is performed on the input data.
// 'output' is an array that will store the FFT result; it is initialized using 'createComplexArray'.
// 'realTransform' performs the actual FFT computation on 'input' and stores the result in 'output'.
// 'completeSpectrum' computes the full frequency spectrum.
const output = f.createComplexArray();
f.realTransform(output, input);
f.completeSpectrum(output);
// The 'output' array now contains the result of the FT and represents the
// frequency-domain representation of the original time-domain signal.

Box 1.

JavaScript code illustrating the steps needed to conduct a Fourier Transform.

F(ω) = ∫ f(t) e^(−iωt) dt          (1)

where F(ω) is the Fourier Transform of f(t); ∫ denotes the integral over the entire domain of f(t), which is usually time (t); f(t) is the original function (or signal) being transformed; and e^(−iωt) is the complex exponential function, in which e is the base of the natural logarithm, i is the imaginary unit, and ω (omega) is the angular frequency.

4.2.2 Z-Transforms

The Z-Transform is a mathematical technique used in digital signal processing and control systems to convert discrete-time signals into a complex frequency domain, simplifying the analysis and design of digital systems, particularly for stability and frequency response.
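For reference, the one-sided Z-Transform of a discrete-time signal x[n] is defined as:

X(z) = Σ x[n] z^(−n), with the sum taken over n = 0, 1, 2, …

where z is a complex variable.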

4.2.3 Laplace Transforms

The Laplace Transform is a mathematical technique used to transform complex, time-based functions into a simpler, frequency-based domain [28]. This method is particularly valuable in engineering and physics for analyzing linear time-invariant systems. By converting differential equations, which often describe dynamic systems, into algebraic equations in the Laplace domain, it simplifies the process of solving these equations. The Laplace Transform provides a powerful tool for understanding and manipulating the behavior of systems over time.
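For reference, the one-sided Laplace Transform of f(t) is defined as:

F(s) = ∫ f(t) e^(−st) dt, integrated from t = 0 to ∞

where s is a complex frequency variable.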

4.2.4 Cosine and Sine

Cosine and Sine Transforms, especially the Discrete Cosine Transform (DCT), are crucial for image compression and analysis, prominently used in JPEG compression [19]. DCT converts image data from the spatial to the frequency domain, focusing on lower frequency components vital for human vision, enabling efficient compression by preserving important visual information and discarding less perceptible details. While the Discrete Sine Transform (DST) is less commonly used than DCT, both play key roles in reducing image data redundancy and correlations, essential for compression, feature extraction, and noise reduction. These transforms are also vital for filtering, edge detection, and pattern recognition in digital image processing, improving the processing and analysis of digital images.

4.2.5 Hilbert Transforms

The Hilbert Transform is a crucial tool in digital image processing for converting real signals into complex ones, encapsulating amplitude and phase details. This is particularly useful for enhancing edge detection and representation in images, facilitating the analysis of textures and patterns through the extraction of instantaneous amplitude and phase information.

4.2.6 Frequency filtering

Frequency filtering is a technique for enhancing or suppressing specific features of an image by manipulating its frequency components [15]. Low-pass filters are used for smoothing and noise reduction by retaining low-frequency components, while high-pass filters are employed for edge enhancement and sharpening by preserving high frequencies. This method is critical for tasks such as noise reduction, edge detection, image sharpening, and feature extraction, playing a key role in both the preprocessing and post-processing stages of image analysis.
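As a hedged continuation of the plain-JavaScript example in Box 1 (reusing its 'f', 'n', and 'output' variables), a crude low-pass filter can zero the high-frequency bins of the spectrum and invert the transform; the cutoff value is illustrative.

// Low-pass filtering of the Box 1 spectrum: keep only the lowest 'cutoff'
// frequency bins (and their mirrored counterparts) and zero the rest.
// fft.js stores complex arrays as interleaved real/imaginary pairs.
const cutoff = 32;
for (let k = cutoff; k < n - cutoff; k++) {
  output[2 * k] = 0;     // real part of bin k
  output[2 * k + 1] = 0; // imaginary part of bin k
}
// Invert the transform to recover the smoothed time-domain signal.
const filtered = f.createComplexArray();
f.inverseTransform(filtered, output);
// Even-indexed entries of 'filtered' hold the real time-domain samples.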


5. Image transformation and analysis

Image transformation and analysis encompass a range of operations aimed at extracting meaningful information from images. These transformations may include (i) image segmentation, which divides the image into meaningful segments or regions; (ii) feature extraction, which focuses on identifying and extracting specific features or patterns from the image; (iii) object recognition, which entails categorizing objects or patterns in the image; (iv) classification, which assigns pixels or regions to predefined classes or categories; and (v) morphological operations, which analyze and process image structures based on shape and form. These operations collectively contribute to a comprehensive approach for understanding and interpreting the content of digital images, facilitating tasks such as object identification and image classification.

5.1 Satellite image classification

Image classification involves sorting and labeling pixels into predefined categories [15]. In principle, it is possible to group similar pixels into informational classes that are of interest to the users. The pixels carry the spectral information which is normally captured as digital numbers based on the reflections or the emissions from different bands. The classified pixels represent the informational classes that are of interest to the users. It is possible to categorize image classification methods into three types: (i) supervised, (ii) unsupervised, and (iii) hybrid [18].

5.1.1 Supervised classification

Supervised classification involves the use of known, labeled data to classify new, unlabeled data. It is a type of machine learning in which the algorithm is “trained” on a dataset where the categories or outcomes are already known. The process involves selecting representative samples for each class (training data) and using these to guide the algorithm in recognizing similar patterns in the rest of the data [27]. Supervised classification is notably effective for precise, well-labeled training datasets. This technique is widely implemented across various fields; however, our discourse will predominantly concentrate on its utilization for the delineation of land cover categories in satellite-based imagery. For the purpose of verifying and evaluating the precision of the model, the dataset is partitioned into training and testing subsets. The training set educates the model to recognize patterns, while the testing set, unseen during training, is used to assess model performance, ensuring an unbiased evaluation of its practical applicability.

5.1.2 Unsupervised classification

Unsupervised classification, on the other hand, does not require any prior labeling of data. Instead, the algorithm independently analyzes the data to find patterns or clusters based on inherent similarities and differences. This approach is applied when predefined categories are absent, or during data examination to discover inherent groupings [27]. While processing can be relatively fast, the task of labeling the classes is challenging and time-intensive [23]. Common applications include identifying clusters in data for market research or ecological studies. The post-processing phase of unsupervised classification involves extensive techniques such as sieving, clumping, merging, and assigning significant categories to clusters. Additionally, this phase includes integrating the classified data with other datasets. One of the major challenges in this process is conducting accuracy assessments, due to the inherent nature of unsupervised classification.

5.1.3 Hybrid classification

It combines elements of both supervised and unsupervised methods. It may start with an unsupervised approach to explore and segment the data into clusters and then apply supervised techniques to label these clusters based on known samples.

In complex situations where neither solely supervised nor purely unsupervised approaches are adequate, this method can prove to be especially beneficial. Hybrid classification leverages the strengths of both approaches, providing a more nuanced and accurate classification in diverse datasets.

5.2 Accuracy assessment

Accuracy assessment evaluates the precision of a classification model or algorithm in predicting known outcomes, crucial for determining the model’s reliability. In supervised classification, it assesses how well the model learns from training data and its accuracy on new data, using metrics such as overall accuracy and confusion matrix. An overall accuracy of over 80% is generally acceptable, but higher thresholds may be required for critical applications such as urban planning. These metrics offer a comprehensive view of model performance, particularly important in datasets with imbalanced class distribution.
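As a toy illustration with made-up numbers (not drawn from the case studies), overall accuracy is simply the sum of the confusion matrix diagonal divided by the total number of samples:

// Toy confusion matrix: rows = actual class, columns = predicted class.
const cm = [
  [90, 5, 5],
  [10, 80, 10],
  [3, 2, 95]
];
const total = cm.flat().reduce((a, b) => a + b, 0);          // all samples
const correct = cm.reduce((sum, row, i) => sum + row[i], 0); // diagonal
console.log('Overall accuracy:', (correct / total * 100).toFixed(1) + '%');
// -> Overall accuracy: 88.3%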


6. Spatial cloud computing and GEE: harnessing the power of computational digital image processing

Spatial cloud computing is the type of cloud computing that is specifically designed to process and visualize spatial data [9]. It combines geospatial science and cloud computing in a way that leverages the power of distributed computing to enable geospatial discoveries [29]. There are three cloud computing service models: (i) Infrastructure as a Service (IaaS); (ii) Platform as a Service (PaaS); and (iii) Software as a Service (SaaS), which can be used for a variety of cloud computing applications, including spatial computing applications [30].

Google Earth Engine (GEE) serves as an exemplary case of spatial cloud computing, empowering users to access, explore, analyze, and visualize satellite imagery and remote sensing data hosted in the cloud. GEE is a unique PaaS that uses Google’s massive data catalog to analyze environmental and global data [31]. Google Earth Engine is used by scientists, researchers, and developers to study changes, trends, and variability on the Earth’s surface [32]. GEE is considered by many researchers and users a powerful tool in environmental monitoring and analysis, which may include climate change, biodiversity, land use and land cover changes, and disaster and risk management. GEE’s power lies in its accessibility, cost-effectiveness, global reach, and cloud-based computing capabilities, making it an indispensable tool for environmental research and analysis.

In this chapter, we will delve into the utilization of GEE and JavaScript programming, using the code editor provided by GEE, for digital image processing and analysis of remotely sensed data.

6.1 Google earth engine common datasets

Google Earth Engine provides a constantly growing database of Earth observation images and scientific data, which is measured in petabytes. The public data archive includes more than forty years of historical images and scientific data [32]. The datasets provided by the GEE catalog can be categorized into three themes: (1) Earth observation imagery, (2) atmospheric and weather, and (3) socioeconomic data. For more in-depth information on this topic, please visit the Earth Engine datasets catalog at:

https://developers.google.com/earth-engine/datasets/catalog

6.2 JavaScript and code editor: a gateway to Earth Engine’s geospatial computation power

To access Google Earth Engine through the code editor, a user needs a Google account registered with GEE (confirmation often arrives by email within a few days), an uninterrupted internet connection, and a contemporary web browser with JavaScript enabled. Chrome is recommended due to its superior optimization for web-based applications and its integration with Google’s services. Foundational knowledge of JavaScript or Python and familiarity with GIS concepts will significantly enhance a user’s proficiency with the platform. Once registered, visit code.earthengine.google.com, sign in with your Google account, and utilize the Earth Engine JavaScript API within the web-based IDE. The code editor offers features such as a map display, API reference documentation, and a git-based script manager for developing complex geospatial workflows.
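A minimal first script for verifying access might look like the following sketch; the SRTM asset ID is a standard public dataset, and the visualization values and map center are illustrative.

// A minimal "first script" for the GEE code editor.
var dem = ee.Image('USGS/SRTMGL1_003');           // SRTM 30 m elevation
print('SRTM image:', dem);                        // inspect in the Console tab
Map.addLayer(dem, {min: 0, max: 3000}, 'SRTM elevation');
Map.setCenter(32.5, 15.6, 8);                     // near Khartoum, zoom level 8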


7. Post-processing and visualization

Post-processing and visualization in digital image processing involve a set of operations performed on processed images to refine results and facilitate interpretation. These operations typically include (i) image fusion to combine information from multiple sources, (ii) image filtering to enhance or smooth specific features, (iii) rendering to visualize the final processed image, and (iv) quality assessment to evaluate the accuracy and reliability of the results [15, 16]. Post-processing is crucial for refining the output, ensuring optimal visual representation, and preparing the image for effective interpretation and analysis. The visualization step allows users to perceive and understand the processed data, providing insights into the underlying patterns or information present in the image.


8. Case study 1: satellite digital image analysis and data extraction using GEE and JS

8.1 Script overview

The methodology explained in the script (Box 2) is an intricate application of remote sensing techniques, implemented through the Google Earth Engine (GEE) and underscored by the use of band maths concepts. It focuses on the analysis of satellite imagery from the Landsat 9 series, employing band maths to develop indices such as the Normalized Difference Vegetation Index (NDVI), Water Index (WI), and Urban Index (UI). These indices are pivotal for the identification and delineation of diverse land cover features such as urban areas, vegetation, and water bodies within a specified region of interest (ROI), in this case part of Khartoum State. Leveraging the analytical power of GEE, this approach is instrumental for environmental monitoring and geographic analysis, providing nuanced insights into land use patterns and the effective management of natural resources. The process effectively showcases the utility of remote sensing and Geographic Information System (GIS) technologies in environmental and urban analysis. Through the strategic manipulation of different spectral bands, the script facilitates a comprehensive portrayal of the land cover characteristics within the chosen ROI. This level of analysis is invaluable across various domains, including urban planning, environmental conservation, agricultural monitoring, and natural resource management. The capacity to identify and categorize different land characteristics enhances our knowledge and strengthens the decision-making for future land utilization and conservation initiatives.

// Step 1: Define the Region of Interest (ROI)
// This defines the geographic area for the analysis, using coordinates to create a rectangle.
var roi = ee.Geometry.Rectangle([32.25, 15.25, 33, 16]);
// Step 2: Select Image Collection
// This selects the Landsat 9 image collection and filters it to include only images that intersect with the ROI.
var collection = ee.ImageCollection('LANDSAT/LC09/C02/T1_L2')
.filterBounds(roi);
// Step 3: Filter by Date and Cloud Cover, and Select the Image with Least Cloud Cover
// This filters the image collection to a specific date range and sorts by cloud cover, selecting the image with the least cloud cover.
collection = collection.filterDate('2022-01-01', '2023-12-31');
collection = collection.sort('CLOUD_COVER');
var leastCloudyImage = collection.first();
// Apply scaling factors function
// This function scales optical and thermal bands to enhance image visualization.
function applyScaleFactors(image) {
var opticalBands = image.select('SR_B.').multiply(0.0000275).add(-0.2);
var thermalBands = image.select('ST_B.*').multiply(0.00341802).add(149.0);
return image.addBands(opticalBands, null, true)
   .addBands(thermalBands, null, true);}
var scaledImage = applyScaleFactors(leastCloudyImage);
// Step 4: Clip, Create True Color Composite, and Add to Map
// This clips the image to the ROI and creates a True Color Composite for visualization.
var clippedImage = scaledImage.clip(roi);
var visualization = {bands: ['SR_B4', 'SR_B3', 'SR_B2'], min: 0.0, max: 0.3};
var trueColorComposite = clippedImage.visualize(visualization);
Map.addLayer(trueColorComposite, {}, 'TrueColor Composite');
// Step 5: Create and Add False Color Composite to Map
// This creates a False Color Composite using specific bands to highlight different features like vegetation and water.
var falseColorComposite = clippedImage.visualize({bands: ['SR_B5', 'SR_B4', 'SR_B3'], min: 0.0, max: 0.3});
Map.addLayer(falseColorComposite, {}, 'FalseColor Composite');
// Step 6: Urban Area Identification
// This identifies urban areas using a spectral index and masks non-urban areas for visualization.
var urbanAreas = clippedImage.normalizedDifference(['SR_B4', 'SR_B3']).gt(0.2);
var urbanMask = urbanAreas.updateMask(urbanAreas);
Map.addLayer(urbanMask, {palette: ['red']}, 'Urban Areas');
// Step 7: Vegetation Delineation
// This delineates vegetation using the NDVI index and masks non-vegetation areas for visualization.
var ndvi = clippedImage.normalizedDifference(['SR_B5', 'SR_B4']).gt(0.3);
var vegetationMask = ndvi.updateMask(ndvi);
Map.addLayer(vegetationMask, {palette: ['green']}, 'Vegetation Areas');
// Step 8: Water Body Delineation
// This delineates water bodies using the NDWI index and masks non-water areas for visualization.
var ndwi = clippedImage.normalizedDifference(['SR_B3', 'SR_B5']).gt(0.3);
var waterMask = ndwi.updateMask(ndwi);
Map.addLayer(waterMask, {palette: ['blue']}, 'Water Bodies');
// Center the map on the ROI
Map.centerObject(roi);

Box 2.

Explaining the coding steps in case study 1.

The script presented in Box 2 outlines eight actionable steps that users can directly implement in the GEE (Google Earth Engine) code editor. This is designed to engage users in understanding the significance of image processing through GEE and its JavaScript API. Moreover, the user is equipped to derive essential insights from the imagery by applying band maths methods. This process includes the use of prominent spectral indices, such as the Normalized Difference Vegetation Index (NDVI), Aquatic Spectral Indices for water analysis, and Anthropogenic Land Use Indices for urban area delineation. These indices play a critical role in the nuanced assessment and interpretation of satellite imagery data. Box 2 explains the coding steps in case study 1. The code could be accessed and reused from https://github.com/Elmahal/DIP_GEE_JS_Codes

8.2 Case study 1: model outputs

In our first case study, we delve into extracting data from Landsat 9 images from 2022, focusing on processing techniques and region of interest (ROI) clipping, as detailed in our code snippet. This includes generating true and false color composites (Figure 1). The aim is to identify, visualize, and extract information for essential environmental elements such as vegetation and water bodies using normalized difference indices (NDVI and NDWI, respectively), displayed in green and blue (Figure 2). Improving the quality of the extracted information could include optimizing spectral band selection to match the study area’s unique characteristics, adjusting index thresholds for clearer feature differentiation, and integrating more data layers or indices for comprehensive environmental analysis.

Figure 1.

True color composite (left) and false color composite (right).

Figure 2.

Information extraction for vegetation (left) and water (right) using NDVI and NDWI, respectively.


9. Case study 2: advanced LULC analysis through supervised classification of Sentinel-2 imagery

The Sentinels, satellite constellations crafted by the European Space Agency (ESA), incorporate both radar and optical imaging capabilities, making them well-suited for tasks such as environmental surveillance, climate monitoring, and air quality assessment [33]. In this chapter, we will explore together the data provided by Sentinel-2, the multispectral instrument (MSI). The MSI offers high-resolution multispectral imagery covering various wavelengths in the electromagnetic spectrum. The data are crucial for monitoring vegetation, soil, and water resources, as well as detecting and analyzing changes in land cover [33]. The information provided by Sentinel-2 supports informed decision-making in environmental management, conservation, and disaster response, making it a valuable tool for humanitarian efforts and mitigating the impact of natural disasters (Box 3).

// Importing the region of interest (roi) – Greater Khartoum
var roi = ee.FeatureCollection('projects/ee-anwareltayeb2/assets/KRT_SA'); // "KRT_SA" is our region of interest; replace it with your own ROI.
// Export the five training classes used for classification.
// The feature collections (water, agric, ind, urban, barren) are assumed to be
// training geometries digitized in the Code Editor's Imports section.
exports.water = water;
exports.agric = agric;
exports.ind = ind;
exports.urban = urban;
exports.barren = barren;
// Load Sentinel-2 ImageCollection for the specified date range and region
var image = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
    .filterDate('2023-01-01', '2023-04-15')
    .filterBounds(roi)
    .select(['B4', 'B3', 'B2', 'B8']) // Selecting specific bands
    .median();
// Visualization parameters (note: despite the name 'visParamsFalse', B4/B3/B2 is a true-color combination)
var visParamsFalse = {
bands: ['B4', 'B3', 'B2'],
min: 0.0,
max: 3000,
gamma: 1.4
};
// Add image layer to the map for visualization
Map.addLayer(image.clip(roi), visParamsFalse, 'KRTM23');
Map.centerObject(roi, 10);
// Supervised classification with CART
// Define labels and bands
var label = 'Class';
var bands = ['B2', 'B3', 'B4', 'B8'];
// Assign class labels to each feature in the collections
var waterWithClass = water.map(function(feature) {
return feature.set('Class', 0);
});
var agricWithClass = agric.map(function(feature) {
return feature.set('Class', 1);
});
var indWithClass = ind.map(function(feature) {
return feature.set('Class', 2);
});
var urbanWithClass = urban.map(function(feature) {
return feature.set('Class', 3);
});
var barrenWithClass = barren.map(function(feature) {
return feature.set('Class', 4);
});
// Merge the class collections into one dataset for training
var training = waterWithClass.merge(agricWithClass)
.merge(indWithClass)
.merge(urbanWithClass)
.merge(barrenWithClass);
print(training); // Check the training dataset
// Prepare the input image for classification
var input = image.select(bands);
// Sample the regions for training the classifier
var trainImage = input.sampleRegions({
collection: training,
properties: [label],
scale: 10
});
print(trainImage);
// Split the data into training and testing sets
var trainingData = trainImage.randomColumn();
var trainSet = trainingData.filter(ee.Filter.lessThan('random', 0.8));
var testSet = trainingData.filter(ee.Filter.greaterThanOrEquals('random', 0.8));
// Train the classifier
var classifier = ee.Classifier.smileCart().train(trainSet, label, bands);
// Classify the image
var classified = input.classify(classifier);
print(classifier.getInfo());
// Define a palette for the land cover classes
var landcoverPalette = [
"#0000FF", // water
"#00FF00", // agriculture
"#800080", // industrial
"#FFFF00", // urban
"#D2B48C" // barren
];
// Add classified layer to the map
Map.addLayer(classified.clip(roi), {palette: landcoverPalette, min: 0, max: 4}, "classified");
// Accuracy assessment and confusion matrix
var confusionMatrix = ee.ConfusionMatrix(testSet.classify(classifier)
.errorMatrix({
actual: 'Class',
predicted: 'classification'
}));
print("ConfusionMatrix:", confusionMatrix);
print("Overall Accuracy:", confusionMatrix.accuracy());
// Calculating areas of the created classes in GEE
var areaImage = ee.Image.pixelArea().addBands(classified);
var areas = areaImage.reduceRegion({
reducer: ee.Reducer.sum().group({
groupField: 1,
groupName: 'class',
}),
geometry: roi.geometry(),
scale: 500, // coarse scale keeps the computation light; use 10 to match Sentinel-2 resolution
maxPixels: 1e10
});
print(areas);
// Processing the area calculations
var classAreas = ee.List(areas.get('groups'));
var classAreaLists = classAreas.map(function(item) {
var areaDict = ee.Dictionary(item);
var classNumber = ee.Number(areaDict.get('class')).format();
var area = ee.Number(areaDict.get('sum')).divide(1e6).round();
return ee.List([classNumber, area]);
});
var result = ee.Dictionary(classAreaLists.flatten());
print(result);

Box 3.

The comprehensive code for case study 2 with elaborated comments.

9.1 Get your hands wet with LULC supervised classification

In this section, the user will be guided through the detailed process of utilizing the Google Earth Engine code editor, JavaScript programming, and Sentinel-2 imagery. The emphasis will be on the application of supervised image classification techniques for LULC analysis utilizing machine learning algorithms. The chosen study area for this application is part of Khartoum state in Sudan, a country currently experiencing profound humanitarian challenges. This region’s selection underscores both its geographic significance and the current humanitarian context. The goal is to classify the LULC into distinct categories: water, agricultural land, industrial areas, urban areas, and barren land.

A vital step in the classification process is the overall accuracy assessment of the model. This involves evaluating how well the model performs in correctly classifying various LULC types, ensuring the accuracy of the chosen algorithm and the exactness of the classification results.

This is achieved through the confusion matrix, which rigorously evaluates the model’s performance. An overall accuracy of 80% is considered good for most applications [34]. However, many factors can play a role in the accuracy of the model’s performance. This section highlights the effective use of spatial cloud computing in the detailed examination and classification of various land types and their utilizations within the selected region of interest. The final outcome is the development of a comprehensive land use and land cover (LULC) map, achieved through the application of Google Earth Engine (GEE) for both visualization and quantitative analysis.

9.2 Accessing Sentinel data via GEE/JS API

Unlike traditional approaches to manipulating Earth observation (EO) data, GEE provides novel methods to handle voluminous data in a variety of formats, processing and analyzing it without requiring a powerful local computing system [11]. To access the code editor and interact with the GEE platform, JavaScript serves as one of the native GEE APIs. A simple way of accessing the Sentinel data is to rely on the available code snippets provided by the Earth Engine Data Catalog (https://developers.google.com/earth-engine/datasets). Once the user accesses the code via the code editor, it is possible to customize it and build on it according to the user’s needs. Before doing so, however, the user needs to understand the components of the Sentinel data. The Sentinel-2 MSI data consist of (i) bands, (ii) quality indicators, and (iii) metadata (https://sentinels.copernicus.eu/web/sentinel/user-guides/sentinel-2-msi).

9.3 Understanding the LULC supervised classification step by step

In the following paragraphs, we are going to explain the codes used for supervised classification of satellite imagery from Sentinel-2, specifically for a study area (ROI) in Greater Khartoum, Sudan. It includes image preparation, classification, accuracy assessment, and area calculation for different LULC classes. We will divide the process into individual steps, making it easier for the user to follow each part one at a time.

9.4 Loading Sentinel-2 image collection

The process begins by loading a Sentinel-2 image collection, where images are filtered by date and specific bands (10-meter resolution) are selected. This step is crucial for capturing high-resolution, relevant spectral data. The median of these images is computed to reduce noise and cloud cover, ensuring a clear and consistent dataset for analysis.

9.5 Selecting the region of interest (ROI)

Selecting the ROI involves balancing ideal study requirements against practical data limitations. GIS tools are instrumental in defining and visualizing ROIs. In this case, the ROI “KRT_SA” is predefined and imported as a feature collection, representing the study area.

9.6 Defining visualization parameters

Visualization parameters are set using the “visParamsFalse” object. This configuration dictates how the Google Earth Engine represents satellite imagery on the map. It utilizes bands 4, 3, and 2 (red, green, blue) for true-color representation. The pixel value range and gamma adjustment enhance contrast and detail, optimizing the visual interpretation of the image.

9.7 Adding the visualized layer to the basemap

The processed image is added to the map as a layer with custom visualization settings and labeled for easy identification. The map is then centered on the ROI at a defined zoom level, focusing the display on the area of interest.

9.8 Preparing training areas for classification

For this case study, five land use and land cover (LULC) classes are identified, each with a unique hexadecimal color code. Training areas could be imported or digitized on the map, and each is assigned a class label corresponding to a specific land cover type.

9.9 Key factors in choosing training areas

Selecting training areas requires consideration of various factors to ensure accurate classification. These include capturing the full spectrum of each class, ensuring geographic distribution, maintaining temporal consistency, verifying the accuracy of training data, understanding spectral properties, and considering landscape dynamics.

9.10 Data preparation for supervised classification

Data for supervised classification is prepared by labeling different datasets according to their land cover type. This labeled data trains a classification model (CART algorithm) to predict land cover types based on spectral characteristics.

9.11 Merging classes

A comprehensive training dataset is prepared by merging datasets representing different classes into one large dataset. This method ensures a varied and representative sample for the machine learning model.

9.12 Implementing supervised classification with CART algorithm

The script selects relevant bands, samples regions based on labeled training data, randomizes these samples, and splits them into training and testing sets. A CART classifier is then trained, typical in remote sensing applications for tasks such as land cover classification.

9.13 Visualizing land cover classification results

The script classifies an input image using a trained classifier, assigns a color palette for different land cover classes, and adds the classified image to the map. This visualization aids in interpreting satellite imagery.

9.14 Evaluating the classification model performance

Model accuracy is assessed using a confusion matrix, which evaluates performance on a test dataset. This step is vital in understanding and validating the classification model’s effectiveness.

9.15 Class-based area analysis in satellite imagery using Google Earth Engine

The concluding phase entails the examination and classification of specific zones within satellite imagery into distinct categories, each representing a different type of land cover. This process exemplifies the conversion of unprocessed satellite images into significant, systematically categorized spatial data for use in environmental and geographic research. The code presented is an advanced instance of spatial image processing through Google Earth Engine (GEE). It targets a designated area of interest (ROI) in the image and utilizes the “reduceRegion” function to aggregate pixel values in this region. A critical element of this analysis is the aggregation of pixel values into designated classes, representing various land cover types, vegetation indices, or other ecological attributes. This analytical method is refined through settings such as spatial resolution (scale) and maximum pixel limits. It further manipulates this list to translate the area calculations into a more understandable unit, such as square kilometers, correlating them with their corresponding class labels.

Ultimately, the script generates a meticulously arranged dictionary, associating each class with its respective area size. This output is instrumental for tasks in environmental surveillance, resource management, and land utilization studies. The code showcases the robust capability of GEE in converting basic satellite images into informative, classified spatial data crucial for ecological and geographical research. The code could be accessed and reused from https://github.com/Elmahal/DIP_GEE_JS_Codes

9.16 Case study 2: model outputs

The focus of case study 2 is to explore the output and the performance of a supervised classification model, the CART algorithm. The classifier is trained with labeled data [water, agriculture, industrial, urban, and barren lands]. The output is shown in Figure 3. Furthermore, the classification’s effectiveness is assessed via a confusion matrix, which evaluates the overall accuracy; this was found to be 85%, a widely accepted accuracy in LULC classification. The accuracy per class is displayed in Table 1. Accuracy assessment is a requirement in any classification process, and it is crucial for understanding the classifier’s performance in accurately identifying each land cover type. Improving the model’s performance can be achieved through several strategies. One approach is to enhance both the quantity and quality of the training data. Another method involves incorporating a wider variety of representative samples from each class to ensure a comprehensive learning base. Additionally, expanding the selection of spectral bands used in the analysis can provide more detailed information for the model to process. Lastly, integrating additional features such as elevation and texture into the model can significantly improve its ability to classify data accurately.

Figure 3.

The output LULC classes overlaid on the Google basemap.

Actual      | Correct predictions | Total actual | Accuracy (%)
Water       | 90                  | 100          | 90.0
Agriculture | 80                  | 100          | 80.0
Industrial  | 85                  | 100          | 85.0
Urban       | 80                  | 100          | 80.0
Barren      | 95                  | 100          | 95.0

Table 1.

Per class accuracy for evaluating the CART model performance.


10. Conclusion

In this chapter, we have delved into the nuances of digital image processing by leveraging the capabilities of JavaScript, Google Earth Engine (GEE), and the code editor. This exploration bridges the gap between theoretical concepts and their practical application in analyzing and visualizing remotely sensed data. The chapter is enhanced by including two case studies, offering practical experience to the readers. These studies act as basic exercises, accommodating the needs of both novice and experienced users. They demonstrate the practical applications of digital image analysis concepts, with JavaScript playing a key role in showcasing its adaptability and efficacy in processing remote sensing data. The skills and insights gained from this chapter are not limited to JavaScript; they are applicable across various programming languages, making them versatile for a broad audience. This is particularly relevant for users of Python or R, who can adapt these techniques to their preferred programming environments. The chapter emphasizes the effective use of JavaScript API and Google Earth Engine in remote sensing and digital image analysis, setting a foundation for future advancements in geospatial data analysis across fields, such as research, environmental monitoring, and resource management. It encourages further exploration and learning in the evolving field of digital image analysis and its significant potential in remote sensing.

References

  1. Feldman H. Sputnik: The First Satellite. New York: The Rosen Publishing Group, Inc; 2002
  2. Dickson P. Sputnik: The Shock of the Century. New York: U of Nebraska Press; 2019
  3. Jakhu RS, Pelton JN. Global Space Governance: An International Study. Berlin: Springer; 2017
  4. de Carvalho Alves M, Sanches L, Silva de Menezes F, Trindade LRSLC. Multisensor analysis for environmental targets identification in the region of Funil dam, state of Minas Gerais, Brazil. Applied Geomatics. 2023;15(4):807-827. DOI: 10.1007/s12518-023-00523-w
  5. Varshney PK, Arora MK. Advanced Image Processing Techniques for Remotely Sensed Hyperspectral Data. Berlin: Springer Science & Business Media; 2013
  6. Chen L-C, Papandreou G, Kokkinos I, Murphy K, Yuille AL. Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. 2016. DOI: 10.48550/arXiv.1412.7062
  7. Zhu XX, Tuia D, Mou L, Xia G-S, Zhang L, Xu F, et al. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine. 2017;5:8-36. DOI: 10.1109/MGRS.2017.2762307
  8. Ma L, Liu Y, Zhang X, Ye Y, Yin G, Johnson BA. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS Journal of Photogrammetry and Remote Sensing. 2019;152:166-177. DOI: 10.1016/j.isprsjprs.2019.04.015
  9. Gorelick N, Hancher M, Dixon M, Ilyushchenko S, Thau D, Moore R. Google earth engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment, Big Remotely Sensed Data: Tools, Applications and Experiences. 2017;202:18-27. DOI: 10.1016/j.rse.2017.06.031
  10. Yan J, Ma Y, Wang L, Choo K-KR, Jie W. A cloud-based remote sensing data production system. Future Generation Computer Systems. 2018;86:1154-1166. DOI: 10.1016/j.future.2017.02.044
  11. Elmahal AE, Musa MMI. Spatial Cloud Computing Using Google Earth Engine and R Packages. London, UK: IntechOpen; 2023. DOI: 10.5772/intechopen.1002686
  12. Weinman J. Cloudonomics: The Business Value of Cloud Computing. New Jersey: John Wiley & Sons; 2012
  13. Davison D. The Total Economic Impact of Google Cloud Platform. Forrester Research; 2020
  14. Jensen J. Remote Sensing of the Environment: An Earth Resource Perspective. 2nd ed. Upper Saddle River, NJ: Pearson; 2006
  15. Campbell JB, Wynne RH. Introduction to Remote Sensing. 5th ed. New York: Guilford Press; 2011
  16. Lillesand T, Kiefer RW, Chipman J. Remote Sensing and Image Interpretation. New Jersey: John Wiley & Sons; 2015
  17. Curlander JC, McDonough RN. Synthetic Aperture Radar: Systems and Signal Processing. New Jersey: Wiley; 1992
  18. Borra S, Thanki R, Dey N. Satellite Image Analysis: Clustering and Classification. Berlin: Springer; 2019
  19. Manolakis DG, Lockwood RB, Cooley TW. Hyperspectral Imaging Remote Sensing: Physics, Sensors, and Algorithms. Cambridge: Cambridge University Press; 2016
  20. Favorskaya MN, Jain LC. Handbook on Advances in Remote Sensing and Geographic Information Systems: Paradigms and Applications in Forest Landscape Modeling. Berlin: Springer; 2017
  21. Al-Fares W. Historical Land Use/Land Cover Classification Using Remote Sensing: A Case Study of the Euphrates River Basin in Syria. Berlin: Springer Science & Business Media; 2013
  22. Moigne JL, Netanyahu NS, Eastman RD. Image Registration for Remote Sensing. Cambridge: Cambridge University Press; 2011
  23. Aronoff S. Remote Sensing for GIS Managers. Redlands, California: ESRI Press; 2005
  24. Alves M d C, Sanches L. Remote Sensing and Digital Image Processing with R. Boca Raton, Florida: CRC Press; 2023
  25. Soergel U. Review of radar remote sensing on urban areas. In: Soergel U, editor. Radar Remote Sensing of Urban Areas, Remote Sensing and Digital Image Processing. Dordrecht, Netherlands: Springer; 2010. pp. 1-47. DOI: 10.1007/978-90-481-3751-0_1
  26. Spiteri A. Remote sensing 96: Integrated applications for risk assessment and disaster prevention for the Mediterranean. In: Proceedings of the 16th EARSeL Symposium, Malta, 20-23 May 1996. Boca Raton, Florida: CRC Press; 1997
  27. Varshney PK, Arora MK. Advanced Image Processing Techniques for Remotely Sensed Hyperspectral Data. Berlin: Springer Science & Business Media; 2004
  28. Kuhfittig PKF. Introduction to the Laplace Transform. Berlin: Springer Science & Business Media; 2013
  29. Yang C, Goodchild M, Huang Q, Nebert D, Raskin R, Xu Y, et al. Spatial cloud computing: How can the geospatial sciences use and help shape cloud computing? International Journal of Digital Earth. 2011;4:305-329. DOI: 10.1080/17538947.2011.587547
  30. Mell P, Grance T. The NIST Definition of Cloud Computing (NIST Special Publication (SP) 800-145). Gaithersburg, Maryland: National Institute of Standards and Technology; 2011. DOI: 10.6028/NIST.SP.800-145
  31. Aybar C, Wu Q, Bautista L, Yali R, Barja A. rgee: An R package for interacting with Google Earth Engine. Journal of Open Source Software. 2020;5(51):2272. DOI: 10.21105/joss.02272
  32. Google Earth Engine [Online]. Available from: https://earthengine.google.com/ [Accessed: January 30, 2023]
  33. MSI Instrument - Sentinel-2 MSI Technical Guide. Sentinel Online, European Space Agency [Online]. Available from: https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-2-msi/msi-instrument [Accessed: January 30, 2023]
  34. Olson DL, Delen D. Advanced Data Mining Techniques. Berlin: Springer Science & Business Media; 2008
