• *The third perspective is the implementation architecture logic* (**Figure 21**).

The implementation of these paradigms requires a concept for integrating artificial intelligence with software (SW) system architectures that enables interactive multiuser operations in real time relative to the user reaction times. End users will be able to work on shared user scenarios, on the results of their analyses, or on information extraction procedures.

*Artificial Intelligence Data Science Methodology for Earth Observation*

The central component is a **data index (DI)**, which is a very specific database model for very fast, real-time management, processing, and distribution of large structured and unstructured distributed multi-temporal data sets. The data can be efficiently uploaded on demand, coping with large volumes of data from various heterogeneous sources.
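As an illustration only, the on-demand indexing role of the DI can be sketched as a toy in-memory structure; the tile/date granularity, the class name, and the product identifiers below are assumptions for the sketch, not the actual DI design:

```python
# Toy in-memory "data index" (DI) sketch: products are registered under
# (tile, date) keys and retrieved by tile set and date range.
# Granularity and naming are illustrative assumptions, not the DI's design.
from bisect import insort

class DataIndex:
    def __init__(self):
        self._by_tile = {}  # tile id -> date-sorted list of (date, product)

    def ingest(self, tile, date, product):
        """Register a product on demand; keeps per-tile entries date-sorted."""
        insort(self._by_tile.setdefault(tile, []), (date, product))

    def query(self, tiles, start, end):
        """All products on the given tiles whose date lies in [start, end]."""
        return [p
                for t in tiles
                for d, p in self._by_tile.get(t, [])
                if start <= d <= end]
```

A production DI would of course be a distributed, persistent service rather than a single dictionary; the sketch only shows the ingest/query contract that the other components rely on.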

The **data preparation** needs to be able to support various tasks for the ARD generation. A **workflow orchestration engine** will be relaying data and offers various processor steps:

○ A deep neural network (**DNN module**) for physically meaningful feature learning

○ **Spatiotemporal analysis**, e.g., spatiotemporal pattern analysis and extraction for understanding the evolution of classes, fusing information from various sources, not just identifying objects but, in particular, spatiotemporal patterns and context

○ **Data mining** to explore heterogeneous multi-temporal data sets.

The extracted information and data content are again indexed in the DI and provided (via web services) to one of the four human-machine interface (HMI) modules (i.e., **visual browsing, visual analytics, active learning, and event analysis**) supporting advanced big data visualization and active learning paradigms. Once a researcher is satisfied with the results, they can be shared with a restricted group or publicly via the **collaborative layer**. These architectures are generically based on federated approaches, making it possible to deploy various components where they fit best, using cloud technologies and web services for communication.
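To make the active learning paradigm of the HMI concrete, a minimal human-in-the-loop sketch follows. The nearest-centroid classifier, the uncertainty criterion, and all sample values are hypothetical stand-ins for the chapter's DNN-derived features, not the actual implementation:

```python
# Minimal active-learning loop (uncertainty sampling): each round, the
# sample whose two nearest class centroids are closest in distance is
# shown to the analyst (the "oracle") for labeling.
# The nearest-centroid model is a hypothetical stand-in classifier.
from math import dist

def centroids(labeled):
    """Mean feature vector per class from (features, label) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def most_uncertain(pool, cents):
    """Index of the unlabeled sample with the smallest margin between
    its two nearest class centroids."""
    def margin(x):
        d = sorted(dist(x, c) for c in cents.values())
        return d[1] - d[0] if len(d) > 1 else d[0]
    return min(range(len(pool)), key=lambda i: margin(pool[i]))

def active_learning(pool, oracle, seed, rounds=3):
    """Grow the labeled set by querying the oracle for the most
    ambiguous unlabeled sample each round; return final centroids."""
    labeled, pool = list(seed), list(pool)
    for _ in range(rounds):
        if not pool:
            break
        x = pool.pop(most_uncertain(pool, centroids(labeled)))
        labeled.append((x, oracle(x)))  # human-in-the-loop label
    return centroids(labeled)
```

The design point is that the analyst labels only the samples the model is least sure about, which is why very few examples can suffice.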

**6. Conclusions**

The advantages and benefits of the proposed approach are:


• We are able to process multi-sensor data.


• We do clustering considering the physical parameters behind the sensors, contrary to the classical classification proposed in AI.

• With very few examples, we are able to classify the images with high accuracy.

• We are able to create a semantic scheme adapted to different EO sensors (SAR or multispectral) and to high-resolution (e.g., TerraSAR-X or WorldView) or medium-resolution (e.g., Sentinel-1 or Sentinel-2) data.
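The clustering on physical sensor parameters mentioned above can be illustrated with plain k-means over hypothetical feature vectors; the chosen quantities (a SAR backscatter value in dB and a reflectance ratio) and all numbers are illustrative assumptions, not results from the chapter:

```python
# Sketch of clustering on physical parameters rather than abstract labels:
# each sample is a vector of sensor-derived physical quantities
# (hypothetical here: [backscatter_dB, reflectance_ratio]).
# Fixed initial centers keep the example deterministic.
from math import dist

def kmeans(samples, centers, iters=10):
    """Lloyd's algorithm over the given initial cluster centers."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for s in samples:
            # assign each sample to its nearest current center
            k = min(range(len(centers)), key=lambda j: dist(s, centers[j]))
            groups[k].append(s)
        # recompute each center as the mean of its group (keep empty ones)
        centers = [[sum(v) / len(g) for v in zip(*g)] if g else c
                   for g, c in zip(groups, centers)]
    return centers

samples = [[-18.0, 0.1], [-17.0, 0.2], [-6.0, 0.7], [-5.0, 0.8]]
centers = kmeans(samples, centers=[[-20.0, 0.0], [0.0, 1.0]])
```

Because the feature axes are physical quantities, the resulting cluster centers remain interpretable (e.g., low-backscatter/low-reflectance vs. high-backscatter/high-reflectance surfaces), which is the point of clustering in this space.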

**7. Future work**

During the next years, we expect a wide variety of new satellite image data that can be easily downloaded, handled, and analyzed by individual users. We also think that a number of new geophysical databases and browse tools will become available, so that each user has easy access to numerous additional satellite data sources, together with auxiliary geophysical data from common libraries and data management tools supporting in-depth image data analyses and their interpretation. Innovative application fields (such as autonomous driving based on machine learning and artificial intelligence) will bring us still more data handling tools.

*DOI: http://dx.doi.org/10.5772/intechopen.86886*







**Figure 21.** *The logic implementation architecture scheme.*

