Preface

Modern tools and techniques for the collection and analysis of data across all fields of science and technology are becoming increasingly complex. This growing complexity is evidenced by the need for a more generalized and standardized description (integration) of the various data sources and formats, allowing for the flexible exploration of different data types. The challenge has been to create automated systems capable of providing an understandable format for the different datasets, and of making the derived formats and/or standards applicable across different platforms. Over the past few decades, one technology that has proved indispensable in this area is Linked Open Data (LOD). LOD systems consist of machine-readable datasets expressed as Resource Description Framework (RDF) triples, which describe data classes and their underlying properties. Moreover, early research indicates that one of the problems with existing data- or information-processing systems is the need not only to represent the data (or information) in formats that humans can easily understand, but also to build intelligent systems that process the information they contain or support: in other words, "machine-understandable" systems. By a machine-understandable system, we mean one whose extracted information or models are either semantically labeled (annotated) to ease the analysis process, or represented in a formal structure (an ontology) that allows a computer (a reasoning engine) to infer new facts from the underlying relations.

Indeed, the main idea for such data- or information-processing systems, and for the aggregation of data and computation of the hierarchy of process elements, is that they should be not only machine-readable but also machine-understandable. An adequate knowledge-base system is understandable, on the one hand, by people and, on the other, by machines. As devices become smarter and produce data about themselves, it has become increasingly important for data scientists to take advantage of more powerful tools and data-integration techniques that provide a common standard for information dissemination across the different platforms. To this end, this book demonstrates that technologies such as the semantic web, machine learning, deep learning, natural language processing, the internet of things, knowledge graphs, process mining, and artificial intelligence, which together encompass the wider spectrum of LOD, are of paramount importance. The book therefore presents two main drivers for the LOD technologies: (i) encoding knowledge about specific data and process domains, and (ii) advanced reasoning over and analysis of big datasets.
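To make the idea of a reasoning engine inferring new facts from underlying relations concrete, the following is a minimal Python sketch. It represents RDF-style (subject, predicate, object) triples and applies two standard RDFS-style entailment rules until no new facts appear. All names (ex:Dog, ex:Rex, etc.) are illustrative assumptions, not part of any chapter in this book.

```python
# RDF-style triples: a tiny knowledge base stated explicitly by hand.
# Prefixes (ex:, rdf:, rdfs:) are kept as plain strings for simplicity.
triples = {
    ("ex:Dog", "rdfs:subClassOf", "ex:Animal"),
    ("ex:Animal", "rdfs:subClassOf", "ex:LivingThing"),
    ("ex:Rex", "rdf:type", "ex:Dog"),
}

def infer(triples):
    """Apply two RDFS-style entailment rules to a fixed point:
    (1) rdfs:subClassOf is transitive;
    (2) rdf:type propagates up the subClassOf hierarchy."""
    facts = set(triples)
    changed = True
    while changed:
        new = set()
        for s, p, o in facts:
            for s2, p2, o2 in facts:
                if p2 != "rdfs:subClassOf" or s2 != o:
                    continue
                if p == "rdfs:subClassOf":
                    new.add((s, "rdfs:subClassOf", o2))  # rule 1
                elif p == "rdf:type":
                    new.add((s, "rdf:type", o2))         # rule 2
        changed = not new <= facts
        facts |= new
    return facts

# Facts that were derived by reasoning rather than stated explicitly,
# e.g. ("ex:Rex", "rdf:type", "ex:Animal").
inferred = infer(triples) - triples
```

The point of the sketch is that the machine does not merely store the three stated triples; by exploiting the semantics of rdfs:subClassOf it also concludes, for instance, that ex:Rex is an ex:LivingThing, which no human ever wrote down.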

This book intends to provide the reader with a comprehensive overview of the latest developments within the LOD framework and the benefits of the supported methods, ranging from semantics-aware techniques that exploit knowledge kept in big data to improved data reasoning (big analysis) beyond the possibilities offered by most traditional data-mining techniques. Fundamentally, the book covers the entire spectrum of "Linked Open Data - Applications, Trends and Future Developments". It consists of six chapters, selected after a rigorous review, that address these topics at a more conceptual level.
