*Linked Open Data - Applications, Trends and Future Developments*

**5. Experimental results and contributions**

An excerpt of the TULIP vocabulary syntax for tlp:element and tlp:member (the detailed specification can be found at http://purl.org/tulip/spec) is shown in **Figure 13**.

**Figure 13.**
*Excerpt of TULIP vocabulary syntaxes tlp:element and tlp:member.*

We have designed and implemented two reference libraries. The first is a Python library with functions to extract and transform Webpages and create TULIP datasets. The second is a JavaScript library to query a TULIP endpoint, consume its result sets, and manipulate them. The code is provided in the GitHub repositories at https://github.com/julthep/tulip and https://github.com/julthep/tulip.js, respectively.

Furthermore, we experimented with the TULIP vocabulary using Wikipedia as the data source by converting a number of articles into the TULIP data format. The resulting datasets can be accessed via the SPARQL endpoint at http://tlpedia.org/sparql/ or downloaded at http://tlpedia.org/datasets/. These datasets will be updated periodically, and the number of imported articles will be increased on a regular basis.

**6. Conclusion**

Our proposal differs from existing research, which mainly focuses on transforming data from tables and lists into facts in various formats. TULIP instead extracts data from tables and lists into a dataset in the form of five-star Linked Data. The resulting RDF triples can also be used to recreate tables and lists in the same format as the source data, because the designed schema preserves the structure of the original table and list. Another essential feature is that the acquired RDF triples can be embedded in a package file such as XML or HTML with RDFa, to be used to create tables and lists on a Webpage as an integrated dataset.

TULIP can be applied to many applications since it is designed to be very flexible. Implementers can choose any of its schema models, depending on their usage. It also supports many types of data and is extensible. In addition, TULIP supports the Document Object Model (DOM), which makes it possible to transform paragraphs or text blocks into the same structure. It can be used to transform Wikipedia articles into TULIP format as a five-star open dataset so that Semantic Web applications can consume Linked Data more conveniently. All of this is to create a data structure that is not just machine-readable but machine-understandable.

**Author details**
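The endpoint above can be queried with any standard SPARQL client. As a minimal illustrative sketch (not part of the reference libraries), the following Python fragment builds a query URL and flattens a result set in the standard SPARQL 1.1 JSON results format; the tlp: prefix IRI, the tlp:member pattern, and the resource IRIs are assumptions for illustration only.

```python
import json
from urllib.parse import urlencode

ENDPOINT = "http://tlpedia.org/sparql/"

def build_query(limit=3):
    """Build a simple SELECT query; the tlp: prefix IRI and tlp:member
    property used here are assumed for illustration."""
    return (
        "PREFIX tlp: <http://purl.org/tulip/spec#>\n"
        "SELECT ?row ?cell WHERE { ?row tlp:member ?cell } "
        f"LIMIT {limit}"
    )

def request_url(query):
    """URL for a SPARQL GET request asking for JSON results."""
    params = {"query": query, "format": "application/sparql-results+json"}
    return ENDPOINT + "?" + urlencode(params)

def bindings(results_json):
    """Flatten a SPARQL 1.1 JSON result set into dicts of variable -> value."""
    doc = json.loads(results_json)
    return [
        {var: b[var]["value"] for var in doc["head"]["vars"] if var in b}
        for b in doc["results"]["bindings"]
    ]

# A canned response in the standard SPARQL 1.1 JSON results format
# (the resource IRI and literal are hypothetical):
sample = json.dumps({
    "head": {"vars": ["row", "cell"]},
    "results": {"bindings": [
        {"row": {"type": "uri", "value": "http://tlpedia.org/resource/r1"},
         "cell": {"type": "literal", "value": "Bangkok"}}
    ]}
})

rows = bindings(sample)
# To run the query for real, fetch request_url(build_query()) with
# urllib.request.urlopen and pass the response body to bindings().
```

The live request is left as a comment so the sketch stays self-contained; only the result-set shape is fixed by the SPARQL standard.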
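To make the RDFa-embedding idea concrete, here is a minimal sketch of re-emitting triples as an RDFa-annotated HTML table, so the page stays human-readable while carrying the same data; the tlp: prefix IRI, the property name, and the subject IRI are hypothetical, not taken from the TULIP specification.

```python
# Assumed prefix IRI for illustration; see http://purl.org/tulip/spec
# for the actual vocabulary.
TLP = "http://purl.org/tulip/spec#"

def table_with_rdfa(subject, rows):
    """Render (property, value) pairs as an RDFa-annotated HTML table."""
    body = "\n".join(
        f'  <tr><td property="tlp:{prop}">{value}</td></tr>'
        for prop, value in rows
    )
    return (
        f'<table prefix="tlp: {TLP}" about="{subject}">\n'
        f"{body}\n"
        "</table>"
    )

html = table_with_rdfa(
    "http://tlpedia.org/resource/r1",          # hypothetical subject IRI
    [("member", "Bangkok"), ("member", "Thailand")],
)
```

An RDFa-aware parser reading such a table recovers the same subject-property-value triples, while a browser simply renders the table.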


Julthep Nandakwang\* and Prabhas Chongstitvatana Department of Computer Engineering, Chulalongkorn University, Bangkok, Thailand

\*Address all correspondence to: julthep@nandakwang.com

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
