Watermarking Methods

#### **Chapter 6**

## Review on Watermarking Techniques Aiming Authentication of Digital Image Artistic Works Minted as NFTs into Blockchains

*Joceli Mayer*

#### **Abstract**

The recent creation of Non-Fungible Tokens (NFTs) has enabled a multibillion-dollar market for digital artistic works, including images, image sequences, videos, and animated GIFs. With this new trend came issues of fraud, stolen works, authenticity, and copyright. The goal of this chapter is to provide an overview of the watermarking techniques that can be employed to mitigate those issues. We discuss the transparency, robustness, and payload of watermarking techniques, aiming to educate artists, researchers, and developers about the many available approaches and the resulting trade-offs. We focus on fragile watermarking techniques because of their high transparency when embedded into artistic works, and we discuss the spread spectrum and Least Significant Bit (LSB) techniques. We describe the usual process of minting an NFT into a blockchain and propose a more secure certification protocol with watermarking that relies on the same minting process offered by current marketplaces. The proposed certification protocol mints a checksum string into a blockchain, ensuring the validity of the watermark and of the information embedded in it. The protocol validates the date of creation and the author identification, which are transparently embedded in the artistic work, thus increasing the security of, and confidence in, markets for artistic work transactions.

**Keywords:** image watermarking, non fungible tokens, blockchain, reversible watermarking, visible watermarking, transparent watermarking

#### **1. Introduction**

The world of digital art has found an innovative way to trade and advertise artistic image works after the recent creation of Non-Fungible Tokens (NFTs), associated with a blockchain and a service to sell and buy images, image sequences, videos, and animated GIFs. The main innovation conferred by NFTs is that the ownership of a digital artistic work becomes verifiable after the digital asset, or a URL (Uniform Resource Locator) linking to it, is minted into a blockchain.

After the first NFT work, named "Quantum", was created in 2014, a multibillion-dollar business grew around NFTs and blockchains: the market cap of NFT trading totaled over 23 billion dollars last year. Along with this surge of lucrative digital art trading through NFTs came a black market whose players trade unauthorized copies of the digital art displayed in the marketplaces. As a result, many artists started to include visible and invisible watermarks in their works in the hope that these would prevent theft or provide additional legal evidence of authorship, should it be disputed in a court of law. Moreover, protocols combining watermarked NFTs with the data embedded in the watermarks are being designed to give buyers extra confidence that a work is actually original and created or owned by the seller, avoiding or mitigating a problem that has arisen in the NFT market: unauthorized copies sold to unwary buyers.

A complicating issue is that the artistic work needs to be shown in the marketplaces, where it is easily copied and re-sold as another NFT. The illegal trader simply copies the advertised digital art and mints the work (or its URL, as is the usual practice) as his or her own, using the same NFT technology, one of the many available blockchains, and storage/display servers and services. This illegal trading is particularly damaging for low-cost artistic works, for which legal prosecution of the illegal trader would not be worthwhile: the costs and difficulties of proving ownership in a court of law are prohibitive. Moreover, many artists who do not even create an NFT for their work are being robbed, as illegal traders create the NFT before the actual owner does.

The goal of this chapter is to provide an overview of the watermarking techniques that can be employed to mitigate the problem of authentication in this multibillion-dollar market of NFTs. We discuss the transparency, robustness, and payload of watermarking techniques divided into three categories: transparent techniques with low impact on the artistic work, very robust techniques with high impact on the artistic work, and transparent and reversible watermarks. The discussion aims to educate artists, researchers, and developers about the many approaches that watermarking provides and the trade-offs that each technique imposes. As the technology of digital art trading evolves, these watermarking technologies and trading protocols will help to provide a safer and more lucrative environment for sellers and buyers in this innovative market.

#### **2. Security of authorship for minted NFTs**

The process of registering digital data (a coin, NFT, image, video, etc.) into a blockchain, thanks to the complex and secure cryptographic protocol employed, provides a very high probability that the digital data can be securely assigned to an owner, along with extra data such as a URL, a date, and other information about the transaction. Registering data into a blockchain is called minting, by analogy with minting fiduciary money. The process is considered very secure, is publicly accessible, and is verifiable in a decentralized way by the many participants in the process. The decentralized finance (DeFi) approach is based on blockchains to ensure secure transactions (digital coins or smart contracts) without the need for a single institutional agent, such as a bank or government [1].

#### **2.1 Minting Process for NFTs**

*Review on Watermarking Techniques Aiming Authentication of Digital Image Artistic Works… DOI: http://dx.doi.org/10.5772/intechopen.107715*

Regardless of the blockchain chosen for minting, there is a cost associated with the computing energy spent to process and validate the transactions in the blockchain, usually referred to as "gas" fees. For this reason, minting an NFT usually requires an associated storage server to host the actual image, video, or animated GIF; otherwise, the "gas" fees become prohibitively high due to the large amount of data (bytes) that the computing servers would have to verify. Therefore, due to registration costs, in practice only some data related to the NFT (a URL, the author, a date, or other small pieces of information) is actually minted into the blockchain. This raises some issues regarding the security of the digital asset, since it is stored on external servers and not registered into the blockchain. Currently, some services are provided for that external storage; however, they do not use blockchain technology, and security is left to the service provider's discretion. Recently, it was reported that US$1.7 million worth of NFTs was stolen by a hacker from OpenSea, a very popular NFT service.

Therefore, additional technologies need to be provided to digital art creators in order to enable more confidence in the transactions. Besides cryptographic protocols, watermarking techniques are being employed by artists aiming protection for copyright, authentication, and mitigation of frauds in the NFT market.

#### **3. Watermarking techniques applied to NFTs**

There is a variety of watermarking techniques, such as visible, fragile, semi-fragile, robust, and reversible watermarks. These techniques may be used to achieve different goals: authentication, copyright protection, tracking, or fraud detection. Certain properties of the techniques, namely robustness, transparency, and payload, are required depending on the desired goal.

#### **3.1 Watermarking properties and tradeoffs**

#### *3.1.1 Robustness*

Robustness is a desirable property in the sense that the watermark, which may contain copyright or authentication information, is able to survive a given attack. There are two types of attacks: malicious and nonmalicious. Nonmalicious attacks are the normal transformations a digital work may suffer during transmission or processing, such as a change of image format (from JPEG to PNG, for instance), mild filtering, or histogram equalization. Malicious attacks, on the other hand, are designed to remove the watermark and/or substitute it with another watermark for fraudulent purposes. Malicious attacks include geometric transformations (shearing, horizontal flipping, collage) and volumetric transformations (noise addition, color map modification, filtering, JPEG compression) [2].

#### *3.1.2 Transparency*

Transparency is a highly desired property in the context of artistic digital works. The watermark should be as invisible as possible so as not to affect the image quality, since the work is presented to potential buyers by a given site or application. Nevertheless, many artists use available software to insert very visible watermarks over the original work. This approach provides a sample of the digital work, either to advertise the author's artistic qualities or to indicate that a watermark-free version can be purchased by contacting the author, and it aims to deter stealing the work and re-selling it under another author's name. As a result, the

**Figure 1.**

*(a) A copyright-free image from [3] had the author's name and URL included at the bottom. (b) By using image tools, the surname of the author has been removed.*

watermark location is visible and usually damages the image quality and presentation to some degree. Additionally, collage operations with image processing tools can be used to remove the visible watermarks, creating a similar, watermark-free version in order to re-sell the stolen art on the same site or on another NFT trading service.

To illustrate the point, in **Figure 1a**, a copyright-free image from [3] had the author's name (found in the image metadata) and the original URL drawn over the bottom of the image; in **Figure 1b**, the surname of the author has been removed using image tools, showing how easily visible watermarks can be tampered with.

Moreover, in this scenario, a buyer has no guarantee that the received digital work is indeed original, that is, created by the seller, rather than a stolen, edited copy. Therefore, for the NFT trading scenario, very transparent watermarks can be employed to convey authentication information, together with a certification protocol provided by a trusted organization. Although invisible watermarks are more complicated and no standard protocol is available to artists, the need for a more secure market has been recognized, and some corporations are building a trusting environment and friendly applications using watermarking techniques and blockchains that enable certification of authors along with their digital works.

#### *3.1.3 Payload*

Payload is the amount of information, measured in bytes, that a watermarking technique is able to carry into the artistic image work. The required amount depends on the security protocol used and on the need to convey particular information such as an author ID, a URL, the date of minting, and so on. For each watermarking technique there is a trade-off among robustness, transparency, and payload. A technique with high robustness usually provides relatively low transparency and a small

payload. Conversely, a very transparent technique usually has low robustness and low payload. However, low robustness may actually be desired in some authentication applications, where the goal is to keep the digital work authenticated only if it has not been tampered with; thus, very transparent, low-robustness watermarks are appropriate for the NFT scenario, where scarcity and authenticity are essential.

#### **4. Semi-fragile and reversible watermarks**

Robust watermarking techniques usually produce very low transparency and, as stated before, transparency is a very important property when dealing with artistic works; therefore, robust watermarking may not be a good choice for NFT authentication. On the other hand, very transparent watermarking can be achieved with fragile, semi-fragile, and reversible techniques. Among these, some are based on the spatial domain, using Spread Spectrum (SS) [4] or Least Significant Bit (LSB) techniques. Other techniques rely on the transform domain, using either the Discrete Cosine Transform (DCT), which is the basis of JPEG compression, or the Discrete Wavelet Transform (DWT) [2].

The semi-fragile approach tolerates the small distortion imposed by nonmalicious attacks such as image format transcoding, i.e., converting the digital image work from JPEG to PNG. However, large distortions, usually meant for fraudulent purposes, such as horizontal flipping, will destroy the watermark, and the digital work will no longer be authenticated. In Section 5, we describe an authentication protocol aiming to provide a more secure market for NFTs.

Reversible watermarking is designed to be able to remove the watermark with a proper secret key in order to restore the original artistic work. This approach is quite interesting for the NFT scenario where image quality is highly desirable. We describe how this feature can be achieved in the next sections.

#### **4.1 Spatial domain techniques**

Spread spectrum and LSB-based techniques are widely used and provide transparent watermarking for authentication purposes. These techniques can be designed to be region-based in order to locate which regions have been tampered with.

#### *4.1.1 Spread spectrum techniques*

The spread spectrum approach [5] is an additive operation on the spatial domain resulting in the watermarked image:

$$Im\_W = Im + \alpha b\,\mathcal{W},\tag{1}$$

where *Im* is the original image (or a frame of a video), *α* is a scaling parameter designed according to the desired robustness and transparency, *b* ∈ {−1, +1} is an antipodal bit, and *W* is a watermarking image of the same size as *Im*. Usually, this watermarking image is built as white noise, generating a signal with a widely spread spectrum in the frequency domain. The antipodal bit *b* is used to convey one bit of information along with the watermark authentication signal; in some cases it can be discarded, leaving only the weighted watermark signal, i.e., *ImW* = *Im* + *αW*.
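As a concrete illustration, the following minimal non-blind sketch (function names and parameter values are illustrative, not taken from the cited references) embeds one antipodal bit following Eq. (1) and detects it by correlating the residual with the known watermark:

```python
import numpy as np

def ss_embed(im, w, alpha=2.0, b=+1):
    """Embed one antipodal bit b via Eq. (1): Im_W = Im + alpha * b * W."""
    return im.astype(np.float64) + alpha * b * w

def ss_detect(im_w, im, w):
    """Non-blind detection: correlate the residual with the known watermark W."""
    residual = im_w.astype(np.float64) - im.astype(np.float64)
    return 1 if np.sum(residual * w) >= 0 else -1

rng = np.random.default_rng(42)
im = rng.integers(0, 256, size=(64, 64)).astype(np.float64)  # stand-in "image"
w = rng.standard_normal((64, 64))                            # white-noise watermark
im_w = ss_embed(im, w, alpha=2.0, b=-1)
print(ss_detect(im_w, im, w))  # -1
```

Because the original image is available at detection, the residual equals *αbW* exactly, so the sign of the correlation recovers *b*; robustness to attacks then depends on the choice of *α*.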

**Figure 2.**

*(a) Original Lenna image. (b) Watermarked Lenna image embedded with 10 bits, presenting very low perceptual impact using the multibit technique in [4]. (c) Difference between the images, scaled for visibility.*

In other cases, more bits can be inserted with a more complex watermark signal composed of a weighted sum of pseudorandom sequences of the same dimension as the original image. These pseudorandom sequences can be further optimized for orthogonality [6]. In either case, the weight *α* can be computed to satisfy a given trade-off between robustness and transparency, as defined in [4]. **Figure 2** illustrates the very high transparency achieved using an elaborate multibit spread spectrum technique.
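A hedged sketch of such a multibit scheme follows; orthogonalizing the pseudorandom sequences (here via a QR decomposition, in the spirit of the optimization discussed in [6], though not that exact method) makes the per-bit correlations separate cleanly:

```python
import numpy as np

def make_sequences(shape, nbits, seed=0):
    """nbits near-white, mutually orthonormal patterns obtained by QR."""
    n = shape[0] * shape[1]
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, nbits)))  # orthonormal columns
    return [q[:, k].reshape(shape) for k in range(nbits)]

def embed_multibit(im, bits, seqs, alpha=3.0):
    """Watermark = weighted sum of sequences, one antipodal bit per sequence."""
    w = sum(alpha * b * s for b, s in zip(bits, seqs))
    return im.astype(np.float64) + w

def detect_multibit(im_w, im, seqs):
    """Recover each bit as the sign of the correlation with its sequence."""
    residual = im_w - im.astype(np.float64)
    return [1 if np.sum(residual * s) >= 0 else -1 for s in seqs]

im = np.random.default_rng(1).integers(0, 256, (32, 32)).astype(np.float64)
seqs = make_sequences(im.shape, nbits=10)
bits = [1, -1, 1, 1, -1, 1, -1, -1, 1, -1]
assert detect_multibit(embed_multibit(im, bits, seqs), im, seqs) == bits
```

With exactly orthonormal sequences the cross-terms vanish and all 10 bits are recovered; with merely pseudorandom (near-orthogonal) sequences there is a small interference term, which is one motivation for the sequence design in [6].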

#### *4.1.2 LSB techniques and reversible approach*

The watermark is embedded by changing the K least significant bits of the image pixels. The resulting impact when using the last 2 bits, for instance, is usually very small and yields a very transparent embedding. Moreover, LSB embedding can also be made reversible using a property of the binary XOR (exclusive OR) operation. To see how this works for embedding in the last bit of each pixel, let the secret key, *ImK*, be a 1-bit image of the same dimension as the original image, *Im*. Using an XOR (⊕) operation on each last bit *i*, the embedded watermark is given by:


$$W(i) = \operatorname{Im}\_K(i) \oplus \operatorname{Im}(i) \tag{2}$$

Next, the last bit of each pixel of the image, *Im*(*i*), is replaced by the watermark *W*(*i*), resulting in the watermarked image *ImW*. The process allows the authentication of the digital image under a given protocol. Moreover, given the secret key *ImK*, the previously changed last bits of the original image can be restored:

$$Im(i) = W(i) \oplus Im\_K(i) \tag{3}$$

By replacing the last bit of each pixel of the watermarked image, *ImW*(*i*), by *Im*(*i*), all bits of the original image are properly restored. This property can be used to improve the security of the NFT market. The LSB approach can be applied to more than the last bit, decreasing transparency and increasing payload. Notice that LSB embedding is a very fragile technique: any image modification will damage the watermark. This fragility is acceptable for authentication purposes within a certification protocol and its associated services, which aim to improve the security and acceptance of the NFT market (**Figure 3**).
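The XOR embedding of Eq. (2) and the restoration of Eq. (3) can be sketched in a few lines of Python; the pixel values and the key bits below are illustrative:

```python
import secrets

def lsb_embed(pixels, key_bits):
    """Eq. (2): replace each pixel's last bit with W(i) = Im_K(i) XOR Im(i)."""
    return [(p & ~1) | ((k ^ (p & 1)) & 1) for p, k in zip(pixels, key_bits)]

def lsb_restore(wm_pixels, key_bits):
    """Eq. (3): Im(i) = W(i) XOR Im_K(i) recovers the original last bit."""
    return [(p & ~1) | ((p & 1) ^ k) for p, k in zip(wm_pixels, key_bits)]

pixels = [200, 13, 55, 254, 7, 128]          # toy 8-bit pixel values
key = [secrets.randbits(1) for _ in pixels]  # secret 1-bit key image Im_K
assert lsb_restore(lsb_embed(pixels, key), key) == pixels  # fully reversible
```

The round trip works because XOR is its own inverse: applying the key bit twice returns the original last bit, which is exactly the reversibility property exploited later in the certification protocol.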

**Figure 3.** *(a) Original Lenna image. (b) Watermarked Lenna image embedded with 1 bit per pixel (all zeros in this example), presenting very low perceptual impact using the LSB technique. (c) Difference between the images, scaled by 100 times for visibility.*

#### **4.2 Frequency domain techniques**

In the frequency domain, it is possible to embed a watermark taking into account a model of human perception as a function of frequency. In this way, the technique's transparency can be properly adjusted according to the perceptual model and, as a consequence, the visual impact of the embedding is reduced compared with spatial domain techniques. The transforms most used for this purpose are the DCT and the DWT. There is a variety of approaches that modify coefficients in the frequency domain in order to embed a watermark; a review of these advanced techniques is found in [2].
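As one possible sketch, assuming a quantization-based embedding (a common transform-domain approach, not a specific method from [2]), a single bit can be hidden in a mid-frequency DCT coefficient of an 8×8 block; mid frequencies are chosen because the human eye is least sensitive to changes there:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    c = np.array([[np.cos(np.pi * (2 * j + 1) * i / (2 * n)) for j in range(n)]
                  for i in range(n)]) * np.sqrt(2.0 / n)
    c[0, :] /= np.sqrt(2.0)
    return c

def embed_bit_dct(block, bit, coeff=(3, 4), delta=16.0):
    """Quantize one mid-frequency coefficient to an even/odd multiple of delta
    (quantization index modulation), then transform back to the pixel domain."""
    c = dct_matrix(block.shape[0])
    x = c @ block @ c.T          # forward 2-D DCT
    q = np.round(x[coeff] / delta)
    if int(q) % 2 != bit:        # force the parity of the quantized coefficient
        q += 1
    x[coeff] = q * delta
    return c.T @ x @ c           # inverse 2-D DCT

def extract_bit_dct(block, coeff=(3, 4), delta=16.0):
    """Read the bit back as the parity of the quantized coefficient."""
    c = dct_matrix(block.shape[0])
    x = c @ block @ c.T
    return int(np.round(x[coeff] / delta)) % 2

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float64)
assert extract_bit_dct(embed_bit_dct(block, 1)) == 1
assert extract_bit_dct(embed_bit_dct(block, 0)) == 0
```

The quantization step `delta` plays the role of the trade-off parameter: a larger step survives stronger distortions (more robust, semi-fragile behavior) at the cost of transparency.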

#### **5. Certification protocol for watermarking NFTs**

The process of minting an NFT into a blockchain is complex; a detailed example of minting an NFT into the Ethereum blockchain is found in [7]. However, services provided by NFT marketplaces can hide this complexity from users. Some of the top NFT marketplaces include OpenSea, Axie Marketplace, Larva Labs/CryptoPunks, NBA Top Shot Marketplace, Rarible, SuperRare, Foundation, Nifty Gateway, Mintable, and ThetaDrop. Using these services, the process is simplified to a minimal number of steps, which are explained in [8]. Moreover, the Rarible NFT marketplace offers a feature called "lazy minting": all fees are charged to the buyer, the work is actually minted only after purchase, and the seller receives the work's price minus the fees, including the minting "gas". This feature is a very interesting incentive for artists and creators [9].

As explained before, due to the costs of minting, usually referred to as "gas" fees, only the URL of the artistic work is actually minted into the blockchain. The data (representing the image, video, or another work format) is usually stored in the InterPlanetary File System (IPFS), a decentralized protocol and peer-to-peer network for storing and sharing data in a distributed file system. For example, the Pinata [10] system provides a convenient IPFS API and toolkit to store NFT assets and metadata, helping to ensure that the NFT is truly decentralized.

The minting process validates in a blockchain the transaction associated with the URL of the data (image, video, music, or other work) stored in an IPFS. Notice that the data itself is not minted into the blockchain. Watermarking adds another verification layer to the authentication process, alongside the procedures and evaluations used by marketplaces to check for frauds of many types. As stated before, many artists employ visible and invisible watermarking to reduce the number of frauds or to help detect when an artistic work has been stolen. Other approaches, outside the scope of this work, can also help detect fraud, such as techniques that investigate image similarity and image forensics [11].

A third entity can be introduced to certify the transaction, helping to validate the watermarking process using the Rivest–Shamir–Adleman (RSA) cryptographic protocol. RSA is a public-key cryptosystem widely used for secure data transmission; the acronym comes from the surnames of Ron Rivest, Adi Shamir, and Leonard Adleman, who publicly described the algorithm in 1977 [12]. Both the work owner and the certification entity can use their private and public keys to strengthen the authenticity of the work, of the creator (authorship), and of the certification entity by means of an extra signature (watermark) embedded into the artistic work. A checksum of this extra validation signature (watermark) can be


also minted into a blockchain to register the transaction for extra security, keeping the decentralized approach for NFTs and digital coins.

Using the RSA cryptographic process, one can generate a watermark *W* to embed into the image (work) in order to certify the creation date, *DATECREA*, the owner identification, *USERID*, and other information. Assume RSA asymmetric cryptography with an encryption process *PUB*(*KeyPUB*, ·) and a decryption process *PRIV*(*KeyPRIV*, ·), with corresponding public and private keys *KeyPUB* and *KeyPRIV*, such that

$$\mathcal{W} = \text{PRIV}\left(\text{Key}\_{\text{PRIV}}, \text{PUB}\left(\text{Key}\_{\text{PUB}}, \text{ W}\right)\right). \tag{4}$$

These processes need the public and private keys in order to properly encrypt and decrypt messages. The public key used for encryption may be distributed publicly without compromising security, while the private key should be known only to the message sender or the work's creator/owner. In the following, we present a certification protocol that validates the authenticity of the work and the ownership of the creator.
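The round trip of Eq. (4) can be demonstrated with a textbook toy RSA; the small primes below are for illustration only, and a real deployment would use large keys and proper padding (e.g., OAEP):

```python
# Toy RSA illustrating Eq. (4): PRIV(Key_PRIV, PUB(Key_PUB, W)) = W.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler totient (3120)
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: e * d = 1 (mod phi)

def pub(key, m):
    """PUB(Key_PUB, m): encrypt (or verify) with the public key."""
    exp, mod = key
    return pow(m, exp, mod)

def priv(key, c):
    """PRIV(Key_PRIV, c): decrypt (or sign) with the private key."""
    exp, mod = key
    return pow(c, exp, mod)

w = 1234                   # a watermark symbol, must satisfy w < n
assert priv((d, n), pub((e, n), w)) == w   # Eq. (4) holds
assert pub((e, n), priv((d, n), w)) == w   # the signing direction also round-trips
```

The second assertion is what the certification protocol relies on: a value transformed with a private key can be checked by anyone holding the matching public key.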

#### **5.1 Proposed watermarking certification protocol**

Let us assume a certification entity is used to give better credibility to the artists by showing and dealing with the artistic works, registering the transactions into a blockchain for public auditing, and validating the embedded watermark. This entity can be one of the current marketplaces that register the URL of the work, along with other information, into a blockchain, usually the Ethereum blockchain. Other related approaches can be found in [13–15].

For a given image, *Im*, a watermark, *W*, can be embedded using one of the many watermarking techniques, including the spread spectrum and LSB techniques explained above. The work owner (buyer or creator) can use the services of the marketplace to create a private/public key pair, *KeyPRIVUSER* and *KeyPUBUSER*; the private key is kept secure in the user's personal digital wallet. The marketplace also creates a key pair, *KeyPRIVMKT* and *KeyPUBMKT*, for this transaction. The private keys should be kept secret by the owner and by the marketplace, respectively. On the other hand, the process of embedding and extracting the watermark is public. The watermark is created from the user identity assigned by the marketplace when the owner's account is created, *USERID*, the date of creation of the work, *DATECREA*, and the date of the transaction (or of minting into the blockchain), *DATEMINT*, which are combined by the concatenation operator, ∣. The owner encrypts his part of the watermark, *W*1, using the public key of the marketplace, and the marketplace encrypts its part, *W*2, using the public key of the owner, such that the final watermark, *W*, is composed via the XOR operation, ⊕, of both parts:

$$W = \text{PUB}\left(\text{Key}\_{\text{PUBMKT}}, \text{USERID} \mid \text{DATECREA}\right) \oplus \text{PUB}\left(\text{Key}\_{\text{PUBUSER}}, \text{USERID} \mid \text{DATEMINT}\right) \tag{5}$$

The watermark *W* = *W*1 ⊕ *W*2 is then embedded into the work before the watermarked work is stored on an IPFS server. Notice that, when necessary, the part *W*1 can be recovered by the marketplace, which knows the part *W*2 and the extracted watermark *W*, by using the reversible property of the XOR operation explained above in the

**Figure 4.** *Proposed Certification Protocol using the owner and a marketplace as entities to validate the ownership.*

LSB technique section. The part *W*2 can be recovered by the owner in the same way. Moreover, a checksum of the watermark, *CHKSUM*, can be generated by one of the available algorithms [16]. This checksum is then included in the data to be minted, *DATAMINT*, which includes the URL of the work on an IPFS server and other information. In order to reduce the "gas" fees, the checksum process should produce far fewer bits than the watermark itself; its purpose is to validate the authenticity of the embedded watermark. The proposed certification protocol is illustrated in **Figure 4**.
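A minimal sketch of the *W* = *W*1 ⊕ *W*2 composition and the checksum step follows; the SHA-256 digests stand in, hypothetically, for the RSA-encrypted parts of Eq. (5), and the 16-hex-character checksum length is an arbitrary illustrative choice:

```python
import hashlib

def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical stand-ins: in the protocol these would be RSA encryptions of
# USERID|DATECREA (by the owner, under the marketplace public key) and
# USERID|DATEMINT (by the marketplace, under the owner public key).
w1 = hashlib.sha256(b"USERID|DATECREA|owner-part").digest()
w2 = hashlib.sha256(b"USERID|DATEMINT|marketplace-part").digest()

w = xor_bytes(w1, w2)                        # W = W1 XOR W2, embedded in the work
chksum = hashlib.sha256(w).hexdigest()[:16]  # short CHKSUM minted with DATAMINT

# Reversibility of XOR: either party can recover the other's part from W.
assert xor_bytes(w, w2) == w1
assert xor_bytes(w, w1) == w2
assert hashlib.sha256(w).hexdigest()[:16] == chksum
```

Because the checksum is only a few bytes, minting it alongside the URL adds little to the "gas" fees, yet anyone can later verify that the watermark extracted from the work matches what was registered in the blockchain.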

#### **5.2 Contestant process using the embedded watermark**

Consider that the marketplace system, using forensic tools, finds that a posted work is a duplicate or very similar to another. Alternatively, the owner or creator finds that his work has been stolen and minted into the blockchain after his original work was minted. By extracting the transparent watermarks from the works and using the private keys to decrypt the relevant information, one can verify authorship with the checksum that validates the watermarks in the blockchain, together with the creation and minting dates and the users' identification. This information can be used as trusted legal evidence about the contested works. Therefore, the proposed process can be implemented by the marketplace, providing a better and more secure service to the artist. Notice that validation depends on the marketplace's and the owner's information about the transaction and the work itself. Both entities (owner and marketplace) can verify the corresponding part of the watermark *W* = *W*1 ⊕ *W*2 and validate ownership. Crossing these two pieces of information validates the entire


process, mitigating possible fraud by either of these entities. Variations of this protocol can be proposed to increase trust in NFT trading even further, making the art market even more valuable. Notice that visible watermarks and multiple transparent techniques can be combined with advanced semi-fragile watermarking techniques.

#### **6. Conclusions**

In this chapter, we discussed how watermarking technology can be employed to increase the security of NFT trading in this new, multibillion-dollar market. We propose that transparently embedding watermarks into the original work brings another level of security and does not preclude the use of visible watermarks or the traditional minting process used by current marketplaces. The additional checksum data may increase the costs of minting; however, it brings a huge gain in the capacity to secure the authorship of artistic works in the market. We discussed basic transparent watermarking techniques in order to explain how to generate a watermark for use with the proposed certification protocol. The certification protocol was discussed in detail and shown to be viable and very attractive for bringing more confidence to creators, owners, sellers, and buyers of artistic works.

#### **Author details**

Joceli Mayer Department of Electrical Engineering, Federal University of Santa Catarina, Florianópolis, Brazil

\*Address all correspondence to: mayer@eel.ufsc.br

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Nakamoto S. Bitcoin: A Peer-to-Peer Electronic Cash System, Bitcoin White Paper. 2009. Available from: https:// www.bitcoin.com/bitcoin.pdf

[2] Yu X, Wang C, Zhou X. Review on semi-fragile watermarking algorithms for content authentication of digital images. Future Internet. 2017; **9**:56

[3] Thorpe E. Colorful Architecture. Available from: https://www.freeimages. com/photo/colorful-architecture-1-1216925

[4] Mayer J. Optimization of Multibit Watermarking, Watermarking Book. London, UK: Intechopen; 2012

[5] Cox IJ, Kilian J, Leighton FT, Shamoon T. Secure spread spectrum watermarking for multimedia. IEEE Transactions on Image Processing. 1997; **6**(12):1673

[6] Mayer J, Bermudez JCM, Silverio AV. On the design of pattern sequences for spread spectrum image watermarking. In: IEEE International Telecommunications Symposium, ITS'02. Natal; 2002

[7] Mudgil S. How to Write & Deploy an NFT. 2021. Available from: https://ethereum.org/en/developers/ tutorials/how-to-write-and-deployan-nft. [Accessed: April 20, 2022]

[8] Zipmex. How To Mint NFTs On The NFT Marketplace? Available from: https://zipmex.com/learn/nftminting-explained/. [Accessed: April 21, 2022]

[9] Rarible NFT Marketplace. Available from: https://rarible.com/. [Accessed: April 21, 2022]

[10] Pinata IPFS Site. Available from: https://www.pinata.cloud/. [Accessed: April 21, 2022]

[11] Piva A. An overview on image forensics. ISRN Signal Processing. London, United Kingdom: Hindawi; 2013

[12] Nisha S, Farik M. RSA public key cryptography algorithm—A review. International Journal of Scientific & Technology Research. 2017;**6**(7):187-191

[13] Dedge O, Shingade R, Jagtap A, Yadav A, Kamble A. Image copyright protection system using blockchain. International Journal of Future Generation and Communication Networking. 2020;**13**(3s):37-43

[14] Gountia D. Towards scalability trade-off and security issues in state-ofthe-art blockchain. EAI Endorsed Transactions on Security and Safety. 2019;**5**(18):e4-e4

[15] Joshi A, Mishra V, Patrikar RM. Real time implementation of digital watermarking algorithm for image and video application. Watermarking. 2012; **2**:65

[16] Checksum. Available from: https://en.wikipedia.org/wiki/Checksum. [Accessed: April 23, 2022]

#### **Chapter 7**

## Perspective Chapter: Text Watermark Analysis – Concept, Technique, and Applications

*Preethi Nanjundan and Jossy P. George*

#### **Abstract**

Watermarking is a modern technology in which identifying information is embedded in a data carrier in a way that is difficult to notice and does not affect data usage. Text watermarking is an approach to inserting a watermark into text documents. This is an extremely complex undertaking, especially given the scarcity of research in this area, so conducting an in-depth analysis, evaluation, and implementation is essential for its success. The overall aim of this chapter is to develop an understanding of the theory, methods, and applications of text watermarking, with a focus on procedures for defining, embedding, and extracting watermarks, as well as requirements, approaches, and linguistic implications. A detailed examination of the new classification of text watermarks is provided in this chapter, together with the integration process and the related issues of attacks and language applicability. Open and forward-looking research challenges are also explored, with emphasis on information integrity, information accessibility, originality preservation, information security, and sensitive data protection. The topics include sensing, document conversion, cryptographic applications, and language flexibility.

**Keywords:** information protection, information security, text analysis, text watermarking, watermarking

#### **1. Introduction**

Recent years have seen a dramatic increase in communication via telephone, video, and the Internet. Companies and individuals still exchange paper copies of important documents. The development of a reliable method for authenticating hardcopy documents remains a critical task [1].

In this chapter, we present an innovative method for authenticating documents, whether electronic or printed; since either form may be affected, combining this method with traditional ones is recommended. The system is similar to the one proposed in [2], but noise in the channel is taken into account during the detection process, because such distortions have a much lower perceptual impact than those introduced by digital sensors. Whereas signature-based schemes protect the binary code of documents, the system proposed here protects the visual content; by comparison, digital watermarking schemes are capable of transmitting hidden messages [3]. The proposed system classifies documents as authentic or non-authentic. A major advantage of this system is that it does not require a database in which the information to be compared would be stored, which makes the proposed method well suited for this purpose. This technology is called text self-authentication (TSA). Special consideration has also been given to printed documents: on the acquisition side, only a consumer scanner is required, and TSA does not rely on any other equipment. Each character can be modified in two ways, either by modifying its function or by changing the character itself. This can be achieved with very little perceptual impact using text watermarking techniques [4–6], or visibly, to increase robustness.

Information on the internet is among the most common information found in today's world, and the digital format has both positive and negative effects on modern life. The advancement of technology, medical science, and astronomy are examples of the positive side; the misuse of these technologies also has negative aspects, such as copyright violation and data manipulation. As a result of advanced technologies such as the world wide web and high-speed computer networks, unauthorized copying, redistribution, and storage of digital content are carried out in many ways, so securing digital content against unauthorized copies is crucial [7]. The Internet of Things (IoT) and cloud computing have received extensive government and research support at the global level [8]. Cloud computing supports numerous data formats, including video, audio, images, and text; however, establishing responsibility and protecting the content are challenging tasks. The data that enables smart cities to function is crucial for sustaining the data infrastructure and for delivering digital content to citizens. **Figure 1** illustrates this architecture, in which all data storage, processing, and analysis take place at a central location. Using digital watermarking, one can protect and verify the ownership of digital content: with the right technique, secret messages can be embedded in digital content without compromising valuable information, and this information then makes ownership identification possible. The main types of digital watermarking are watermarking of text, photos or images, audio, and video, and these types have been the subject of most research. Text watermarks are becoming increasingly popular due to the large number of text documents currently being produced and shared [9].

**Figure 1.** *Smart cities are built according to a certain architecture.*

*Perspective Chapter: Text Watermark Analysis – Concept, Technique, and Applications. DOI: http://dx.doi.org/10.5772/intechopen.106914*

#### **2. Technology associated with digital watermarking**

The digital watermarking process involves embedding identification information into a data carrier in a way that makes it difficult for third parties to detect and that does not adversely affect use of the data. These technologies are often used to protect multimedia data as well as databases and text files. Because of the dynamics and randomness of such data, watermarks embedded in databases or text files are quite different from those embedded in multimedia. Data with redundant information and an acceptable precision error are a prerequisite for embedding: taking into account the range of error tolerance within the database, Boney et al. embedded the watermark in the least significant positions [10, 11], and Sion et al. proposed a mathematical model based on the statistical properties of an array of data (**Figure 2**).

To prevent an attacker from destroying the watermark, attribute data are embedded within it [12, 13]. Furthermore, fingerprints are embedded into database watermarks as a means of identifying information owners and the objects distributed [14], which enables leakers to be identified. Watermarks without secret keys can also be verified using independent component analysis [15]; a number of references are provided for further information [16, 17]. When a fragile watermark is embedded in database tables, malicious modifications of data items can be detected in time [18, 19].
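The least-significant-position embedding for numeric database attributes mentioned above can be sketched as follows. The key, the keyed selection rule, and the roughly 10% marking rate are illustrative assumptions, not part of any cited scheme:

```python
import hashlib
import hmac

KEY = b"owner-secret"   # hypothetical owner key

def mark_bit(pk: str):
    """Keyed decision: is this tuple marked, and which bit must it carry?"""
    digest = hmac.new(KEY, pk.encode(), hashlib.sha256).digest()
    return digest[0] % 10 == 0, digest[1] & 1   # select roughly 10% of tuples

def embed_db(rows):
    """Force the LSB of the numeric attribute on selected tuples;
    the distortion introduced is at most one unit per value."""
    out = []
    for pk, value in rows:
        selected, bit = mark_bit(pk)
        if selected:
            value = (value & ~1) | bit
        out.append((pk, value))
    return out

def verify_db(rows):
    """Count how many selected tuples still carry their expected bit."""
    total = match = 0
    for pk, value in rows:
        selected, bit = mark_bit(pk)
        if selected:
            total += 1
            match += (value & 1) == bit
    return match, total

rows = [(f"id{i}", 1000 + i) for i in range(1000)]
m, t = verify_db(embed_db(rows))
```

Because the selection is keyed, an attacker who does not know `KEY` cannot tell marked tuples from unmarked ones, while the owner can re-derive them at any time from the primary keys alone.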

Text watermarking methods have evolved over several generations and can be categorized into three kinds. The first is format-based watermarking, which fine-tunes the document layout by introducing subtle differences in line spacing and word spacing. The second is content-based watermarking, which modifies the content of the text, for example by adding white spaces or amending punctuation. The third is watermarking based on semantic understanding, which achieves changes through the replacement of synonyms or the transformation of sentences, for instance. Most watermarking studies focus on static data sets, so the peculiarities of Big Data, such as high-speed data generation and updating, are not sufficiently addressed (**Figure 3**).
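The content-based (open-space) kind can be sketched with a minimal toy: bits are hidden in the inter-word gaps, one space encoding '0' and two spaces encoding '1'. This is an illustration, not any published algorithm:

```python
import re

def embed_spaces(text: str, bits: str) -> str:
    """Hide bits in inter-word gaps: '0' -> one space, '1' -> two spaces."""
    words = text.split()
    if len(bits) > len(words) - 1:
        raise ValueError("not enough inter-word gaps for the payload")
    out = []
    for i, word in enumerate(words[:-1]):
        out.append(word + ("  " if i < len(bits) and bits[i] == "1" else " "))
    out.append(words[-1])
    return "".join(out)

def extract_spaces(marked: str, n_bits: int) -> str:
    """Read the first n_bits gaps back: a double space decodes as '1'."""
    gaps = re.findall(r" +", marked)
    return "".join("1" if len(g) >= 2 else "0" for g in gaps[:n_bits])

cover = "the quick brown fox jumps over the lazy dog"
marked = embed_spaces(cover, "10110")
```

Note that re-flowing the text, for example `" ".join(marked.split())`, collapses every gap to a single space and erases the payload, which is exactly the formatting-attack weakness of whitespace schemes discussed in this chapter.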

**Figure 2.** *System for watermarking digital images [12].*

**Figure 3.** *Watermarking system with a digital signature [20].*

#### **3. Analyzing watermarks in text**

A digital watermark identifies the owner or originator of digital intellectual property by providing a unique numerical code. Digital media usage is tracked online, and warnings are sent when unauthorized access or use is detected. Digital watermarking is an important part of digital rights management (DRM): a marker is hidden within digital media, such as audio, video, or images, allowing us to determine who owns the copyright. The same technique is used to track copyright infringement on social media and, in banking, to determine whether a note is authentic. Watermarking is an extremely effective method of securing digital documents, able to address distortion, replication, unauthorized access, and security breaches.

There are several ways to classify digital watermarking techniques.

#### **3.1 Durability**

Fragile digital watermarks can no longer be detected if the carrier is altered even slightly; they are therefore commonly used for tamper-proofing. Visible, deliberately noticeable changes to a work are usually not referred to as watermarks but rather as generalized barcodes.

Semi-fragile digital watermarks resist benign changes but become unrecognizable after malignant changes, which makes them well suited to detecting malignant transformations. The robustness of a watermark describes how well it resists various types of transformations; robust watermarks can carry both copy-control and access-control information when used in copy protection applications.
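A fragile watermark can be sketched as follows: a hash of the most significant bits of a toy grayscale image is spread over the pixels' least significant bits, so any change to the content breaks the check. The hash choice and bit layout are illustrative assumptions:

```python
import hashlib

def _expected_bits(pixels):
    """Hash the 7 most significant bits of every pixel and spread the
    digest's bits over the image, one expected LSB per pixel."""
    msb = bytes(p >> 1 for p in pixels)
    digest = hashlib.sha256(msb).digest()
    return [(digest[(i // 8) % 32] >> (i % 8)) & 1 for i in range(len(pixels))]

def embed_fragile(pixels):
    """Overwrite each pixel's LSB with its expected authentication bit."""
    return [(p & ~1) | b for p, b in zip(pixels, _expected_bits(pixels))]

def is_authentic(pixels):
    """Fragile check: any change to the pixel content breaks the match."""
    return [p & 1 for p in pixels] == _expected_bits(pixels)

img = [(i * 7 + 13) % 256 for i in range(64)]   # toy 8x8 grayscale image
wm = embed_fragile(img)
tampered = wm.copy()
tampered[10] ^= 0b1000_0000                     # flip one significant bit
```

Because the expected bits depend only on the upper seven bits of each pixel, embedding does not invalidate the mark, while even a single-pixel content change makes the stored LSBs disagree with the recomputed digest.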

#### **3.2 Perception**

The term "imperceptible watermarks" refers to those that are virtually indistinguishable from the original signal.

A visible watermark is one that can be perceived (e.g., network logos, content bugs, codes, opaque images). Videos and pictures sometimes carry transparent/translucent overlays for the convenience of the consumer, but these overlays occlude part of the view and degrade the quality of the content.

This is distinct from perceptual watermarking, which exploits the limitations of human perception so that the mark remains imperceptible.

#### **3.3 Capacity**

In general, digital watermarking schemes can be divided into two main categories based on the length of the embedded message. In zero-bit (or presence) schemes, the detector only decides whether the watermark is present in the marked object. In n-bit (multi-bit) schemes, an n-bit message is embedded and recovered at detection.

#### **3.4 Embedding method**

The term spread-spectrum watermarking refers to digital watermarking methods in which the marked signal is obtained by an additive modification. Spread-spectrum watermarks are moderately robust, but they have a low information capacity due to host interference.

Quantization-type watermarking obtains the marked signal through quantization. Quantization watermarks have low robustness but a high information capacity, because host interference is rejected.

Amplitude-modulation watermarking embeds the mark by additive modification, similar to spread spectrum, but within the spatial domain.
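The quantization method above can be sketched with quantization index modulation (QIM): each bit selects one of two interleaved quantization lattices, and detection picks the nearer lattice point, so the host value itself does not interfere. The step size `DELTA` is an assumed parameter:

```python
DELTA = 8.0  # assumed quantization step: larger -> more robust, more distortion

def qim_embed(x: float, bit: int) -> float:
    """Quantize the host sample onto one of two interleaved lattices:
    bit 0 -> multiples of DELTA, bit 1 -> offset by DELTA / 2."""
    offset = DELTA / 2 if bit else 0.0
    return round((x - offset) / DELTA) * DELTA + offset

def qim_detect(y: float) -> int:
    """Decode by picking the lattice whose point is nearer to the sample."""
    d0 = abs(y - qim_embed(y, 0))
    d1 = abs(y - qim_embed(y, 1))
    return 0 if d0 <= d1 else 1

samples = [3.2, 17.9, 42.0, -5.6]
bits = [1, 0, 1, 0]
marked = [qim_embed(x, b) for x, b in zip(samples, bits)]
noisy = [y + 1.5 for y in marked]   # perturbation smaller than DELTA / 4
```

Any perturbation smaller than `DELTA / 4` leaves each sample nearer to its own lattice, which is the robustness/distortion trade-off the step size controls.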

#### **3.5 Examining different watermarking algorithms**

#### *3.5.1 Requirements*

Digital watermarking can be applied to a variety of different applications, including copyright protection, source tracking, broadcast monitoring, and integrity verification of digital content.
#### **4. Digital watermarking: A life-cycle**

A watermark embeds information in a noise-tolerant host signal such as audio, video, or image data; the term is also used in some contexts to describe the difference between the watermarked signal and the cover signal. The process of hiding the information in a host signal is known as embedding (or encoding) the watermark. A watermarking life-cycle consists of three phases: embedding, attacking, and detecting. In embedding, an algorithm takes the host signal and the data to be hidden and produces the watermarked signal.

The watermarked signal is then typically transmitted or stored and later reaches another party, who may modify it. Such a modification, called an attack, may be an attempt by a third party to remove the watermark, although it need not be malicious; copyright protection applications are the ones primarily affected by deliberate attacks. Images and videos can be modified in a variety of ways, for example by lossy compression (which reduces quality), by cropping, or by intentionally adding noise.

**Figure 4.** *Life-cycle phases of a digital watermark: embedding, attacking, detecting, and retrieving [21].*

A detection (also called extraction) algorithm is applied to the possibly attacked signal to find the watermark. If the signal was not modified during transmission, the watermark is still present and can be extracted. In robust digital watermarking applications, the extraction algorithm should correctly recover the watermark even after strong modifications; in fragile digital watermarking, extraction should fail whenever the signal is modified at all (**Figure 4**).
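The embed, attack, and detect phases can be walked through with a simple spread-spectrum scheme: a keyed pseudo-noise pattern is added to the host, and detection correlates the received signal against the same pattern. The keys, the strength `alpha`, and the threshold are illustrative assumptions:

```python
import random

def pn_sequence(n, key):
    """Keyed ±1 pseudo-noise pattern."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed_ss(host, key, alpha=2.0):
    """Embedding phase: add the keyed pattern, scaled by alpha, to the host."""
    w = pn_sequence(len(host), key)
    return [h + alpha * wi for h, wi in zip(host, w)]

def detect_ss(signal, key, threshold=1.0):
    """Detection phase: a large positive correlation with the keyed
    pattern means the watermark is present."""
    w = pn_sequence(len(signal), key)
    corr = sum(s * wi for s, wi in zip(signal, w)) / len(signal)
    return corr > threshold

rng = random.Random(1)
host = [rng.gauss(0, 1) for _ in range(4096)]
marked = embed_ss(host, key=42)
attacked = [s + rng.gauss(0, 0.5) for s in marked]  # noise attack phase
```

The mild noise attack leaves the correlation near `alpha`, so detection with the correct key succeeds, while an unmarked signal or a wrong key yields a correlation near zero.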

#### **5. Watermarking text using digital technology**

Watermarking has been a vital research area since 1991, when the concept of text digital watermarking was introduced. As the internet has grown and communication has spread globally, a variety of text watermarking techniques have been proposed over time: image-based, linguistic-based, structural-based, and hybrid approaches are all available [22, 23].

A. Methodologies based on images

In the image-based approach described in [24], the cover text is viewed as an image and embedded with a watermark. Watermark logos and images are converted into text strings from which the embedded data are generated; the watermark serves both as a means of verifying ownership and as a deterrent to copyright infringement. Although this approach is considered safe against formatting attacks, optical character recognition (OCR) limits its applicability because OCR destroys the hidden information [25]. The technique described by Rizzo et al. [24] embeds a hidden watermark in a short piece of text while strictly preserving its content: neither the content nor the appearance of the text is altered when it is converted. Such blind watermarking helps protect content while remaining invisible. The authors of [26, 27] used a zero-watermarking hybrid approach in which images are converted into watermarks and embedded on book covers; the disadvantage of this technique is that the keys generated by certificate authorities (CAs) require a large amount of storage space. Thongkor and Amornraksa [26] describe the watermarking of scanned and printed documents in the spatial domain, embedding the watermark in the blue component of the image; the efficiency of the proposed technique is analyzed over a variety of scanning resolutions, printing materials, and quality levels.
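The content-preserving idea of Rizzo et al. [24] can be illustrated with a toy homoglyph substitution: the rendered appearance is unchanged, but the underlying code points carry the bits. The four-entry homoglyph table here is a deliberate simplification, not the published scheme:

```python
# Bit 0 keeps the Latin letter; bit 1 swaps in a visually identical
# Cyrillic homoglyph.  This four-entry table is a deliberate toy.
HOMOGLYPHS = {"a": "\u0430", "c": "\u0441", "e": "\u0435", "o": "\u043e"}
REVERSE = {v: k for k, v in HOMOGLYPHS.items()}

def embed_homoglyph(text: str, bits: str) -> str:
    out, i = [], 0
    for ch in text:
        if ch in HOMOGLYPHS and i < len(bits):
            out.append(HOMOGLYPHS[ch] if bits[i] == "1" else ch)
            i += 1
        else:
            out.append(ch)
    if i < len(bits):
        raise ValueError("cover text too short for the payload")
    return "".join(out)

def extract_homoglyph(marked: str, n_bits: int) -> str:
    bits = []
    for ch in marked:
        if len(bits) == n_bits:
            break
        if ch in HOMOGLYPHS:
            bits.append("0")
        elif ch in REVERSE:
            bits.append("1")
    return "".join(bits)

cover = "concept and techniques of text watermarking"
marked = embed_homoglyph(cover, "1011")
```

Such marks survive copy-and-paste, but, consistent with the OCR remark above, re-typing or character recognition restores canonical code points and destroys the hidden information.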

B. Approaches based on linguistics

The semantic and syntactic approach relies on techniques that use the semantics of the text to embed the watermark without altering its meaning. The idea is to conceal data through a semantic approach that replaces words with their synonyms; grammatical alternations can likewise be used to embed watermarks without changing the meaning of the original text. The watermarking process may involve several parts of speech, such as verbs, adverbs, nouns, pronouns, adjectives, prepositions, acronyms, and conjunctions. In **Figure 5**, structural and linguistic approaches are compared. Liu et al. [28] propose a method based on merging Chinese text features, in which each word is translated and its weight is calculated using entropy; this method performs well against formatting attacks. Yingjie et al. [30] proposed a watermarking technique based on the characteristics of prose: representative words are used to generate keywords, core verb sets, and proportional features for adjectives, and watermarks are embedded using verbs, adjectives, nouns, and adverbs. The proposed technique, however, does not embed watermarks well.
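The synonym-substitution idea can be sketched with a tiny, hypothetical synonym table; a real system would draw on a lexical resource such as a thesaurus and would have to respect context to preserve meaning:

```python
# Hypothetical synonym table: bit 0 keeps the canonical word,
# bit 1 substitutes its synonym.
PAIRS = {"begin": "start", "big": "large", "quick": "fast"}
INV = {v: k for k, v in PAIRS.items()}

def embed_synonym(text: str, bits: str) -> str:
    out, i = [], 0
    for w in text.split():
        base = w if w in PAIRS else INV.get(w)
        if base is not None and i < len(bits):
            # carrier word found: choose the variant that encodes the bit
            out.append(base if bits[i] == "0" else PAIRS[base])
            i += 1
        else:
            out.append(w)
    if i < len(bits):
        raise ValueError("not enough carrier words for the payload")
    return " ".join(out)

def extract_synonym(marked: str, n_bits: int) -> str:
    bits = []
    for w in marked.split():
        if len(bits) == n_bits:
            break
        if w in PAIRS:
            bits.append("0")
        elif w in INV:
            bits.append("1")
    return "".join(bits)

cover = "we begin with a big idea and a quick test"
marked = embed_synonym(cover, "101")
```

The marked text remains grammatical and close in meaning, which is the strength of linguistic approaches; the weakness is that an attacker who re-substitutes synonyms destroys the mark without visibly changing the document.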

C. Approaches based on structural analysis

These techniques exploit structural elements of the text, such as its layout and formatting characteristics. Watermarks may, for example, be incorporated into line positions as identifiers for documents. Although this method resolves the problem of document ownership authentication for some types of text documents, it is not applicable to all of them, and the hidden information is not revealed as long as the spaces between words, lines, and paragraphs are preserved. **Figure 6** illustrates this: of the first three lines, the middle line is shifted downward. To enhance the textual features preserved by this approach, natural language processing (NLP) techniques and resources have also been used. Taha et al. [29] propose an Arabic watermarking technique based on Kashida extension characters and extra small whitespaces; because removing the spaces between words destroys the mark, this approach does not resist formatting attacks. Ba-Alwi et al. [32] developed a new zero-text watermarking approach based on probabilistic models, in which a watermark is created by measuring the spacing between characters; compared with similar techniques, this method is more robust and performs better under reordering attacks. Zhang et al. [33] propose a method that encrypts the watermarking


**Figure 5.** *Comparative analysis of linguistic and structural approaches [29].*

**Figure 6.** *An illustration of line-shift coding [31].*

information with the Caesar cipher using a user key, then building and packing plain-text messages from the encrypted messages; tests have shown that the system is undetectable. According to Liang and Iranmanesh [34], whitespace-based approaches have a low embedding capacity and are vulnerable to formatting attacks; their method has the disadvantage of requiring a large number of blank spaces to conceal the secret message. Applying modern information security technology, Usmonov et al. [35] proposed a technique for protecting data transmitted between the logical, physical, and virtual components of IoT systems. Suciu et al. [36] designed a secure online health application based on the convergence of IoT, Big Data, and the cloud; the infrastructure-level information of CloudView Exalead can be used for online and enterprise-based search applications. The FontCode approach of Xiao et al. [37, 38] embeds watermarks into font glyphs rather than changing the actual text. This algorithm has the advantage of being robust and imperceptible, but it only works with one font family and has a relatively small capacity; a large font size is also required for the OCR library to detect the message.

D. Methodologies that combine the best of both worlds

To combine the benefits of different text watermarking techniques, hybrid approaches have been developed; they are considered robust and can be applied to a wide range of text documents [39]. Elrefaei and Alotaibi [31] proposed a pseudo-space method for handling Arabic text that recovers watermarked letters from strings of connected letters; the method is unnoticeable and robust for formatted documents, but not robust against retyping. Hamdan and Hamarsheh [39, 40] present a new way of hiding text messages in text using Omega network structures. The authors of [41, 42] suggested that fragile watermarks be used to safeguard data integrity in the IoT.

#### **6. The process of watermark embedding and extraction**

Watermarking is used to discourage illegal copying and to prevent digital assets from being distributed without authorization [43]. **Figure 7** shows an implementation of text digital watermarking. Watermarking is a two-step process involving embedding and extracting the watermark, where the watermark is a piece of information carried by the document. Embedding a watermark involves three steps. Developing a watermark first requires including information about its owner, such as the author and publisher. Securing the watermark then transforms it into a binary string or group. The last step is to insert the watermark into the document without affecting the document as a whole. **Figure 7** illustrates the embedding process, where "SM" represents the secret message, "T" the original document, "WD" the watermarked document, and "K" the key. Watermarked documents can be shared via e-mail, websites, and social media channels. The process of extracting or verifying a watermark reverses watermark embedding: given the key and the watermarked document, the secret message is detected (**Figures 8** and **9**).

**Figure 7.** *An overview of digital text watermarking [42].*
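The SM, T, K, and WD notation can be turned into a runnable sketch, here using zero-width characters as the invisible carrier and a hash-derived keystream for keying; both choices, like the example owner string, are illustrative assumptions rather than the scheme of the figure:

```python
import hashlib

ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

def _keystream(K: str, n: int):
    """Keyed bit stream derived from K via a hash counter (illustrative)."""
    bits, counter = [], 0
    while len(bits) < n:
        d = hashlib.sha256(f"{K}:{counter}".encode()).digest()
        bits.extend((byte >> j) & 1 for byte in d for j in range(8))
        counter += 1
    return bits[:n]

def embed(T: str, SM: str, K: str) -> str:
    """T plus the keyed, invisibly encoded SM gives the watermarked WD."""
    bits = [(b >> j) & 1 for b in SM.encode() for j in range(8)]
    ks = _keystream(K, len(bits))
    payload = "".join(ZW1 if b ^ k else ZW0 for b, k in zip(bits, ks))
    return T + payload

def extract(WD: str, K: str) -> str:
    """Collect the invisible carrier characters and undo the keystream."""
    raw = [1 if ch == ZW1 else 0 for ch in WD if ch in (ZW0, ZW1)]
    ks = _keystream(K, len(raw))
    bits = [b ^ k for b, k in zip(raw, ks)]
    data = bytes(sum(bits[i + j] << j for j in range(8))
                 for i in range(0, len(bits), 8))
    return data.decode()

WD = embed("Original document text.", "author:PN;date:2022-09", "K-secret")
```

The visible text of WD is identical to T, while extraction with the correct key K recovers SM; without K, the keystream cannot be regenerated and the collected bits decode to noise.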

**Figure 9.** *Process of extracting watermarks [44].*

**Figure 8.**

#### **7. Conclusion**

Text watermarking provides copyright protection and is generally the most reliable and affordable approach, especially for regular users, although no algorithm can be made impossible for a determined attacker to defeat. The cost of protecting a text document can be kept low while still deterring attacks on the watermark. To authenticate the digital contents of smart cities, an efficient watermarking algorithm is proposed, and a comparison between the proposed technique and previous methods is made in order to evaluate its imperceptibility, security, robustness, and capacity. There are many different approaches to this problem, but a method is also needed that can be applied to smart cities, IoT devices, and the cloud. Experiments have shown that the proposed algorithm is highly imperceptible, achieving a 95.99 similarity factor. The algorithm is also very robust and can detect watermarks with high accuracy despite attacks such as cutting, copying and pasting, and changes to font size, color, and alignment. By comparison, the proposed algorithm is more efficient than previous techniques and gives the same results in the cloud computing environment for securing text documents in smart cities. A future extension of the proposed solution could cover the copyright protection of printed documents.

#### **Author details**

Preethi Nanjundan\* and Jossy P. George Department of Data Science, Christ University, Pune Lavasa, India

\*Address all correspondence to: preethi.n@christuniversity.in

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Borges PVK, Mayer J, Izquierdo E. Performance analysis of text halftone modulation. In: 2007 IEEE International Conference on Image Processing. 2007. pp. III-285–III-288. DOI: 10.1109/ICIP.2007.4379302

[2] Villan R, Voloshynovskiy S, Koval O, Deguillaume F, Pun T. Tamper-proofing of electronic and printed text documents via robust hashing and data-hiding. In: Proceedings of SPIE-IST Electronic Imaging 2007, Security, Steganography, and Watermarking of Multimedia Contents IX, San Jose, USA. 2007

[3] Barni M, Bartolini F. Watermarking Systems Engineering: Enabling Digital Assets Security and Other Applications. New York: Marcel Dekker; 2004

[4] Brassil JT, Low S, Maxemchuk NF. Copyright protection for the electronic distribution of text documents. Proceedings of the IEEE. 1999;**87**(7): 1181-1196

[5] Villan R, Voloshynovskiy S, Koval O, Vila J, Topak E, Deguillaume F, Rytsar Y, Pun T. Text data-hiding for digital and printed documents: Theoretical and practical considerations. In: Proc. of SPIE, Elect. Imaging, USA, 2006

[6] Borges PV, Mayer J. Document watermarking via character luminance modulation. In: IEEE Int'l Conf. on Acoustics, Speech and Signal Processing, May 2006

[7] Zeeshan M, Ullah S, Anayat S, Hussain RG, Nasir N. A review study on unique way of information hiding: Steganography. International Journal of Data Science. 2017;**3**(5):45

[8] Huh S, Cho S, Kim S. Managing IoT devices using blockchain platform. In: Proc. 19th Int. Conf. Adv. Commun. Technol. (ICACT), February 2017. pp. 464-467

[9] Panah AS, Van Schyndel R, Sellis T, Bertino E. On the properties of nonmedia digital watermarking: A review of state of the art techniques. IEEE Access. 2016;**4**:2670-2704

[10] Boney L, Tewfik AH, Hamdy KH. Digital watermarks for audio signals. In: Proc. EUSIPCO 1996, Trieste, Italy, September 1996

[11] Bors A, Pitas I. Embedding parametric digital signatures in images. In: EUSIPCO-96, Trieste, Italy, September 1996

[12] Sion R, Atallab M, Prabhakar S. On watermarking numeric sets. In: Proceedings of the first international workshop on digital watermarking, Seoul, Korea, November 21-22; 2002

[13] Sion R, Atallah M, Prabhakar S. Right protection for relational data. In: Proceedings of the 2003 ACM SIGMOD international conference on management of data, San Diego, USA, June 10-12; 2003

[14] Guo F, Wang J, Li D. Fingerprinting relational databases. In: Proceedings of the 2006 ACM symposium on applied computing, Dijion, France, April 23-27; 2006

[15] Jiang C, Sun X, Yi Y, et al. Study of database public watermarking based on JADE algorithm. Journal of Simulation. 2006;**18**(7):1781-1785

[16] Zhang Y, Niu X, Zhao D. A method of protecting relational databases copyright with cloud watermark. International Journal of Information Technology. 2004;**1**(1):206-210

[17] Liu Y, Ma Y, Zhang H, et al. A method for trust management in cloud computing: Data coloring by cloud watermarking. International Journal of Automation and Computing. 2011;**8**(3):280-285

[18] Guo H, Li Y, Liu A, et al. A fragile watermarking scheme for detecting malicious modifications of database relations. Information Sciences. 2006;**176**(10):1350-1378

[19] Khan A, Mirza AM. Genetic perceptual shaping: Utilizing cover image and conceivable attack information during watermark embedding. Information Fusion. 2007;**8**(4):354-365. DOI: 10.1016/j.inffus.2005.09.007

[20] Akter A, Nur-E-Tajnina, Ullah M. Digital image watermarking based on DWT-DCT: Evaluate for a new embedding algorithm. 2014. pp. 1-6. DOI: 10.1109/ICIEV.2014.6850699

[21] Abbas N. Watermarked and noisy images identification based on statistical evaluation parameters. Journal of Zankoy Sulaimani-Part A (JZS-A). 2013;**15**:159. DOI: 10.17656/jzs.10265

[22] Hao Y, Chuang QFL, Rong D. A survey of digital watermarking. Journal of Computer Research and Development. 2005;**7**:1093-1099

[23] Kaur M, Mahajan K. An existential review on text watermarking techniques. International Journal of Computers and Applications. 2015;**120**(18):1-4

[24] Rizzo SG, Bertini F, Montesi D. Content-preserving text watermarking through unicode homoglyph substitution. In: Proc. 20th Int. Database Eng. Appl. Symp. 2016. pp. 97-104

[25] Tayan O, Kabir MN, Alginahi YM. A hybrid digital-signature and zero-watermarking approach for authentication and protection of sensitive electronic documents. Scientific World Journal. 2014;**2014**:514652

[26] Thongkor K, Amornraksa T. Digital image watermarking for printed and scanned documents. Proceedings of SPIE. 2017;**10420**:104203O

[27] Ahvanooey MT et al. A comparative analysis of information hiding techniques for copyright protection of text documents. Security and Communication Networks. 2018;**2018**:1-22

[28] Liu Y, Zhu Y, Xin G. A zerowatermarking algorithm based on merging features of sentences for Chinese text. Journal of the Chinese Institute of Engineers. 2015;**38**(3):391-398

[29] Taha A, Hammad AS, Selim MM. A high capacity algorithm for information hiding in Arabic text. Journal of King Saud University - Computer and Information Sciences, to be published

[30] Yingjie M, Huiran L, Tong S, Xiaoyu T. A zero-watermarking scheme for prose writings. In: Proc. Int. Conf. Cyber-Enabled Distrib. Comput. Knowl. Discovery (CyberC), October 2017. pp. 276-282

[31] Alotaibi RA, Elrefaei LA. Improved capacity Arabic text watermarking methods based on open word space. Journal of King Saud University - Computer and Information Sciences. 2018;**30**(2):236-248

[32] Ba-Alwi FM, Ghilan MM, Al-Wesabi FN. Content authentication of English text via Internet using zero watermarking technique and Markov model. International Journal of Applied Information Systems. 2014;**7**(1):25-36

[33] Zhang Y, Qin H, Kong T. A novel robust text watermarking for word document. In: Proc. 3rd Int. Congr. Image Signal Process. (CISP), vol. 1, 2010. pp. 38-42

[34] Liang OW, Iranmanesh V. Information hiding using whitespace technique in Microsoft word. In: Proc. 22nd Int. Conf. Virtual Syst. Multimedia (VSMM), October 2016. pp. 1-5

[35] Usmonov B, Evsutin O, Iskhakov A, Shelupanov A, Iskhakova A, Meshcheryakov R. The cybersecurity in development of IoT embedded technologies. In: Proc. Int. Conf. Inf. Sci. Commun Technol. (ICISCT). 2017. pp. 1-4

[36] Suciu G et al. Big data, internet of things and cloud convergence - an architecture for secure E-health applications. Journal of Medical Systems. 2015;**39**(11):141

[37] Xiao C, Zhang C, Zheng C. FontCode: Embedding information in text documents using glyph perturbation. ACM Transactions on Graphics. 2018;**37**(2):15

[38] Jalil Z. Copyright protection of plain text using digital watermarking. FAST Nat. Univ. Comput. Emerg. Sci., Islamabad, Pakistan, Tech. Rep. 1059, 2010

[39] Hamdan AM, Hamarsheh A. AH4S: An algorithm of text in text steganography using the structure of omega network. Security and Communication Networks. 2017;**9**(18):6004-6016

[40] Zhang G, Kou L, Zhang L, Liu C, Da Q, Sun J. A new digital watermarking method for data integrity protection in the perception layer of IoT. Security and Communication Networks. 2017;**2017**:3126010

[41] Topkara M, Riccardi G, Hakkani-Tür D, Atallah MJ. Natural language watermarking: Challenges in building a practical system. Proceedings of SPIE. 2006;**6072**:60720A

[42] Al-Maweri NAS, Ali R, Adnan WAW, Ramli ARB, Ahmad SMSAA. Stateof-the-art in techniques of text digital watermarking: Challenges and limitations. Journal of Computational Science. 2016;**12**(2):62-80

[43] Mayer J, Bermudez JCM. Multi-bit informed embedding watermarking with constant robustness. In: IEEE International Conference on Image Processing; 2005. pp. I-669. DOI: 10.1109/ICIP.2005.1529839

[44] Khadam U, Iqbal MM, Azam MA, Khalid S, Rho S, Chilamkurti N. Digital watermarking technique for text document protection using data mining analysis. IEEE Access. 2019;**7**:64955-64965. DOI: 10.1109/ACCESS.2019.2916674

#### **Chapter 8**

## Application of Computational Intelligence in Visual Quality Optimization Watermarking and Coding Tools to Improve the Medical IoT Platforms Using ECC Cybersecurity Based CoAP Protocol

*Abdelhadi El Allali, Ilham Morino, Salma Ait Oussous, Siham Beloualid, Ahmed Tamtaoui and Abderrahim Bajit*

#### **Abstract**

To ensure copyright protection and authenticate ownership of media or entities, image watermarking techniques are utilized. This technique entails embedding hidden information about an owner in a specific entity so that any potential ownership issue can be discovered. In recent years, several authors have proposed various approaches to watermarking; in computational intelligence contexts, however, there is not enough research on and comparison of watermarking approaches. Soft computing techniques are now being applied to help watermarking algorithms perform better. This chapter investigates soft computing-based image watermarking for a medical IoT platform that aims to combat the spread of COVID-19 by allowing a large number of people to simultaneously and securely access their private data, such as photos and QR codes, in public places such as stadiums, supermarkets, and events with a large number of participants. The platform is therefore composed of QR code and RFID identification readers to ensure the validity of a health pass, as well as an intelligent facial recognition system to verify the pass's owner. The proposed system uses artificial intelligence, psychovisual coding, the CoAP protocol, and security tools such as digital watermarking and ECC encryption to optimize the sending of data captured from citizens wishing to access a given space in terms of execution time, bandwidth, storage space, energy, and memory consumption.

**Keywords:** image watermarking, artificial intelligence (AI) face detection/recognition, psychovisual coding, foveation coding, image quality coding, CoAP protocol, ECC encryption/decryption

#### **1. Introduction**

The emergence of the pandemic threatened humanity, which led scientists to look for solutions to fight this scourge and reduce its severity. Many solutions have been developed, such as the mobile applications CovidSafe and CovidScan, which check whether the certificate presented by a citizen in the form of a QR code is valid [1], and platforms for COVID-19 diagnosis using machine learning on radiography and CT images [2].

Aware of the harmful effect of this disease on vulnerable people, researchers have taken up the challenge of fighting the spread of the virus, especially at the entrances to private and public spaces. Access to these spaces is reserved for people with a valid vaccination pass, people exempt from vaccination, and people with a negative PCR test less than 48 hours old.

Any person protected against COVID-19 by vaccination holds a personal code that grants access to a digital service showing his or her immunization record. This code is coveted by dishonest people who lack a complete vaccination record and wish to escape controls, aided by criminals who offer falsified health passes containing stolen or false QR codes. Even though these acts are criminalized by states, criminals constantly innovate in their search for genuine QR codes.

These irresponsible actions complicate the task considerably: instead of concentrating on a definitive solution to eradicate this dangerous virus, time must be spent fighting the theft of people's data, such as QR codes. This theft is especially pronounced when a large number of people access public places such as sports stadiums, supermarkets, and large events at the same time. At such moments, data management becomes more difficult in terms of execution time and the quality of the data sent for processing. IoT platforms have been developed to control access to public places [3].

Upon entry to a controlled space, an IoT node scans the QR code, reads the tag, captures the citizen's photo, and measures his or her temperature in order to separate the people who have the right to access from those who do not, by checking the validity of each person's unique identifier and that it belongs to its holder. This calls for an application that can recognize a face in real time, process it, and optimize the quality and size of the data, all while guaranteeing fast and secure delivery. Our work uses two types of image coding, namely visual coding for scanned QR codes and foveal coding for photos captured at the entrance of public spaces, together with Elliptic-Curve Cryptography (ECC) and watermarking to ensure data security.

Part 2 presents the architecture of our platform, based on ECC encryption, the CoAP protocol, and the technologies used, including artificial intelligence for face detection and recognition. Part 3 discusses psychovisual and foveal image coding with watermarking, used to evaluate image quality and the execution time of the two coding types. Part 4 presents the comparative results of coding/decoding, encryption/decryption, and watermark insertion/extraction, and their performance in terms of quality and execution time.

*Application of Computational Intelligence in Visual Quality Optimization Watermarking… DOI: http://dx.doi.org/10.5772/intechopen.106008*

#### **2. Medical IoT platform architecture using AI for face detection and recognition**

IoT platforms are evolving rapidly, and several sectors apply them in combination with artificial intelligence [4, 5]. The medical sector [6, 7] requires this type of platform to monitor the health status of its patients; to distinguish between patients, facial recognition is needed [8, 9].

Our medical IoT platform (**Figure 1**) is designed to prevent the system from crashing while processing transmissions, to spare citizens long queues, and to stop anyone from fraudulently using another person's data. To this end, we used artificial intelligence to store the user's personal information, detect QR codes and citizens' images, recognize faces, and make decisions. In addition, we applied visual coding to the QR code and foveal coding to citizens' faces to study their impact on image quality and on the time needed to send and process these data.

To access a space, double verification is required: the system verifies the QR code's legitimacy as well as the holder's identity using facial recognition, to prevent counterfeiting. Two methods of personal identification have been established. The first involves scanning the QR code of the health pass or of a PCR test less than 48 hours old, and the second is face recognition to verify the individual's identity. After the person's picture is taken, foveation is used to produce a higher-resolution rendering of the face. The image and QR code are then encoded and decoded; we chose to embed the entire QR code rather than just the few bits of information it carries because this allows us to extract the hidden information from the digital image without distorting the original or losing any data. Authorized recipients can extract not only the embedded message but also the original image, intact and identical, bit for bit, to the image before the data was inserted. Most importantly, the image quality scales with the bit rate: if the network speed is higher, the decoded image quality is better, and vice versa. Finally, the images are encrypted with ECC, a form of public-key cryptography based on elliptic curves that is used in SSL/TLS certificates and is well suited to devices with limited resources [10]. It works with points on an elliptic curve and provides two major benefits;

#### **Figure 1.**

*The medical IoT platform architecture using artificial intelligence.*

security with a short key length, as well as memory and bandwidth savings, high processing speed, and low power usage [11].
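As an illustration of the point arithmetic underlying ECC, the following toy sketch works over a tiny prime field. The curve y² = x³ + 2x + 2 (mod 17) and base point (5, 1) are a standard textbook example, not a production curve; the function names are our own, and real deployments use vetted libraries with curves such as secp256r1, never hand-rolled arithmetic.

```python
# Toy elliptic-curve point addition and double-and-add scalar multiplication
# over the textbook curve y^2 = x^3 + 2x + 2 (mod 17). For illustration only.

P, A = 17, 2          # field prime and curve coefficient a
O = None              # point at infinity (the group identity)

def add(p1, p2):
    """Add two points on the curve (affine coordinates)."""
    if p1 is O:
        return p2
    if p2 is O:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                          # inverse points
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P    # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P           # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Double-and-add: compute k * point."""
    result = O
    while k:
        if k & 1:
            result = add(result, point)
        point = add(point, point)
        k >>= 1
    return result

G = (5, 1)
assert add(G, G) == (6, 3)       # 2G on this curve
assert scalar_mult(19, G) is O   # G has order 19
```

A private key is simply a scalar k, and the public key is the point kG; the security rests on the hardness of recovering k from kG.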

The payload is then sent to the web server via the CoAP (Constrained Application Protocol) communication protocol. CoAP is a communication protocol with low memory and time consumption that we used to make nodes and servers communicate (sending and receiving the data exchanged during execution). CoAP focuses on the transmission of tiny messages, usually sent over UDP (each CoAP message occupies the data section of one UDP datagram), and has a simple binary format: each message is made up of a 4-byte fixed-size header, a variable-length token, a sequence of CoAP options, and an optional payload. Moreover, CoAP follows a client/server architecture in which the client transmits a method code, such as GET, PUT, POST, or DELETE, to the server [12]. After receiving a request, the server returns a payload and a response code. CoAP is organized into a messaging layer and a request/response layer: the messaging layer is in charge of message redundancy and consistency, whereas the request/response layer is responsible for connectivity and communication. CoAP also provides multicast communication and asynchronous message exchange, with confirmable (CON), non-confirmable (NON), acknowledgement (ACK), and reset (RST) messages [13].
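The fixed 4-byte header layout described above (RFC 7252) can be sketched as follows; the function name and constants are illustrative, not taken from any CoAP library:

```python
# Minimal sketch of the 4-byte fixed CoAP header: version (always 1), message
# type, token length, request/response code, and a 16-bit message ID.

CON, NON, ACK, RST = 0, 1, 2, 3  # CoAP message types

def encode_coap_header(msg_type: int, token: bytes, code: int, message_id: int) -> bytes:
    """Pack the fixed header followed by the variable-length token."""
    if len(token) > 8:
        raise ValueError("token length is at most 8 bytes")
    byte0 = (1 << 6) | (msg_type << 4) | len(token)   # Ver=1 | Type | TKL
    return bytes([byte0, code]) + message_id.to_bytes(2, "big") + token

# A confirmable GET (code 0.01 -> 0x01) with an empty token and message ID 0x1234:
header = encode_coap_header(CON, b"", 0x01, 0x1234)
assert header == b"\x40\x01\x12\x34"
```

Options and the payload (separated by the 0xFF marker) would follow this header in a complete message.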

When the data is received by the server, it must be decrypted before it can be read and compared to the database's existing data. The person's entry to the area is granted or denied by a decision system.

Then, with the help of the AI, the system processes the received information, comparing it with the existing database (the image of the person requesting access and the image of the real owner of the QR code). After this comparison, the system decides whether or not to grant access. If the person presenting the QR code has a valid pass or test, he or she is permitted to enter. Access is also granted if the person is exempt from the health pass and has a temperature of no more than 37 degrees. Access is refused if someone presents a valid QR code but the picture does not match the one previously provided or cannot be identified in the database. Access is likewise refused if the person has an expired QR code, a positive PCR test, or a PCR test older than 48 hours.
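The access rules above can be condensed into a small decision function; the function name, parameters, and the Celsius unit are assumptions drawn from the text, not code from the platform itself:

```python
# Hypothetical sketch of the access-decision rules: a valid pass/test plus a
# matching face grants entry; exempt visitors only need a temperature <= 37 C.

def grant_access(qr_valid: bool, face_matches: bool,
                 exempt: bool, temperature_c: float) -> bool:
    """Return True when entry is allowed under the rules described above."""
    if exempt:
        return temperature_c <= 37.0       # exemption still requires no fever
    return qr_valid and face_matches       # valid pass/test AND identity confirmed

assert grant_access(True, True, False, 36.5)        # valid pass, face matches
assert not grant_access(True, False, False, 36.5)   # valid QR, wrong face
assert grant_access(False, False, True, 36.9)       # exempt, no fever
```

An expired QR code, a positive PCR test, or one older than 48 hours would all map to `qr_valid=False` here.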

For the AI, the study consists of five stages. The essential phase is the storage of the user's information in the system. The Faster R-CNN architecture is applied for QR code detection, the MTCNN architecture for face detection, and the FaceNet architecture for face recognition. Deep learning is performed on a database of a few people. For each face image, the FaceNet model produces a fixed-length embedding vector as a unique facial signature (the distinguishing characteristics of a person); similarity between vectors is measured with the Euclidean distance or cosine similarity. The recognition process follows the same stages as learning, namely image acquisition, detection, and feature extraction. The acquisition phase resizes the input images and normalizes them by removing the average pixel value, and all detected regions of interest (ROI) are scaled to fit the input of the CNN architecture. The extraction phase computes the feature vectors and stores them in the database. The detection phase is based on the MTCNN model, which locates the bounding boxes and landmarks of the faces in an image. The MTCNN model comprises three processing blocks to perform face detection and tracking. In the first block, candidate windows pass through a shallow CNN (P-Net); in the second, a more complex CNN refines the windows, rejecting the many windows that do not contain a face (R-Net). In the third processing


block, a robust CNN (O-Net) polishes the result and outputs the landmark positions of the faces.

FaceNet is a facial recognition system with a unified embedding for face recognition and clustering. Given an image of a face, the architecture extracts high-quality features and predicts a 128-element vector representation of those features, called a face embedding. FaceNet directly learns a mapping from face images to a compact Euclidean space in which distances correspond directly to a measure of face similarity. FaceNet takes a face embedding as input and predicts the identity of the stored face (for recognition). A complementary technique applies a standardized rotation to the face, relying on facial cues to align the feature vectors. To perform the alignment, the facial landmarks must first be found as quickly as possible (for speed of execution), which is why we used the two architectures (MTCNN and FaceNet) together, extracting from the image only the facial features (eyes, face contour, nose, mouth, etc.) as vectors.

Several works [14, 15] use the Faster R-CNN architecture; in our case, it extracts the QR code from the image. This architecture is composed of two branches that share convolutional layers: the first is a region proposal network that learns a set of window locations, and the second is a classifier that learns to label each window as one of the classes in the training set. We use all the layers of the network that work with object proposals and extract features from the convolutional layers, building a global image descriptor from the activations of the Faster R-CNN layers; the descriptor dimension equals the number of filters in the convolutional layer. In general, Faster R-CNN is based on a CNN feature extraction network that computes feature maps from the input image. The convolution layers preserve the image size through an appropriate choice of padding and stride, while each pooling layer halves the image. Faster R-CNN gives us better feature representations for the detection and extraction of QR codes from images and improves the performance of spatial analysis.

Our AI platform makes its decision based on the three architectures mentioned above: MTCNN, FaceNet, and Faster R-CNN. The extracted features are compared with those stored in the face and QR code database using a similarity metric; in our case we opted for the Euclidean distance. We explored MTCNN face detection together with a FaceNet facial recognition package to implement verified access.
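As a sketch, matching an embedding against the enrolled database with the Euclidean distance might look like the following; the 2-D vectors, names, and threshold are toy values (real FaceNet embeddings are 128-dimensional, and the threshold must be tuned):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, database, threshold=1.0):
    """Return the closest enrolled identity, or None if nothing is near enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in database.items():
        d = euclidean(embedding, ref)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

db = {"alice": [0.0, 0.0], "bob": [3.0, 4.0]}
assert identify([0.1, 0.1], db) == "alice"   # close to an enrolled face
assert identify([10.0, 10.0], db) is None    # unknown face -> rejected
```

The threshold is what separates "recognized" from "cannot be identified in the database" in the decision logic above.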

#### **3. Watermarking for foveal and visual image coding to evaluate the quality assessment**

Psychovisual coding algorithms optimize image quality according to image complexity. The characteristics of the human visual system (HVS) in the frequency and spatial domains are exploited for the best coding results.

Our system is designed according to the schemes shown in **Figures 2** and **3**. The construction steps are: acquisition of the QR code image and of the citizen's face, decomposition of the image by the discrete wavelet transform (DWT), the psychovisual weighting model, watermark embedding, and SPIHT scalable coding. The reconstruction steps are: the inverse psychovisual weighting model, watermark extraction, and SPIHT scalable decoding of the reconstructed image.

It is important to compress the image in order to guarantee a low memory footprint and fast transmission without degrading image quality. First, we

#### **Figure 2.**

*Visual coding scenario for QR code image.*

#### **Figure 3.** *Foveal coding scenario for Citizen's face image.*

decompose the original image using the discrete wavelet transform. This algorithm splits the image into a biorthogonal set of wavelets using two filters, one low-pass and one high-pass. The low-pass filter extracts the most important information seen by the eye, while the high-pass filter extracts the edges and details of the image.

The DWT is a transformation used for frequency-domain analysis of images [16, 17]. It decomposes the image into four non-overlapping multi-resolution sub-bands: LL1 (approximation, or low-low sub-band), HL1 (horizontal, or high-low sub-band), LH1 (vertical, or low-high sub-band), and HH1 (diagonal, or high-high sub-band). LL1 is the low-frequency component, whereas HL1, LH1, and HH1 are the high-frequency (detail) components.
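A minimal single-level sketch of this four-sub-band split, using the Haar wavelet with a simple averaging convention (library implementations such as PyWavelets use a different normalization):

```python
# Single-level 2-D Haar DWT: each 2x2 block yields one coefficient per
# sub-band (approximation plus horizontal/vertical/diagonal details).

def haar_dwt2(img):
    """Split an even-sized 2-D list into LL, HL, LH, HH sub-bands."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2 * i][2 * j],     img[2 * i][2 * j + 1]
            c, d = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 4   # approximation
            HL[i][j] = (a - b + c - d) / 4   # horizontal detail
            LH[i][j] = (a + b - c - d) / 4   # vertical detail
            HH[i][j] = (a - b - c + d) / 4   # diagonal detail
    return LL, HL, LH, HH

LL, HL, LH, HH = haar_dwt2([[1, 2], [3, 4]])
assert LL == [[2.5]] and HH == [[0.0]]
```

Applying the same split recursively to LL yields the multi-level decomposition (LL2, LL3, …) used later for watermark embedding.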

All wavelets used are based on the Daubechies 9/7 filter, which allows high-quality reconstruction of the image. The benefit of this type of DWT compression is fast computation with few resources and attractive mathematical properties. A psychovisual weighting filter is then applied to the wavelet sub-bands. This model uses a contrast sensitivity filter to discriminate visible from invisible frequencies and remove the invisible ones. The luminance setting and contrast calibration are adapted according to perceptual thresholds based on wavelet-domain JND.

The weighting model combines the following steps. First, the just-noticeable-distortion (JND) model, established from the adaptation of the luminance and contrast of the image, is applied to improve the performance of the perceptual coding; the measurement of the visibility threshold is based on this JND model. We then apply a contrast sensitivity function (CSF) filter that masks invisible frequencies by taking into account


the frequency-sensitivity properties of human vision. The luminance mask is then applied to the original wavelet spectrum to adapt the lightness of the image, with a correction factor that varies the luminance of the coded image. The contrast mask is then applied according to the perceptual thresholds to compute the contrast correction, which eliminates invisible contrast information and enhances the perceptible information.

The same weighting model is applied to both the visual coding of the QR code and the foveal coding of the face image, with an adaptation of the localization of the regions of interest by applying a foveal filter in the case of foveal coding.

Foveation is a lossy filter that reduces the size of the transmitted image by preserving the relevant information and removing from the image all the background that will not be processed by the system. The filter extracts the regions of interest and reduces the wavelet coefficients by applying a low-pass filter while focusing on the target region. The parameters of the foveation filter must be chosen carefully to preserve good image quality and the needed information. A key characteristic of the foveation filter is that the frequency spectrum preserved in a region depends on its distance from the point of fixation: the farther a region is from the fixation point, and the greater the observation distance, the more high-frequency content can be removed.
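A toy foveal weighting of this kind might look as follows; the decay law and the parameter `alpha` are purely illustrative assumptions, not the filter used in the platform:

```python
import math

def foveal_weight(x, y, fx, fy, alpha=0.05):
    """Illustrative weight that decays with distance from the fixation point.

    Coefficients far from (fx, fy) get a small weight, mimicking the loss of
    high-frequency sensitivity away from the fovea."""
    ecc = math.hypot(x - fx, y - fy)      # eccentricity in pixels
    return 1.0 / (1.0 + alpha * ecc)

# Weight is maximal at the fixation point and decreases monotonically with
# eccentricity:
assert foveal_weight(10, 10, 10, 10) == 1.0
assert foveal_weight(30, 10, 10, 10) < foveal_weight(15, 10, 10, 10)
```

In the actual coder, such a weight would multiply the wavelet coefficients before quantization, so background regions cost few bits.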

Watermarking is one defense, among others, against copying [18]. It is a method for embedding data into a multimedia element such as an image, audio, or video file [19]. Current watermarking methods often aim at a certain level of robustness against attacks that try to remove the hidden watermark at the cost of destroying the quality of the media [20]. This chapter presents an image watermarking method based on the discrete wavelet transform (DWT). The proposed method uses a 3-level DWT: the original image, of size 512 × 512, is decomposed to the third level using the Haar wavelet, providing the four sub-bands LL3, LH3, HL3, and HH3 [21]. A 3-level Haar DWT is likewise applied to the watermark image. Digital image watermarking consists of two processes, watermark embedding and watermark extraction [22], described below (**Figure 4**).

For watermark embedding, we need a host image and a watermark image. First, a first-level DWT is performed on the host image and on the watermark image to decompose them into the four sub-bands LL1, HL1, LH1, HH1 and

**Figure 4.** *Watermarked image process.*

wLL1, wHL1, wLH1, wHH1, respectively. Then a second-level DWT is performed on the LL1 and wLL1 sub-bands to obtain the four smaller sub-bands LL2, HL2, LH2, HH2 and wLL2, wHL2, wLH2, wHH2, respectively, and a third-level DWT is performed on the LL2 and wLL2 sub-bands to obtain LL3, HL3, LH3, HH3 and wLL3, wHL3, wLH3, wHH3, respectively. Next, an embedding function adds the two third-level approximation sub-bands with a strength factor alpha: newLL3 = LL3 + alpha · wLL3. The inverse DWT is then performed on the sub-bands newLL3, LH3, HL3, HH3 to obtain the new LL2; the same inverse is applied to newLL2, LH2, HL2, HH2 to obtain the new LL1; and a final inverse DWT on newLL1, LH1, HL1, HH1 yields the watermarked image.

For the watermark extraction phase, we start by performing a first-level DWT on the host image and the watermarked image to decompose them into the sub-bands LL1, HL1, LH1, HH1 and nLL1, nHL1, nLH1, nHH1, respectively. A second-level DWT on the LL1 and nLL1 sub-bands gives LL2, HL2, LH2, HH2 and nLL2, nHL2, nLH2, nHH2, respectively, and a third-level DWT on the LL2 and nLL2 sub-bands gives LL3, HL3, LH3, HH3 and nLL3, nHL3, nLH3, nHH3, respectively. The extraction formula, with the same value of alpha as in embedding, recovers wLL3 = (nLL3 − LL3) / alpha. We then apply the inverse DWT to wLL3, with all other sub-bands (LH, HL, HH) set to zero, to obtain wLL2. Finally, we repeat this last step at each level to recover the extracted watermark (**Figures 5** and **6**).
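The embedding and extraction formulas can be sketched element-wise on the LL3 sub-bands as below. Note the large `ALPHA = 0.5` is chosen only to keep the toy arithmetic exact in floating point; in practice a much smaller strength is used, and extraction requires the original host sub-band (a non-blind scheme):

```python
# Element-wise alpha-blend embedding (newLL3 = LL3 + alpha * wLL3) and its
# inverse extraction (wLL3 = (nLL3 - LL3) / alpha) on third-level sub-bands.

ALPHA = 0.5  # exaggerated strength, purely for this exact toy example

def embed(LL3, wLL3, alpha=ALPHA):
    """Blend the watermark sub-band into the host approximation sub-band."""
    return [[c + alpha * w for c, w in zip(row, wrow)]
            for row, wrow in zip(LL3, wLL3)]

def extract(nLL3, LL3, alpha=ALPHA):
    """Recover the watermark sub-band given the original host sub-band."""
    return [[(n - c) / alpha for n, c in zip(row, crow)]
            for row, crow in zip(nLL3, LL3)]

host = [[100.0, 120.0], [90.0, 110.0]]   # toy LL3 of the host
mark = [[1.0, 0.0], [0.0, 1.0]]          # toy wLL3 of the watermark
assert extract(embed(host, mark), host) == mark
```

The same alpha must be shared between embedder and extractor, exactly as the text specifies.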

We developed a visible difference predictor (VDP) metric to evaluate the quality between the reference image and the decoded image. It captures the set of HVS features by using the wavelet transform to analyze the content of an image. The VDP metric can automatically detect errors in an image that are not visible to the human eye. The principle is to compare the original image and the degraded image, associating each point of the visibility map with a visibility probability. An improved WVDP model is presented in our work [23]. We apply the previously explained weighting model to compare relevant information and neglect invisible information


**Figure 5.** *The DWT compression with face detection and recognition diagram.*


**Figure 6.**

*The DWT compression with QR code embedding diagram.*

and use a psychometric function to examine the quality factor PS. A Minkowski summation over all wavelet sub-bands is then performed to determine the mean opinion score (MOS): the larger this factor, the higher the image quality, hence the important role of this metric. Similarly, for foveal coding, we evaluate the image quality with an FDVP metric, improved from the VDP metric by applying a foveal weighting model that uses the foveal filter to detect the areas of interest [24].
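A small sketch of such a Minkowski pooling of per-sub-band scores follows; the exponent `beta` and the inputs are hypothetical, chosen only to illustrate the summation:

```python
# Illustrative Minkowski summation pooling per-subband distortion scores into
# a single value, as described above.

def minkowski_pool(subband_scores, beta=2.0):
    """Pool the per-subband scores with a Minkowski sum of exponent beta."""
    return sum(abs(s) ** beta for s in subband_scores) ** (1.0 / beta)

# With beta = 2 this reduces to a Euclidean norm over the sub-bands:
assert minkowski_pool([3.0, 4.0], beta=2.0) == 5.0
```

Larger exponents weight the worst sub-band more heavily, which is why perceptual metrics often use beta > 2.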

The SPIHT progressive encoder is the final phase; its task is to improve the quality of the image progressively, prioritizing the most relevant information. The SPIHT encoder is an improved version of the EZW zero-tree encoder supporting lossless compression. The progressive aspect of this encoder is that it detects the most relevant information and sends it first, transmitting the most significant bits before the least significant ones.
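The bit-plane-progressive idea can be illustrated independently of SPIHT's zero-tree structures; this sketch simply transmits coefficient magnitudes one bit-plane at a time, from most to least significant, so truncating the stream early still yields a coarse approximation of every coefficient:

```python
# Bit-plane progressive transmission: each stage adds one more bit-plane
# (MSB first) to the reconstruction of every coefficient.

def bitplanes(values, nbits):
    """Yield the reconstruction after each bit-plane, most significant first."""
    recon = [0] * len(values)
    for b in range(nbits - 1, -1, -1):
        for i, v in enumerate(values):
            recon[i] |= v & (1 << b)   # reveal bit-plane b of every value
        yield list(recon)

coeffs = [13, 6, 1]                    # 4-bit magnitudes
stages = list(bitplanes(coeffs, 4))
assert stages[0] == [8, 0, 0]          # after the top bit-plane only
assert stages[-1] == coeffs            # all planes received -> exact
```

SPIHT adds to this idea an efficient significance map (the zero trees) so that the positions of significant coefficients are also coded cheaply.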

#### **4. Discussion and results**

The classical transmission of the QR code and CitizenPicture images raises many problems in terms of size, complexity, time, memory, and security. To ensure their rapid transmission without loss and, in particular, without degradation of image quality, this work integrates classical SPIHT coding and psychovisual coding, and highlights the impact on execution time of graphic security based on watermarking and payload security based on ECC encryption.

A broad comparative analysis has been performed which is divided into three evaluations; a qualitative study presented in **Tables 1** and **2**, a subjective study illustrated in **Figures 7** and **8**, and finally the quantitative study displayed in **Tables 3** and **4**.

Starting with **Table 1**, the results concern the different coding types at different quality levels according to four binary budgets: the foveal EFIC coding of the CitizenPicture (in our case the Lena test image) compared with its SPIHT reference, and the visual coding of the QR code compared with SPIHT.

From these results, we notice that at levels (A. FPS and a. FPS), with a bit rate of 4:1, the images have excellent quality; at levels (B. FPS and b. FPS), with a bit

#### **Table 1.**

*Lena EFIC foveal coding images versus the SPIHT coding images, and QR code EVIC visual coding images versus the optimized SPIHT version, with their quality scores PS using the visual WVDP assessor, given for varying bit rate and fixed observation distance. The bit rate varies as follows: A. 0.25 bpp, B. 0.15 bpp, C. 0.0625 bpp, D. 0.0313 bpp; a. 0.25 bpp, b. 0.15 bpp, c. 0.0625 bpp, d. 0.0313 bpp.*

rate of 8:1, the images keep good quality. The images at (C. FPS and c. FPS), with a bit rate of 16:1, have medium quality, and finally those at (D. FPS and d. FPS), with a bit rate of 32:1, have poor quality. On the other hand, Lena coded with EFIC, which focuses on the face rather than the whole image, provides better quality than SPIHT-coded Lena. The same holds for the EVIC visual coding of the QR code, which outperforms SPIHT in terms of perceptual quality.

Moving on to **Table 2**, which shows the watermarked Lena SPIHT images (column 1) versus the SPIHT de-watermarked QR code (column 2), and the watermarked Lena EFIC images (column 3) versus the EVIC de-watermarked QR code (column 4), for different quality and bit rate budgets. Four bit-rate regimes emerge: a high bit rate gives excellent quality, a medium bit rate good quality, a reasonable bit rate acceptable quality, and a low bit rate unsatisfactory quality. We also notice that the watermarked Lena EFIC versus the de-watermarked QR code EVIC provides excellent quality when compared with its SPIHT reference.


#### **Table 2.**

*The watermarking Lena SPIHT coding images (column 1) versus the SPIHT de-watermarking QR code images (column 2), and watermarking Lena EFIC foveal coding images (column 3) versus the EVIC de-watermarking QR code images (column 4), with their quality scores PS using the visual WVDP assessor, given for varying bit rate and fixed observation distance. The bit rate varies as follows: A. 0.25 bpp, B. 0.15 bpp, C. 0.0625 bpp, D. 0.0313 bpp.*

**Figure 7.** *EFIC vs. SPIHT applied to non-secure CoAP CitzenPicture.Png payload.*

#### **Figure 8.**

*EVIC vs. SPIHT applied to non-secure CoAP deWatermarkedQrCode.Png payload.*


#### **Table 3.**

*Execution time gain percent EFIC vs. SPIHT non secure for CitzenPicture.Png and secure version with ECC.*


#### **Table 4.**

*Execution time gain percent EVIC vs. SPIHT for Dewatermarked QR code non secure and its secure version with ECC.*

So, from **Tables 1** and **2**, we can see that when the binary budget increases we obtain good-quality images whatever the coding. Moreover, psychovisual coding is the best coding compared with SPIHT in terms of quality.

In the subjective study, the curves in **Figure 7** show the bit-rate and execution-time consumption for the CitizenPicture (Lena) with EFIC versus SPIHT coding, in both the ECC-secured and unsecured versions, while **Figure 8** presents EVIC versus SPIHT coding for the de-watermarked QR code. From these two figures, we can see that EFIC for the Lena test and EVIC for de-watermarking consume little time compared with SPIHT.

Regarding security, the secured version in both figures consumes more time than the non-secured version, which is expected when a security layer is added to protect the images; even so, ECC does not consume much time compared with the non-secured version and with other security algorithms.

Finally, moving to the quantitative study presented in **Tables 3** and **4**, we have the percentage gain in execution time for the CitizenPicture in the non-secure and secure versions (**Table 3**), and the execution-time gain of EVIC versus SPIHT for the de-watermarked QR code, non-secure and secure (**Table 4**). We notice that execution is faster when the binary budget is below 16:1. From all these results, we can say that even if the execution time increases when the binary budget is above 16:1, we obtain good-quality images, whereas when the budget decreases the image quality becomes mediocre and weak.

#### **5. Conclusion**

In this work, we processed the QR code images and the CitizenPicture captured at the entrance of public or private spaces by decomposing them with the DWT, applying the psychovisual weighting model, embedding the watermark, and performing SPIHT embedded coding; for the reconstruction of these images we used the inverse psychovisual weighting model, watermark extraction, and SPIHT embedded decoding.

These visual optimization techniques, based on properties of the human visual system and on quality optimization tests, have been used to improve the performance of the medical IoT platform. Encouraging results were obtained in terms of reduced execution time, storage space, bandwidth, and memory load. Combining the watermark with ECC made it possible to send the citizen's picture containing the QR code securely over the CoAP protocol.

The use of this medical IoT platform will help fight the coronavirus pandemic by allowing a large number of people to simultaneously access a space with full security of their private data, such as their CitizenPicture and QR code.

#### **Author details**

Abdelhadi El Allali<sup>1</sup>\*†, Ilham Morino<sup>1</sup>†, Salma AIT Oussous<sup>1</sup>†, Siham Beloualid<sup>1</sup>†, Ahmed Tamtaoui<sup>2</sup>† and Abderrahim Bajit<sup>1</sup>†

1 Ibn Tofail University, Laboratory of Advanced Systems Engineering (ISA), National School of Applied Sciences, Kenitra, Morocco

2 SC Department, Mohammed V University, Laboratory of Advanced Systems National Institute of Posts and Telecommunications, Rabat, Morocco

\*Address all correspondence to: aelallali@gmail.com

† These authors contributed equally.

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Sun R, Wang W, Xue M, Tyson G, Camtepe S, Ranasinghe DC. An empirical assessment of global COVID-19 contact tracing applications. In: 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). 2021. pp. 1085-1097. DOI: 10.1109/ICSE43902.2021.00101.

[2] Mohammad-Rahimi H, Nadimi M, Ghalyanchi-Langeroudi A, Taheri M, Ghafouri-Fard S. Application of machine learning in diagnosis of COVID-19 through X-Ray and CT images: A scoping review. Frontiers in Cardiovascular Medicine. 2021;**8**:638011. DOI: 10.3389/fcvm.2021.638011. PMID: 33842563; PMCID: PMC8027078

[3] De Nardis L, Mohammadpour A, Caso G, Ali U, Di Benedetto M-G. Internet of things platforms for academic Research and Development: A critical review. Applied Sciences. 2022; **12**(4):2172. DOI: 10.3390/app12042172

[4] Rathee G, Sharma A, Kumar R, Iqbal R. A secure communicating things network framework for industrial IoT using Blockchain technology. Ad Hoc Networks. 2019;**94**:101933. DOI: 10.1016/j.adhoc.2019.101933

[5] Przybylowski A, Stelmak S, Suchanek M. Mobility behaviour in view of the impact of the COVID-19 pandemic-public transport users in gdansk case study. Sustainability. 2021; **13**(1):1-12. DOI: 10.3390/su13010364

[6] Kumar K, Kumar N, Shah R. Role of IoT to avoid spreading of COVID-19. International Journal of Intelligent Networks. 2020;**1**(April):32-35. DOI: 10.1016/j.ijin.2020.05.002

[7] Miller DD, Brown EW. Artificial intelligence in medical practice: The question to the answer? The American Journal of Medicine. 2018;**131**(2):129-133. DOI: 10.1016/j.amjmed.2017.10.035

[8] Barodi A, Bajit A, Tamtaoui A, Benbrahim M. An enhanced artificial intelligence-based approach applied to vehicular traffic signs detection and road safety enhancement. Advances in Science, Technology and Engineering Systems Journal. 2021;**6**(1):672-683. DOI: 10.25046/aj060173

[9] Barodi A, Bajit A, Benbrahim M, Tamtaoui A. Improving the transfer learning performances in the classification of the automotive traffic roads signs. E3S Web Conferences. 2021;**234**:00064. DOI: 10.1051/e3sconf/202123400064

[10] Rajeswari PG, Thilagavathi K. A novel protocol for indirect authentication In Mobile networks based on elliptic curve cryptography. Journal of Theoretical and Applied Information Technology. 2009

[11] Majumder S, Ray S, Sadhukhan D, et al. ECC-CoAP: Elliptic curve cryptography based constraint application protocol for internet of things. Wireless Personal Communications. 2021;**116**:1867-1896. DOI: 10.1007/s11277-020-07769-2

[12] Naik N. Choice of effective messaging protocols for IoT systems: MQTT, CoAP, AMQP and HTTP. In: 2017 IEEE International Systems Engineering Symposium (ISSE). 2017. pp. 1-7

[13] Kayal P, Perros H. A comparison of IoT application layer protocols through a smart parking implementation. In: 2017 20th Conference on Innovations in Clouds, Internet and Networks (ICIN). Paris; 2017. pp. 331-336

*Application of Computational Intelligence in Visual Quality Optimization Watermarking… DOI: http://dx.doi.org/10.5772/intechopen.106008*

[14] Parvathi S, Tamil Selvi S. Detection of maturity stages of coconuts in complex background using faster R-CNN model. Biosystems Engineering. 2021;**202**:119-132. DOI: 10.1016/j. biosystemseng.2020.12.002

[15] Aslam A, Curry E. A survey on object detection for the internet of multimedia things (IoMT) using deep learning and event-based middleware: Approaches, challenges, and future directions. Image and Vision Computing. 2021;**106**: 104095. DOI: 10.1016/j. imavis.2020.104095

[16] Lin C, Gao W, Guo MF. Discrete wavelet transform-based triggering method for single-phase earth fault in power distribution systems. IEEE Transactions on Power Delivery. Oct. 2019;**34**(5):2058-2068. DOI: 10.1109/TPWRD.2019.2913728

[17] Weeks M, Bayoumi M. Discrete wavelet transform: Architectures, design and performance issues. The Journal of VLSI Signal Processing-Systems for Signal, Image, and Video Technology. 2003;**35**:155-178. DOI: 10.1023/A: 1023648531542

[18] Singh OP, Singh AK, Srivastava G, et al. Image watermarking using soft computing techniques: A comprehensive survey. Multimedia Tools and Applications. 2021;**80**:30367-30398. DOI: 10.1007/s11042-020-09606-x

[19] Anand A, Singh AK. Watermarking techniques for medical data authentication: A survey. Multimedia Tools and Applications. 2021;**80**: 30165-30197. DOI: 10.1007/s11042-020- 08801-0

[20] Fares K, Khaldi A, Redouane K, Salah E. DCT & DWT based watermarking scheme for medical information security. Biomedical Signal Processing and Control. 2021;**66**:102403, ISSN 1746-8094. DOI: 10.1016/j. bspc.2020.102403

[21] Alshoura WH, Zainol Z, Teh JS, Alawida M, Alabdulatif A. Hybrid SVD-based image watermarking schemes: A review. IEEE Access. 2021;**9**: 32931-32968. DOI: 10.1109/ ACCESS.2021.3060861

[22] Zhang J et al. Deep Model Intellectual Property Protection via Deep Watermarking. In: IEEE Transactions on Pattern Analysis and Machine Intelligence. DOI: 10.1109/ TPAMI.2021.3064850

[23] Bajit A, Nahid M, Benbrahim M, Tamtaoui A. A perceptually optimized wavelet Foveation based embedded image Coder and quality assessor based both on human visual system tools. International Symposium on Advanced Electrical and Communication Technologies (ISAECT). 2019;**2019**:1-7. DOI: 10.1109/ISAECT47714.2019. 9069686

[24] Bajit A, Nahid M, Tamtaoui A, Benbrahim M. A Psychovisual optimization of wavelet Foveation-based image coding and quality assessment based on human quality criterions. Advances in Science, Technology and Engineering Systems Journal. 2020;**5**: 225-234. DOI: 10.25046/aj050229

### *Edited by Jaydip Sen and Joceli Mayer*

In the era of generative artificial intelligence and the Internet of Things (IoT), with explosive growth in the volume of data and the associated need for processing, analysis, and storage, new challenges have arisen in identifying spurious and fake information and in protecting the privacy of sensitive data. This has led to an increasing demand for more robust and resilient schemes for authentication, integrity protection, encryption, non-repudiation, and privacy preservation of data. This book presents some of the state-of-the-art research in the field of cryptography and security in computing and communications. It is a useful resource for researchers, engineers, practitioners, and graduate and doctoral students working on cryptography, network security, data privacy, and machine learning applications for security and privacy in the context of the IoT.

Published in London, UK © 2023 IntechOpen © hunthomas / iStock

Information Security and Privacy in the Digital World - Some Selected Topics
