**Real Time Implementation of Digital Watermarking Algorithm for Image and Video Application**

Amit Joshi1, Vivekanand Mishra1 and R. M. Patrikar2 *1Sardar Vallabhbhai National Institute of Technology Surat 2Visvesvaraya National Institute of Technology Nagpur India* 

#### **1. Introduction**

64 Watermarking – Volume 2

Watermarking is the process of hiding a predefined pattern or logo in multimedia content such as an image, audio or video in a way that preserves the quality and imperceptibility of the media. The predefined pattern or logo represents the identity of an author or of ownership rights. Recent years have seen rapid growth in digital multimedia: digital data (image, audio and video) can be sent over the World Wide Web (www) with little effort or cost, but security remains the main issue. In the face of these dramatic changes, the entertainment industry has scrambled to adopt a slew of technologies that allow it to retain the copyright controls provided by the law while harnessing the new medium to increase the size of the industry and enhance the consumer experience.

In recent years, the research community has seen much activity in the area of digital watermarking as an additional tool for protecting digital content, and many excellent papers have appeared over the years (Arun Kejariwal, 2003). Digital watermarking attempts to copyright the digital data that is freely available on the World Wide Web and thus protect the owner's rights. As opposed to traditional printed watermarks, digital watermarks are transparent signatures: they are integrated within digital files as noise, or random information that already exists in the file, which makes the detection and removal of the watermark more difficult. Typically, watermarks are dispersed throughout the entire digital file, so that manipulating one portion of the file does not alter the underlying watermark. To provide copy protection and copyright protection for digital image and video data, two complementary techniques are being developed: encryption and watermarking. A further data-hiding method, closely related to watermarking, is steganography, which was originally a way of transmitting hidden (secret) messages between allies. Several data-hiding techniques are available for security; the details of each are presented in the next section.



#### **2. Data hiding techniques**

**Cryptography:** Cryptography scrambles a message into a code to obscure its meaning. The scrambling is done with the help of a secret key: the scrambled message is said to be encrypted, and it can be decrypted only with that same secret key. Cryptography thus provides security for the message.

**Steganography:** With steganography, the sender hides the message in a host file. The host file, or cover message, is the file that anyone can see. When people use this technique, they often hide the true intent of the communication inside a more commonplace communication scenario. In steganography, usually the message itself is of value and must be protected through clever hiding techniques, while the "vessel" used for hiding the message is worthless.

**Watermarking:** Watermarking is the direct embedding of additional information into the original content or host signal. Ideally, there should be no perceptible difference between the watermarked and original signals, and the watermark should be difficult to remove or alter without damaging the host signal. In watermarking, the effective coupling of the message to the vessel, which is the digital content itself, is of value, and the protection of the content is crucial.

In steganography, the method of hiding the message may be secret and the message itself is kept secret; in watermarking, by contrast, the embedding process is typically known and the message (except for the use of a secret key) does not have to be secret. Many people find it difficult to differentiate digital watermarking from steganography, so consider a simple example. Suppose someone gives me a beautiful birthday gift with his name on the wrapper. Taking the steganography view, I am interested in what is inside, so I open the gift without any care for the wrapper. Taking the watermarking view, I am interested in the wrapper rather than the gift, because it gives me a clear indication of the provider. The concept of cryptography is quite different from both of these approaches to data security. Digital content is encrypted at the transmitter using a key and can be decrypted at the receiver if and only if the correct key is available; cryptography therefore provides protection only over the channel. Once the encrypted content has been decrypted at the receiver, no further protection of the digital content against copying remains. Encryption must therefore be complemented by a method that protects the digital content after decryption, and this is where watermarking comes in. Another difference between cryptography and watermarking is that cryptography maps the data into a form that is unreadable without decryption, while watermarking embeds data while maintaining the multimedia in its original form.
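The contrast can be made concrete with two toy routines. This is an illustrative sketch only, not a secure cipher or a robust watermark: a stream-cipher-style encryption makes the content unreadable without the key, while an LSB watermark leaves the content fully usable and hides the mark inside it.

```python
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: the output is unreadable without the key,
    and applying the same key again restores the original exactly."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def lsb_embed(pixels, bits):
    """Toy watermark: hide one bit in the least significant bit of each
    pixel; the image stays viewable and the change is imperceptible."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def lsb_extract(pixels, n):
    """Read the hidden bits back out of the first n pixels."""
    return [p & 1 for p in pixels[:n]]
```

Decryption restores the message exactly, whereas the watermarked pixels differ from the originals by at most one grey level, so the image remains consumable while carrying the mark.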

#### **3. Digital watermarking**

The following parameters are important for digital watermarking:


**a. Transparency:** The most fundamental requirement for any watermarking method is that it be transparent to the end user. The watermarked content should be consumable on the intended user device without annoying the user; the watermark should show up only on a watermark-detector device.

**b. Security:** Watermarked information shall be accessible only to authorized parties, who alone have the right to alter the watermark content. Encryption can be used to prevent unauthorized access to the watermarked data.

**c. Ease of embedding and retrieval:** Ideally, watermarking of digital media should be possible on the fly, and the computation needed by the selected algorithm should be minimal.

**d. Robustness:** Watermarking must be robust enough to withstand all kinds of signal-processing attacks and unauthorized access. Any attempt, whether intentional or unintentional, that has the potential to alter the data content is considered an attack. Robustness against attacks is a key requirement for watermarking, and the success of this technology for copyright protection depends on its stability against them.

**e. Effect on bandwidth:** Watermarking should be done in such a way that it does not increase the bandwidth required for transmission. If watermarking becomes a burden on the available bandwidth, the method fails.

**f. Interoperability:** Digitally watermarked content should remain interoperable, so that it can be accessed seamlessly through heterogeneous networks and played on various playout devices, whether or not those devices are aware of watermarking techniques.

#### **4. Need of hardware implementation**



The implementation of watermarking could be on many platforms such as software, hardware, an embedded controller or a DSP. System performance is a major parameter when designing complex systems. A standard DSP, with its Von Neumann style fetch-operate-write-back computation, fails to exploit the inherent parallelism in the algorithm. For example, a 30-tap FIR filter implemented on a DSP microprocessor requires 30 MAC (multiply-accumulate) cycles to advance one unit of real time. Further, each MAC operation may take more than one cycle, as it involves a memory fetch, the multiply-accumulate operation, and the memory write-back. In contrast, a hardware implementation can store the data in registers and perform the 30 MAC operations in parallel in a single cycle. Thus, the high throughput requirements of real-time digital systems often dictate hardware-intensive solutions.
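The MAC-cycle argument can be made concrete with a small sketch (ours, for illustration; a real DSP kernel would be written in C or assembly):

```python
def fir_sequential(x, h):
    """Direct-form FIR filter. On a Von Neumann style DSP, each output
    sample costs len(h) sequential MAC cycles (30 for a 30-tap filter);
    dedicated hardware can evaluate all taps concurrently instead."""
    y = []
    for n in range(len(x)):
        acc = 0
        for k in range(len(h)):         # these MACs run one after another
            if n - k >= 0:
                acc += h[k] * x[n - k]  # one multiply-accumulate
        y.append(acc)
    return y
```

A 30-tap filter therefore spends 30 trips through the inner loop per output sample, which is exactly the serial cost a parallel hardware datapath removes.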

FPGAs provide a rapid prototyping platform: they can be reprogrammed to achieve different functionalities without incurring the non-recurring engineering costs typically associated with custom IC fabrication. For commercial applications such as movie production, video recording and on-the-spot video surveillance, a real-time response is always required, so a software solution is not recommended because of its long delay. Since the goal of this research is a high-performance watermark-encoding unit in an integrated circuit (IC) for commercial applications, and since FPGAs (field programmable gate arrays) offer both fast processing speed and field programmability, an FPGA was determined to be the best approach for building a fast prototyping module for verifying design concepts and performance.



Several software implementations of watermarking algorithms are available, but very few attempts have been made at hardware implementation. Software implementations are popular because of their ease of use and flexibility. Software-based watermarking mostly works offline: images are captured with a camera and stored on a computer, watermarking software then embeds the watermark, and the images are finally distributed. This approach has the drawback of a certain delay between capturing the image and embedding the watermark; if an attacker manipulates the image before the watermark is embedded, the ownership of the originator is put in question. There is therefore a need for real-time watermarking, where the watermark-embedding unit resides inside the device (such as a digital camera) and embedding is done directly when the image is captured. A hardware implementation of watermarking has advantages in reliability and in performance with respect to area, power and speed, which is crucial in applications such as real-time broadcasting, video authentication and secure camera systems for courtroom evidence. A hardware implementation can also exploit parallel processing: the watermarking process involves both processing the watermark and pre-processing the original content before embedding, and these two tasks are independent, so they can run in parallel to achieve the high speed required for real-time applications.
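The independence of the two stages can be expressed directly. The sketch below is purely illustrative (the transforms are toy placeholders, and Python threads only model the independence, whereas hardware would genuinely run the stages concurrently):

```python
import threading

results = {}

def prepare_watermark(logo_bits):
    # Stage 1: condition the watermark (toy transform: invert each bit).
    results["wm"] = [b ^ 1 for b in logo_bits]

def preprocess_content(pixels):
    # Stage 2: pre-process the host image (toy transform: level shift).
    results["host"] = [p - 128 for p in pixels]

t1 = threading.Thread(target=prepare_watermark, args=([0, 1, 1, 0],))
t2 = threading.Thread(target=preprocess_content, args=([100, 200, 150, 50],))
t1.start(); t2.start()   # the two independent stages run concurrently
t1.join(); t2.join()     # embedding must wait for both to finish

# Embedding combines the two results (toy additive embedding).
marked = [h + w for h, w in zip(results["host"], results["wm"])]
```

In a hardware pipeline the two stages would occupy separate datapaths, so their latencies overlap instead of adding.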

#### **5. Application of digital watermarking**

Digital watermarking technology can be applied in various fields, such as copyright protection, transaction tracing, broadcast monitoring and tamper proofing.

#### **5.1 Copy-right protection**


Fig. 1. Copyright Protection service (MarkAny, 2010)


Copyright protection is the most common application, especially for multimedia objects: the user inserts copyright information as a watermark, or a never-copy watermark, in the digital content. This watermark can prove ownership in court when someone has infringed the copyright. The number of duplications, manipulations and distributions of the digital content, which are the main sources of illegal use, can also be controlled. It is further possible to encode the identity of the original buyer along with the identity of the copyright holder, which allows tracing of any unauthorized copies.

#### **5.2 Transaction tracing fingerprinting**

A fingerprint is treated as a transactional watermark, applied to trace the source of illegal copying of digital content. To trace the source of illegal copies, the owner embeds a different unique watermark in the copy of the data supplied to each customer. Fingerprinting can be compared to embedding in the data a serial number that is related to the customer's identity: it enables the intellectual-property owner to identify customers who have broken their license agreement by supplying the data to a third party. If misuse of the digital content takes place, it is easy to trace the responsible customer.

Fig. 2. Video Tracking and finger-printing service (MarkAny, 2010)
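One simple way to realize such per-customer fingerprints (a sketch of the idea, not any particular deployed scheme; the helper name is ours) is to derive the watermark bits deterministically from the customer's identity by hashing it:

```python
import hashlib

def fingerprint_bits(customer_id, n_bits=64):
    """Derive a unique, reproducible watermark bit-string from a
    customer identity (hypothetical helper, SHA-256 based)."""
    digest = hashlib.sha256(customer_id.encode()).digest()
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    return bits[:n_bits]
```

The owner then needs no database of issued marks: re-hashing a suspect customer's identity regenerates the fingerprint that should be present in that customer's copy.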


#### **5.3 Broadcast monitoring**

This application is used by advertisers to broadcast watermarked information at a specific time and location. Watermarking finds application here in monitoring or tracking the digital content being broadcast, along with the time and location of broadcasting. Specialized equipment is used to track the broadcast or radio channels; upon reception, the watermark is detected, the content is verified, and the true reach of the content is reported to the broadcasters. It is also useful in finding illegal rebroadcasts of copyrighted material. By embedding watermarks in commercial advertisements, an automated monitoring system can verify whether the advertisements are broadcast as contracted; a broadcast surveillance system can check all broadcast channels and charge the TV stations according to its findings. Owners of copyrighted videos want to receive their royalties each time their property is broadcast.

Fig. 3. Broadcast monitoring system (MarkAny, 2010)
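The detection step in such a monitoring station can be sketched as a correlation test. This is a minimal illustration with names and a threshold of our choosing; real monitors detect in a transform domain and calibrate the threshold against false-alarm statistics:

```python
def watermark_score(received, pattern):
    """Correlation of the received samples with the known +/-1 watermark
    pattern, normalized by the pattern energy; close to the embedding
    strength when the mark is present, close to zero otherwise."""
    dot = sum(r * p for r, p in zip(received, pattern))
    energy = sum(p * p for p in pattern)
    return dot / energy

def is_marked(received, pattern, threshold=0.5):
    # Illustrative threshold; tune against false-alarm requirements.
    return watermark_score(received, pattern) >= threshold
```

A monitoring station would run this test continuously on each channel and log the time and channel whenever the score crosses the threshold.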

#### **5.4 Tamper – proofing**

Tamper proofing can be applied to detect forgery when content has been maliciously and intentionally altered, by embedding watermark information that is easily damaged by even small operations on the content. For example, security equipment such as CCTV has already been converted from analogue to digital systems, and the data these systems record is stored entirely in digital form. The weakness of digital data, however, is that even ordinary users with a personal computer can easily manipulate moving pictures and sound data, which causes a reliability problem for digital evidence. A means of judging whether forgery has occurred is therefore necessary if moving-picture data recorded in a digital depository through CCTV is to be used as evidence in court. Tamper proofing can be utilized in a DVR (Digital Video Recorder)/NVR (Network Video Recorder) system, a digital camera, a camcorder, etc.

Fig. 4. Tamper proofing Service flow (MarkAny, 2010)

#### **6. Implementation of image watermarking**

The first DCT-based semi-fragile watermarking algorithm for a digital camera with an FPGA implementation was developed in (Hyum Lim et al., 2003). A visible watermarking scheme in the DCT domain was also developed (Saraju P. Mohanty & N. Raganathan, 2004). After that, wavelet-based algorithms were developed to suit the new-millennium JPEG2000 standard and to exploit the multiresolution property of the wavelet transform (Victor V. Hernandez Guzman & Mariko Nankano, 2004; Jianyog Huang & Changsheng Yang, 2004). A DWT-based watermark implementation technique for digital photography, in software, was proposed in (Sammy H. M. Hawk & Edumund Y. Lam, 2002), and a DWT-based approach for securing images captured by a digital camera followed (Lei Tian & Heng-Ming Tai, 2006). Spread-spectrum watermarking techniques provide better perceptual transparency and watermark robustness (I. J. Cox et al., 1995, 1997) and can also be developed for the secure digital camera application. A watermarking scheme using a random binary sequence was developed in (A. Lumini & D. Maio, 2000), and a threshold-based watermarking algorithm was presented in (Yong-Gang Fu & Hui-Rong Wang, 2008). A novel watermarking scheme for block processing with difference expansion was developed in (Hsien-Wen Tseng & Chin-Chen Chang, 2008). The first attempt to develop a simple and efficient watermarking technique for the JPEG2000 codec, with a scattered-matrix watermark, was presented in (Tung Shou Chen et al., 2004). There are many software-based implementations of image watermarking algorithms, but very few attempts have been made at hardware implementation; this is covered in detail in Section 6.1.

The input image for the watermarking algorithm can be either a monochrome (black and white) or a color image. As with traditional color processing, we first convert a color image from an RGB color space to the YCbCr color space. Then only the Y component of the image is down-sampled to form a grayscale image with a resolution of 1 Mpixel (assuming the original is between 2 and 8 Mpixels, true for most digital cameras today). Afterwards, a watermark is embedded in the image by quantizing the coefficients of the nth sub-band level of the DWT of the image. Finally, the Y image plane is converted back to the spatial domain by IDWT, and the watermarked image is formed by up-sampling the image and combining it with the original Y, Cb and Cr color components. For the extraction process, the user has access to the watermark (w), the coefficient selection key (c key) (A. J. Menezes et al., 1996; B. Schneier, 1996) and, in the case of non-blind watermarking, the original image. Since only the user knows the secret key for the watermarking, security against forgery is guaranteed.

As stated earlier, only the luminance component Y is down-sampled to embed the watermark. Down-sampling is a lossy process, and down-sampled chroma signals (Cb and Cr) would lose information that cannot be retrieved by the reverse process of up-sampling at the receiving end. The complete process of down- and up-sampling the 256 x 256 color Lena image is shown in this section. First, the color image, which comprises RGB components, is converted to YCbCr, where Y contains the luminance information and Cb and Cr contain the chrominance information. The Y signal is then down-sampled by a factor of two. This down-sampled image is used for the wavelet processing and for embedding the watermark. The image is then up-sampled by a factor of two to recover the Y component. The complete process is shown in Fig. 5.

Fig. 5. Watermark Embedding Process

Fig. 6. Down sampling and up sampling process of Y component

The problem with this process is that up-sampling inserts zeros into the image, so the image after up-sampling (the recovered Y component) is distorted, as shown in Fig. 6 above. To overcome this problem, we simply add the original Y component values at the zero-padded positions, as shown in Fig. 7 below.

Fig. 7. Reconstructed Y plane after up sampling

The complete modified process is shown in Fig. 8.

Fig. 8. Modified Down-Up sampling process with proper reconstruction of Image
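The down/up-sampling steps described above can be sketched in a few lines of NumPy (a toy illustration of the idea, not the authors' MATLAB code; the small 4 x 4 array stands in for the Y plane):

```python
import numpy as np

y = np.arange(16, dtype=float).reshape(4, 4)   # toy Y (luminance) plane

# Down-sample by a factor of two (keep every second row and column).
y_down = y[::2, ::2]
# ... wavelet processing / watermark embedding would operate on y_down ...

# Plain up-sampling inserts zeros, giving the distorted image of Fig. 6.
y_up = np.zeros_like(y)
y_up[::2, ::2] = y_down

# Modified process (Figs. 7-8): keep the processed samples and fill the
# zero-padded positions with the original Y values.
y_rec = y.copy()
y_rec[::2, ::2] = y_up[::2, ::2]

print(np.array_equal(y_rec, y))  # True here, because no watermark was added
```

With a real watermark, `y_down` would be modified before up-sampling, so `y_rec` would differ from `y` only at the processed sample positions.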

The chrominance signals can also be down-sampled, as shown in Fig. 9 for the Cb component and Fig. 10 for the Cr component. The difference between the original and the recovered up-sampled images can be seen in Fig. 9 and Fig. 10: the down/up-sampling process gives better results for the luminance component Y than for the chrominance signals Cb and Cr.

Fig. 9. Down sampling and up sampling process for Cb component

Fig. 10. Down sampling and up sampling process for Cr component

#### **6.1 Proposed algorithm**

Spatial-domain digital watermarking methods are generally considered to perform poorly under geometric distortion (such as cropping and scaling) and common signal processing (such as JPEG compression and filtering), but they are efficient in terms of computational cost because of their simple operations. Frequency-domain watermarking techniques, on the other hand, have high computational complexity but provide great robustness against different attacks. To increase the robustness, the number of sub-band levels must be increased, which requires more computational cost. We have therefore adopted a combined spatial-frequency-domain approach, which has the advantages of both domains. The frequency-domain transformation is done with a lifting-based wavelet scheme, and the spatial-domain processing is done with bit-plane slicing. The steps of the algorithm are described in (Amit Joshi, 2009). The implementation flow of the proposed scheme is shown in Fig. 11. The image is read through MATLAB and its pixels are stored in a datain.txt file. With the help of the text I/O package, the datain.txt file is read in VHDL and the Le Gall 5/3 lifting-based wavelet is applied to obtain the transform-domain coefficient matrix. The LL-band coefficients are stored in a separate memory for embedding the watermark. RTL code for bit-plane slicing separates the different planes from LSB to MSB. The watermark is then added to the coefficients selected by the random number generator. All the planes are then reconstructed with the bit-plane slicing RTL code to obtain the LL band of the watermarked image. The lifting-based Le Gall 5/3 IDWT is applied to obtain the pixel values, and a MATLAB function is used to construct the watermarked image.

Fig. 11. Embedding scheme for watermarking (MATLAB image read/write and text I/O on the MATLAB side; lifting-based DWT/IDWT, LL-band separation, bit-plane slicing, random number generation for the coefficient key and watermark, and LSB embedding on the HDL side)
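The LSB-embedding step of this flow can be sketched in software. A toy Python model (illustrative only; the seed, the number of embedded bits, and NumPy's PRNG stand in for the hardware's coefficient selection key):

```python
import numpy as np

def embed_lsb(ll_band, watermark_bits, seed):
    """Embed watermark bits into the LSB plane of pseudo-randomly
    selected LL-band coefficients (seed = coefficient selection key)."""
    flat = ll_band.astype(np.int32).ravel().copy()
    rng = np.random.default_rng(seed)
    idx = rng.choice(flat.size, size=len(watermark_bits), replace=False)
    for i, bit in zip(idx, watermark_bits):
        flat[i] = (flat[i] & ~1) | bit        # overwrite the LSB plane only
    return flat.reshape(ll_band.shape)

def extract_lsb(marked_ll, n_bits, seed):
    """Detector side: the same seed regenerates the same positions."""
    flat = marked_ll.astype(np.int32).ravel()
    rng = np.random.default_rng(seed)
    idx = rng.choice(flat.size, size=n_bits, replace=False)
    return [int(flat[i] & 1) for i in idx]

ll = np.arange(64).reshape(8, 8)              # stand-in for LL coefficients
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_lsb(ll, bits, seed=0b10101101)
print(extract_lsb(marked, len(bits), seed=0b10101101))  # [1, 0, 1, 1, 0, 0, 1, 0]
```

Because only the LSB plane is touched, the maximum change to any coefficient is 1, which is what makes the scheme imperceptible after the IDWT.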

#### **6.1.1 Hardware implementation of wavelet**

For a hardware-efficient DWT-based scheme, a lifting-based approach is clearly better than the traditional convolution scheme. Lifting-based wavelet schemes are used in various forms, such as Daubechies 9/7 and Le Gall 5/3, but Le Gall 5/3 has proven more hardware-efficient because of its simplicity and lossless implementation. The odd and even sample values are calculated by the following equations (1) and (2):

$$y(2n+1) = x(2n+1) - \left\lfloor \frac{x(2n) + x(2n+2)}{2} \right\rfloor \tag{1}$$

$$y(2n) = x(2n) + \left\lfloor \frac{y(2n-1) + y(2n+1) + 2}{4} \right\rfloor \tag{2}$$
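Equations (1) and (2) translate directly into a predict step and an update step. A minimal Python model of the 1-D forward and inverse transform is given below; the symmetric boundary extension used here is one common convention (the chapter does not state its boundary handling explicitly), and the integer arithmetic makes the transform exactly invertible:

```python
def legall53_forward(x):
    """1-D integer Le Gall 5/3 lifting DWT (even-length input).
    Returns (approximation, detail) coefficient lists."""
    y = list(x)
    n = len(y)
    # Predict phase, eq. (1): odd samples become detail coefficients.
    for i in range(1, n, 2):
        right = y[i + 1] if i + 1 < n else y[i - 1]   # symmetric extension
        y[i] -= (y[i - 1] + right) // 2
    # Update phase, eq. (2): even samples become approximation coefficients.
    for i in range(0, n, 2):
        left = y[i - 1] if i - 1 >= 0 else y[i + 1]
        right = y[i + 1] if i + 1 < n else y[i - 1]
        y[i] += (left + right + 2) // 4
    return y[0::2], y[1::2]

def legall53_inverse(approx, detail):
    """Inverse transform: undo the update phase, then undo the predict phase."""
    n = 2 * len(approx)
    y = [0] * n
    y[0::2], y[1::2] = approx, detail
    for i in range(0, n, 2):                           # undo update
        left = y[i - 1] if i - 1 >= 0 else y[i + 1]
        right = y[i + 1] if i + 1 < n else y[i - 1]
        y[i] -= (left + right + 2) // 4
    for i in range(1, n, 2):                           # undo predict
        right = y[i + 1] if i + 1 < n else y[i - 1]
        y[i] += (y[i - 1] + right) // 2
    return y

x = [81, 101, 74, 38, 132, 134, 137, 141]   # a row of 8-bit pixel values
lo, hi = legall53_forward(x)
print(legall53_inverse(lo, hi) == x)        # True: lossless reconstruction
```

The lossless round trip is exactly the property that makes Le Gall 5/3 attractive for the hardware implementation described here.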

Fig. 12. Predict Phase

To implement this algorithm, the equations stated above are used. In the lifting scheme, the algorithm is divided into two phases: a predict phase and an update phase. To compute a predict-phase value, three inputs are required simultaneously, as per eq. (1). Similarly, the update phase requires one even input and two values obtained from the predict phase, as per eq. (2). An 8-bit grayscale image of Lena is used to evaluate the Le Gall 5/3 DWT. The architecture modules of the predict phase and the update phase are shown in Fig. 12 and Fig. 13, respectively.

Fig. 13. Update Phase


#### **6.1.2 Memory management**

As noted, to obtain the forward wavelet transform we initially need to read three input data values, and from these we get two coefficients: detail (high) and approximate (low). The image coefficients must be manipulated to obtain the correct output. In VHDL, two-dimensional matrices are not synthesizable, so to read an image of size n x n, a total of n^2 memory locations are required to store the input pixels. Here a 64 x 64 grayscale image is used for the wavelet transform; the data is processed row-wise and then column-wise. The memory organization is shown in Fig. 14.

Fig. 14. Memory Management of Wavelet transforms
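The flat addressing implied by Fig. 14 can be sketched as follows (row-major layout is an assumption here; the chapter shows the organization only graphically):

```python
def addr(row, col, n=64):
    """Flat address of pixel (row, col) in an n x n image memory."""
    return row * n + col

# Row-wise pass: pixels of one row occupy consecutive addresses.
row3 = [addr(3, c) for c in range(4)]
# Column-wise pass: pixels of one column are strided n locations apart.
col5 = [addr(r, 5) for r in range(4)]
print(row3, col5)  # [192, 193, 194, 195] [5, 69, 133, 197]
```

The row pass therefore streams sequentially through memory, while the column pass needs a stride of n between reads, which is why the two passes are handled separately in the RTL.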


#### **6.1.3 Watermarking embedding hardware implementation**

There are two basic blocks required for the watermark embedding process:

a. Bit plane slicing scheme implementation
b. Random number generator for key selection and watermark generation.


**a. Bit Plane Slicing Implementation:** This is the spatial-domain part of the scheme: the watermark is embedded directly in the pixel values. The algorithm splits the image into 8 planes, from MSB to LSB; the whole concept is illustrated in Fig. 15.

Fig. 15. Bit planes of a Image

Suppose the pixel values, in binary format, read from memory are:

01111001 01100101 01001010 00100110 10000100 10000110 10001001 10001101

The values read from memory are taken one by one into a temp variable. To separate a value into its different planes, the temp value is masked (bitwise AND, using the mask patterns 00000001, 00000010, ..., 10000000). Here the first value read from memory and stored in temp is **01111001**.


Next, another pixel value, 01100101, is read from memory into temp and the same procedure is followed. In this way the coefficients of every bit plane, LSB plane included, are obtained. Reconstruction of the planes is also very simple: we just add all the resulting plane values to recover the original value. Adding all the plane values of the first pixel, we obtain:

0000000**1** + 000000**0**0 + 00000**0**00 + 0000**1**000 + 000**1**0000 + 00**1**00000 + 0**1**000000 + **0**0000000 = **01111001**.
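The slicing and reconstruction above, for the example pixel 01111001, can be checked in a couple of lines (a software illustration of the RTL's mask-and-AND behaviour):

```python
def slice_planes(pixel):
    """Split an 8-bit pixel into its 8 bit-plane values (LSB plane first)
    by ANDing with the mask patterns 00000001 ... 10000000."""
    return [pixel & (1 << b) for b in range(8)]

planes = slice_planes(0b01111001)
print([format(p, '08b') for p in planes])
# LSB plane first: ['00000001', '00000000', '00000000', '00001000', ...]
print(sum(planes) == 0b01111001)  # True: adding all planes restores the pixel
```

Embedding a watermark bit then amounts to replacing the LSB-plane value before the planes are re-added.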

**b. Random Number Generator:** This is one of the important blocks of the watermarking process. Its role is to generate the coefficient selection key and to embed the watermark into the original content. As shown in Fig. 16, it uses eight D flip-flops, so the maximum coefficient selection key produced by the random number generator is 255. The watermark is added according to the key generated at the output. The random number generator starts from a secret key provided as its initial state; we have used 10101101 as the key that serves as the initial seed. The same key is used at the detection side, where the same random number generator produces the same pseudo-random sequence to retrieve the watermark.

Fig. 16. Random Number Generator
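The 8-bit D flip-flop chain of Fig. 16 behaves as a linear feedback shift register (LFSR). A Python model seeded with the key 10101101 is shown below; the feedback taps used here (polynomial x^8 + x^6 + x^5 + x^4 + 1) are one standard maximal-length choice and are an assumption, since the chapter does not list the tap positions:

```python
def lfsr8(seed, taps=(7, 5, 4, 3)):
    """8-bit Fibonacci LFSR. Yields one state (1..255) per clock.
    `taps` are the 0-indexed bit positions XORed into the feedback bit."""
    state = seed & 0xFF
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & 0xFF
        yield state

gen = lfsr8(0b10101101)
keys = [next(gen) for _ in range(5)]   # coefficient selection keys

# The detector seeds an identical LFSR with the same secret key and
# regenerates exactly the same pseudo-random sequence.
gen2 = lfsr8(0b10101101)
print(keys == [next(gen2) for _ in range(5)])  # True
```

With a maximal-length polynomial and a non-zero seed, the register cycles through all 255 non-zero states before repeating, matching the "maximum key of 255" noted above.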


#### **6.1.4 VLSI architecture of image watermarking algorithm**

The architecture proposed for the scheme is shown in Fig. 17. The main memory comprises memory space twice the size of the original image, as it has to store both the original and the watermarked values. For example, if the image size is 256 x 256, the main memory requirement is 2\*256\*256 = 131,072 locations. The memory is divided into two parts: RAM1 for the original image and RAM2 for the watermarked image. At detection time, for the non-blind scheme, the values in RAM1 are taken as the original pixel values and those in RAM2 as the watermarked values. As explained earlier in section 6.1.1, the lifting-based Le Gall 5/3 wavelet scheme requires three values to be read from RAM1.
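The memory sizing above is simple arithmetic; a sketch of the resulting address map (the concrete base addresses are implied by the RAM1/RAM2 split rather than stated in the chapter):

```python
N = 256                      # image is N x N pixels
RAM1_BASE = 0                # original image
RAM2_BASE = N * N            # watermarked image
TOTAL = 2 * N * N            # total main-memory locations

def ram1_addr(row, col):
    """Address of an original pixel."""
    return RAM1_BASE + row * N + col

def ram2_addr(row, col):
    """Address of the corresponding watermarked pixel."""
    return RAM2_BASE + row * N + col

print(TOTAL)            # 131072
print(ram2_addr(0, 0))  # 65536: the watermarked image starts halfway up
```

For the non-blind detector, reading the pair `(ram1_addr(r, c), ram2_addr(r, c))` yields the original and watermarked values of the same pixel.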


Fig. 17. VLSI Architecture of proposed algorithm (external RAM feeding RAM 1 and RAM 2 over an 8-bit bus; a 2-D DWT lifting unit with predict phase, update phase and temporary storage; LL-band extraction; a bit-plane slicing unit with mask pattern and logical AND; an LFSR for watermark generation; and a watermark adder)

#### **6.1.5 Pin diagram**

The pin diagram of the wavelet-based spatial-domain watermarking chip is shown in Fig. 18. The functional description of each pin is:

Fig. 18. Pin Diagram

Data In [7:0]: Data input bus. The stored original pixel values are input on this bus for processing.

CLK: Clock signal to chip.


Wmcontrol: Enabled while the watermark is being embedded.

RESETZ: Active-low signal to reset the chip.

START: Active-low handshake signal that initiates the data transfer operation on the Data In bus on every clock edge.

Data out [7:0]: Data output bus. The watermarked pixel values are output on this bus.

READY: Active-high signal, asserted for one CLK cycle after the completion of the watermark embedding operation; it indicates that the Data out bus carries valid output.

BUSY: Active-high signal indicating that watermarking is in progress. While it is high, external access to RAM1 is isolated and the data on the Data out bus is not valid.
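The handshake described by these pins can be modelled as a small state machine. A Python sketch (illustrative only; the real chip is cycle-accurate in hardware and spends many cycles in BUSY, collapsed to one here):

```python
from enum import Enum

class State(Enum):
    IDLE = 0       # waiting for START (active low)
    BUSY = 1       # watermarking in progress, RAM1 isolated
    READY = 2      # Data out valid for one CLK cycle

class WatermarkChip:
    def __init__(self):
        self.state = State.IDLE

    def clock(self, resetz=1, start=1):
        """Advance one CLK edge. RESETZ and START are active low."""
        if resetz == 0:
            self.state = State.IDLE          # asynchronous-style reset
        elif self.state is State.IDLE and start == 0:
            self.state = State.BUSY          # data transfer begins
        elif self.state is State.BUSY:
            self.state = State.READY         # embedding done
        elif self.state is State.READY:
            self.state = State.IDLE          # READY lasts one cycle
        return self.state

chip = WatermarkChip()
print([chip.clock(start=s).name for s in (0, 1, 1)])  # ['BUSY', 'READY', 'IDLE']
```

External logic should therefore sample Data out only in the single cycle during which READY is asserted.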

#### **6.2 Hardware implementation results**

The simulation result for the Le Gall 5/3 wavelet is shown in Fig. 19.


Fig. 19. Simulation of Le Gall 5/3 wavelet

#### **6.2.1 FPGA results**

Synthesis was performed with the Xilinx Project Navigator ISE 9.1 EDA tool. During simulation, the textio library was utilized to read the grayscale image file (Lena, 256 x 256). After processing, the results are stored in a text file. This text file is read through

MATLAB to generate the image. The synthesis report and the device utilization report for the proposed DWT, IDWT and watermarking processors are shown in Table 1 and Table 2, respectively. The results were obtained for the Xc3s500e-4fg320 (Spartan-3E) FPGA using Xilinx ISE 9.1.

| Resources | DWT Processor | Watermarking Processor | IDWT Processor |
|---|---|---|---|
| RAM/ROM | 3 | 2 | 4 |
| Adders/Subtractors | 16 | 2 | 14 |
| Registers/Flip-Flops | 469 | 62 | 542 |
| Latches | 12 | 5 | 10 |
| Muxes | 4 | 3 | 4 |
| Counters | 3 | 2 | 3 |

Table 1. Synthesis Report for Proposed DWT, IDWT and Watermarking Processor

| Resources | DWT Processor | Watermarking Processor | IDWT Processor |
|---|---|---|---|
| Slices | 535/4656 | 153/4656 | 570/4656 |
| Slice FFs | 395/9312 | 117/9312 | 410/9312 |
| LUTs | 982 | 335 | 925 |
| I/O | 70 | 25 | 70 |
| GCLKs | 3 | 1 | 3 |

Table 2. Device Utilization Report for Proposed DWT, IDWT and Watermarking Processor

#### **6.2.2 ASIC results**

The proposed scheme requires three major blocks to embed the watermark: the DWT, IDWT and watermarking processors. We have calculated area and power with Design Compiler using the standard cell library of the Faraday 0.18 um technology, as shown in Table 3. In Table 4, the scheme is also compared with other existing schemes.

| Block | Type | Area (um\*um) | Dynamic Power |
|---|---|---|---|
| DWT | 1 Dimensional | 12196 | 2.0592 mW |
| DWT | 2 Dimensional | 15770 | 8.6813 mW |
| IDWT | 1 Dimensional | 13237 | 2.8531 mW |
| IDWT | 2 Dimensional | 18106 | 9.3752 mW |
| Watermarking Processor | Bit Plane Slice | 192 | 23 uW |
| Watermarking Processor | Random Number Generator | 322 | 48 uW |

Table 3. Area and dynamic power results for proposed scheme

| Research Work | Watermarking Type | Domain | Technology | Power | Execution Time |
|---|---|---|---|---|---|
| Tsai and Lu, 2001 | Invisible robust | DCT | 0.35 um | 107.6 uW | 1.494 ms |
| Garimella et al., 2003 | Invisible fragile | Spatial | 0.13 um | 82 uW | 1.3059 ms |
| Saraju P. Mohanty et al., 2005 | Visible | Spatial | 0.35 um | 72 uW | 0.914 ms |
| Saraju P. Mohanty et al., 2006 | Invisible robust | DCT | 0.35 um | 90 uW | 1.125 ms |
| Proposed Scheme | Invisible robust | Wavelet | 0.18 um | 69 uW | 0.893 ms |

Table 4. Summary of Watermark Custom IC Hardware Descriptions in the Literature of Watermarking

#### **7. Video watermarking**

In today's multimedia technology, the most widely used medium is video; consequently, most occurrences of copyright infringement and abuse involve video content. Video is a sequence of frames, and each frame can be considered a still image. The challenges specific to video watermarking are as follows:

a. Video media is susceptible to more attacks than any other media.
b. Video content is sensitive to subjective quality, and watermarking may degrade that quality.
c. Video compression algorithms are computationally intensive, so there is little headroom left for watermarking computation.
d. Video is bandwidth-hungry and is therefore mostly carried in the compressed domain; the watermarking algorithm should be adaptable to compressed-domain processing.
e. For low-bit-rate video, watermarking poses additional challenges, as there is less room for the watermark data.
f. During video transmission, frame drops are very common. If the watermark data spreads over many frames, it may become irretrievable; the watermarking should be robust against this phenomenon.

The easiest way to embed a watermark in video is to treat each frame of the video as a still image and apply an image watermarking algorithm, so the algorithm described in section 6.1 still gives quite comparable results when applied to video. One wavelet-domain algorithm developed for video is presented in (Amit Joshi & Vivekanand Mishra, 2011). With this approach, however, the temporal dimension of the video is not utilized. In the same way, many image watermarking algorithms have been extended to video, but some points need to be considered during the extension. First, there is a huge amount of intrinsically redundant data between frames, which can be exploited before embedding the watermark. Second, there must be a good balance between the motion and the motionless regions. Third, strong attention must be paid to real time and streaming
video applications.

#### **7.1 Compressed domain video watermarking**

Video watermarking is done mainly either in the uncompressed (raw) domain or in the compressed domain. Raw-domain watermarking is the classical approach: to apply a watermark, the compressed video stream is first decompressed, a spatial-domain or transform-domain watermarking technique is applied, and the watermarked video is then recompressed. The disadvantage of this classical approach is that the watermark embedder has no knowledge of how the video will be recompressed and cannot make informed decisions based on the compression parameters. This approach treats the video compression process as a removal attack and requires the watermark to be inserted with excessive strength, which can adversely affect watermark perceptibility. Another issue is that the recompression step is likely to add compression noise, degrading the video quality further. The main drawback, however, is that fully decompressing and recompressing the video stream can be computationally expensive. A faster and more flexible approach, applying the watermark to the compressed video, is known as compressed-domain watermarking. Here the original compressed video is partially decoded to expose the syntactic elements of the compressed bit stream (such as the encoded transform coefficients), the watermark is embedded in the partially decoded bit stream, and the stream is reassembled to form the compressed watermarked video. The insertion process ensures that all modifications to the compressed bit stream produce a syntactically valid bit stream that can be decoded by a standard decoder. The embedding process has access to information contained in the compressed bit stream, such as prediction and quantization parameters, and can adjust the watermark embedding accordingly to improve robustness, capacity and visual quality.

H.264/MPEG-4 AVC is the latest video coding standard of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). H.264/MPEG-4 AVC has become the most widely accepted video coding standard since the deployment of MPEG-2 at the dawn of digital television, and it may soon overtake MPEG-2 in common use. It covers all common video applications, ranging from mobile services and video conferencing to IPTV, HDTV and HD video storage. The H.264 standard has a number of advantages that distinguish it from existing standards while sharing common features with them, such as bandwidth savings of up to 50 %, high-quality video and error resilience.

(Frank Hartung & Bernd Girod, 1998) presented a spread-spectrum method for embedding additive digital watermarks into both uncompressed and compressed video sequences. It adds a pseudo-noise signal to the video that is invisible and robust against manipulation. For practical applications, watermarking schemes operating on compressed video are desirable: the watermark is processed through the discrete cosine transform (DCT) and embedded into the MPEG-2 bit stream without increasing the bit rate, and it can be retrieved from the decoded video without knowledge of the original video.

(Karima Ait Saadi et al., 2008) propose a new block-based DCT selection and a robust video watermarking algorithm to hide copyright information in the compressed domain of the emerging video coding standard H.264/AVC. The watermark is first quantized and securely inserted. To achieve invisibility and robustness, the high-entropy 4x4 DCT blocks within the macroblocks are selected to minimize the distortion caused by the embedded watermark, and are then scrambled using the Linear Congruential Generator (LCG) technique. This approach provides good robustness against attacks such as recompression by the H.264 codec, transcoding and scaling.

(Jing Zhang & Anthony T. S. Ho, 2005) present a byte-replacement watermarking method for direct stream marking of H.264/AVC streams. The method applies a watermark directly to an entropy-coded H.264/AVC stream and can be used when the stream, or at least its I-frames, is entropy-coded with CAVLC. The embedding process replaces each identified segment with one of the alternative values from the encoder's VLC table, the choice of alternative being informed by the payload to be embedded.

In the method of (Dekun Zou & Jeffrey A. Bloom, 2008), grayscale watermark pre-processing is adapted for H.264/AVC. 2-D 8-bit watermarks, such as detailed company trademarks or logos, can be used as an incontrovertible watermark for copyright protection. A grayscale watermark pattern is first modified to accommodate the H.264/AVC computational constraints and then embedded into the video data in the compressed domain. With the proposed method, the video watermarking scheme achieves high robustness and good visual quality without increasing the overall bit rate.

(Jing Zhang, 2007) proposes robust MPEG-2 video watermarking techniques, focusing on commonly used geometric processing: bit-rate reduction, cropping, removal of rows, arbitrary-ratio downscaling and frame dropping. Both the embedding and the extraction of the watermark are done in the compressed domain, so the computational cost is

Similar to image watermark implementations, the video watermark system can be implemented in either software or hardware, each having advantages and drawbacks. In software, the watermark scheme can simply be implemented in a PC environment. The watermark algorithm's operations can be performed as scripts written for a symbolic interpreter running on a workstation or machine code software running on an embedded processor. By programming the code and making use of available software tools, it can be easy for the designer to implement any watermark algorithm at any level of complexity. However, such an implementation is relatively slow and therefore not suitable for real time applications. In practical, video storage and distribution systems, video sequences are stored and transmitted in a compressed format. Thus, a watermark that is embedded and detected directly in the compressed video stream which can minimize computational demanding operations. Furthermore, frequency domain watermark methods are more robust than the spatial domain techniques(Xian Li,2008). Therefore, working on compressed rather than uncompressed video is important for practical watermark applications. There are few standards for video compression. All current popular standards for video compression, namely MPEG-x (ISO standard) and H.26x formats (ITU-T standard), are hybrid coding schemes and are DCT based compression methods. Such schemes are based on the principles of motion compensated prediction and block-based transform coding. Currently, researchers are given more focus on recently developed H.264 based video watermarking standard for low bit rate video application.

#### **7.1 Compressed domain video watermarking**


Video watermarking is mainly performed either in the uncompressed (raw) domain or in the compressed domain. Raw-domain watermarking is the classical approach: the compressed video stream is first fully decompressed, a spatial-domain or transform-domain technique is applied to embed the watermark, and the watermarked video is then recompressed. The disadvantage of this approach is that the embedder has no knowledge of how the video will be recompressed and cannot make informed decisions based on the compression parameters. It effectively treats the compression process as a removal attack and therefore requires the watermark to be inserted with excessive strength, which can adversely impact watermark perceptibility. Another issue is that the recompression step is likely to add compression noise, degrading the video quality further. The main drawback, however, is that fully decompressing and recompressing the video stream is computationally expensive. A faster and more flexible approach is known as compressed-domain watermarking. Here, the original compressed video is only partially decoded to expose the syntactic elements of the compressed bit stream that carry the watermark (such as encoded transform coefficients). The watermark is embedded in the partially decoded bit stream, which is then reassembled to form the compressed watermarked video. The insertion process ensures that all modifications produce a syntactically valid bit stream that can be decoded by a standard decoder. Because the embedder has access to information contained in the compressed bit stream, such as prediction and quantization parameters, it can adjust the embedding accordingly to improve robustness, capacity, and visual quality.

Similar to image watermark implementations, a video watermarking system can be implemented in either software or hardware, each having advantages and drawbacks. In software, the scheme can simply be implemented in a PC environment: the algorithm's operations can be performed as scripts written for a symbolic interpreter running on a workstation or as machine code running on an embedded processor. By programming the code and making use of available software tools, a designer can implement a watermarking algorithm of any complexity. However, such an implementation is relatively slow and therefore not suitable for real-time applications. In practical video storage and distribution systems, video sequences are stored and transmitted in a compressed format; a watermark that is embedded and detected directly in the compressed video stream therefore minimizes computationally demanding operations. Furthermore, frequency-domain watermarking methods are more robust than spatial-domain techniques (Xian Li, 2008). Working on compressed rather than uncompressed video is thus important for practical watermarking applications. All current popular standards for video compression, namely the MPEG-x (ISO) and H.26x (ITU-T) formats, are hybrid coding schemes based on motion-compensated prediction and block-based DCT transform coding. Currently, researchers focus mostly on H.264-based video watermarking for low-bit-rate video applications.
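The compressed-domain embedding loop described above can be sketched as follows. This is a toy model, not any particular codec: the block layout, the chosen coefficient position `pos` and the LSB-forcing rule are illustrative assumptions.

```python
# Illustrative sketch: embedding watermark bits by modifying quantized DCT
# coefficients of a "partially decoded" stream. Re-encoding the modified
# coefficients yields a still-valid bit stream, since only coefficient
# magnitudes change.

def embed_in_coeffs(quantized_blocks, bits, pos=5):
    """Force the LSB of one mid-frequency coefficient per block to a payload bit."""
    marked = []
    for block, bit in zip(quantized_blocks, bits):
        block = list(block)
        c = block[pos]
        sign = -1 if c < 0 else 1
        # set the LSB of |c| to the watermark bit, keeping the sign
        block[pos] = sign * ((abs(c) & ~1) | bit)
        marked.append(block)
    return marked

def extract_from_coeffs(marked_blocks, pos=5):
    return [abs(b[pos]) & 1 for b in marked_blocks]

blocks = [[12, -3, 7, 0, 1, 9, 0, 0], [5, 2, -8, 1, 0, 4, 0, 0]]
wm = [1, 0]
marked = embed_in_coeffs(blocks, wm)
assert extract_from_coeffs(marked) == wm
```

Because only quantized values are touched, no inverse transform or rounding step can disturb the payload, which is the practical appeal of compressed-domain embedding.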

H.264/MPEG-4 AVC is the latest video coding standard of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). It has become the most widely accepted video coding standard since the deployment of MPEG-2 at the dawn of digital television, and it may soon overtake MPEG-2 in common use. It covers all common video applications, ranging from mobile services and video conferencing to IPTV, HDTV and HD video storage. The H.264 standard has a number of advantages that distinguish it from existing standards, while sharing common features with them: up to 50 % bandwidth saving, high quality video and error resilience.

Frank Hartung and Bernd Girod (1998) presented a spread-spectrum method for embedding additive digital watermarks into both uncompressed and compressed video sequences. It adds a pseudo-noise signal to the video that is invisible yet robust against manipulations. For practical applications, watermarking schemes operating on compressed video are desirable; the watermark is therefore processed through the discrete cosine transform (DCT) and embedded into the MPEG-2 bit-stream without increasing the bit-rate. The watermark can be retrieved from the decoded video without knowledge of the original video.
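The additive spread-spectrum idea can be illustrated with a small sketch. The pseudo-noise generation, sequence length and strength `alpha` are illustrative assumptions, not the parameters of the original scheme:

```python
import random

# Additive spread-spectrum watermarking: a keyed pseudo-noise (PN) sequence,
# modulated by the payload bit, is added to the host signal (here a list of
# DCT coefficients). Blind detection correlates the marked signal with the
# same PN sequence and takes the sign of the correlation.

def pn_sequence(n, seed):
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(n)]

def embed(coeffs, bit, seed=42, alpha=3.0):
    pn = pn_sequence(len(coeffs), seed)
    b = 1 if bit else -1
    return [c + alpha * b * p for c, p in zip(coeffs, pn)]

def detect(marked, seed=42):
    pn = pn_sequence(len(marked), seed)
    corr = sum(m * p for m, p in zip(marked, pn))
    return 1 if corr > 0 else 0

host = [0.0] * 64          # flat host for a clean demonstration
assert detect(embed(host, 1)) == 1
assert detect(embed(host, 0)) == 0
```

For a real host signal the correlation with the PN sequence is small but nonzero, so the sequence must be long enough (and `alpha` large enough) for the embedded term `alpha * b * n` to dominate.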

Karima Ait Saadi et al. (2008) propose a block-based DCT selection scheme and a robust video watermarking algorithm to hide copyright information in the compressed domain of the emerging H.264/AVC video coding standard. The watermark is first quantized and securely inserted. To achieve invisibility and robustness, the high-entropy 4x4 DCT blocks within the macroblocks are selected to minimize the distortion caused by the embedded watermark, and the watermark is scrambled using a Linear Congruential Generator (LCG) technique. This approach provides good robustness against attacks such as recompression by the H.264 codec, transcoding and scaling.
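A key-dependent scrambling order driven by an LCG can be sketched as follows. The LCG constants (glibc-style) and the use of the key as the seed are assumptions for illustration, not the constants of the cited scheme:

```python
# Key-dependent permutation via a Linear Congruential Generator (LCG):
# each index gets a pseudo-random draw, and sorting by the draws yields
# a permutation that only the key holder can reproduce.

def lcg(seed, a=1103515245, c=12345, m=2**31):
    while True:
        seed = (a * seed + c) % m
        yield seed

def scramble_order(n_blocks, key):
    """Return a key-dependent permutation of block indices."""
    gen = lcg(key)
    draws = [(next(gen), i) for i in range(n_blocks)]
    return [i for _, i in sorted(draws)]

order = scramble_order(8, key=7)
assert sorted(order) == list(range(8))           # it is a permutation
assert order == scramble_order(8, key=7)         # deterministic for a given key
```

The same generator run with the same key reproduces the order at the detector, so no side information beyond the key is needed.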

Jing Zhang and Anthony T. S. Ho (2005) present a byte-replacement method for direct stream marking of H.264/AVC streams. The watermark is applied directly to an entropy-coded H.264/AVC stream, which is possible when the stream, or at least its I-frames, is entropy coded with CAVLC. The embedding process replaces each identified segment with one of the alternative values from the VLC encoding table; the choice of alternative is informed by the payload to be embedded.

In the method of Dekun Zou and Jeffrey A. Bloom (2008), grayscale watermark pre-processing is adapted for H.264/AVC. 2-D 8-bit watermarks, such as detailed company trademarks or logos, can be used as watermarks for copyright protection. A grayscale watermark pattern is first modified to accommodate the H.264/AVC computational constraints and then embedded into the video data in the compressed domain. With the proposed method, the watermarking scheme achieves high robustness and good visual quality without increasing the overall bit-rate.

Jing Zhang et al. (2007) propose robust MPEG-2 video watermarking techniques, focusing on commonly used geometric processing such as bit-rate reduction, cropping, removal of rows, arbitrary-ratio downscaling, and frame dropping. Both the embedding and the extraction of the watermark are done in the compressed domain, so the computational cost is low. Moreover, the watermark extraction is blind. The presented technique is applicable not only to MPEG-2 video but also to other DCT-based video coding schemes.

Satyen Biswas et al. (2005) propose an adaptive compressed-video watermarking scheme that uses a scene-based, multiple gray-level watermark providing more perceptual information. The human visual system (HVS) is exploited to find a suitable set of DCT coefficients for watermark embedding. The method embeds several binary images, decomposed from a single watermark image, into different scenes of a video sequence. The spatial spread-spectrum watermark is embedded directly into the compressed bit stream by modifying discrete cosine transform (DCT) coefficients. The proposed scheme is substantially more effective and robust against spatial attacks such as scaling, rotation, frame averaging, and filtering, as well as temporal attacks like frame dropping and temporal shifting.

#### **7.1.1 Watermarking embedding hardware implementation**

#### **a. Integer DCT Transform based watermarking:**

The discrete cosine transform (DCT) is a very promising technique for video/image coding and is widely adopted by most image and video compression standards, including the latest H.264 standard. Since more and more applications apply these standards to portable systems such as hand-held videophones and multimedia terminals, it becomes imperative to develop a high-speed, low-complexity DCT chip as a key component for such applications. To support low-power applications, the computational complexity must be minimized as much as possible. For high speed of operation and low delay, a pipelined structure is used, which also reduces resource utilization. H.264 supports an integer-based DCT for low complexity and high speed. The 2-D integer DCT is obtained by applying the 1-D DCT in a column processor and a row processor; the detailed architecture is shown in Fig. 20. First, input pixels are read from memory and the 1-D column DCT is computed and stored in the transpose memory. The transpose memory performs the transposition: the column-DCT values are written horizontally while they are read vertically, and are then applied to the 1-D row DCT processor.

Fig. 20. VLSI Architecture for 2-D DCT.
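As a software reference for the datapath above, a minimal model of the H.264 4x4 forward integer transform (Y = Cf · X · Cf^T), computed as a column pass followed by a row pass, might look like this:

```python
# Software model of the H.264 4x4 forward integer transform.
# The hardware of Fig. 20 computes the same product as a 1-D column pass,
# a transpose memory, and a 1-D row pass; here the two passes are the two
# matrix multiplications. All arithmetic is integer (adds, subtracts,
# shifts by one for the factor-2 entries).

CF = [[1,  1,  1,  1],
      [2,  1, -1, -2],
      [1, -1, -1,  1],
      [1, -2,  2, -1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    return [list(row) for row in zip(*m)]

def forward_integer_dct(block):
    # column pass (CF * X), then row pass (* CF^T)
    return matmul(matmul(CF, block), transpose(CF))

# a flat block concentrates all energy in the DC term: DC = 16 * level
flat = [[3] * 4 for _ in range(4)]
out = forward_integer_dct(flat)
assert out[0][0] == 48
assert all(out[i][j] == 0 for i in range(4) for j in range(4) if (i, j) != (0, 0))
```

Because every entry of CF is 0, ±1 or ±2, the hardware needs only adders, subtractors and one-bit shifters, which is exactly why H.264 replaced the floating-point DCT with this transform.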


#### **b. Watermarking Embedding Hardware Implementation**


The algorithm presented by Yulin Wang and Alan Pearmain (2004) is blind watermarking based on scene detection. The algorithm is adapted for hardware, where the integer DCT is utilized; the steps of the hardware implementation are shown in Fig. 21. The algorithm is implemented with simple multipliers, shifters and adder/subtractors.

Fig. 21. VLSI Architecture for Integer DCT based watermarking

The input values coming from a video capturing device, such as a digital camera, are stored in memory; the DCT column processor operates on them and stores the transformed values in the transpose memory, and the transposed values are fed to the row processor. The schematic of the video watermarking design is shown in Fig. 22.


Fig. 22. Schematic Design of Integer based DCT watermarking


The device utilization of the above algorithm is shown in Table 5.

| Resources | Utilization | Percentage of utilization |
|---|---|---|
| Number of Slices | 70 out of 4656 | 1 % |
| Number of Slice Flip Flops | 105 out of 9312 | 1 % |
| Number of 4-input LUTs | 129 out of 9312 | 1 % |
| Number of IOBs | 40 out of 158 | 24 % |
| Number of GCLKs | 1 out of 24 | 4 % |

Table 5. Synthesis report of the video watermarking algorithm

#### **8. Conclusion**

The proposed algorithm is applicable to both image and video applications. It combines spatial-domain and frequency-domain approaches. From Tables 1-3 it can be concluded that the proposed scheme is suitable for real-time applications due to its simplicity: it overcomes the block-artifact problem of the DCT while retaining the advantages of both domains. It also has lower computational complexity than other algorithms because the watermark is embedded in the Le Gall 5/3 integer wavelet transform. The ASIC results of Table 4, obtained with Design Vision from Synopsys, show that the proposed scheme is comparable in speed and power to other existing schemes. Table 5 verifies that the proposed video watermarking algorithm is hardware-efficient.

#### **9. Acknowledgment**

I am thankful to Prof. Anand Darji, who guided me in completing this chapter. I would also like to thank all my friends who helped me, directly and indirectly, every bit of the way. I am obliged for the support given to me by my colleagues and especially by the "SMDP Project Lab", which provided me with platforms such as ISE (Xilinx), ModelSim (Mentor Graphics), MATLAB and Design Vision (Synopsys) that navigated me through the sea of experiments and helped me reach the desired level of work satisfaction. I am also quite thankful to the SMDP Lab for providing the 180 nm CMOS standard cell library for post-synthesis of my design.

#### **10. References**

A.J. Menezes, P.C. van Oorschot and S.A. Vanstone (1996), *Handbook of Applied Cryptography*, CRC Press, Boca Raton.

A. Lumini and D. Maio (2000), A Wavelet-Based Image Watermarking Scheme, *International Conference on Information Technology: Coding and Computing*, Las Vegas, NV, pp. 122-127.

Amit Joshi, Vivekanand Mishra (2011), Blind video watermarking of wavelet domain for copyright protection, *International Journal of Computing*, Vol. 1, Issue 3, pp. 291-295.

Amit Joshi, A.D. Darji (2009), Efficient Dual Domain Watermarking for secure images, *International Conference on Advances in Recent Technologies in Communication and Computing (ARTCom '09)*, pp. 909-914.

Arun Kejariwal (2003), Watermarking, *IEEE Potentials*, October/November 2003, pp. 37-40.

B. Schneier (1996), *Applied Cryptography: Protocols, Algorithms and Source Code in C*, second edition, John Wiley & Sons, New York.

Dekun Zou, Jeffrey A. Bloom (2008), H.264/AVC stream replacement technique for video watermarking, *ICASSP 2008*, pp. 1749-1752.

Frank Hartung, Bernd Girod (1998), Watermarking of uncompressed and compressed video, *Signal Processing*, Elsevier, Vol. 66, No. 3, May 1998, pp. 283-301.

Garimella, A., Satyanarayana, M.V.V., Kumar, R.S., Murugesh, P.S., Niranjan (2003), VLSI Implementation of Online Digital Watermarking Techniques with Difference Encoding for the 8-bit Gray Scale Images, *Proc. of the Intl. Conf. on VLSI Design*, pp. 283-288.

Hyun Lim, Soon-Young Park and Seong-Jun Kang (2003), FPGA Implementation of Image Watermarking Algorithm for a Digital Camera, *IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 2003)*, pp. 1000-1003.

Hsien-Wen Tseng, Chin-Chen Chang (2008), An extended difference expansion algorithm for reversible watermarking, *Image and Vision Computing*, Elsevier, pp. 1148-1308.

I.J. Cox, J. Kilian, T. Leighton and T. Shamoon (1995), Secure spread spectrum watermarking for Multimedia, NEC Research Institute, Princeton, NJ, Technical Report 95-10.

I.J. Cox, J. Kilian, F.T. Leighton and T. Shamoon (1997), Secure Spread Spectrum Watermarking for Multimedia, *IEEE Transactions on Image Processing*, Vol. 6, No. 12, pp. 1673-1687.

Jianyong Huang and Changsheng Yang (2004), Image Digital Watermarking algorithm Using Multiresolution Wavelet Transform, *IEEE International Conference on Systems, Man and Cybernetics*, pp. 2977-2982.

Jing Zhang, Anthony T.S. Ho, Gang Qiu (2007), Robust Video Watermarking of H.264/AVC, *IEEE Transactions on Circuits and Systems II: Express Briefs*, Vol. 54, No. 2, February 2007.

Jing Zhang and Anthony T. S. Ho (2005), Efficient robust watermarking of compressed 2-D grayscale patterns for H.264/AVC, *Proc. of IEEE Workshop on Multimedia Signal Processing*, pp. 1-4.

Karima Ait Saadi, Ahmed Bouridane, H. Meraoubi (2008), Secure and Robust Copyright Protection for H.264/AVC based on Selected Blocks DCT, *SIGMAP 2008*, pp. 351-355.

Lei Tian and Heng-Ming Tai (2006), Secure Image Captured by Digital Camera, *International Conference on Consumer Electronics (ICCE '06)*, pp. 341-342.

MarkAny (2010), Watermarking Technology, *Whitepaper*.

Sammy H. M. Kwok, Edmund Y. Lam (2002), Watermarking Implementation in digital photography, *Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems*.

Saraju P. Mohanty, N. Ranganathan (2004), VLSI Implementation of Visible Watermarking for a Secure Digital Camera Design, *Proceedings of the 17th International Conference on VLSI Design*.

Saraju P. Mohanty, N. Ranganathan and R.K. Namballa (2005), A VLSI architecture for visible watermarking in a secure still digital camera (S2DC) design, *IEEE Transactions on VLSI Systems*, Vol. 13, pp. 1002-1012.

Saraju P. Mohanty, N. Ranganathan and K. Balakrishnan (2006), A Dual Voltage-Frequency VLSI Chip for Image Watermarking in DCT Domain, *IEEE Transactions on Circuits and Systems II (TCAS-II)*, Vol. 53, pp. 394-398.

Satyen Biswas, Sunil R. Das, Emil M. Petriu (2005), An Adaptive Compressed MPEG-2 Video Watermarking Scheme, *IEEE Transactions on Instrumentation and Measurement*, Vol. 54, No. 5, October 2005.

Tsai, T.H., Lu, C.Y. (2001), A Systems Level Design for Embedded Watermark Technique using DSC Systems, *IEEE Int. Workshop on Intelligent Signal Processing and Communication Systems*, Nashville, TN, pp. 20-23.

Tung-Shou Chen, Jeanne Chen, Jian-Guo Chen (2004), A Simple and efficient watermark technique based on JPEG2000 Codec, *Multimedia Systems*, Springer, pp. 16-26.

Victor V. Hernandez Guzman, Mariko Nakano (2004), Analysis of a Wavelet-based Watermarking Algorithm, *Proceedings of the 14th International Conference on Electronics, Communication and Computers*, pp. 283-287.

Xian Li, Yonatan Shoshan, Alexander Fish, Graham Jullien, Orly Yadid-Pecht (2010), Hardware Implementation of Video Watermarking, *Information Science and Computing*, pp. 9-16.

Yong-Gang Fu and Hui-Rong Wang (2008), A Novel Discrete Wavelet Transform Based Digital Watermarking Scheme, *2nd International Conference on Anticounterfeiting, Security and Identification*, pp. 55-58.

Yulin Wang, Alan Pearmain (2004), Blind image data hiding based on self reference, *Pattern Recognition*, Elsevier, pp. 1681-1689.




## Sophisticated Spatial Domain Watermarking by Bit Inverting Transformation

Tadahiko Kimoto *Toyo University Japan*

#### 1. Introduction


Digital watermarking is a technique for embedding additional signals, watermarks, into digital signals such as images and afterward extracting them (Macq (1999)). From the viewpoint of domains to embed watermark signals into, the watermarking techniques are mainly divided into two categories: spatial-domain-based techniques and transform-domain-based ones.

In the watermarking of digital images, the transform-domain-based techniques can usually provide not only good visual quality in the resulting images but also stronger robustness against image modification than the spatial-domain-based ones. However, it is hard to embed watermarks exactly into transform domains. In the embedding procedure, the pixel values of a source image, which are usually quantized levels or integers, are first transformed into frequencies. The frequency coefficients are then modified so as to represent watermarks. The values inversely transformed from such modified transform domain are usually real numbers with fractions. Consequently, the quantization errors occur inevitably when the integral spatial domain is reconstructed. These errors are likely to disturb the watermark that has been embedded in the transform domain.
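The rounding problem can be demonstrated with a toy 1-D example; the orthonormal DCT-II, the sample signal and the embedded value 10.5 are arbitrary illustrative choices:

```python
import math

# Demonstration of the quantization problem described above: a value is
# embedded in a transform coefficient, but rounding the inverse-transformed
# samples back to integer pixel levels perturbs the embedded coefficient.

N = 8

def dct(x):
    """Orthonormal 1-D DCT-II of an N-sample signal."""
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
        c = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(c * s)
    return out

def idct(X):
    """Inverse of dct() (orthonormal DCT-III)."""
    out = []
    for n in range(N):
        s = 0.0
        for k in range(N):
            c = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
            s += c * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
        out.append(s)
    return out

pixels = [52, 55, 61, 66, 70, 61, 64, 73]
coeffs = dct(pixels)
coeffs[3] = 10.5                               # "embed" a value in one coefficient
rounded = [round(v) for v in idct(coeffs)]     # pixel levels must be integers
recovered = dct(rounded)[3]
assert recovered != 10.5                       # rounding already disturbed the watermark
```

Without the rounding step the transform pair is exact; it is the forced return to integer levels that injects the quantization error discussed above.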

In contrast, exact watermark signals can be embedded into the spatial domain though they are fragile under signal modification. The traditional method for spatial-domain-based image watermarking is first to select pixels in a source image and then, to modify the levels of the selected pixels so that the watermark can be expressed there (Wang et al. (2009)). The most primitive method for modifying a pixel level is to select a bit in the binary expression of the level and then, to invert the selected bit (Oka & Matsui (1997)). In this method, the bit position selected in the binary expression as well as the pixel location selected in a source image must be kept secret so that the watermark can be protected from unauthorized access.
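A minimal sketch of this bit-inverting embedding follows; the pixel values, the selected positions and the bit position k are hypothetical and together play the role of the secret key:

```python
# Primitive spatial-domain embedding by bit inversion: the secretly chosen
# pixel positions and bit position k form the key. Inverting bit k changes
# a level by exactly 2**k, and inverting it again restores the original.

def invert_bit(level, k):
    """Invert bit k of an integer gray level (bit 0 = least significant)."""
    return level ^ (1 << k)

def embed(pixels, positions, k):
    out = list(pixels)
    for p in positions:
        out[p] = invert_bit(out[p], k)
    return out

pixels = [120, 200, 64, 33]
key_positions, key_k = [1, 3], 2
marked = embed(pixels, key_positions, key_k)
assert marked == [120, 204, 64, 37]                     # levels changed by 2**2 = 4
assert embed(marked, key_positions, key_k) == pixels    # re-inverting restores the image
```

The involutive property (XOR with the same mask twice is the identity) is what makes this embedding exactly reversible in the spatial domain.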

Sophisticated Spatial Domain Watermarking by Bit Inverting Transformation 93

The embedded watermark distorts the source signal to some extent. Transformation in the image spatial domain has especially direct effects on visual quality. In the bit inverting method, inverting the *k*th bit of a signal, denoted by bit *k*, where the 0th bit is the least significant bit, changes the signal level by 2<sup>*k*</sup>. Hence, the visual distortion caused by the level change increases with an increase in *k*. On the other hand, when a bit *k* represents a watermark bit, an attacker must search some range of *k*'s for the correct value to get the bit *k*. With increasing *k*, the range to be searched becomes wider, and accordingly, the tolerance to unauthorized access can increase. Thus, determining appropriate values of *k* is very important to watermarks involving the bit inverting. Also, extending the range of *k* and preserving visual quality are contradictory subjects.

It is desirable to determine the values of *k* automatically for given source images so that the watermarking can be carried out efficiently. To choose an appropriate bit *k* to be inverted in terms of visual quality, a human perceptual model is necessary. A perceptual model is a relationship, grounded in the human visual system, between quantities representing objective qualities and the subjective qualities perceived by human viewers (Awrangjeb & Kankanhalli (2004)). In other words, such a perceptual model is a function of objective quality measures that produces a subjective quality measurement as an estimate of human subjective quality. Various kinds of quantities have been proposed for the measurement of objective quality (Wang et al. (2004)). By adaptively determining the embedding parameters, such as the values of *k*, by means of the human perceptual model, the watermarking scheme can perform as a *perceptually adaptive system* (Cox et al. (2002)).

In the next section, the function of inverting signal bits is discussed. A method for inverting a signal bit while keeping the resultant level change minimal is presented (Kimoto (2005); Kimoto (2007)). In this method, the inversion of the *k*th bit (*k* ≥ 1), where the 0th bit is the least significant bit, changes the signal level by at most 2<sup>*k*−1</sup> for any input level. Also, randomly varying signals are added to the transformation outputs so as to improve signal quality (Kimoto (2006)). Both the function of inverting bits and that of randomizing levels are given as a single transformation. The performance of the transformation is analyzed in detail and also demonstrated by some experiments.

The next section treats some subjects regarding the implementation of the bit inverting transformation. First, the transformation domains are considered; a principle for defining domains of the transformation in the input dynamic range under limitations on level changes is formulated. Also, a method for dividing the input dynamic range into a union of transformation domains so that the blind watermarking can be achieved is described. Next, a scheme for embedding watermark bits in every image block using the transformation is presented with some experimental results.

In the next section, the subject of determining the *k*th bit to be inverted (equivalently, the value of *k*) is discussed; a scheme for implementing the bit inverting transformation to embed watermark bits so that a required subjective visual quality is achieved on the resulting image is developed (Kimoto & Kosaka (2010)). To derive an appropriate perceptual model for images watermarked by the bit inverting transformation, objective quality measures are first defined based on the properties of the transformation. Then, a subjective visual quality measure based on the objective quality measures is established through subjective evaluations by human observers. A perceptually adaptive image watermarking scheme using the perceptual model is presented. This scheme aims at embedding watermark bits in every pixel over the whole source image, in contrast with the block watermarking scheme described in the preceding section. The performance of the scheme is demonstrated by some experiments.

#### 2. Bit inverting transformation

#### 2.1 Inverting signal bits

Inverting a single bit of a signal can be expressed by a level transformation. Signal levels are supposed to be uniformly quantized to *M* bits and expressed from the *M*-bit sequence 0 ··· 0 to 1 ··· 1 in natural binary. Each bit in the binary expression is denoted by bit *k* for *k* = 0, 1, . . . , *M* − 1, where bit 0 is the least significant bit. For a given level *v* ∈ [ 0, 2<sup>*M*</sup> − 1 ] and a specified value of *k* ∈ [ 0, *M* − 1 ], inverting bit *k* of *v* transforms *v* to another level *v*′; this transformation is described as a function *tk* of *v* in the relation

$$v' = t_k(v) = \begin{cases} v + 2^k, & \text{if } n = 2m \\ v - 2^k, & \text{if } n = 2m + 1 \end{cases} \tag{1}$$

where, letting ⌊*x*⌋ denote the integral part of the real number *x*, *n* = ⌊*v*/2<sup>*k*</sup>⌋, and *m* takes integer values in the interval [ 0, 2<sup>*M*−*k*−1</sup> − 1 ]. Each input level thus either increases or decreases by exactly 2<sup>*k*</sup>.
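Eq. (1) can be sketched directly. Adding 2<sup>*k*</sup> when bit *k* is 0 (even *n*) and subtracting it when bit *k* is 1 (odd *n*) is exactly an XOR with 2<sup>*k*</sup>, which gives a simple check; the function name `t_k` is illustrative only.

```python
def t_k(v, k):
    # Eq. (1): n = floor(v / 2**k) is even iff bit k of v is 0.
    # Add 2**k to set the bit, subtract 2**k to clear it.
    n = v >> k
    return v + (1 << k) if n % 2 == 0 else v - (1 << k)
```

Since the other bits are untouched, `t_k(v, k)` coincides with `v ^ (1 << k)` for every level, which confirms that only bit *k* changes and the level moves by exactly 2<sup>*k*</sup>.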

#### 2.2 Minimizing level changes


The amount of level change caused by an inverted bit can be minimized by altering the other bits of the signal appropriately. The principle of transforming the entire bit-pattern of a signal so as to achieve the smallest level change is illustrated in Fig. 1. Let *b*(*v*, *k*) denote the binary value of the bit *k* of a level *v*. Suppose that the *k*th bit of a level *v* is to be inverted. Then, the inversion is performed by changing the level *v*, that is, the entire bit-pattern, to one of those levels *v*′ whose bit *k* equals the 1's complement of *b*(*v*, *k*). Furthermore, the *v*′ that is closest to *v* can be chosen to achieve the smallest level change. For most *v*, such *v*′s exist on both sides of *v* in the dynamic range, as depicted in Fig. 1.

Fig. 1. Changing signal levels for inverting a signal bit: To invert bit *k* of *v*, *v* is changed to either *v*<sub>1</sub> or *v*<sub>2</sub>.

Both inverting a *k*th bit and minimizing the resultant level change are performed by a single level transformation. Let ∆*k* = 2<sup>*k*−1</sup>, where *k* now assumes values in the range 1, 2, . . . , *M* − 1. For a given *k*, the transformation of a signal level *v* can be expressed as a function *fk* of *v*, defined by

$$f_k(v) = \begin{cases} 2\Delta_k, & \text{if } n = 0 \\ 2m\Delta_k, & \text{if } n = 2m - 1 \\ 2m\Delta_k - 1, & \text{if } n = 2m \\ 2^M - 2\Delta_k - 1, & \text{if } n = 2^{M-k+1} - 1 \end{cases} \tag{2}$$

where *n* = ⌊*v*/∆*k*⌋ and *m* takes integer values in the interval [ 1, 2<sup>*M*−*k*</sup> − 1 ]. The function *fk* satisfies the relation

$$b(f_k(v), k) = \overline{b(v, k)}. \tag{3}$$
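Under the definitions above, Eq. (2) can be sketched as follows (the helper name `f_k` is illustrative; 1 ≤ *k* ≤ *M* − 1 is assumed). Per Eq. (3), the output always has bit *k* complemented, and outside the end-effect intervals the level change stays within ∆*k*.

```python
def f_k(v, k, M):
    # Eq. (2): invert bit k of an M-bit level v with a small level change.
    dk = 1 << (k - 1)                   # Delta_k = 2**(k-1)
    n = v // dk
    if n == 0:                          # lower end-effect interval
        return 2 * dk
    if n == (1 << (M - k + 1)) - 1:     # upper end-effect interval
        return (1 << M) - 2 * dk - 1
    m = (n + 1) // 2                    # n = 2m - 1 (odd) or n = 2m (even)
    return 2 * m * dk if n % 2 else 2 * m * dk - 1
```

For example, with *M* = 8 and *k* = 3 (∆*k* = 4), every level in [ 4, 247 ] moves by at most 4, while levels in the two end-effect intervals move further, matching the discussion of Fig. 3 below.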


For *k* = 0, the transformation *f*0(*v*) has the special relation

$$f_0(v) = \begin{cases} v + 1, & \text{if } v \text{ is even} \\ v - 1, & \text{if } v \text{ is odd.} \end{cases} \tag{4}$$

Figure 2 illustrates *fk*(*v*) over the *M*-bit dynamic range *v* ∈ [ 0, 2<sup>*M*</sup> − 1 ], compared with *tk*(*v*) of Eq. (1), depicted by the thick dashed lines, where *k* ≥ 1 (the box of thin dashed lines will be explained later). As shown in this figure, *fk*(*v*) exhibits an almost staircase relation between input and output levels.

Fig. 2. Transformations for inverting *k*th bits of *M*-bit levels (1 ≤ *k* ≤ *M* − 1): The bold line shows the transformation *fk*(*v*), and the thick dashed line, *tk*(*v*), where ∆*k* = 2<sup>*k*−1</sup>; the dashed-and-dotted line shows the identity transformation for reference.

The level difference resulting from the level transformation is also a function of the source level. Figure 3 shows the difference between the transformed level *v*out = *fk*(*v*in) and the source level *v*in over the entire source range, compared with the difference caused by the transformation *tk*. The absolute magnitude of the difference, | *fk*(*v*in) − *v*in|, varies in the range from 1 to ∆*k*, which is less than or equal to half of the difference caused by *tk*, for source levels *v*in in the interval [ ∆*k*, 2<sup>*M*</sup> − ∆*k* − 1 ]. On the contrary, for a level *v*in in the interval [ 0, ∆*k* − 1 ], the levels available for bit inversion exist only on the upper side of *v*in because of the end of the *M*-bit range. Accordingly, all the source levels in this interval are transformed to the smallest of the available levels, 2∆*k*. The resulting level differences consequently exceed ∆*k* in this interval. A similar end effect occurs for input levels in the interval [ 2<sup>*M*</sup> − ∆*k*, 2<sup>*M*</sup> − 1 ]. We refer to these two intervals as the *end-effect intervals*.

Fig. 3. Characteristics of level change caused by the transformations: The solid lines show *fk*(*v*in) − *v*in; the dashed lines show *tk*(*v*in) − *v*in.

#### 2.3 Modifying level transformation

The *M*-bit end effects can be removed by translating the coordinate system. Both axes of the coordinate system are translated by ∆*k* in the positive direction so as to avoid the lower end-effect interval [ 0, ∆*k* − 1 ]; that is, letting *gk* denote the translated function,

$$\begin{cases} v' = v - \Delta_k \\ g_k(v') = f_k(v) - \Delta_k \end{cases} \tag{5}$$

The translated coordinate system is depicted in Fig. 2 by the thin dashed lines. Accompanied with the coordinate translation, both of the upper bounds of the dynamic range with regard to *fk*(*v*) are removed. Eq. (5) directly leads to the relation

$$f_k(v' + \Delta_k) = g_k(v') + \Delta_k. \tag{6}$$

Using Eqs. (5) and (6) in Eq. (3), we obtain the relation

$$b(g_k(v') + \Delta_k,\ k) = \overline{b(v' + \Delta_k,\ k)}. \tag{7}$$

This equation indicates that the inversion of bit *k* holds between *v*′ + ∆*k* and *gk*(*v*′) + ∆*k*. By carrying out the above addition of ∆*k* modulo 2<sup>*M*</sup>, we can make the interval [ 2<sup>*M*</sup> − ∆*k*, 2<sup>*M*</sup> − 1 ] of *v*′ correspond to the interval [ 0, ∆*k* − 1 ] of *v*, and thus, the upper end-effect interval no longer exists in the translated coordinate system.

In the new coordinate system, the function *gk* is defined by a level transformation from an input level *v*in, that is,

$$g_k(v_{\rm in}) = \begin{cases} (2m+1)\Delta_k, & \text{if } n = 2m \\ (2m+1)\Delta_k - 1, & \text{if } n = 2m+1 \end{cases} \tag{8}$$

where *n* = ⌊*v*in/∆*k*⌋ takes values in [ 0, 2<sup>*M*−*k*+1</sup> − 1 ] and, as described later, *k* ≥ 1. Thus, this transformation maps 2<sup>*M*</sup> consecutive source levels into 2<sup>*M*−*k*+1</sup> discrete output levels.

Figure 4 shows the input-output relationship of the transformation *gk*. The transformation is no longer affected by the *M*-bit end effects. Hence, the level change has the absolute magnitude in the range [ 1, ∆*<sup>k</sup>* ] throughout the entire source range, as shown in Fig. 5. Thus, the function *gk* yields the smallest difference for each input level while achieving the bit *k* inversion in terms of Eq. (7). Note that it is the bit (*k* − 1) of *gk*(*v*in) that is actually inverted compared with *v*in, and the level change is not the smallest for each input level in the situation of bit (*k* − 1) inversion. This performance holds for 1 ≤ *k* ≤ *M* − 1. Hence, *gk* has been defined in this range of *k*.
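The translated transformation of Eq. (8) can be sketched as follows (the helper name `g_k` is illustrative). The sketch makes the two properties above checkable: the level change has magnitude in [ 1, ∆*k* ] for every input, and it is bit (*k* − 1) of the output that is complemented relative to the input.

```python
def g_k(v_in, k):
    # Eq. (8): Delta_k = 2**(k-1); n even -> next odd multiple of Delta_k,
    # n odd -> previous odd multiple of Delta_k minus... i.e. n*dk - 1.
    dk = 1 << (k - 1)
    n = v_in // dk
    return (n + 1) * dk if n % 2 == 0 else n * dk - 1
```

With *M* = 8 and *k* = 3, the output never leaves [ ∆*k* − 1, 2<sup>*M*</sup> − ∆*k* ], so no end-effect interval remains, in agreement with Fig. 5.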

(iii) The amount of level change is limited so as not to increase excessively. Taking into account that the largest change caused by *gk* is ∆*<sup>k</sup>* levels, we determine this limitation

Sophisticated Spatial Domain Watermarking by Bit Inverting Transformation 97

out can be obtained by adding an appropriate random level to *v*out. Generating

out from input level *v*in is performed using a single level transformation expressed in

out − *v*in| ≤ ∆*k*. (10)

*gk*(*v*in) <sup>−</sup> *<sup>r</sup>*, if *<sup>n</sup>* <sup>=</sup> <sup>2</sup>*<sup>m</sup>* <sup>+</sup> <sup>1</sup> , (11)

<sup>2</sup>(*<sup>m</sup>* <sup>+</sup> <sup>1</sup>)∆*<sup>k</sup>* <sup>−</sup> *<sup>v</sup>*in <sup>−</sup> 1, if *<sup>n</sup>* <sup>=</sup> <sup>2</sup>*<sup>m</sup>* <sup>+</sup> <sup>1</sup> . (12)

out = *hk* (*v*in) by showing the ranges

out = *hk*(*v*in) varies in a stochastic manner,

out − *v*in is then written as *d* = ∆*<sup>k</sup>* − *w* + *r* for

out = (*n* + 1) ∆*<sup>k</sup>* + *r*, where *r*

out and *v*in. Figure 7 illustrates the level differences

out

*hk*(*v*in) <sup>∈</sup> [ *<sup>v</sup>*in <sup>−</sup> <sup>∆</sup>*k*, *gk*(*v*in) ], if *<sup>n</sup>* is odd . (13)


*hk*(*v*in) = *gk*(*v*in) + *<sup>r</sup>*, if *<sup>n</sup>* <sup>=</sup> <sup>2</sup>*<sup>m</sup>*

*Rk*(*v*in) = *<sup>v</sup>*in <sup>−</sup> <sup>2</sup>*m*∆*k*, if *<sup>n</sup>* <sup>=</sup> <sup>2</sup>*<sup>m</sup>*

where *n* = *v*in/∆*<sup>k</sup>* and *r* has arbitrary levels in the range [ 0, *Rk*(*v*in) ] and *Rk*(*v*in) is a

Thus, *Rk*(*v*in) varies in the range [ 0, ∆*<sup>k</sup>* − 1 ]. The value of *r* is supposed to be determined within the range in a stochastic manner. Thereby, *hk* (*v*in) is a stochastic function whose output

*hk*(*v*in) <sup>∈</sup> [ *gk*(*v*in), *<sup>v</sup>*in <sup>+</sup> <sup>∆</sup>*<sup>k</sup>* ], if *<sup>n</sup>* is even

The number of levels composing the output range varies in [ 1, ∆*<sup>k</sup>* ]. Hence, the level

where the output levels vary by the shaded areas. As seen in the figure, the range of *v*

spreads all over the *M*-bit dynamic range, compared with that of *v*out = *gk*(*v*in). The

We evaluate the amount of level change caused by *hk* by stochastic analysis. In the analysis, we assume that *<sup>M</sup>*-bit levels in the dynamic range [ 0, 2*<sup>M</sup>* <sup>−</sup> <sup>1</sup> ] are uniformly distributed over the input signals. Also, the values of *r* in Eq. (11) are assumed to be equally likely in the range

For a given *k*, an input level *v*in is rewritten as *v*in = *n* ∆*<sup>k</sup>* + *w* using *w* in the case that *n* = *v*in/∆*<sup>k</sup>* is even (See Fig. 7). Note that *w* is identical with *Rk*(*v*in) in Eq. (12). By using

is randomly chosen in [ 0, *w* ], and hence, it is a stochastic variable of a uniform occurrence

out = *hk*(*v*in) is given by *v*

out − *v*in|, however, tends to get larger than its smallest value for each *v*in, that

out = *hk*(*v*in): The function *hk* is defined as

as

such *v*

The above *v*

the relation *v*

difference |*v*

function of *v*in, defined by

range depends on *v*in as follows:

randomization is effective for *k* ≥ 2.

2.5 Properties of level transformation

and so do the level differences between *v*

2.5.1 Amount of level change

assigned for each input level.

Eq. (8) in Eq. (11), the output level *v*

probability of 1/(*w* + 1). The difference *d* = *v*

for input levels *v*in.

Figure 6 illustrates the input-output relationship of *v*

According to the definition of *hk*, the output level *v*

is, |*v*out − *v*in|, according to the stochastic manner being used.

Fig. 4. Modified transformation function *gk*.

Fig. 5. Characteristics of level change caused by the transformation function *gk*.

#### 2.4 Varying transformed levels

The transformation *gk* maps 2*<sup>M</sup>* consecutive source levels into 2*M*−*k*+<sup>1</sup> sparse output levels. Furthermore, the output levels consist of pairs of consecutive levels (2*m* + 1)∆*<sup>k</sup>* − 1 and (2*m* + <sup>1</sup>)∆*<sup>k</sup>* for *<sup>m</sup>* <sup>=</sup> 0, 1, . . . , 2*M*−*<sup>k</sup>* <sup>−</sup> 1, as shown in Eq. (8). Hence, the resulting images look like coarsely quantized at *M* − *k* bits per pixel.

The transformation *gk* maps 2*<sup>M</sup>* consecutive source levels into 2<sup>*M*−*k*+1</sup> sparse output levels. Furthermore, the output levels consist of pairs of consecutive levels (2*m* + 1)∆*<sup>k</sup>* − 1 and (2*m* + 1)∆*<sup>k</sup>* for *m* = 0, 1, . . . , 2<sup>*M*−*k*</sup> − 1, as shown in Eq. (8). Hence, the resulting images look as if coarsely quantized at *M* − *k* bits per pixel.

Fig. 4. Modified transformation function *gk*.

Fig. 5. Characteristics of level change caused by the transformation function *gk*.

#### 2.4 Varying transformed levels

We extend the range of the transformation outputs within the *M*-bit dynamic range to improve the resulting image quality. In pulse code modulation (PCM), dithering signals have the effect of improving subjective image quality. We therefore consider distributing the transformed levels in the dynamic range by a stochastic process. A transformation from the outputs *v*out = *gk*(*v*in) further to *v*′out is developed so that all of the following three conditions are satisfied:

(i) The inversion of bit-*k* holds in terms of

$$b(v\_{\rm out}^{\prime} + \Delta\_k,\ k) = \overline{b(v\_{\rm in} + \Delta\_k,\ k)}.\tag{9}$$

(ii) The range of *v*′out extends to the whole of the *M*-bit dynamic range.

(iii) The amount of level change is limited so as not to increase excessively. Taking into account that the largest change caused by *gk* is ∆*<sup>k</sup>* levels, we determine this limitation as

$$|v'\_{\rm out} - v\_{\rm in}| \le \Delta\_k.\tag{10}$$

The above *v*′out can be obtained by adding an appropriate random level to *v*out. Generating such a *v*′out from the input level *v*in is performed using a single level transformation expressed by the relation *v*′out = *hk*(*v*in). The function *hk* is defined as

$$h\_k(v\_{\rm in}) = \begin{cases} g\_k(v\_{\rm in}) + r, & \text{if } n = 2m \\ g\_k(v\_{\rm in}) - r, & \text{if } n = 2m + 1 \end{cases} \tag{11}$$

where *n* = ⌊*v*in/∆*<sup>k</sup>*⌋ and *r* takes an arbitrary level in the range [ 0, *Rk*(*v*in) ], where *Rk*(*v*in) is a function of *v*in defined by

$$R\_k(v\_{\rm in}) = \begin{cases} v\_{\rm in} - 2m\Delta\_k & \text{if } n = 2m\\ 2(m+1)\Delta\_k - v\_{\rm in} - 1 & \text{if } n = 2m+1 \end{cases}.\tag{12}$$

Thus, *Rk*(*v*in) varies in the range [ 0, ∆*<sup>k</sup>* − 1 ]. The value of *r* is supposed to be determined within the range in a stochastic manner. Thereby, *hk* (*v*in) is a stochastic function whose output range depends on *v*in as follows:

$$\begin{cases} h\_k(v\_{\text{in}}) \in \left[\, g\_k(v\_{\text{in}}),\ v\_{\text{in}} + \Delta\_k \,\right], & \text{if } n \text{ is even} \\ h\_k(v\_{\text{in}}) \in \left[\, v\_{\text{in}} - \Delta\_k,\ g\_k(v\_{\text{in}}) \,\right], & \text{if } n \text{ is odd} \end{cases} \tag{13}$$

The number of levels composing the output range varies in [ 1, ∆*<sup>k</sup>* ]. Hence, the level randomization is effective for *k* ≥ 2.

Figure 6 illustrates the input-output relationship of *v*′out = *hk*(*v*in) by showing as shaded areas the ranges over which the output levels vary. As seen in the figure, the range of *v*′out spreads over the whole *M*-bit dynamic range, compared with that of *v*out = *gk*(*v*in). The difference |*v*′out − *v*in|, however, tends to be larger than its smallest value for each *v*in, that is, |*v*out − *v*in|, according to the stochastic manner being used.
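As a concrete sketch, the pair *gk*/*hk* can be written in Python. The closed form of `g_k` below is reconstructed from the pair-mapping description following Eq. (8), ∆*<sup>k</sup>* = 2<sup>*k*−1</sup> is assumed, and the function names are ours, not the authors':

```python
import random

def g_k(v, k):
    """Deterministic bit-k inverting map: every input interval of 2*dk
    consecutive levels is sent onto the level pair (2m+1)*dk - 1, (2m+1)*dk."""
    dk = 1 << (k - 1)                      # delta_k = 2^(k-1)
    n, m = v // dk, v // (2 * dk)
    return (2 * m + 1) * dk if n % 2 == 0 else (2 * m + 1) * dk - 1

def h_k(v, k, rng=random):
    """Stochastic variant per Eqs. (11)-(12): offsets g_k(v) by a random
    r in [0, R_k(v)] so the outputs spread over the whole dynamic range."""
    dk = 1 << (k - 1)
    n, m = v // dk, v // (2 * dk)
    if n % 2 == 0:
        return g_k(v, k) + rng.randint(0, v - 2 * m * dk)          # R_k = v - 2m*dk
    return g_k(v, k) - rng.randint(0, 2 * (m + 1) * dk - v - 1)    # R_k = 2(m+1)*dk - v - 1
```

For every 8-bit level *v*, this keeps |*hk*(*v*) − *v*| ≤ ∆*<sup>k</sup>* while inverting the bit *b*(*v* + ∆*<sup>k</sup>*, *k*), which can be verified exhaustively.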

#### 2.5 Properties of level transformation

#### 2.5.1 Amount of level change

According to the definition of *hk*, the output level *v*′out = *hk*(*v*in) varies in a stochastic manner, and so do the level differences between *v*′out and *v*in. Figure 7 illustrates the level differences for input levels *v*in.

We evaluate the amount of level change caused by *hk* by stochastic analysis. In the analysis, we assume that *<sup>M</sup>*-bit levels in the dynamic range [ 0, 2*<sup>M</sup>* <sup>−</sup> <sup>1</sup> ] are uniformly distributed over the input signals. Also, the values of *r* in Eq. (11) are assumed to be equally likely in the range assigned for each input level.

Fig. 6. Transformation function with level randomization, *hk*: The thick solid lines show the input-output relationship of *gk* for reference.

Fig. 7. Characteristics of level change caused by the transformation function *hk*: The thick solid lines show the level change caused by *gk* for reference.

For a given *k*, an input level *v*in is rewritten as *v*in = *n* ∆*<sup>k</sup>* + *w*, with 0 ≤ *w* < ∆*<sup>k</sup>*, in the case that *n* = ⌊*v*in/∆*<sup>k</sup>*⌋ is even (see Fig. 7). Note that *w* is identical with *Rk*(*v*in) in Eq. (12). By using Eq. (8) in Eq. (11), the output level *v*′out = *hk*(*v*in) is given by *v*′out = (*n* + 1) ∆*<sup>k</sup>* + *r*, where *r* is randomly chosen in [ 0, *w* ], and hence, it is a stochastic variable with a uniform occurrence probability of 1/(*w* + 1). The difference *d* = *v*′out − *v*in is then written as *d* = ∆*<sup>k</sup>* − *w* + *r* for a certain *r*. Thus, *d* is also a stochastic variable ranging in [ ∆*<sup>k</sup>* − *w*, ∆*<sup>k</sup>* ] for each *w*. Then, the expected values of the squared *d* are summed over the range of *w* from 0 to ∆*<sup>k</sup>* − 1 in the form

$$\sum\_{w=0}^{\Delta\_k - 1} \left( \sum\_{d'=\Delta\_k - w}^{\Delta\_k} d'^2 \cdot \frac{1}{w+1} \right) . \tag{14}$$

The same value of sum is obtained in the case that *n* is odd. Consequently, averaging the above sum over ∆*<sup>k</sup>* levels yields the mean of the squared level differences over the entire dynamic range, denoted by *E*H(*k*), on the above assumptions. As a result, we have

$$E\_{\rm H}(k) = \frac{1}{36} \left( 22\,\Delta\_k^2 + 15\,\Delta\_k - 1 \right). \tag{15}$$
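Eq. (15) can be cross-checked by evaluating Eq. (14) exactly with rational arithmetic; the sketch below assumes ∆*<sup>k</sup>* = 2<sup>*k*−1</sup>:

```python
from fractions import Fraction

def msd_h_enumerated(k):
    """Average of E[d^2] over the delta_k offsets w, per Eq. (14)."""
    dk = 2 ** (k - 1)
    total = Fraction(0)
    for w in range(dk):
        # for offset w, d is uniform on [dk - w, dk], probability 1/(w+1) each
        total += Fraction(sum(d * d for d in range(dk - w, dk + 1)), w + 1)
    return total / dk

def msd_h_closed(k):
    """Closed form E_H(k) of Eq. (15)."""
    dk = 2 ** (k - 1)
    return Fraction(22 * dk * dk + 15 * dk - 1, 36)
```

Both agree exactly for every *k*; for example *E*H(3) = 137/12 ≈ 11.4, matching Table 1.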

#### 2.5.2 Change of level occurrences

The difference in width among the ranges in which the values *v*′out = *hk*(*v*in) vary for a given *v*in makes the occurrence probabilities of the output levels unequal. We have analyzed the occurrence frequencies of *v*′out under both the assumption that the *M*-bit input levels are uniformly distributed in the dynamic range and the assumption that the output levels for a given input level *v*in are equally probable in the range of *hk*(*v*in). Suppose that any input level has a frequency of 1. Then, for an output level *v*′out ∈ [ (2*m* + 1)∆*<sup>k</sup>*, 2(*m* + 1)∆*<sup>k</sup>* ), by expressing *v*′out as *v*′out = (2*m* + 1)∆*<sup>k</sup>* + *w* (0 ≤ *w* < ∆*<sup>k</sup>*), the frequency of *v*′out is given as $\sum\_{p=w+1}^{\Delta\_k} 1/p$. A similar analysis is obtained for the interval [ 2*m*∆*<sup>k</sup>*, (2*m* + 1)∆*<sup>k</sup>* ) of *v*′out. Hence, for input levels of the uniform frequency distribution, the output levels of *hk* have the frequency distribution illustrated in Fig. 8.

Fig. 8. Occurrence distribution of output levels by the transformation function *hk*.
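The frequency analysis can be reproduced numerically. The sketch below accumulates, for every input level of frequency 1, the probability mass that *hk* places on each output level, and recovers the harmonic-sum frequencies stated above; ∆*<sup>k</sup>* = 2<sup>*k*−1</sup> and an 8-bit dynamic range are assumed:

```python
from fractions import Fraction

def output_frequencies(k, M=8):
    """Expected occurrence count of each output level of h_k when every
    M-bit input level occurs once and r is uniform on [0, R_k(v)]."""
    dk = 2 ** (k - 1)
    freq = [Fraction(0)] * (1 << M)
    for v in range(1 << M):
        n, m = v // dk, v // (2 * dk)
        if n % 2 == 0:                      # outputs (2m+1)dk .. (2m+1)dk + R
            base, R = (2 * m + 1) * dk, v - 2 * m * dk
            outputs = range(base, base + R + 1)
        else:                               # outputs (2m+1)dk-1-R .. (2m+1)dk-1
            base, R = (2 * m + 1) * dk - 1, 2 * (m + 1) * dk - v - 1
            outputs = range(base - R, base + 1)
        for u in outputs:                   # each output equally probable
            freq[u] += Fraction(1, R + 1)
    return freq
```

The frequency of the output level (2*m* + 1)∆*<sup>k</sup>* + *w* then equals $\sum\_{p=w+1}^{\Delta\_k} 1/p$, and the total mass stays 2*<sup>M</sup>*.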

Although the output range of *hk* spreads over the entire *M*-bit range, in every pair of adjacent ranges of ∆*<sup>k</sup>*-level width the order of the two ranges is reversed through the mapping, as shown in Fig. 9. These reversed ranges result in distortions on the picture surface.

#### 2.6 Evaluation of level transformation

#### 2.6.1 Amount of level change

| MSD | *k* = 2 | *k* = 3 | *k* = 4 | *k* = 5 |
|---|---|---|---|---|
| *E*T(*k*) | 16 | 64 | 256 | 1024 |
| *E*G(*k*) | 2.5 | 7.5 | 25.5 | 93.5 |
| *E*H(*k*) | 3.3 | 11.4 | 42.4 | 163.1 |
| *E*L(*k*) | 4.5 | 18.5 | 74.5 | 298.5 |

Table 1. Expected values of mean squared difference.

Fig. 9. Correspondence of *hk* between input and output levels.

We evaluate the amount of level change caused by *hk* in terms of the mean squared level difference (MSD). For comparison with the MSD values of *hk* in Eq. (15), the expected MSD values of the other transformations are derived below on the assumption that the 2*<sup>M</sup>* levels are uniformly distributed over the *M*-bit input signals. The level change caused by the function *gk* can be evaluated similarly to that caused by *hk*. For a given *k*, the mean squared level difference for the signals transformed by *gk*, denoted by *E*G(*k*), is given by

$$E\_{\mathcal{G}}(k) = \frac{1}{6}(2\,\Delta\_k + 1)(\Delta\_k + 1). \tag{16}$$

With regard to the function *tk*, for a given *k*, the magnitude of level change caused by the transformation is 2∆*<sup>k</sup>* for any input level. Then, the mean squared level difference, denoted by *E*T(*k*), is obviously $4\Delta\_k^2$.

For comparison, we consider another method for extending the output levels of *gk* within the dynamic range in a stochastic manner; that is, for a given *k*, all the lowest (*k* − 1) bits of *gk*(*v*in) are replaced with random ones for any *v*in. We express *gk*(*v*in) and the following bit replacing operation together as a single function of *v*in, *h*′*k*(*v*in). The value of *h*′*k*(*v*in) is a random variable in either range [ *gk*(*v*in), *gk*(*v*in) + ∆*<sup>k</sup>* − 1 ] or [ *gk*(*v*in) − ∆*<sup>k</sup>* + 1, *gk*(*v*in) ], determined from the input interval including *v*in. The whole range of *h*′*k* coincides with the *M*-bit dynamic range. Also, on the assumption that input levels have a uniform frequency distribution in the dynamic range, the frequency distribution of the output levels becomes uniform. As a disadvantage, *h*′*k* tends to increase the resulting level change; the upper bound of the level difference |*h*′*k*(*v*in) − *v*in| depends on *v*in and varies in the range [ ∆*<sup>k</sup>*, 2∆*<sup>k</sup>* ). The expected value of MSD is given, on the assumption that the output levels for each input level are equally probable among ∆*<sup>k</sup>* levels, by

$$E\_{\mathcal{L}}(k) = \frac{1}{6} (7\,\Delta\_k^2 - 1). \tag{17}$$

Table 1 lists the MSD values of each transformation for *k* = 2, 3, 4 and 5. In comparison for a given *k*, *gk* substantially reduces the MSD from that of *tk*; the ratio *E*G/*E*<sup>T</sup> is, for example, 0.12 for *k* = 3, and approaches approximately 1/12 for large *k*'s. The transformation *hk* increases the MSD from that of *gk* due to the level randomizing; the ratio *E*H/*E*<sup>G</sup> is 1.52 for *k* = 3, and about 11/6 for large *k*'s. On the other hand, the MSD of *hk* is smaller than that of *h*′*k*; we find the ratio *E*H/*E*<sup>L</sup> to be 0.62 for *k* = 3 and about 11/21 for large *k*'s.
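The closed forms behind Table 1 (*E*T = $4\Delta\_k^2$ and Eqs. (15)-(17)) reproduce its entries and the quoted ratios directly; a sketch:

```python
def expected_msd(k):
    """Expected MSD of t_k, g_k, h_k and h'_k for delta_k = 2^(k-1)."""
    dk = 2 ** (k - 1)
    E_T = 4 * dk * dk                            # t_k: constant change 2*dk
    E_G = (2 * dk + 1) * (dk + 1) / 6            # Eq. (16)
    E_H = (22 * dk * dk + 15 * dk - 1) / 36      # Eq. (15)
    E_L = (7 * dk * dk - 1) / 6                  # Eq. (17)
    return E_T, E_G, E_H, E_L
```

For *k* = 3 this yields (64, 7.5, ≈11.4, 18.5), giving *E*G/*E*T ≈ 0.12, *E*H/*E*G ≈ 1.52 and *E*H/*E*L ≈ 0.62 as stated above.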

#### 2.6.2 Experiments of transformation

To visualize the input-output relationship, we have applied each of the transformations *tk*, *gk*, *hk* and *h*′*k* to an 8-bit grayscale ramp image in which the pixel level increases by one per pixel from 0 (black) to 255 (white) along the gradation. Figure 10 shows the transformed results with *k* = 5. Also, Fig. 11 illustrates a one-dimensional pixel sequence along the ramp of Fig. 10(d),
showing an example of the stochastic output levels of *hk*. In accordance with Fig. 2, although the result of *t*<sup>5</sup> has all levels in the dynamic range, there are level gaps every 32 (= 2<sup>5</sup>) levels, as observed in Fig. 10(b). In the gradation transformed by *g*<sup>5</sup> of Fig. 10(c), we may perceive eight *steps* while the result actually has 16 different levels.

Fig. 10(d) shows the result of adding random signals to Fig. 10(c) by *h*<sup>5</sup>, and it has all levels in the dynamic range. For *v*in = (2*m* + 1)∆*<sup>k</sup>* − 1, the absolute difference between *gk*(*v*in) and *gk*(*v*in + 1) is one level, but the absolute difference between *hk*(*v*in) and *hk*(*v*in + 1), in contrast, can reach at most 2∆*<sup>k</sup>* − 1, as shown in Fig. 11. Hence, the visible steps in Fig. 10(c) are divided into halves in Fig. 10(d). The result of *h*′*k* in Fig. 10(e) looks similar to that of *hk* (Fig. 10(d)), though the level frequency distributions in the dynamic range are different. Note that an extremely large value of *k* has been used in Fig. 10 so that the input-output relationships become evident.

Fig. 10. Experimental results (*k* = 5): (a) Source (8-bit grayscale ramp); (b), (c), (d) and (e) transformed images by *tk*, *gk*, *hk* and *h*′*k*, respectively. All images are printed at 200 pels/inch.

We have also conducted experiments of each transformation on 8-bit monochrome test images. All the pixels in an input image were transformed to evaluate the level changes in terms of visual quality. Figure 12 shows results of the transformations for one of test images, *Lena*. We can observe a distortion pattern similar to that in Fig. 10, in particular, in smooth image areas such as the *shoulder* in each image of Fig. 12. Such distortions can be referred to as *false contours* after those caused by coarsely quantizing pixel levels.

The amount of level change from the source image has been estimated as an MSD value for each transformed image. The MSD between a source image *f* and the transformed image *f*′ is generally defined by

$$\text{MSD} = \frac{1}{MN} \sum\_{i=1}^{M} \sum\_{j=1}^{N} \left\{ f'(i,j) - f(i,j) \right\}^2 \tag{18}$$

where *f*(*i*, *j*) and *f*′(*i*, *j*) are the respective pixel levels in *f* and *f*′ at the coordinates (*i*, *j*), for *i* = 1, 2, . . . , *M* and *j* = 1, 2, . . . , *N*, supposing that both *f* and *f*′ are *M* × *N* images.

Fig. 11. An example of the transformed 256-grayscale ramp by *hk*: levels varying along a horizontal line in Fig. 10(d).

Fig. 12. Experimental result (*k* = 4): (a) zoomed source (128 × 128-pixel part of *Lena*); (b), (c), (d) and (e) transformed images by *tk*, *gk*, *hk* and *h*′*k*, respectively.

Table 2 lists the measured MSD values for two test images. Although grayscale levels have a nonuniform frequency distribution in each input image, the MSD values measured for each transformed image almost agree with the expected values listed in Table 1. Accordingly, the expected values of MSD can generally be used to estimate the actual amount of level change.

(a) Test image *Lena*

| Transformation | *k* = 2 | *k* = 3 | *k* = 4 | *k* = 5 |
|---|---|---|---|---|
| *gk* | 2.5 | 7.5 | 26.1 | 91.4 |
| *hk* | 3.3 | 11.4 | 42.8 | 161.5 |
| *h*′*k* | 4.5 | 18.5 | 75.7 | 294.7 |

(b) Test image *Peppers*

| Transformation | *k* = 2 | *k* = 3 | *k* = 4 | *k* = 5 |
|---|---|---|---|---|
| *gk* | 2.5 | 7.5 | 25.9 | 92.5 |
| *hk* | 3.2 | 11.4 | 42.7 | 162.3 |
| *h*′*k* | 4.5 | 18.5 | 75.2 | 296.9 |

Table 2. Measured values of mean squared difference.
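Eq. (18) translates directly into code; a minimal sketch for grayscale images stored as 2-D lists of pixel levels:

```python
def msd(f, g):
    """Mean squared difference (Eq. (18)) between a source image f and a
    transformed image g, both M x N 2-D lists of pixel levels."""
    M, N = len(f), len(f[0])
    return sum((g[i][j] - f[i][j]) ** 2
               for i in range(M) for j in range(N)) / (M * N)
```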

#### 3. Implementation of level transformation

#### 3.1 Limitations on level changes


It is necessary to make the level changes caused by a transformation imperceptible, both for keeping the embedded watermarks secret and for preserving the image quality. Here, we assume that the range within which a pixel level can be changed without being perceived depends only on the pixel itself. Then, let us express the upper limit of the range for a source level *v* as a function only of *v*, *A*(*v*), supposing that *A*(*v*) ≥ 0; that is, a source level *v* is allowed to change within the range [ *v* − *A*(*v*), *v* + *A*(*v*) ].

On the above assumption, for a pixel of level *v*, if the amount of level change caused by a transformation is to be under *A*(*v*), the transformation can be actually applied to the pixel. We use the function *hk* as the level transformation for watermarking in the rest of this chapter, and let *dk*(*v*) = *hk*(*v*) − *v*. As described in Sec. 2.5.1, for a given *v*, *dk*(*v*) varies in the range that depends on *v*, and the upper bound of |*dk*(*v*)| is fixed to ∆*<sup>k</sup>* for any *v*. Accordingly, we regard |*dk*(*v*)| as the constant ∆*<sup>k</sup>* to determine if the transformation *hk* can be applied to a pixel or not. That is, given the function *A*(*v*), for a pixel of level *v*, if ∆*<sup>k</sup>* ≤ *A*(*v*) with a certain *k*, then the level can be changed by *hk*(*v*). Otherwise, the level is left unchanged. The function that keeps *v* unchanged is expressed by the identity transformation, denoted by *fI*(*v*) such that *fI*(*v*) = *v*.
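The applicability rule can be sketched as follows; the bound *A*(*v*) is application-dependent, so the flat example bound in the test is only illustrative:

```python
def can_apply_h(v, k, A):
    """h_k may be applied to a pixel of level v only when its worst-case
    level change, delta_k = 2^(k-1), stays within the perceptual bound A(v)."""
    return 2 ** (k - 1) <= A(v)

def transform_pixel(v, k, A, h_k):
    """Apply h_k where allowed; otherwise fall back to the identity f_I."""
    return h_k(v, k) if can_apply_h(v, k, A) else v
```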

#### 3.2 Transformation domains

Given the bounding function *A* of an input level and a value of *k*, according to the scheme described in the preceding section, each input level *v*in of the dynamic range is classified into two categories: one that *hk* can be applied to, and the other that *fI* is to be applied to.

bit 0 bit 1

(least significant)

3.3.2 Watermarking procedures

must be kept secret.

(1) Embedding procedure

*Step* 3: Collect the bits *b*(*v*

parity value, *y*, of *Q*(*i*)

so on. For the value *ki* assigned to *P*(*i*)

Proceed to the next pixel-block.

(*i*) *<sup>j</sup>* + ∆*ki*

*ki* .

Fig. 13. Data structure for bit-block watermarking.

is expressed as a sequence of *NB* pixels, that is, *P*(*i*) =

bit *k*

bit (*M*-1) (most significant) Pixel Pixel block

Sophisticated Spatial Domain Watermarking by Bit Inverting Transformation 105

The procedures for watermarking in the data structure of bit-blocks by using the function *hk* are described below. Here, suppose that the domains of *hk* in the *M*-bit dynamic range are given for each value of *k*. Suppose also that a value of *k* is assigned to each pixel-block of a source image *F*. A set of these values of *k* is associated with the watermarked image and it

Each pixel-block is processed in order. Let *P*(*i*) be the *i*th pixel-block being processed, which

*Step* 2: Compare *ne* with a specified threshold *NE* (1 ≤ *NE* ≤ *NB*). The result determines how

, *ki*) of the pixel *p*

*<sup>j</sup>* <sup>=</sup> 1, 2, . . . , *ne*. Let the resulting bits compose a bit-block, *<sup>Q</sup>*(*i*)

(a) If *ne* ≥ *NE*, we use this block to represent a new watermark bit. Proceed to *Step* 3. (b) If *ne* < *NE*, we skip this block without using it to represent any watermark bit.

> (*i*) *<sup>j</sup>* <sup>∈</sup> *<sup>P</sup>*(*i*)

*Step* 1: Extract from *<sup>P</sup>*(*i*) those pixels whose levels belong to the domains of *hki*

set of these pixels and *ne* be the number of the pixels.

to treat the block with a watermark bit as follows:

*p* (*i*) <sup>1</sup> , *p* (*i*) <sup>2</sup> , ..., *p*

, the block is processed by the following procedure:

*<sup>e</sup>* , where *v*

(*i*)

*<sup>j</sup>* is a level of *p*

*ki* . Then, calculate the

Bit block *k*

(*i*) *NB* 

, for *i* = 1, 2 and

. Let *P*(*i*)

*<sup>e</sup>* be a

(*i*) *<sup>j</sup>* , for

*M*-bit image

Consecutive input levels of the same category, then, are collected to compose domains of each transformation. The entire dynamic range of input levels, denoted by *UM* = [ 0, 2*<sup>M</sup>* <sup>−</sup> <sup>1</sup> ], is consequently expressed as a disjoint union of one or more domains. Let *U*(*k*) <sup>1</sup> , *<sup>U</sup>*(*k*) <sup>2</sup> , ..., *<sup>U</sup>*(*k*) • (*k*) be a sequence of these domains in order, where • (*k*) is the number of domains for the *k*. Note that the domains of *hk* and those of *fI* alternate in the sequence.

Next, we modify the domains in *UM* so that a blind watermark can be achieve, that is, so that a watermark can be recovered from the transformed image without referring to its source image. Our approach is to make the transformation output ranges disjoint to each other. Let *V*(*k*) *<sup>i</sup>* be the range onto which a domain *<sup>U</sup>*(*k*) *<sup>i</sup>* is mapped, for *i* = 1, 2, . . . , • (*k*). If these ranges are disjoint mutually, for any output level *<sup>v</sup>*out, the range containing it, say, *<sup>V</sup>*(*k*) *<sup>j</sup>* , is determined uniquely. This range directly indicates not only the corresponding domain, *U*(*k*) *<sup>j</sup>* , but also the kind of transformation that is to be applied to the domain. Consequently, it is found whether a pixel is one of those pixels that *hk* can be applied to or not. Furthermore, with regard to domains of *hk* and the corresponding regions, say, *<sup>U</sup>*(*k*) *<sup>j</sup>* and *<sup>V</sup>*(*k*) *<sup>j</sup>* , if that region to which *<sup>U</sup>*(*k*) *j* can be mapped by *fI* is included in *<sup>V</sup>*(*k*) *<sup>j</sup>* , that is,

$$\{ f_I(v) \mid v \in U_j^{(k)} \} \subseteq V_j^{(k)}, \tag{19}$$

then, for pixels of source level *v*<sub>in</sub> ∈ *U*(*k*)<sub>*j*</sub>, we can use either *hk* to have the bit *k* inverted or *fI* to keep the bit *k* unchanged.

Based on the input-output relationship of *hk*, the domains that have the disjoint relationship among the corresponding regions are defined every 2∆*k* levels in the dynamic range; that is, the boundaries of each domain must be located at the levels 2*m*∆*k* (0 ≤ *m* ≤ 2*<sup>M</sup>*<sup>−</sup>*<sup>k</sup>*). Such domains also satisfy Eq. (19).

#### 3.3 Bit-block watermarking

#### 3.3.1 Data structure for watermarks

As a data structure for representing watermarks, we use bit-planes of a monochrome image or a color component image composed of *M*-bit pixels, which was also used by Oka & Matsui (1997). As depicted in Fig. 13, a source image is first divided into pixel-blocks, each consisting of *NB* pixels. More generally, these blocks can have arbitrary shapes rather than rectangles, and *NB* can even vary within the same image. Each pixel-block is then regarded as a hierarchy of *M* bit-blocks. Suppose that a pixel is composed of *M* bits, from bit (*M* − 1) down to bit 0, that represent its signal level in natural binary expression. A bit-block *k* is the set of bit *k* of every pixel in the pixel-block, for *k* = 0, 1, . . . , *M* − 1.

A watermark bit is represented by a parity value in one of the bit-blocks. Here, the parity value of a set of bits is defined by the sum of the bits in modulo 2. Suppose that to achieve the watermarking, one bit in each bit-block is to be inverted, if necessary, so that the resultant parity value can agree with the watermark bit. The details of the watermarking procedures will be described in the next section.
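The bit-block data structure and the parity rule above can be sketched in a few lines of Python; the helper names (`bit_blocks`, `parity`) and the sample block are ours, not the chapter's:

```python
# Sketch of the bit-block data structure: a pixel-block of N_B pixels viewed
# as a hierarchy of M bit-blocks, plus the parity that carries one watermark
# bit (illustrative code).

def bit_blocks(pixel_block, M=8):
    """Split a pixel-block into its M bit-blocks: bit-block k holds bit k
    of every pixel, for k = 0, 1, ..., M - 1."""
    return {k: [(p >> k) & 1 for p in pixel_block] for k in range(M)}

def parity(bits):
    """Parity value of a set of bits: their sum in modulo 2."""
    return sum(bits) % 2

blocks = bit_blocks([52, 55, 61, 48])   # a 4-pixel block of 8-bit levels
assert blocks[2] == [1, 1, 1, 0]        # bit-block 2 of this block
assert parity(blocks[2]) == 1           # the value a watermark bit would match
```

Inverting bit *k* of any one pixel flips the parity of bit-block *k*, which is exactly the degree of freedom the embedding procedure exploits.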

Fig. 13. Data structure for bit-block watermarking.

#### 3.3.2 Watermarking procedures


The procedures for watermarking in the data structure of bit-blocks by using the function *hk* are described below. Here, suppose that the domains of *hk* in the *M*-bit dynamic range are given for each value of *k*. Suppose also that a value of *k* is assigned to each pixel-block of a source image *F*. A set of these values of *k* is associated with the watermarked image and it must be kept secret.

#### (1) Embedding procedure

Each pixel-block is processed in order. Let *P*(*i*) be the *i*th pixel-block being processed, expressed as a sequence of *NB* pixels, that is, *P*(*i*) = *p*(*i*)<sub>1</sub>, *p*(*i*)<sub>2</sub>, ..., *p*(*i*)<sub>*NB*</sub>, for *i* = 1, 2 and so on. For the value *ki* assigned to *P*(*i*), the block is processed by the following procedure:

*Step* 1: Extract from *P*(*i*) those pixels whose levels belong to the domains of *hki*. Let *P*(*i*)<sub>*e*</sub> be the set of these pixels and *ne* be the number of the pixels.

*Step* 2: Compare *ne* with a specified threshold *NE*.

(a) If *ne* ≥ *NE*, we use this block to represent a new watermark bit. Proceed to *Step* 3.

(b) If *ne* < *NE*, we skip this block without using it to represent any watermark bit. Proceed to the next pixel-block.

*Step* 3: Collect the bits *b*(*v*(*i*)<sub>*j*</sub> + ∆*ki*, *ki*) of the pixels *p*(*i*)<sub>*j*</sub> ∈ *P*(*i*)<sub>*e*</sub>, where *v*(*i*)<sub>*j*</sub> is the level of *p*(*i*)<sub>*j*</sub>, for *j* = 1, 2, . . . , *ne*. Let the resulting bits compose a bit-block, *Q*(*i*)<sub>*ki*</sub>. Then, calculate the parity value, *y*, of *Q*(*i*)<sub>*ki*</sub>.

Sophisticated Spatial Domain Watermarking by Bit Inverting Transformation 107

*Step* 4: Fetch a new watermark bit, *w*, and compare it with *y*.

(a) If *w* ≠ *y*, choose one pixel from *P*(*i*)<sub>*e*</sub> and apply *hki* to it.

(b) Otherwise, no operation is done to *P*(*i*)<sub>*e*</sub>.

Proceed to the next pixel-block.

After all the pixel-blocks of *F* are processed, the watermarked image *F*′ is obtained.

#### (2) Extracting procedure

Let *P*′(*i*) denote the *i*th pixel-block of *F*′ in the form *P*′(*i*) = *p*′(*i*)<sub>1</sub>, *p*′(*i*)<sub>2</sub>, ..., *p*′(*i*)<sub>*NB*</sub>. *P*′(*i*) is processed using the same *ki* and *NE* as those used for *P*(*i*) by the following procedure:

*Step* 1: Extract from *P*′(*i*) those pixels whose levels belong to the ranges of *hki*. Let *P*′(*i*)<sub>*e*</sub> be the set of these pixels and *n*′<sub>*e*</sub> be the number of the pixels. The disjoint union of the transformation ranges ensures that *n*′<sub>*e*</sub> is equal to *ne* of *P*(*i*).

*Step* 2: Compare *n*′<sub>*e*</sub> with *NE*.

(a) If *n*′<sub>*e*</sub> ≥ *NE*, proceed to *Step* 3 to find out the watermark bit.

(b) If *n*′<sub>*e*</sub> < *NE*, it is found that this block contains no watermark. Proceed to the next pixel-block.

*Step* 3: Collect the bits *b*(*v*′(*i*)<sub>*j*</sub> + ∆*ki*, *ki*) of the pixels *p*′(*i*)<sub>*j*</sub> ∈ *P*′(*i*)<sub>*e*</sub>, where *v*′(*i*)<sub>*j*</sub> is the level of *p*′(*i*)<sub>*j*</sub>, for *j* = 1, 2, . . . , *n*′<sub>*e*</sub>. Let the resulting bits compose a bit-block, *Q*′(*i*)<sub>*ki*</sub>. Then, calculate the parity value, *y*′, of *Q*′(*i*)<sub>*ki*</sub>. As a result, *y*′ gives the watermark bit directly. Proceed to the next pixel-block.

The threshold *NE* can be altered for every pixel-block. Furthermore, when kept secret, *NE* is expected to improve the concealment of the watermarks.
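A minimal round-trip sketch of the two procedures, under simplifying assumptions: the chapter's perceptually bounded *hki* is replaced here by a toy transformation that inverts bit *k* and whose domain is [ 0, 127 ] (echoing the *k* = 4 row of Table 3), so that the pixel selection is identical before and after embedding; all function names are illustrative:

```python
# Runnable sketch of the embedding (Steps 1-4) and extracting (Steps 1-3)
# procedures. The toy h_k below inverts bit k and has the domain [0, 127];
# the real scheme uses the perceptually derived domains of Sec. 3.2.

def selectable(v):
    """Toy domain test: levels in [0, 127] are candidates for h_k."""
    return v < 128

def h_k(v, k):
    """Toy h_k: invert bit k (keeps the level inside [0, 127] for k <= 6)."""
    return v ^ (1 << k)

def embed_block(block, k, w, N_E):
    """Embed watermark bit w into one pixel-block; returns (block', used)."""
    block = list(block)
    idx = [j for j, v in enumerate(block) if selectable(v)]    # Step 1
    if len(idx) < N_E:                                         # Step 2(b)
        return block, False                                    # no bit here
    y = sum((block[j] >> k) & 1 for j in idx) % 2              # Step 3: parity
    if y != w:                                                 # Step 4(a)
        block[idx[0]] = h_k(block[idx[0]], k)
    return block, True

def extract_block(block, k, N_E):
    """Recover the watermark bit from a block, or None if none is present."""
    idx = [j for j, v in enumerate(block) if selectable(v)]    # Step 1
    if len(idx) < N_E:                                         # Step 2(b)
        return None
    return sum((block[j] >> k) & 1 for j in idx) % 2           # Step 3: y'

marked, used = embed_block([30, 200, 90, 64], k=3, w=1, N_E=2)
assert used and extract_block(marked, k=3, N_E=2) == 1
```

Because the toy domain [ 0, 127 ] is closed under the bit-*k* flip (for *k* ≤ 6), the extractor re-selects exactly the pixels the embedder used, which is the blind-recovery property that Sec. 3.2 establishes for the real domains.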

#### 3.4 Experiments

#### 3.4.1 Domains defined by Weber's law

We demonstrate the defining of transformation domains by using a bounding function for the level change, *A*(*v*), as described in Sec. 3.1. To specify this function, as an example, we here use Weber's law, which is known as a description of human visual properties for luminance contrast (Jain (1989)). This law states the experimental result that ∆*L*/*L* is constant, where ∆*L* is the magnitude just noticeably different from the surround luminance *L*. Assuming that this law holds true over the entire *M*-bit dynamic range, we can express it for a luminance level *v* in the fractional form:

$$\frac{\Delta v}{v + v_0} = \alpha \tag{20}$$

where *v*<sub>0</sub> is a fixed level equivalent to the light intensity on human eyes at *v* = 0, and *α* is a constant known as Weber's ratio. Letting *M* = 8, we now consider 8-bit levels in the range [ 0, 255 ]. From a preliminary experiment measuring the contrast sensitivity of human eyes on a liquid crystal display (LCD), we observed two pairs of approximate values, ∆*v* = 8 at *v* = 192 and ∆*v* = 4 at *v* = 64. Using these values in Eq. (20), we obtained *v*<sub>0</sub> = 64 and *α* = 0.03, which is comparable to a typical Weber's ratio of 0.02.

Suppose that we can replace ∆*v* with *A*(*v*); that is, Eq. (20) is rewritten as

$$A(v) = \alpha \, (v + v_0). \tag{21}$$

For simplicity, approximating *A*(*v*) by 2<sup>*D*(*v*)</sup> where *D*(*v*) takes integer values, we have determined *D*(*v*) as

$$D(v) = \begin{cases} 3, & 128 \le v \le 255 \\ 2, & 0 \le v \le 127. \end{cases} \tag{22}$$

Using *A*(*v*) with Eq. (22) in the source range [ 0, 255 ], we have defined the transformation domains in the manner described in Sec. 3.2. Table 3 lists the respective domains of the level transformation *hk* and the identity transformation *fI* for each *k* (*k* ≥ 1). Under the limitation given by Eq. (22), the range of *k* such that one or more domains are available for *hk* turns out to be [ 1, 4 ]. Hence, values of *k* are to be chosen from this range for each pixel-block in the bit-block watermarking scheme.
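The fitted constants and the power-of-two approximation can be checked numerically; this sketch assumes only the values stated above (*v*<sub>0</sub> = 64, *α* = 0.03):

```python
# Numerical check of Eqs. (20)-(22): the Weber bound A(v) = alpha * (v + v0)
# with v0 = 64 and alpha = 0.03, and its power-of-two approximation 2**D(v).

v0, alpha = 64, 0.03

def A(v):
    return alpha * (v + v0)          # Eq. (21)

def D(v):
    return 3 if v >= 128 else 2      # Eq. (22)

# The two measured pairs behind the fit: Delta_v ~ 8 at v = 192, ~ 4 at v = 64.
assert round(A(192)) == 8 and round(A(64)) == 4
# 2**D(v) stays close to A(v) across the whole 8-bit range.
assert all(abs(2 ** D(v) - A(v)) <= 4 for v in range(256))
```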


| *k* | Domains of *hk* | Domains of *fI* |
|:---:|:---:|:---:|
| ≤ 3 | [ 0, 255 ] | — |
| 4 | [ 0, 127 ] | [ 128, 255 ] |
| ≥ 5 | — | [ 0, 255 ] |

Table 3. Example of transformation domains based on Weber's law.

#### 3.4.2 Examples


We have carried out an experiment of the bit-block watermarking scheme using 8-bit grayscale test images to evaluate visual quality of the resulting images. In the experiment, the location of an isolated pixel to be transformed was fixed at the center of each block. Generally, a pixel to be transformed can be chosen at random among available pixels in each block. Besides, the transformation *hk* was always performed for each block. Note that, assuming that a watermark bit is a random variable, a half of the pixels being processed in the experiment are expected to be actually transformed by *hk*.

Figure 14 shows an example of the results transformed with pixel-blocks of 3 × 3 pixels and *k* = 4. Figure 14(b) includes those pixels which were transformed by *t*<sub>4</sub> and changed by 2<sup>4</sup> levels. These pixels are considerably perceptible. In contrast, those pixels which were transformed by *h*<sub>4</sub> in Fig. 14(c) have their levels changed by at most 2<sup>3</sup> levels. We can hardly observe any difference between the transformed and source images.

Fig. 14. Results of watermarking with 3 × 3 pixel-blocks and *k* = 4 on an 8-bit test image *Cameraman*: (a) original image; (b) and (c) watermarked images by *tk* and *hk*, respectively. The left images are of 256 × 256 pixels; the right images are enlarged portions of 32 × 32 pixels of the left images.

#### 4. Perceptually adaptive watermarking

#### 4.1 Perceptual modeling

#### 4.1.1 Image distortions

Distortions caused by *hk* have two phases. The first phase is caused by *gk*; although the change in each signal level is made minimum, the number of levels appearing in the transformed image is reduced. The second phase is caused by the level randomization. In this phase the number of levels appearing in the image increases, while the change from the source image increases accordingly. Each phase of distortion affects the image quality in different manners.

#### (1) Distortion in low-detail image regions

The first phase of the distortion affects particularly the quality of low-detail image areas. Visible false contours are likely to appear in such smooth areas due to the effect similar to coarse quantization. As described in Sec. 2.6.2, randomizing output levels can improve visual quality by making the steps of false contours narrow.

#### (2) Distortion in high-detail image regions

We consider a local image area where source levels are bounded in a small range. If the source level range is [2*m*∆*k*, 2(*m* + 1)∆*k*), the transformed levels lie in the same range, as Fig. 8 has shown. On the contrary, if the source range of 2∆*<sup>k</sup>* levels is [(2*m* + 1)∆*k*, (2*m* + 3)∆*k*), the transformed levels lie outside the range and no levels appear inside the original range as shown in Fig. 15(a). If the source range of 3∆*<sup>k</sup>* levels is [(2*m* + 1)∆*k*, 2(*m* + 2)∆*k*), there also exists an empty range where no transformed levels appear (Fig. 15(b)). Both the replacement of ranges and the missing of ranges cause distortions in the texture of the local areas.
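The range replacement of Fig. 15(a) can be verified with a toy model of *hk* that simply swaps the two halves of every 2∆*k* interval (*v* → *v* XOR ∆*k* with ∆*k* = 2*<sup>k</sup>*, ignoring the level randomization within each half); the constants below are an arbitrary example:

```python
# Illustrating Fig. 15(a): with h_k modeled as the half-swap v -> v XOR Delta_k
# inside each 2*Delta_k interval, source levels drawn from the misaligned range
# [(2m+1)*Delta_k, (2m+3)*Delta_k) all leave that range, so no transformed
# level remains inside it.

k, m = 3, 2
dk = 2 ** k                                        # Delta_k = 8
src = range((2 * m + 1) * dk, (2 * m + 3) * dk)    # misaligned range [40, 56)
out = [v ^ dk for v in src]                        # toy h_k as a half-swap

# Every transformed level falls outside the misaligned source range...
assert all(not (40 <= v < 56) for v in out)
# ...while an aligned range [2m*Delta_k, 2(m+1)*Delta_k) maps onto itself,
# as Fig. 8 showed.
aligned = range(2 * m * dk, 2 * (m + 1) * dk)      # [32, 48)
assert sorted(v ^ dk for v in aligned) == list(aligned)
```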

Fig. 15. Transformation *hk* from bounded source ranges: Source levels are assumed to occur uniformly in the range.

#### 4.1.2 Objective quality measures

The level transformation *hk* causes distortion in the source image, and thus, degrades the image quality. According to the performance analysis of *hk*, we have defined two kinds of objective measures to evaluate the distortion.

#### (1) Change of signal levels


The first measure is the mean squared difference (MSD) between two images, defined by Eq. (18). The MSD value, *D*msd, of a watermarked image (or an image region) evaluates the mean distortion over the entire area of measurement.

#### (2) Change of level occurrence distributions

By *hk*, in every interval of 2∆*k* levels in the input dynamic range, the upper half and the lower half are mapped inversely into the output dynamic range, as already shown in Fig. 9. To evaluate the change in the level occurrence distribution, we define the square variation of level occurrence between a source image *X* and the transformed image *X*′, denoted by *D*dst, by

$$D_{\mathrm{dst}} \stackrel{\triangle}{=} \sum_{i=0}^{2^{M}-1} (\phi'_{i} - \phi_{i})^{2} \bigg/ \sum_{i=0}^{2^{M}-1} \phi_{i}^{2} \tag{23}$$

where *φ<sub>i</sub>* and *φ*′<sub>*i*</sub> are the relative occurrence frequencies of a signal level *i* in the pictures *X* and *X*′, respectively, for *i* = 0, 1, . . . , 2*<sup>M</sup>* − 1, satisfying $\sum_{i=0}^{2^{M}-1} \phi_i = 1$ and $\sum_{i=0}^{2^{M}-1} \phi'_i = 1$.
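Both measures are easy to compute for images given as flat lists of 8-bit levels; this sketch uses our own helper names and a toy two-image example:

```python
# Sketch of the two objective measures: the MSD D_msd of Eq. (18) and the
# level-occurrence variation D_dst of Eq. (23), for 8-bit images given as
# flat lists of levels (illustrative helper names).

def d_msd(x, y):
    """Mean squared difference between two equal-size images."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def d_dst(x, y, M=8):
    """Square variation of level occurrence between source x and output y."""
    n = len(x)
    fx = [0.0] * (2 ** M)        # relative occurrence frequencies of x
    fy = [0.0] * (2 ** M)        # ... and of y
    for a in x:
        fx[a] += 1 / n
    for b in y:
        fy[b] += 1 / n
    return sum((q - p) ** 2 for p, q in zip(fx, fy)) / sum(p ** 2 for p in fx)

src = [10, 10, 20, 20]
out = [18, 18, 20, 20]           # two levels moved by 8
assert d_msd(src, out) == 32.0   # mean of (8^2, 8^2, 0, 0)
assert abs(d_dst(src, out) - 1.0) < 1e-12
```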

#### 4.1.3 Subjective testing

To find a correlation between the objective qualities and the subjective quality of the transformed images, we have carried out the subjective evaluations by human observers (Kimoto (2008)). In the measurement of image quality, we used a *rating-scale* method (Netravali & Haskell (1988)) where the observers viewed the test images and assigned each image to one of the given ratings. The results were presented by computing a mean value from the numerical values corresponding to the ratings, which is referred to as a Mean Opinion Score (MOS).

The testing materials were prepared assuming that *M* = 8; that is, we have only considered 8-bit images.


All the observers, who were all in their twenties, were unfamiliar with the performance of *hk*. They were asked to look at the materials sitting at a desk under ceiling lights inside a room.

Two kinds of source images were used in the testing. The first one is a 256-grayscale ramp image whose pixel levels vary linearly from 0 to 255 from the top side to the bottom side. Thus, the ramp image represents a low-detail region. The transformation *hk* distorts the linear gradation of levels in the output image. Accordingly, transforming the source ramp while varying both the value of *k* and the ratio of pixels chosen for transformation, referred to as the transformation ratio *ρ*, yields distorted images with various values of *D*msd. In the measurement of subjective quality, each test image was not compared with the source image, but was evaluated from the viewpoint of the appearance of the level gradation and assigned one of the five ratings of the absolute rating scale listed in Table 4(b).

The other kind of source image was a granular image representing a high-detail region. The source images used in the measurement were composed of pixels of (pseudo-)random levels ranging uniformly in bounded intervals whose widths were multiples of ∆*k* levels for a given *k*. According to the stochastic analysis of the mapping characteristics, the transformed images have the same *D*msd (more strictly, almost the same *D*msd, with a small difference due to the pseudo-random numbers) and different *D*dst values depending on the source intervals. Each transformed image was compared with the source image printed just beside it on the same paper and then evaluated according to the degree of perceptible difference with the impairment rating scale listed in Table 4(a). Thus, eleven scores were collected for each test image, and the MOS was calculated from them.


(a) Impairment scale

| Value | Rating |
|:---:|---|
| 5 | Imperceptible |
| 4 | Perceptible but not annoying |
| 3 | Slightly annoying |
| 2 | Annoying |
| 1 | Very annoying |

(b) Absolute rating scale

| Value | Rating |
|:---:|---|
| 5 | Excellent |
| 4 | Good |
| 3 | Fair |
| 2 | Poor |
| 1 | Bad |

Table 4. Ratings used in the subjective testing.

#### 4.1.4 Subjective quality measure


In the evaluation of the ramp images, a subjective quality of each test image of a different *D*msd value was measured in MOS. The result has indicated an approximately linear correlation between the subjective quality and the logarithms of *D*msd. Consequently, we have observed that *D*msd has the primary effect on the estimation of MOS for the images transformed by *hk*.

In the evaluation of the granular images, those test images transformed with the same *k* have the same *D*msd and the different *D*dst. The result has demonstrated that, for a given *k*, the MOS values *S*mos decrease as the *D*dst increases. Accordingly, we suppose that the correlation between *S*mos and *D*dst can be considered approximately linear, while the gradient of linearity varies with the value *k*.

For a transformed image region, let us consider a subjective quality measure as a function of two parameters, *D*msd and *D*dst, that can estimate a subjective quality of the distortion, where *D*msd and *D*dst are measured by comparing the distorted image with the source image. According to the above analysis of the measurements, we suppose that a subjective quality measure *S*mos can be expressed as a linear combination of the logarithm of *D*msd and *D*dst: That is,

$$S\_{\rm mos} = \alpha \cdot \ln D\_{\rm msd} + \beta \cdot D\_{\rm dst} + \gamma \tag{24}$$

where α, β and γ are parameters to be estimated by multiple linear regression analysis.

To carry out the regression analysis, we used the triplets of {*S*mos, *D*msd, *D*dst} measured from the test granular images described above. Thus, 54 measurements of the dependent variable *S*mos at 12 different values of the independent variable vector (*D*msd, *D*dst) were obtained and used in the multiple linear regression analysis. As a result, the values of α, β and γ are determined as

$$\hat{\alpha} = -0.46,\ \hat{\beta} = -0.70 \text{ and } \hat{\gamma} = 6.4, \tag{25}$$

respectively. Here, the resulting coefficient of determination, which is commonly denoted by *R*2, is 0.86. Let *Se* be a predicted value of *S*mos by Eq. (24) with Eq. (25). Thus, *Se* yields values comparable to the MOS values.
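As an illustrative sketch (not the authors' code), the quality estimate of Eq. (24) with the coefficients of Eq. (25) can be computed as follows. One assumption is made: Eq. (18), which lies outside this excerpt, is taken here to be the mean squared level difference between the source and transformed blocks.

```python
import numpy as np

# Sketch of the objective measures and the quality estimate of Eq. (24)
# with the coefficients of Eq. (25).  Assumption: D_msd of Eq. (18), which
# lies outside this excerpt, is taken here as the mean squared level
# difference between the source and transformed blocks.

def d_msd(x, y):
    """Mean distortion between a source block x and a transformed block y."""
    return np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)

def d_dst(x, y, m=8):
    """Square variation of the level occurrence distributions, Eq. (23)."""
    levels = 2 ** m
    rho_x = np.histogram(x, bins=levels, range=(0, levels))[0] / x.size
    rho_y = np.histogram(y, bins=levels, range=(0, levels))[0] / y.size
    return np.sum((rho_x - rho_y) ** 2)

def s_e(x, y, alpha=-0.46, beta=-0.70, gamma=6.4):
    """Subjective quality estimate Se, Eq. (24) with Eq. (25)."""
    return alpha * np.log(d_msd(x, y)) + beta * d_dst(x, y) + gamma
```

For two 4 × 4 blocks of constant levels 0 and 8, for instance, this gives *D*msd = 64, *D*dst = 2 and *Se* ≈ 3.09.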


To evaluate the effect of *D*dst on the modeling of *S*mos, we have also used a simple linear regression model of one independent variable of *D*msd. In this model, *S*mos is expressed in the form

$$S\_{\rm mos} = \alpha' \cdot \ln D\_{\rm msd} + \beta' \tag{26}$$

where α′ and β′ are the model parameters. By using the same data as those used in the above multiple regression analysis, these two parameters were estimated from the 54 measurements of *S*mos at four different values of *D*msd in simple linear regression analysis. As a result, we obtained the estimated values of α′ and β′ as

$$\hat{\alpha}' = -0.55 \text{ and } \hat{\beta}' = 5.7, \tag{27}$$

respectively. Here, the resulting value of *R*<sup>2</sup> was 0.55. The values of *S*mos predicted by this model have been compared with the measured values. The relation between the measured and predicted values of *S*mos of Eq. (24) has higher correlation than that of Eq. (26) as a result. Thus, *D*dst improves the linear model of *S*mos as one of independent variables.
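The parameter estimation behind Eqs. (25) and (27) is ordinary least squares; a minimal sketch follows. The data here are synthetic stand-ins generated around the reported coefficients, since the chapter's 54 measured triplets are not reproduced in this excerpt.

```python
import numpy as np

# Sketch of the multiple linear regression behind Eq. (25): ordinary least
# squares over triplets {S_mos, D_msd, D_dst}.  The data below are synthetic
# stand-ins, not the chapter's measurements.

def fit_quality_model(s_mos, d_msd, d_dst):
    """Estimate (alpha, beta, gamma) of Eq. (24) by least squares."""
    a = np.column_stack([np.log(d_msd), d_dst, np.ones(len(s_mos))])
    coeffs, *_ = np.linalg.lstsq(a, s_mos, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
d_msd = rng.uniform(1.0, 200.0, size=54)
d_dst = rng.uniform(0.0, 2.0, size=54)
s_mos = -0.46 * np.log(d_msd) - 0.70 * d_dst + 6.4 + rng.normal(0.0, 0.1, 54)

alpha, beta, gamma = fit_quality_model(s_mos, d_msd, d_dst)
# the estimates land close to (-0.46, -0.70, 6.4) for this synthetic data
```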

#### 4.2 Experiments

#### 4.2.1 Block processing procedures

The subjective quality measure *Se* gives an estimate of subjective quality for each image region on the assumption that the entire region is transformed with a given value of *k*. Let *Se*(*k*) denote the value of *Se* resulting from the transformation with the parameter *k*.

By using the subjective quality measure *Se*, we can examine whether the transformation of an image region with a value of *k* satisfies a given condition of subjective quality. Accordingly, the measure can determine the values of *k* with which to transform an image region so that the desired subjective quality is achieved. To enhance the difficulty of detecting the values of *k* in use, the largest of the available *k*'s is to be chosen. Furthermore, the value of *k* to use can be changed for each image region.

To implement the above adaptive watermarking, we determine both a threshold value *ST* comparable to the subjective quality measure and an upper limit of available *k*, which is set equal to *M* − 1 here for simplicity. The *ST* can be determined from the ratings listed in Table 4. Here, let us assume the transformation ratio is 1. Then, the value of *k* for each image region, *kB*, is determined as

$$k\_B = \max\_{1 \le k \le M-1} \{\, k \mid S\_e(k) \ge S\_T \,\}. \tag{28}$$

The region is actually transformed with *kB*. Thereby, the subjective image quality of *Se*(*kB*) is achieved in the transformed region.
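The selection rule of Eq. (28) can be sketched as a simple loop. Here `transform` and `se` are hypothetical stand-ins for *hk* and the quality estimate of Eq. (24); neither name comes from the chapter.

```python
# Sketch of the selection rule of Eq. (28): try k = 1, 2, ... and keep the
# largest k whose estimated quality Se(k) still meets the threshold ST.
# `transform` and `se` are hypothetical stand-ins for h_k and Eq. (24).

def select_k(block, transform, se, s_t, m=8):
    """Return kB = max{ k in [1, M-1] : Se(k) >= ST }, or 0 if none qualify."""
    k_b = 0
    for k in range(1, m):                  # transform with this k
        candidate = transform(block, k)
        quality = se(block, candidate)     # D_msd, D_dst -> Se(k)
        if quality < s_t:                  # quality fell below ST: stop
            break
        k_b = k
    return k_b
```

Because *Se*(*k*) decreases as *k* grows, the loop can stop at the first *k* that falls below *ST*.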

The procedure for implementing Eq. (28) is carried out in each image region, starting from *k* = 1 as follows:

*Step* 1: Transform the region with the value of *k*.

*Step* 2: Calculate *D*msd by Eq. (18) and *D*dst by Eq. (23) within the region.

*Step* 3: Calculate *Se*(*k*) as *Se* of Eq. (24) with the specified coefficients of Eq. (25).

*Step* 4: Compare *Se*(*k*) with the given *ST*. If *Se*(*k*) < *ST*, then *kB* = *k* − 1. Otherwise, increase *k* by one, and repeat from *Step* 1.

#### 4.2.2 Examples

Simulations of the adaptive watermarking were carried out. The test images of 256 × 256 8-bit pixels, which are well known as *Lena*, *Peppers*, *Cameraman* and so on, were used as the source images. The adaptive watermarking procedure was applied to each block of 4 × 4 pixels in the simulation, with various threshold values *ST*.

Fig. 16 shows an example of the values of the parameter *kB* that were determined for each block by the adaptive scheme. From this figure it is observed that the adaptive scheme performs such that in the low-detail regions such as the *sky*, where the human visual system is sensitive to distortion (Netravali & Haskell (1988)), *kB*'s of small values are assigned, and in the high-detail regions such as the lower half area of the image, *kB*'s of large values can be used.

Fig. 16. The embedding parameters *kB* that are determined for each block by the adaptive scheme: The block size is 4 × 4 pels and the threshold *ST* = 3.5; (a) source image *Cameraman* of 256 × 256 8-bit pels; (b) *kB* values, black means *kB* = 1 and the brighter, the larger *kB*.

Table 5 shows, for the source image *Peppers*, the distribution of the values of *kB* that were determined by the adaptive scheme for various *ST*. With increasing *ST*, the ratios of large *kB*'s decrease, and the average of *kB* decreases accordingly.

Table 5 also shows the value of *Se* averaged over all the blocks in the image. As this result indicates, the averaged value of *Se* was about 0.5 greater than the given threshold for each of the images in the simulation. This difference is considered to occur because only integer values are available for *k*.

Examples of the images resulting from the same source image for *ST* = 3.0, 4.0 and 5.0 are shown in Fig. 17. From these images it is observed that the better visual quality is certainly achieved as the threshold quality *ST* is set larger.

#### 4.2.3 Validity of subjective quality measure

We consider the validity of the subjective quality measure in this section. When we look at an image, we usually evaluate the quality of the whole image. Taking account of this fact, we compare the subjective quality measure to human evaluations in terms of the whole image quality.

Using various images produced in the simulation of the adaptive scheme, first, we have carried out the subjective evaluations of visual quality by use of the impairment rating scale listed in Table 4(a). The MOS value *S*mos was then obtained from the scores of about forty

people for each image, who were given no information about the making of the images. Note that these *S*mos's are the human evaluation of the whole image quality. Next, to estimate the subjective quality of the whole image from a collection of the calculated values of block quality, the block values *Se* were averaged over all the blocks in each image, and the mean value *Se* is obtained.

Fig. 18(a)–(c) shows the relationship between the mean value *Se* and the measured *S*mos in each of the three test images. A linear correlation between *S*mos and *Se* is clearly observed from either result in this figure. However, the slope of the linear regression line is 1.9, 1.8 and 2.2 in Fig. 18(a), (b) and (c), respectively.

This inclined linear correlation results in the incorrect prediction of subjective quality by the developed measure; for example, as Fig. 18 shows, viewers evaluated the quality of the image at the worst MOS of 1, while the mean value of block quality indicates that the image possesses the quality of MOS of 3.

The value *Se* can be corrected using the corresponding value of *S*mos by linear regression analysis. The linear regression model is expressed in the form

$$S\_{\rm mos} = a \cdot \overline{S\_e} + b \tag{29}$$

where *a* and *b* are to be estimated by simple linear regression analysis. To carry out the analysis, we collected the pairs of {*Se*, *S*mos}, where 1 ≤ *Se* ≤ 5, from the results of the three images shown in Fig. 18(a)–(c). As a result, the parameters *a* and *b* are estimated as

$$\hat{a} = 1.9 \text{ and } \hat{b} = -4.4, \tag{30}$$

respectively.

Using Eq. (30), the values of *Se* in Fig. 18(a)–(c) were modified to *Se*′. The resulting relationships between *S*mos and *Se*′ are shown in Fig. 18(a)'–(c)'. In this figure, each value of *S*mos is shown with its 95% confidence interval. The slope of any linear regression line in the figure is about 1.1. Furthermore, from the viewpoint of the confidence intervals of MOS, the linear correlation looks almost valid. Consequently, *Se*′ can be used to predict the evaluation of subjective quality for at least these three images.

Table 5. Experimental result of perceptually adaptive watermarking for the source image *Peppers*: The block size is 4 × 4 pixels.

| Threshold *ST* | 3.0 | 3.5 | 4.0 | 4.5 | 5.0 |
| --- | --- | --- | --- | --- | --- |
| Blocks with *kB* = 1 (%) | 0 | 0 | 0 | 2.3 | 44.6 |
| Blocks with *kB* = 2 (%) | 0 | 0.68 | 18.7 | 67.5 | 52.2 |
| Blocks with *kB* = 3 (%) | 7.7 | 54.3 | 74.9 | 30.0 | 3.2 |
| Blocks with *kB* = 4 (%) | 71.0 | 44.5 | 6.5 | 0.07 | 0 |
| Blocks with *kB* = 5 (%) | 21.3 | 0.51 | 0 | 0 | 0 |
| Blocks with *kB* = 6 (%) | 0.07 | 0 | 0 | 0 | 0 |
| Averaged *kB* | 4.14 | 3.45 | 2.88 | 2.28 | 1.59 |
| Mean *Se* | 3.60 | 4.10 | 4.54 | 4.99 | 5.46 |
| PSNR (dB) | 30.3 | 34.0 | 37.3 | 40.6 | 44.2 |

(a) Source image *Peppers*

(b) *ST* = 3.0; *Se* = 3.60, PSNR=30.3 dB

(c) *ST* = 4.0; *Se* = 4.54, PSNR=37.3 dB

(d) *ST* = 5.0; *Se* = 5.46, PSNR=44.2 dB

Fig. 17. Results of the adaptive watermarking with the various threshold values *ST*: Each right column image magnifies a 64 by 64 pel region of the left one, whose size is 256 by 256 pels.
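The correction of Eq. (29) with the estimates of Eq. (30) amounts to a one-line mapping of the block-averaged measure onto the MOS scale; a minimal sketch:

```python
# Sketch of the whole-image correction of Eq. (29) with the estimates of
# Eq. (30): map the block-averaged measure onto the MOS scale.

A_HAT = 1.9    # slope a-hat of Eq. (30)
B_HAT = -4.4   # intercept b-hat of Eq. (30)

def corrected_quality(se_mean):
    """Corrected whole-image quality Se' from the mean block quality."""
    return A_HAT * se_mean + B_HAT
```

A mean block quality of 3, for example, is corrected to about 1.3, matching the observation that such images were rated near the worst MOS.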


Fig. 18. MOS versus the subjective quality measure averaged over the blocks *Se* ((a)–(c)) and MOS versus the modified values *Se*′ ((a)'–(c)') for each source image: (a) and (a)' source *Lena*; (b) and (b)' *Peppers*; (c) and (c)' *Cameraman*; the line in each figure shows the linear regression of the data points; the points painted solid in white are excluded due to out of range [1, 5].

#### 5. Conclusion


The first result of this chapter is the bit inverting transformation, *hk*. This level transformation performs all the three functions simultaneously: (a) It represents the inversion of a specified bit; (b) it reduces the level change caused by the bit inversion to the minimum; and (c) it adds a random variation to the output levels under limitations on level changes. The transformed level that has both the specified, say, *k*th bit inverted and the level change minimized includes the lowest (*k* − 1) bits either of all 1's or all 0's. In contrast, for most of the input levels, some of these bits or all of them are replaced with random bits by randomly varying the transformed levels. Accordingly, the transformed pixels are hard to discriminate without any information regarding the locations in the watermarked images.
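A minimal sketch consistent with properties (a) and (c) above could look like the following. Note the assumption: the chapter's exact mapping (Fig. 9) additionally minimizes the level change per property (b), whereas this simplified version only bounds it.

```python
import random

# Sketch of a bit inverting transformation consistent with properties (a)
# and (c) above: invert the k-th bit (counting from the LSB, k >= 1) and
# randomize the lowest k-1 bits.  The chapter's exact mapping (Fig. 9) also
# minimizes the level change per property (b); this simplified version only
# keeps the change below 2^k levels.

def h_k(level, k, rng=random):
    inverted = level ^ (1 << (k - 1))          # (a) invert the specified bit
    base = inverted & ~((1 << (k - 1)) - 1)    # keep bits k and above
    return base | rng.randrange(1 << (k - 1))  # (c) randomize lowest k-1 bits
```

Every output level then has the *k*th bit inverted while differing from the input by fewer than 2*<sup>k</sup>* levels.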

The properties of the subjective quality measure, which is the second result of this chapter, are summarized below:

- … subjective evaluation.
- The quality measure is essentially to be carried out in block processing.
- … according to the image texture so as to achieve the adaptive scheme.
The subjective quality measure was derived from measurements of computer-synthesized test patterns. The validity of the measure for images of natural scenery was examined in terms of the whole image quality. We obtained the whole image quality value by averaging the block quality values over the image. In the experiment using three test images, a high but inclined linear correlation was observed between the mean value and the actually measured MOS. Although the inclined gradient was successfully corrected by simple linear regression analysis for the images used in this experiment, the cause of the difference between the estimated and measured values may be related to characteristics of the human eye (Netravali & Haskell (1988)). Accordingly, we have to consider a method for estimating the whole image quality from block qualities in more detail.

Another remaining subject is related to the block size of the subjective quality measure. A block size of 4 × 4 has been used in the experiment. The block size is related to the resolution of the quality measure. Besides, because the bit position to be inverted is decided at each block in the adaptive scheme, as many bit positions as there are blocks must be stored in a secure manner. From these viewpoints, the appropriate block size should be considered.

#### 6. References



Oka, K., Matsui, K. (1997). Signature method into gray-scale images with embedding function.

Kimoto, T. (2005). Implementation of level transformations for hiding watermarks in image

Kimoto, T. (2007). Modified level transformation for bit inversion in watermarking. *Proc. of the*

Kimoto, T. (2006). A sophisticated bit-conversion method for digital watermarking. *Proc. of*

Kimoto, T. (2009). An advanced method for watermarking digital signals in bit-plane

Awrangjeb, M., Kankanhalli, M.S (2004). Lossless watermarking considering the human visual

Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P (2004). Image Quality Assessment: From

Cox, I.J., Miller, M.L., Bloom, J.A (2002). *Digital watermarking*. Morgan Kaufmann Publishers. Mikami, D., Shimizu, M., Makabe, S., Kamiyoshihara, Y., Kimoto, T (2008). Measurement

Netravali, A.N., Haskell, B.G (1988). *Digital Pictures*. Plenum Press, New York, USA. Jain A.K. (1989). *Fundamentals of digital image processing* Englewood Cliffs NJ, Prentice-Hall. Kimoto, T., Kosaka, F. (2010). A perceptually adaptive scheme for image bit-inversion-based

*Image Processing (ICIP 2005)* , pp. 253–256. Genova, Italy, Sept. 2005.

1186–1191 (in Japanese).

Honolulu, USA, Aug. 2006.

Springer, 2004.

600–612.

SPC-P2.8. Dresden, Germany, June 2009.

*TENCON 2008* , O17-7. Hyderabad, India, Nov. 2008.

2007.

*IEICE Transactions on Information and Systems* , Vol. J80-D-II, No. 5, May 1997, pp.

bit-planes under limited level changes. *Proc. of the IEEE International Conference on*

*IEEE International Conference on Image Processing (ICIP 2007)* . San Antonio, USA, Sept.

*the 8th IASTED International Conference on Signal and Image Processing* , pp. 139–144.

structure. *Proc. of IEEE International Conference on Communications (ICC 2009)* ,

system. In: *Digital watermarking*, pp. 581–592. Kalker, T., Cox, I.J., Ro, Y.M. eds (2004).

Error Visibility to Structural Similarity. *IEEE Transactions on Image Processing* 13 (4)

of subjective quality of watermarked images made by inverting bits. *Proc. of IEEE*

watermarks. *Proc. of the 6th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS2010)* , pp. 114–122. Kuala Lumpur, Malaysia, Dec. 2010. The advance of processing technology has led to a rapid increase in IC design complexity. There are now more than thousand million transistors integrated on a chip, and the increasing trend is expected to continue until 2020 or later. This creates the design productivity gap between IC design (typically 20% per year) and IC manufacturing (over 40% per year), and this gap is becoming wider and wider. To close this gap, IP (intellectual property) reuse emerged as the most significant design technology innovation in the past decades. IP companies, third-party libraries, and industry organizations such as the VSIA (Virtual Socket Interface Alliance) have created high expectations for the value and reusability of design IP.

The IP reuse in the reuse-based design methodology is rather different from other reuses such as media, devices to produce artifacts. The reuse of components, designed for a class of applications, is a method to reduce the design-effort, which is well-known from software design for a long time already. In the field of IC design, the reuse of blocks has been practiced in design houses mainly in form of an evolution of existing products. Due to shorter product cycles and rapidly increasing product complexity, many design companies will more and more refer to module cores from outside. During the process of the transfer of design blocks from the original provider to the integrator, intellectual property issues have to be seriously considered. At the same time, some essential issues for IP reuse are outlined: design quality, documentation, security, support, and integration (Thomas et al., 2001). As suggested in the "Reuse Methodology Manual for System-On-A-Chip Designs" (Keating & Bricaud, 1998), an example process of integrating IPs and doing physical chip design can be broken into the following steps:

1. Selecting IP blocks and preparing them for integration;
2. Integrating all the IP blocks into the top-level RTL;
3. Planning the physical design;
4. Synthesis and initial timing analysis;
5. Initial physical design and timing analysis, with iteration until timing closure;
6. Final physical design, timing verification, and power analysis;
7. Physical verification of the design.

There are many issues, solved and unsolved, that need to be addressed for the IP market: a friendly interface between IP provider and IP user, design-for-manufacturing, design-for-test, design-for-reuse, IP standardization, rules for IP exchange, and so on. IP reuse is based on information sharing and integration; therefore, pirates also gain much easier access to the IPs. IP piracy adversely affects IP vendors, chip design houses, and system manufacturers by depriving them of revenue and market share. As a result, recent trends of IP piracy have raised serious concerns among the IC design community.

Performance Evaluation for IP Protection Watermarking Techniques 121

In response to these trends, IP protection has become crucial to both IP vendors and IP users and is one of the key enablers of industrial reuse-based integration. Although the lack of IP protection mechanisms sometimes becomes a barrier to increasing design productivity, there have been significant advances from both industry and academia. In particular, the VSIA's white paper on IP protection (VSIA, 2000a) and its physical tagging standard (VSIA, 2000b) have now been widely adopted by the semiconductor and EDA industries. Numerous protection techniques have been proposed by researchers from both industry and academia. There are three forms of IP protection techniques: tagging, fingerprinting, and watermarking. The idea of tagging, proposed by Marsh & Kean, is to provide a "security tag" for the IP core which can easily be detected off-chip using an external receiver called a "wand" (Marsh & Kean, 2007). The approach is vulnerable because the tag can easily be removed by someone who knows some information about the tagging. Bolotnyy & Robins use PUFs (Physically Unclonable Functions) to build RFID (Radio Frequency Identification) tags that protect ICs from cloning (Bolotnyy & Robins, 2007). Security is considerably improved; however, the PUF design is so complicated that it is hard to manufacture. Majzoobi et al. proposed "Lightweight Secure PUFs", a new structure with low area, power, and delay overheads that facilitates easy security-versus-implementation-cost trade-offs (Majzoobi et al., 2008). There are also other variants of PUF research, such as implementations of PUFs that exploit physical characteristics other than the timing and delay information of silicon circuits. Ravikanth et al. proposed an optical PUF, which uses the speckle patterns of an optical medium under laser light (Ravikanth et al., 2001). Coating PUFs and acoustic PUFs measure the capacitance of a coating layer covering an IC and the acoustic reflections of a token, respectively (Skoric et al., 2005; Tuyls et al., 2005).

Among these techniques, watermarking is the most extensively used mechanism, implemented at multiple levels of the IC design process. Primitive watermarking, also known as data hiding, embeds data into digital media for the purposes of identification, annotation, and copyright. The rapid development of digitized media and the internet revolution have created a pressing need for copyright enforcement schemes to protect copyright ownership. Numerous techniques for data hiding in digital images, videos, audio, texts, and other multimedia data have been developed. All of these techniques take advantage of the limitations of the human visual and auditory systems, and simply embed the signature into the digital data by introducing minute errors. The transparency of the signature relies on human insensitivity to these subtle changes. For a detailed survey, refer to (Gang & Potkonjak, 2003). In the VLSI domain in particular, watermarking techniques protect IP cores, CAD tools, as well as algorithms from illegal reuse.

CAD tools and algorithms are protected like traditional software by mechanisms such as licensing agreements and encryption. Besides the weak enforcement of licensing agreements and the security holes of encryption protocols, these protections do not provide the ability to detect IP piracy (Lin et al., 2006). The rare technique that detects possible CAD tool and algorithm piracy is the forensic engineering approach proposed by Kirovski et al. (Kirovski et al., 2000). It enables the identification of solutions generated by strategically different tools and algorithms. They simply check the given solution for the properties on which algorithm clustering has been performed and claim that the solution was obtained by the algorithm with the best fit. The drawbacks of this technique are its poor ability to distinguish different algorithms and its requirements for candidate algorithms and computing resources. So the need for effective CAD tool and algorithm protection remains vital and urgent. CAD tool and algorithm protection is, however, not within the scope of this book; our work focuses on watermarking techniques for the protection of reusable IP cores. We review representative watermarking techniques and evaluate their performance for both ASIC (Application-Specific Integrated Circuit) and FPGA (Field Programmable Gate Array) designs.

Fingerprinting technology is complementary to watermarking, driven by the demand to ensure the rights of both IP providers and IP users. The main challenge of fingerprinting is how to create numerous IP cores with the same function for different IP users. The common approach is to acquire each IP user's signature and repeatedly embed it into the entire design, creating high-quality solutions from scratch within a reasonable amortized design cost.

To the best of our knowledge, the first IP fingerprinting technique was published by Lach et al. (Lach et al., 1998). Their approach partitions an initial solution into a large number of parts to provide different fingerprinted realizations (a restricted FPGA mapping problem). Unfortunately, the technique cannot be applied if the design does not have a natural geometric structure. It also has relatively low resilience against collusion attacks due to the identical global structure, and the time overhead for creating fingerprinted solutions is relatively high. Andrew et al. proposed a generic fingerprinting methodology that applies arbitrary incremental optimization/synthesis to a watermarked initial "seed" solution to yield different but functionally identical fingerprinted IPs. The approach enhances collusion resiliency with low runtimes, but distinct solutions are not guaranteed (Andrew et al., 1999). Gang and Miodrag proposed a fingerprinting technique that applies arbitrary optimization to the problem formulation with superimposed additional constraints to produce numerous distinct solutions of high quality. The run-time overhead for generating many solutions is almost zero (Gang & Miodrag, 2004).

The remainder of this section is organized as follows. We first review related watermarking techniques. We then analyze representative watermarking techniques, introduce a watermarking performance evaluation function, and show experimental results for ASIC watermarking techniques. A simplified FPGA watermarking investigation and estimation follows. Finally, we conclude the overall work.

#### **2. Watermarking performance evaluation**

Referencing the viewpoints of the VSI Alliance (Fall Worldwide Member Meeting, 1997), a state-of-the-art watermarking-based IPP technique should provide:

1. Maintenance of functional correctness.
2. High credibility: the coincidence probability, i.e., the probability that a non-watermarked design coincides by accident with a watermarked one, should be low enough.
3. High security: the watermark should remain intact, or be extractable, under attack.
4. Low embedding cost.
5. Low overhead.
6. Traceability.



According to the requirements of watermarking, a complete methodology for watermarking performance evaluation should be established. Unfortunately, to the best of our knowledge, there is no comprehensive evaluation function for IP watermarking techniques so far. The only published investigation of watermarking techniques is by Abdel-Hamid et al. (Abdel-Hamid et al., 2003). However, they only compared the performance of the approaches in terms of embedding cost, overhead, probability of coincidence, and security; there was no deeper analysis or evaluation of the watermarking techniques.

In this context, we introduce representative watermarking techniques and evaluate their performance for the two usual IC forms, ASIC and FPGA, respectively.

#### **2.1 Watermarking performance evaluation for ASIC**

In terms of watermark construction style, there are two main methods for watermarking ASIC IP cores. One focuses on introducing additional constraints on certain parts of the solution space of synthesis and optimization algorithms; the other adds redundancies to the original design.

In terms of the VLSI design process, pre-processing and post-processing watermarking methods are distinguished. Pre-processing techniques embed the watermark before the synthesis tools are applied, so the tools solve the already-watermarked problem. Post-processing techniques first solve the original problem without any watermark; the resulting solution is then altered based on the watermarking constraints. Along the design flow, watermarking techniques at the behavioural, structural, physical, and algorithm levels have been proposed.
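The pre-processing and post-processing flows can be sketched schematically. The helpers `solve`, `alter_solution`, and `derive_constraints` below are hypothetical stand-ins for a synthesis tool and a watermark encoder, not part of any cited work:

```python
# Illustrative sketch of the two flows over a generic constraint view of
# synthesis. All helper names here are hypothetical stand-ins.

def derive_constraints(signature: str) -> list[str]:
    """Map signature bits to extra design constraints (placeholder encoding)."""
    bits = "".join(f"{byte:08b}" for byte in signature.encode())
    return [f"constraint_{i}_{b}" for i, b in enumerate(bits)]

def solve(problem: list[str]) -> list[str]:
    """Trivial stand-in for a synthesis/optimization tool."""
    return sorted(problem)

def alter_solution(solution: list[str], constraints: list[str]) -> list[str]:
    """Trivial stand-in for incremental post-processing of a solved design."""
    return solution + constraints

def pre_processing_flow(problem: list[str], signature: str) -> list[str]:
    # Embed first: the tool solves the already-watermarked problem.
    return solve(problem + derive_constraints(signature))

def post_processing_flow(problem: list[str], signature: str) -> list[str]:
    # Solve first, then alter the solution to satisfy the watermark constraints.
    return alter_solution(solve(problem), derive_constraints(signature))
```

Either way, the watermark ends up encoded as extra constraints that the final design satisfies; the flows differ only in when those constraints enter the tool chain.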

A given watermarking technique may have shortfalls or defects, and the approach may affect the original design. Evaluating the performance of a watermarking technique is therefore an important task, and it is pressing to build methodologies and functions for watermarking performance evaluation.

#### **2.1.1 Watermarking technique review**

In this section, we first review a few representative watermarking techniques constructed at different design levels, then analyze them from a few essential aspects: embedding cost, coincidence probability, security, and tracing cost.

#### **2.1.1.1 Physical-level watermarking**

Kahng et al. first proposed constraint-based watermarking methodologies based on the use of available tools that solve NP-hard problems (Kahng et al., 1998). The algorithm adds extra constraints so that the tool yields a new, watermarked design. They validated the approach in pre-processing and post-processing flows, respectively. The pre-processing flow adds constraints involving segment widths, spacings, and choice of topology; the watermark is applied by encoding a signature as upper bounds on the wrong-way wiring used to route particular




signal nets. The post-processing flow provides a method that encodes a signature as the specified parity of the cell row within which particular standard cells must be placed. Narayan et al. provided a method for embedding a watermark by modifying the number of vias or bends of the nets in a design (Narayan et al., 2001). It incurred a 12–13% increase in via count and wire length, which is impractical in real life. The author also proposed a post-layout watermarking method that smartly changes route directions by setting obstacles and rerouting (Nie et al., 2005); it adds no extra wire-length overhead, and the incremental watermarking time is acceptable. Other techniques at the physical design level have also been proposed (Min & Zhiqiang, 2004; Irby et al., 2000).
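As a toy illustration of the row-parity idea, watermark bit i can force the row parity (even/odd) of the i-th selected standard cell. This is our own minimal construction, not Kahng et al.'s actual tool, and the SHA-256 bit derivation is an assumption for the demo:

```python
# Toy row-parity watermark: each selected cell's row parity must equal one
# pseudorandom bit derived from the author's key.
import hashlib

def signature_bits(author_key: str, n: int) -> list[int]:
    """Derive n pseudorandom watermark bits from the author's key."""
    digest = hashlib.sha256(author_key.encode()).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n)]

def embed_row_parity(placement: dict[str, int], cells: list[str],
                     author_key: str) -> dict[str, int]:
    """Move a cell by one row when its parity disagrees with its watermark bit."""
    watermarked = dict(placement)
    for cell, bit in zip(cells, signature_bits(author_key, len(cells))):
        if watermarked[cell] % 2 != bit:
            watermarked[cell] += 1   # a real flow would do a legal re-placement
    return watermarked

def verify_row_parity(placement: dict[str, int], cells: list[str],
                      author_key: str) -> bool:
    """Check that every selected cell's row parity matches the key's bits."""
    bits = signature_bits(author_key, len(cells))
    return all(placement[c] % 2 == b for c, b in zip(cells, bits))
```

A verifier holding the key can check the parities without the original netlist, which is what makes such placement constraints usable as an ownership proof.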

For physical design watermarking, we choose the most representative technique, proposed by (Kahng et al., 1998), as the evaluation instance. According to the published results, the extra routing CPU run time for watermarking is about 9.00%; the increased wire-length and via number (the watermarking overhead) are 0.58% and 0.55% respectively, 1.13% in sum; and the coincidence probability decreases geometrically with the number of constraints, from 1.1e-8 (nearly 10^-3 for 20 constraints) to less than e-85 (nearly 10^-25 for 320 constraints). From their analysis, the approach can prevent "ghost signature" and forging attacks thanks to long-enough constraints and message encoding. They also showed results from tampering with the placement and routing watermark, which indicate that solution quality degrades much faster than signature strength. This proves that tampering does not appear to be a viable form of attack.
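The geometric decrease is easy to see: if an unrelated, non-watermarked solution satisfies each watermark constraint independently with some probability p, the coincidence probability after C constraints is p^C. A sketch with an assumed p = 0.5 (one watermark bit per constraint; the value is ours, not from the paper):

```python
# Coincidence probability p**C for C independent constraints, each satisfied
# by chance with probability p. p = 0.5 is an assumed illustrative figure.

def coincidence_probability(p: float, constraints: int) -> float:
    return p ** constraints

print(coincidence_probability(0.5, 20))    # ≈ 9.5e-07
print(coincidence_probability(0.5, 320))   # ≈ 4.7e-97
```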

#### **2.1.1.2 Behavioral-level watermarking**

Torunoglu et al. and Oliveira introduced similar watermarking-based copyright protection techniques for sequential functions at the behavioral design level (Torunoglu & Charbon, 2000; Oliveira, 2001). The algorithm is based on adding new input/output sequences to the finite state machine (FSM) representation of the design. It extracts the unused transitions in a state transition graph (STG) of the behavioral model. These unused transitions are inserted into the STG and associated with a newly defined input/output sequence, which acts as the watermark. The main advantage of this kind of approach is the ability to detect the presence of the watermark at all lower design levels. Torunoglu and Charbon performed exhaustive search in only one case due to the extreme computational complexity of the method. The CPU time in this case was 1.0 second for an area of 2.33k gates, but it increases exponentially according to their computation formula. The coincidence probability of the watermarking ranges from 10^-7 to 10^-34, 10^-11 on average. The watermarking overhead (extra area of the modified FSM) ranges from 0.2% to 143%, 23.77% on average, and it will be much larger if the expected watermark becomes longer. The number of I/O pins used to create the watermark-insertion sequence is not very large, so the approach's resistance to the "ghost signature" attack is not as strong as expected. They proved that a "tampering" attack will not succeed under various assumptions. Unfortunately, because there is no encryption for the watermarking, the approach is weak against a "forging" attack.
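A minimal sketch of this idea (our simplification, not the authors' implementation): bind input symbols that are unused in a state's transition table to new transitions that replay a secret watermark output sequence.

```python
# Minimal FSM watermarking sketch: unused (state, input) pairs are turned
# into new transitions carrying the watermark outputs.

def embed_fsm_watermark(stg, inputs, start, wm_io):
    """stg: {state: {input: (next_state, output)}}; wm_io: [(output, next_state)]."""
    state, wm_inputs = start, []
    for out, nxt in wm_io:
        unused = [i for i in inputs if i not in stg[state]]
        if not unused:
            raise ValueError(f"no unused transition at state {state}")
        key = unused[0]                  # deterministic choice for the demo
        stg[state][key] = (nxt, out)     # new transition carries one watermark symbol
        wm_inputs.append(key)
        state = nxt
    return wm_inputs                     # secret input sequence replaying the mark

def read_watermark(stg, start, wm_inputs):
    """Drive the FSM with the secret input sequence and collect the outputs."""
    state, outputs = start, []
    for symbol in wm_inputs:
        state, out = stg[state][symbol]
        outputs.append(out)
    return outputs
```

Because the watermark lives in the FSM itself, it survives synthesis down to lower design levels, which is the property the paragraph above highlights.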

#### **2.1.1.3 Structural-level watermarking**

There are few watermarking works at the structural level. Kirovski et al. developed a watermarking approach to protect EDA tools and designs at the combinational logic synthesis level. The user-specific watermarking instance is solved by imposing constraints on the original logic network, where the constraints are uniquely dependent on the author's signature (Kirovski et al., 1998). Cui and Chip-Hong also proposed a similar approach by resynthesizing the "master design" to meet the application constraints.


We select the first approach as the representative for structural-level watermarking performance evaluation. From their results, the runtime for watermarking was controlled within ±5% of the program execution runtime. The average likelihood of watermarked solution coincidence is less than 10^-13, with an overhead of 4%. Because the adopted watermark constraint length is short (5 inputs), its resistance to the "ghost signature" attack is likely low. They proved that an attacker has to perturb a great deal of the obtained solution to tamper with the watermark while preserving solution quality, which is as hard as developing a new optimization algorithm. A "forging" attack is less efficient than trying to tamper with the signature in a top-down approach, and is even harder in a bottom-up approach due to the one-way function encoding.

#### **2.1.1.4 Algorithm-level watermarking**

There are few approaches at the algorithmic level. Chapman & Durrani proposed a Digital Signal Processing (DSP) watermarking scheme (Chapman & Durrani, 2000). The algorithm is based on the ability of designers to make minor changes in the decibel (dB) requirements of filters. In this approach, the designer of a high-level digital filter encodes one character (7 bits) as his or her hidden watermark data. The high-level filter design is then divided into 7 partitions, each of which is used as a modulation signal for one of the bits.
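A loose sketch of the encoding: each of the 7 bits nudges one partition's dB specification up or down by a small, tolerable amount. The nominal 60 dB spec and the ±0.5 dB step are illustrative assumptions, not values from the paper:

```python
# Toy 7-bit character watermark spread over 7 filter-partition dB specs.
BASE_SPEC_DB = 60.0   # assumed nominal stopband attenuation
DELTA_DB = 0.5        # assumed minor, tolerable deviation per bit

def embed_char(ch: str) -> list[float]:
    """Spread the 7 bits of one ASCII character over 7 partition specs."""
    bits = [(ord(ch) >> i) & 1 for i in range(7)]
    return [BASE_SPEC_DB + (DELTA_DB if b else -DELTA_DB) for b in bits]

def extract_char(specs: list[float]) -> str:
    """Recover the character by thresholding each partition's spec."""
    bits = [1 if s > BASE_SPEC_DB else 0 for s in specs]
    return chr(sum(b << i for i, b in enumerate(bits)))
```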

The authors did not discuss the strength of their approach or the probability *Pc* that the design might coincide with a non-watermarked design. The approach also depends on a very low data rate, just one character (7 bits), which makes it impractical for an industrial environment. The approach is also missing a clear way to track and extract the watermark at lower levels. Therefore, we consider the approach incipient and do not evaluate its performance.

#### **2.1.2 Watermarking performance evaluation function**

As described above, we consider performance evaluation of watermarking techniques from five aspects: embedding cost, coincidence probability, overhead, security, and trace cost. The components of watermarking performance are illustrated in Fig. 1. We formulate the watermarking technique performance P using the following function:

$$P = F(Em\\_Cost, Coin\\_Pro, Overhead, Security, Trace\\_Cost) \tag{1}$$

Where P is a function of five variables: Em\_Cost, Coin\_Pro, Overhead, Security, and Trace\_Cost. Em\_Cost represents the watermarking embedding cost, which usually means the additional CPU runtime the EDA tools spend on the watermarking process. Overhead represents the additional wire length and/or vias required for watermark representation. Coin\_Pro represents the probability that the watermarked design coincides with a non-watermarked one. Security represents the strength with which the watermarking technique resists various attacks. Trace\_Cost denotes the cost of retrieving the watermark from a protected IP design, which can be considered almost the same as the embedding cost. Maintenance of functional correctness is not considered a factor of the function because every watermarking technique on the market should at least satisfy this requirement.
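The five inputs to P can be collected in a small record. This is purely an illustration; the field names mirror the text, and the structure itself is not part of the chapter:

```python
from dataclasses import dataclass

@dataclass
class WatermarkMetrics:
    """The five inputs to the performance function P of equation (1)."""
    em_cost: float     # embedding cost: extra CPU runtime of the watermarking step
    coin_pro: float    # probability of coinciding with a non-watermarked design
    overhead: float    # added wire length and/or vias for the watermark
    security: float    # strength of resistance to attacks
    trace_cost: float  # cost of retrieving the watermark (roughly equal to em_cost)
```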


Fig. 1. Watermarking Performance Components

Obviously, a lower watermarking cost leads to a higher-performance watermarking technique. Watermarking performance is therefore inversely related to embedding cost, and likewise inversely related to coincidence probability, overhead, and tracing cost, while it is directly proportional to security. Giving a function fi and a weight to each component, equation (1) can be formulated as:

$$P = \alpha \bullet f\_1(Em\\_Cost) + \beta \bullet f\_2(Coin\\_Pro) + \gamma \bullet f\_3(Overhead) + \lambda \bullet f\_4(Security) + \mu \bullet f\_5(Trace\\_Cost) \tag{2}$$

In practice, watermarking tracing cost is almost equal to watermarking embedding cost, so formula (2) can be simplified as:

$$P = 2\alpha \bullet f\_1(Em\\_Cost) + \beta \bullet f\_2(Coin\\_Pro) + \gamma \bullet f\_3(Overhead) + \lambda \bullet f\_4(Security) \tag{3}$$

Each part of formulation (3) is related to both the watermark constraint size and the watermarking method. Considering the process of watermarking-based IP protection, we evaluate the performance of watermarking techniques by the following rules:

The watermarking IP protection process is implemented by either intrusive software or an incremental implementation in an EDA tool, so the additional CPU runtime of the implementation is considered the embedding cost. Generally, watermarking identification needs some extra circuits; we take the increased wire length and/or via number as the watermarking overhead. The security of a watermarking technique is considered to be related to its resistance to attacks; a brief introduction to prototypical attacks is given in (Kahng et al., 1998). The attacks include finding "ghost signatures", tampering, and forging. To find a "ghost signature", a hacker may try a brute-force approach, searching for a signature that corresponds to a set of constraints that yields a convincing proof of authorship Pc. However, this brute-force attack becomes computationally infeasible if the threshold for the proof of authorship is set sufficiently low, e.g., Pc = 2^(-x), where x is the length of the constraints. So it is easy to prevent this type of attack simply by enlarging the length of the signature (watermark). As an alternative, attackers may re-solve every subsequent stage of the watermarking process to forge the author's signature. Generally, the specific changes an attacker makes to the final solution will likely correspond to (1) local perturbations of the solution of the watermarked phase, or (2) global-scale transformations such as those which exploit asymmetry of the design representation. It is critical that a watermarking technique resist such transformations; tampering attacks should then be unable to ruin the proof of authorship before they significantly degrade the quality of the final solution. Finally, an attacker may try to forge the author's signature. To do this, he needs a signature that he can convince others belongs to the author. If the signature corresponds simply to a text message, he simply chooses a text message resembling one the author would use. However, such attacks can be easily prevented by using a private-key encryption system for watermark generation. We analyze the security of watermarking techniques from the above three aspects and give a quantitative performance evaluation.

Performance Evaluation for IP Protection Watermarking Techniques 127
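To see why enlarging the signature defeats the brute-force search for a "ghost signature", the expected number of random trials needed to hit a constraint set with coincidence threshold Pc = 2^(-x) can be sketched as follows. The concrete constraint lengths are arbitrary examples, not values from the chapter:

```python
# Expected number of random signatures an attacker must try before one
# satisfies x independent 1-bit constraints (success probability 2**-x per try).
def expected_trials(x: int) -> float:
    return 2.0 ** x

# Even a modest constraint length makes the search infeasible:
for x in (16, 32, 64):
    print(f"x = {x:2d}: Pc = 2^-{x} = {2.0 ** -x:.3e}, expected trials = {expected_trials(x):.3e}")
```

At x = 64 the coincidence probability is already below 10⁻¹⁹, so the brute-force attack is hopeless long before cryptographically sized signatures are reached.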

#### **2.1.3 Watermarking analysis summary**

Through the investigation and analysis above, the performance of the various watermarking techniques is summarized in Table 1. There are six columns in total, each displaying one item of watermarking performance. The first column displays the watermarking type. The second and the sixth columns are the watermarking embedding cost and tracing cost, which represent the increased CPU runtime relative to the normal IC design process. The third column gives the coincidence probability. The fourth column, "Overhead", represents the increased percentage of wire length and via number. In the fifth column, "Security", there are 3 sub-columns: G, T, and F, which represent the resistance to "ghost signatures", "tampering", and "forging" respectively. A value of "+" means the method resists such an attack, while a value of "-" means no resistance or really weak resistance.


| Watermarking | Em_Cost | Coin_Pro | Overhead | Security (G / T / F) | Trace_Cost |
|--------------|---------|----------|----------|----------------------|------------|
| Physical | 9.00% | 10⁻³ ~ 10⁻²⁵ | 1.13% | + / + / + | 9.00% |
| Behavioral | expensive | avg. 10⁻¹¹ | 23.77% | + / + / − | expensive |
| Structural | 5.00% | < 10⁻¹³ | 4.00% | − / + / + | 5.00% |

Table 1. Performance summary of watermarking techniques

#### **2.1.4 Evaluation results**

We evaluate the representative watermarking algorithms on five items: embedding cost, coincidence probability, overhead, security, and tracing cost. According to the investigated results, we calculate each sub-value in the range (0, 1). Finally, we accumulate all the values into the performance evaluation using formula (3).

The performance of a watermarking technique is related to the runtime of the watermarking process: the more time consumed, the less effective the technique. We define sub-function f1 as:


$$f\_1(Em\\_Cost) = 1 - \frac{Em\\_Cost}{Design\\_cost} \tag{4}$$

Where Design\_cost denotes the original design cost. If Em\_Cost (the embedding cost) is so expensive that it exceeds Design\_cost, the value of function f1 is set to 0.

If the coincidence probability of a watermarking technique is sufficiently low (for example, less than 10⁻³), the f2 function can be set as:

$$f\_2(\text{Coin\\_pro}) \equiv 1\tag{5}$$

The overhead of watermarking (increased wire length, extra vias, etc.) degrades the performance of the design. The function f3 can be written as:

$$f\_3(Overhead) = 1 - \frac{Overhead}{Total} \tag{6}$$

Where Total is the total cost for the original design.

Three factors impact watermarking security: the resistance to "ghost signatures", "tampering", and "forging". For each factor that is satisfied, no matter which, the watermarking security gains a 1/3 increment, so f4 can be written as:

$$f\_4(Security) = N \times \frac{1}{3} = \frac{N}{3} \tag{7}$$

Where N is the number of satisfied factors.

Substituting formulations (4), (5), (6), and (7) into formulation (3), and setting Design\_cost and Total to 1, we have:

$$P = 2\alpha (1 - Em\\_Cost) + \beta + \gamma (1 - Overhead) + \lambda \bullet N / 3 \tag{8}$$

We prepare three schemes to evaluate the performance of watermarking techniques: (a) **Balance evaluation**, where all item weights are the same, namely α = β = γ = λ = 0.2; (b) **Cost emphasis evaluation**, where the weights of cost and overhead are set to double the others, namely α = γ = 0.25 and β = λ = 0.125; (c) **Security emphasis evaluation**, where the security weight is set to double the others, namely λ = 2/6 and α = β = γ = 1/6. (All the weights obey 2α + β + γ + λ = 1.)
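As a sketch, formulation (8) and the three weighting schemes can be written in a few lines of Python. The sub-values fed in below (embedding cost, overhead, and the count N of satisfied security factors) are illustrative stand-ins; the chapter's Table 2 numbers come from the authors' own inputs, which this sketch does not claim to reproduce:

```python
# Evaluate watermarking performance P per formulation (8):
#   P = 2*alpha*(1 - Em_Cost) + beta + gamma*(1 - Overhead) + lam * N / 3
# (f2 = 1 is assumed, i.e. the coincidence probability is already below 10**-3).

def performance(em_cost, overhead, n_security, alpha, beta, gamma, lam):
    """Compute P; the weights must satisfy 2*alpha + beta + gamma + lam == 1."""
    assert abs(2 * alpha + beta + gamma + lam - 1) < 1e-9
    return (2 * alpha * (1 - em_cost) + beta
            + gamma * (1 - overhead) + lam * n_security / 3)

# The three weighting schemes, as (alpha, beta, gamma, lam).
SCHEMES = {
    "Balance":           (0.2,   0.2,   0.2,   0.2),
    "Cost_Emphasis":     (0.25,  0.125, 0.25,  0.125),
    "Security_Emphasis": (1 / 6, 1 / 6, 1 / 6, 2 / 6),
}

# Illustrative sub-values: 5% runtime cost, 4% overhead, N = 2 security factors.
for name, (a, b, g, l) in SCHEMES.items():
    print(name, round(performance(0.05, 0.04, 2, a, b, g, l), 4))
```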

Based on formulation (8), we calculated the concrete performance values for the representative watermarking techniques. The results are shown in Table 2.


| Scheme | Physical WM | Behavioral WM | Structural WM |
|--------|-------------|---------------|---------------|
| Balance | 0.9214 | 0.4858 | 0.9053 |
| Cost_Emphasis | 0.9268 | 0.3989 | 0.8867 |
| Security_Emphasis | 0.9345 | 0.5159 | 0.8656 |

Table 2. Performance of watermarking techniques

The first column shows the evaluation schemes mentioned above: Balance evaluation, Cost emphasis evaluation, and Security emphasis evaluation. The second, the third, and the fourth columns show the evaluated performance values of the different watermarking techniques under the various test schemes. From the results we see that, regardless of the scheme, the performance of the physical watermarking representative is high, followed by the structural watermarking representative, while the behavioral watermarking representative is relatively low. The curves in Fig. 2 make the comparison more intuitive.

Fig. 2. Performance illustration of watermarking techniques

We introduce functions to evaluate watermarking techniques and hope this work can provide a candidate standard for researchers to evaluate their own watermarking techniques. Although the performance of the various watermarking techniques differs, even a weak technique has its advantages. In the future, researchers may develop stronger watermarking techniques by combining the advantages of watermarking techniques at different levels, to prevent any IP piracy attempt.

#### **2.2 Watermarking performance evaluation for FPGA**

Before the FPGA design is watermarked, a signature should be prepared. The signature may be a short ASCII text which identifies the owner of the core. The string is then hashed and encrypted to generate a watermark seed. The watermark is then produced from the seed with a pseudo-random generator such as RC4.
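The signature-to-watermark pipeline above can be sketched as follows. The chapter does not fix the hash function or the encryption step, so SHA-256 is an assumption here and the private-key encryption of the digest is omitted; the RC4 generator is the textbook key-scheduling plus PRGA construction:

```python
import hashlib

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Textbook RC4: key-scheduling, then PRGA, yielding n keystream bytes."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                        # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def make_watermark(signature: str, n_bytes: int = 16) -> bytes:
    # Hash the owner's ASCII signature down to a fixed-size seed.  (The chapter
    # additionally encrypts the hash; that step is omitted in this sketch.)
    seed = hashlib.sha256(signature.encode("ascii")).digest()
    # Expand the seed into the watermark with the RC4 pseudo-random generator.
    return rc4_keystream(seed, n_bytes)
```

The same signature always yields the same watermark bits, while different signatures diverge immediately, which is what makes the embedded pattern verifiable.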

Fig. 3 gives an example of the FPGA watermarking design flow. As shown in the figure, there are three types of FPGA cores, corresponding to the design levels: source cores, netlist cores, and bitfile cores. Source cores are delivered in an HDL or in C; they are very flexible and can be synthesized for many target technologies. Netlist cores have medium flexibility because they have been fixed to a target technology. Bitfile cores are very inflexible since they can be used only for a specific device.

Daniel & Jurgen gave a thorough evaluation of watermarking methods for FPGA-based IP cores in terms of functional correctness, hardware overhead, transparency, verifiability, difficulty to remove, and strength of the proof of authorship (Daniel & Jurgen, 2006). They divided watermarking techniques into two categories by their construction: additive methods and constraint-based methods. In this chapter, we introduce recent FPGA watermarking techniques and estimate their performance under certain criteria.

Fig. 3. FPGA watermarking design flow and IP core

#### **2.2.1 Additive methods**


Additive methods in FPGA design are watermarking procedures where a signature is added to the functional core. The watermark is not embedded into the functional core itself but is masked as a part of the core.

There are no publications on additive watermarking for source-core protection, although it is possible to write an additive source component into the core. However, this is not a practical watermarking strategy because such a component can be removed just as easily.

Most additive watermarking methods for netlists simply watermark the design by introducing redundant logic into the circuit. Moritz et al. presented a novel approach to watermark FPGA designs by converting functional LUTs (lookup tables) into LUT-based RAMs or shift registers, which prevents deletion during optimization (Moritz et al., 2009). The resource overhead for watermarking is tiny, generally less than 5%. The method is transparent to EDA tools because the watermarking is performed after the usual netlist generation. The suspected design can be verified only when the extracted bitfile is not encrypted. The authorship can be detected without requesting additional information from the producer. However, the watermark can easily be removed by reverse engineering, and the proof of authorship then disappears.

An approach for watermarking bitfile cores is implemented by embedding the signature into unused lookup tables (John et al., 1998). The signature is hashed and coded with an error-correction code (ECC) so that it can be reconstructed even if some lookup tables are tampered with. After the initial placement and routing, the number of unused lookup tables is determined. The ECC-coded data is split into chunks the size of a lookup table, and the additional LUTs are added to the design. The watermarked design is obtained after re-placement and re-routing.
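The hash-then-ECC-then-split step can be sketched as below. The chapter does not name the ECC, so a simple 3× repetition code stands in for it here, and 16-bit LUT contents (a 4-input LUT) are an assumption for illustration:

```python
import hashlib

LUT_BITS = 16  # contents of one 4-input LUT; an assumed size for the sketch

def ecc_encode(bits: str) -> str:
    """Stand-in ECC: 3x repetition code (the chapter's actual ECC is unspecified)."""
    return "".join(b * 3 for b in bits)

def ecc_decode(coded: str) -> str:
    """Majority vote over each repeated triple; tolerates one flip per triple."""
    return "".join("1" if coded[i:i + 3].count("1") >= 2 else "0"
                   for i in range(0, len(coded), 3))

def signature_to_lut_chunks(signature: str) -> list[str]:
    # Hash the signature (shortened here for the demo), ECC-encode the bits,
    # and split the result into LUT-sized chunks ready for embedding.
    digest = hashlib.sha256(signature.encode()).digest()[:4]
    bits = "".join(f"{byte:08b}" for byte in digest)
    coded = ecc_encode(bits)
    coded += "0" * (-len(coded) % LUT_BITS)   # pad to a whole number of LUTs
    return [coded[i:i + LUT_BITS] for i in range(0, len(coded), LUT_BITS)]
```

The repetition code illustrates the stated design goal: flipping a bit in one embedded LUT still lets the majority vote recover the original signature bits.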

The approach was improved (John et al., 1999) by using many small watermarks, each exactly the size of a lookup table. The small watermarks are relatively easy to search for. However, the watermark positions published during the verification process make the watermarking technique easy to attack. Furthermore, Lach et al. extended the approach into a fingerprinting technology by encoding the fingerprint into the position of the mark in the tile (Lach et al., 1998).

The watermark consumes little hardware overhead because the unused lookup tables in the original design would otherwise remain empty. The approaches provide a strong proof of authorship and are transparent to EDA tools. The methods are verifiable because it is possible to determine the position of the watermark in a tile. On the other hand, the watermark is also easy to remove or overwrite.

#### **2.2.2 Constraint based methods**

Constraint-based watermarking methods apply to solutions of hard optimization and constraint-satisfaction design problems. They center around the use of constraints to "sign" the output of a given design synthesis or optimization step. The solutions of a given optimization instance that satisfy these constraints carry an embedded watermark and provide a probabilistic proof of authorship. The less likely randomly chosen solutions are to satisfy these constraints, the stronger the proof of authorship. The coincidence probability Pc is given by the following formula:

$$P\_c = n\_w \;/\; n \tag{9}$$

where n is the number of solutions which satisfy only the original constraints and nw is the number of solutions which satisfy both the original and the watermarking constraints. If Pc is very small, the solution provides a strong proof of the watermark's existence. A watermark's resistance to attacks is inversely proportional to an adversary's ability to manipulate it without re-solving the given optimization problem from scratch.

Darko & Miodrag proposed an approach for HDL core protection using a watermarked scan chain (Darko & Miodrag, 1998). First, all registers are sorted and assigned a sequential number. A pseudo-random sequence generated from the author's signature selects registers according to a certain algorithm. The first K selected registers are chosen as the first registers of the chains, where K is the number of scan chains used. The variation of the scan chains for different signatures can be used to detect the watermark. Unfortunately, an injudicious choice of test chain can result in more routing-resource overhead. The approach is transparent to the synthesis tools because the signature is added to the HDL core. The watermark can be verified easily only when the scan chains can be accessed from outside the chip. Any deletion of the watermark results in corruption of the scan chain. In addition, a strong proof of authorship can be achieved by using a large number of registers in the scan chains.

An approach to protect netlist cores is implemented by preserving certain nets in the synthesis and mapping step (Kirovski et al., 1998). Some nets are chosen from the sorted nets of the design according to a signature. These nets are prevented from elimination by the design tools by connecting them to a temporary output of the core. Additional logic is inserted to connect the new outputs together, reducing the number of additional outputs. The design with the new outputs can be seen as the result of constraint-based watermarking. The additional logic for watermarking requires some resource overhead. This approach is transparent to EDA tools because the choice of preserved nets for watermarking can be made before the synthesis process. The watermark can be verified by comparing the given netlist with the original one. However, it is impossible to verify the watermark from a bitfile. The security of this approach is insufficient because the additional logic is easy to remove by resynthesizing the design. Furthermore, although the probability of coincidence is really low, forging a watermarked design is possible, which results in a weak proof of authorship.

Incremental placement and routing, or timing constraints, can be applied to watermark FPGA bitfile cores.

As an alternative, a watermark can be embedded by placing configurable logic blocks (CLBs) in even or odd rows depending on the constraints (Kahng et al., 1998). The resource overhead for watermarking is very low, even tending to zero, because the placement is altered only marginally. The approach is transparent because the watermarking stage is performed before placement. The CLBs can be mapped to the signature uniquely by enumerating them from the top-left corner, so the watermarked design can be verified with only the given bitfile. It is nearly impossible to remove the watermark from the given bitfile because the CLBs are tightly connected with each other. This approach has a strong proof of authorship due to the large number of CLB position candidates for watermark embedding.

Another proposed method is to add constraints to the router. The constraints make the router route a net with some unusual routing resources, like "wrong way" segments, in which the net goes in a wrong direction and then comes back in the right direction to form a backstrap. The net can be identified as a watermark net by its special geometry. The routing resources spent on watermarking are too minor to matter. The approach is also transparent to EDA tools because the constraints are added before routing is invoked. The watermarked design can be verified with the known strategy and the unique nets. However, it is easy to remove the mark by ripping up the constrained nets and rerouting them if someone knows the routing information and the watermarking algorithm. The proof of authorship is not very strong because the watermark is ambiguous and easy to remove or tamper with.

A watermarking approach that sets additional timing constraints between registers is proposed in (Kahng et al., 2001). The timing constraint of a selected path may be split into two separate constraints, each carrying a new constraint value.

Another approach selects the uncritical paths and adds new timing constraints to them (Adarsh et al., 2003). The last digit of the time delay is reset depending on the watermark. For example, suppose a path has a delay of 10.64 ns. If the corresponding watermark bit is '1', the new time delay of the path is set to 10.61 ns; if the corresponding watermark bit is '0', the delay is set to 10.60 ns.

These watermarking approaches need no resource overhead. They are transparent because the additional constraints are added before the routing tool is invoked. However, these approaches are difficult to verify, so there is little point in discussing their authorship proof and attack resistance. Designers can nevertheless create different bitfiles from the same design, which is useful for fingerprinting.

As mentioned in (Daniel & Jurgen, 2010), when considering a finished FPGA product, there are five potential information sources that can be used for extracting a watermark: the configuration bitfile, ports, power consumption, electromagnetic (EM) radiation, and temperature.

#### **2.2.3 FPGA watermarking validation**

Darko & Miodrag proposed an approach for a HDL core protection using a watermarked scan chain (Darko & Miodrag, 1998). At first all registers will be sorted to be assigned a sequential number. A pseudo random sequence is generated from author's signature to select registers according to a certain algorithm. The first K selected registers are chosen for the first register in a chain, where K is the number of used scan chains. The variation of the scan chains for different signature can be used to detect the watermark. Unfortunately, an injudicious chosen of test chain could result in more routing resources overhead. The approach is transparent to the synthesis tools because the signature is added to the HDL core. The watermark can be verified easily only when the scan chains can be accessed from outside of the chip. Some deletion of watermark results in corruption of the scan chain. In additional, a strong proof of

An approach to protect netlist cores is implementing by preserving certain nets in the synthesis and mapping step (Kirovski et.al, 1998). Some nets are chosen from the sorted nets of design according to a signature. These nets are prevented from elimination by the design tools by connecting to a temporary output of the core. Additional logic is inserted to connect the new outputs together to reduce the amount of the additional outputs. The design with new outputs can be seen as the result of constraint based watermarking. The additional logics for watermarking require some resource overheads. This approach is transparent to EDA tools because the choice of preserved nets for watermarking can be done before the synthesis process. The watermark can be verified by comparing the given netlist with the

manipulate it without resolving a given optimization problem from scratch.

authorship can be achieved by using a large number of registers in scan chains.

*Pn n c w* / (9)

1998).

easy to be remoed or overwrited.

**2.2.2 Constraint based methods** 

Pc is given by the following formula:

original one. However, it is impossible to verify the watermark from a bitfile. The security of this approach is insufficient because the additional logic is easy to remove by resynthesizing the design. Furthermore, although the probability of coincidence is really low, forging watermarked design is possible which results in weak authorship proof.
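To make the coincidence probability of Eq. (9) concrete, the sketch below embeds signature-derived extra constraints into a toy graph 3-coloring instance and counts how many valid solutions also satisfy them. The graph, the colors, and the watermark "pseudo-edges" are all hypothetical illustrations, not taken from the chapter.

```python
from itertools import product

# Toy 3-coloring instance (hypothetical illustration): a 5-node graph.
nodes = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]

# Signature-derived watermark constraints: extra "pseudo-edges" that a
# watermarked solution must also satisfy (fixed here for illustration).
watermark = [(0, 2), (1, 4)]

def satisfies(coloring, constraints):
    return all(coloring[a] != coloring[b] for a, b in constraints)

solutions = [c for c in product(range(3), repeat=nodes) if satisfies(c, edges)]
marked = [c for c in solutions if satisfies(c, watermark)]

n, n_w = len(solutions), len(marked)
print(f"P_c = {n_w}/{n} = {n_w / n:.3f}")  # P_c = 6/18 = 0.333
```

Here only 6 of the 18 valid colorings happen to satisfy the two extra constraints, so a random valid solution would carry the "signature" with probability 1/3; with more watermark constraints on a larger instance, Pc shrinks rapidly and the authorship proof strengthens.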

Incremental placement-and-routing or timing constraints can also be applied to watermark FPGA bitfile cores.

As an alternative, a watermark can be embedded by placing configurable logic blocks (CLBs) in even or odd rows depending on the constraints (Kahng et.al, 1998). The resource overhead for watermarking is very low, even tending to zero, because the placement is altered only marginally. The approach is transparent because the watermarking stage is performed before placement. The CLBs can be mapped uniquely to the signature by enumerating them from the top-left corner, so the watermarked design can be verified with only the given bitfile. It is nearly impossible to remove the watermark from the given bitfile because the CLBs are tightly connected with each other. This approach gives a strong proof of authorship due to the large number of candidate CLB positions for watermark embedding.
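The even/odd-row constraint derivation above can be sketched as follows; the hash-based bit derivation and the function name are assumptions made for illustration, not the authors' actual algorithm.

```python
import hashlib

# Hypothetical sketch of the even/odd-row scheme: each signature bit
# constrains one watermark CLB to an even (bit 0) or odd (bit 1) row.
def row_parity_constraints(signature: str, num_clbs: int) -> dict:
    digest = hashlib.sha256(signature.encode()).digest()
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(num_clbs)]
    # CLBs are enumerated from the top-left corner of the device, so the
    # verifier can recover the same bit sequence from the bitfile alone.
    return {clb: ("odd" if bit else "even") for clb, bit in enumerate(bits)}

print(row_parity_constraints("author: Alice", 4))
```

Because the constraint set is a deterministic function of the signature, re-deriving it during verification and checking the row parities in the bitfile suffices to detect the mark.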

Another proposed method is to add constraints to the router. The constraints make the router route a net with unusual routing resources such as "wrong way" segments, in which the net first heads in the wrong direction and then doubles back in the right direction, forming a backstrap. A net can then be verified as a watermark net from its special geometry. The routing resources consumed by the watermark are minor enough to be neglected. The approach is also transparent to EDA tools because the constraints are added before routing is invoked, and the watermarked design can be verified given the known strategy and the unique nets. However, it is easy to remove the mark by ripping up the constrained nets and rerouting them if someone knows the routing information and the watermarking algorithm. The proof of authorship is therefore not very strong, because the watermark is ambiguous and easy to remove or tamper with.
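The geometric check a verifier would run can be sketched as below; the route representation (a list of compass-direction segment moves) is a hypothetical simplification for illustration.

```python
# Hypothetical sketch: flag watermark nets by their special geometry,
# i.e. a "wrong way" segment immediately reversed (a backstrap).
OPPOSITE = {"N": "S", "S": "N", "E": "W", "W": "E"}

def has_backstrap(route):
    """route: list of segment directions for one net, e.g. ['E','N','S']."""
    return any(OPPOSITE[a] == b for a, b in zip(route, route[1:]))

print(has_backstrap(["E", "E", "N", "S", "E"]))  # True: N then S doubles back
print(has_backstrap(["E", "E", "N", "E"]))       # False: no reversal
```

An attacker who can reroute freely will simply eliminate such patterns, which is exactly why the paragraph above rates this scheme's attack resistance as weak.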

A watermarking approach that sets additional timing constraints between registers is proposed in (Kahng et.al, 2001). The timing constraint of a selected path may be split into two separate constraints, each of which is assigned a new constraint value.

Another approach selects uncritical paths and adds new timing constraints on them (Adarsh et.al, 2003). The last digit of the time delay is reset depending on the watermark. For example, suppose a path has a delay of 10.64 ns: if the corresponding watermark bit is '1', the new time delay of this path is set to 10.61 ns; if the corresponding watermark bit is '0', the delay is set to 10.60 ns.
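The last-digit encoding can be sketched as follows; the helper name and the truncate-then-set behaviour are assumptions inferred from the 10.64 ns example above.

```python
# Hypothetical sketch of the last-digit constraint encoding:
# truncate the delay to tenths of a nanosecond, then set the hundredths
# digit to the watermark bit (bit 1 -> x.x1, bit 0 -> x.x0).
def watermark_delay(delay_ns: float, bit: int) -> float:
    truncated = int(delay_ns * 10) / 10   # keep tenths, drop the rest
    return round(truncated + 0.01 * bit, 2)

print(watermark_delay(10.64, 1))  # 10.61
print(watermark_delay(10.64, 0))  # 10.6
```

Since the tightened constraint is always at or below the original delay, the design still meets timing, and the watermark bits can be read back from the hundredths digits of the constraint file.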

These watermarking approaches need no resource overhead. They are transparent because the additional constraints are added before the routing tool is invoked. However, they are difficult to verify, so there is little point in discussing their authorship proof or attack resistance. Nevertheless, designers can create different bitfiles from the same design, which is useful for fingerprinting.

#### **2.2.3 FPGA watermarking validation**

As mentioned in (Daniel & Jurgen, 2010), when considering a finished FPGA product, there are five potential information sources that can be used for extracting a watermark: the configuration bitfile, ports, power consumption, electromagnetic (EM) radiation, and temperature.

Performance Evaluation for IP Protection Watermarking Techniques 133

The bitfile can be extracted by wiretapping the communication between the PROM and the FPGA. Some FPGA manufacturers provide an option to encrypt the bitstream, which makes communication monitoring useless. However, it is possible to read out some information stored in RAMs or lookup tables to finish the verification. Another approach is to employ unused ports, which is limited to top-level designs and impractical for IP cores.

The method called "Power Watermarking" can force patterns onto the power consumption of an FPGA, using it as a covert channel to transmit data to the outside. Related works in (Ziener & Teich, 2008) and (Ziener et.al, 2010) show that the clock frequency and toggling logic can be used to control such a power-spectrum covert channel. The resulting change in power consumption can be extracted as the signature from the FPGA's power spectrum.

With almost the same strategy it is also possible to extract signatures by raster-scanning the electromagnetic (EM) radiation of an FPGA with an EM sensor (Thomas & Christof, 2003). Unfortunately, this becomes impractical since modern FPGAs are delivered in packages that attenuate the EM radiation.

Finally, a watermark might be read out by monitoring the temperature radiation, similarly to the power and EM-field watermarking approaches. There is only one commercial watermarking approach of this kind; it reads a watermark from an FPGA in up to 10 minutes (Kean et.al, 2008).

## **3. Conclusion**

In this section, we first reviewed several classical IP protection methods such as tagging, fingerprinting, and watermarking. Then we investigated representative ASIC watermarking techniques at different design levels. We proposed functions to evaluate watermarking techniques from the following aspects: embedding cost, overhead, coincidence probability, security, and tracing cost. The evaluation results show that the performance of physical watermarking techniques is high, that of structural watermarking techniques medium, and that of behavioral watermarking techniques low. We also summarized watermarking techniques for FPGA core protection, and validation methods for the three forms of an FPGA design: source code, netlist, and bitfile.

We hope this work provides a standard candidate for researchers to evaluate their watermarking techniques. In the future, researchers may develop stronger watermarking techniques by combining the advantages of watermarking techniques at different levels to prevent any IP piracy attempt.

#### **4. Acknowledgment**

We are grateful to our work team for their contribution to this research. The project is sponsored by the SRF for ROCS, SEM, and supported by the Shandong Province Natural Science Foundation (ZR2009GL007) and a Project of the Shandong Province Higher Educational Science and Technology Program (J09LG10). The authors also thank their families for their support.

#### **5. References**

Abdel-Hamid, A.T.; Tahar, S. & El, M.A. (2003). IP watermarking techniques: survey and comparison. *Proceedings of IWSOC2003 3rd IEEE Int. Workshop on System-on-Chip for Real-Time Applications,* pp. 60-65, ISBN 0-7695-1944-X, Calgary, Alberta, Canada, June 30-July 2, 2003

Abdel-Hamid, A.T.; Tahar, S. & El Mostapha Aboulhamid (2006). Finite state machine IP watermarking. *Proceedings of AHS 2006 1st NASA/ESA Conference on Adaptive Hardware and Systems,* pp. 457-464, ISBN 0-7695-2614-4, Istanbul, Turkey, June 15-18, 2006

Adarsh, K.J.; Lin, Y.; Pushkin, R.P. & Gang, Q. (2003). Zero overhead watermarking technique for FPGA designs. *GLSVLSI '03: Proceedings of the 13th ACM Great Lakes Symposium on VLSI,* pp. 147-152, ISBN 1-58113-677-3, USA, 2003

Aijiao, C. & Chip-Hong, C. (2006). Stego-signature at logic synthesis level for digital design IP protection. *Proceedings of 2006 IEEE International Symposium on Circuits and Systems,* pp. 4611-4614, ISBN 0-7803-9389-9, Island of Kos, Greece, May, 2006

Andrew, E.C.; Hyun-Jin, C.; Andrew, B.K.; Stefanus, M.; Miodrag, P.; Gang, Q. & Jennifer, L.W. (1999). Effective Iterative Techniques for Fingerprinting Design IP. *Proceedings of the 36th Annual ACM/IEEE Design Automation Conference,* pp. 208-215, ISBN 1-58113-109-7, New York, NY, USA, 1999

Bolotnyy, L. & Robins, G. (2007). Physically unclonable function-based security and privacy in RFID systems. *Proceedings of PERCOM 2007 5th IEEE International Conference on Pervasive Computing and Communications,* pp. 211-220, ISBN 0-7695-2787-6, Washington, DC, USA, March 19-23, 2007

Chapman, R. & Durrani, T.S. (2000). IP protection of DSP algorithms for system on chip implementation. *IEEE Trans. on Signal Processing,* Vol. 48, No. 3, (March 2000), pp. 854-861, ISSN 1053-587X

Daniel, Z. & Jurgen, T. (2006). Evaluation of Watermarking methods for FPGA-based IP-cores. *Technical Report 01-2006,* Erlangen, Germany, March, 2006

Daniel, Z. & Jurgen, T. (2010). New Directions for FPGA IP Core Watermarking and Identification. *Proceedings of Dagstuhl Seminar 10281,* 2010

Darko, K. & Miodrag, P. (1998). Intellectual property protection using watermarking partial scan chains for sequential logic test generation. *Proceedings of 1998 International Conference on Computer-Aided Design ICCAD,* 1998

Fall Worldwide Member Meeting (1997). A Year of Achievement (Guidelines Proposed by VSIA Development Working Group on Intellectual Property Protection). VSI Alliance, Santa Clara, CA, 1997

Gang, Q. & Miodrag, P. (1999). Effective iterative techniques for fingerprinting design IP. *Proceedings of Design Automation Conference,* pp. 587-592, ISSN 0278-0070, Los Angeles, CA, June, 1999

Gang, Q. & Miodrag, P. (2003). *Intellectual Property Protection in VLSI Design: Theory and Practice,* Kluwer Academic Publishers, ISBN 978-1-4020-7320-5, USA

Irby, D.L.; Newbould, R.D.; Carothers, J.D.; Rodriguez, J.J. & Holman, W.T. (2000). Low level watermarking of VLSI designs for intellectual property protection. *Proceedings of IEEE 13th Int. ASIC/SOC Conference,* pp. 136-140, ISBN 0-7803-6598-4, Arlington, VA, USA, September, 2000

John, L.; William, H.M. & Miodrag, P. (1998). Signature hiding techniques for FPGA intellectual property protection. *Proceedings of ICCAD International Conference on Computer-Aided Design,* pp. 186-189, ISBN 1-58113-008-2, California, USA, 1998

John, L.; William, H.M. & Miodrag, P. (1999). Robust FPGA intellectual property protection through multiple small watermarks. *Proceedings of DAC99 Design Automation Conference,* pp. 831-836, ISBN 1-58113-092-9, USA, 1999

John, L.; Miodrag, P.; William, H.M. & Miodrag, P. (2001). Fingerprinting Techniques for Field-programmable Gate Array Intellectual Property Protection. *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems,* Vol. 20, No. 10, (October 2001), pp. 1253-1261, ISSN 0278-0070

Kahng, A.B.; Mantik, S.; Markov, I.L.; Potkonjak, M.; Tucker, P.; Huijuan, W. & Wolfe, G. (1998). Robust IP watermarking methodologies for physical design. *Proceedings of DAC 35th Design Automation Conference,* pp. 782-787, ISBN 0-89791-964-5, San Francisco, California, USA, June 15-19, 1998

Kahng, A.B.; Lach, J.; Mangione-Smith, W.H.; Mantik, S.; Markov, I.L.; Potkonjak, M.; Tucker, P.; Wang, H. & Wolfe, G. (1998). Watermarking techniques for intellectual property protection. *Proceedings of DAC98 35th ACM/IEEE Design Automation Conference,* pp. 776-781, ISBN 0-89791-964-5, San Francisco, CA, USA, June 15-19, 1998

Kahng, A.B.; Lach, J.; Mangione-Smith, W.H.; Mantik, S.; Markov, I.L.; Potkonjak, M.; Tucker, P.; Wang, H. & Wolfe, G. (2001). Constraint-based watermarking techniques for design IP protection. *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems,* Vol. 20, No. 10, (October 2001), pp. 1236-1252, ISSN 0278-0070

Kean, T.; McLaren, D. & Marsh, C. (2008). Verifying the Authenticity of Chip Designs with the DesignTag System. *Proceedings of the 2008 IEEE International Workshop on Hardware-Oriented Security and Trust,* pp. 59-64, ISBN 978-1-4244-2401-6, Washington, USA, June, 2008

Keating, M. & Bricaud, P. (1998). *Reuse Methodology Manual for System-on-a-Chip Designs,* Kluwer Academic Publishers, ISBN 0792385586, Boston, USA, 1998

Kirovski, D.; Liu, D.; Wong, J.L. & Potkonjak, M. (2000). Forensic Engineering Techniques for VLSI CAD Tools. *Proceedings of 37th ACM/IEEE Design Automation Conference,* pp. 581-586, ISBN 1-58113-187-9, Los Angeles, CA, June, 2000

Kirovski, D.; Yean-Yow Hwang; Potkonjak, M. & Cong, J. (1998). Intellectual property protection by watermarking combinational logic synthesis solutions. *Proceedings of ICCAD 1998 IEEE/ACM International Conference on Computer-Aided Design,* pp. 194-198, ISBN 1-58113-008-2, San Jose, CA, USA, November 8-12, 1998

Lach, J.; Mangione-Smith, W.H. & Potkonjak, M. (1998). FPGA Fingerprinting Techniques for Protecting Intellectual Property. *Proceedings of the IEEE 1998 Custom Integrated Circuits Conference,* pp. 299-302, ISBN 0-7803-4292-5, Santa Clara, CA, May, 1998

Lin, Y.; Gang, Q.; Lahouari, G. & Ahmed, B. (2006). VLSI Design IP Protection: Solutions, New Challenges, and Opportunities. *Proceedings of AHS 2006 1st NASA/ESA Conference on Adaptive Hardware and Systems,* pp. 469-476, ISBN 0-7695-2614-4, Istanbul, Turkey, June 15-18, 2006

Majzoobi, M.; Koushanfar, F. & Potkonjak, M. (2008). Lightweight secure PUFs. *Proceedings of Computer-Aided Design 2008,* pp. 670-673, ISBN 978-1-4244-2819-9, San Jose, CA, 2008

Marsh, C. & Kean, T. (2007). A security tagging scheme for ASIC designs and intellectual property cores. *Proceedings of IP-SoC 2006 IP Based SoC Design Conference & Exhibition,* pp. 6-7, France, January 2007

Min, N. & Zhiqiang, G. (2004). Constraint-based watermarking technique for hard IP core protection in physical layout design level. *Proceedings of IEEE 7th Int. Conference on Solid-State and Integrated Circuits Technology,* pp. 1360-1363, ISBN 0-7803-8511-X, Beijing, China, October, 2004

Moritz, S.; Daniel, Z. & Jurgen, T. (2008). Netlist-Level IP Protection by Watermarking for LUT-Based FPGAs. *Proceedings of FPT 2008 International Conference on ICECE Technology 2008,* pp. 20-216, ISBN 978-1-4244-3783-2, Taipei, China, Dec. 2008

Narayan, N.; Newbould, R.D.; Carothers, J.D.; Rodriguez, J.J. & Holman, W.T. (2001). IP Protection for VLSI Designs Via Watermarking of Routes. *Proceedings of 14th Annual IEEE International ASIC/SOC Conference,* pp. 406-410, Washington, DC, USA, September, 2001

Nie, T.; Kisaka, T. & Toyonaga, M. (2005). A watermarking system for IP protection by a post layout incremental router. *Proceedings of DAC 42nd Design Automation Conference,* pp. 218-221, ISBN 1-59593-058-2, San Diego, CA, USA, June 13-17, 2005

Oliveira, A.L. (2001). Techniques for the creation of digital watermarks in sequential circuit designs. *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems,* Vol. 20, No. 9, September, 2001, pp. 1101-1117, ISSN 0278-0070

Ravikanth, P.; Ben, R.; Jason, T. & Neil, G. (2001). *Physical One-Way Functions,* PhD thesis, Massachusetts Institute of Technology

Skoric, B.; Tuyls, P. & Ophey, W. (2005). Robust key extraction from physical unclonable functions. *Proceedings of the Applied Cryptography and Network Security Conference 2005,* pp. 407-422, ISSN 0302-9743, Berlin, 2005

Thomas, H.; Zebo, P.; Raimund, U. & Manfred, G. (2001). Challenges for Future System-on-Chip Design. *Proceedings of ECCTD 15th European Conference on Circuit Theory and Design,* pp. 173-176, Espoo, Finland, August 28-31, 2001

Thomas, W. & Christof, P. (2003). How Secure Are FPGAs in Cryptographic Applications. *Proceedings of International Conference on Field Programmable Logic and Applications (FPL 2003),* Lecture Notes in Computer Science Volume 2778, pp. 91-100, Sept. 2003

Torunoglu, I. & Charbon, E. (2000). Watermarking based copyright protection of sequential functions. *IEEE Journal of Solid-State Circuits,* Vol. 35, No. 3, (May 1999), pp. 434-440, ISBN 0-7803-5443-5, 2000

Tuyls, P.; Skoric, B.; Stallinga, S.; Akkermans, A. & Ophey, W. (2005). Information theoretical security analysis of physical unclonable functions. *Proceedings of Conference on Financial Cryptography and Data Security 2005,* pp. 141-155, ISSN 0302-9743, Berlin, 2005

Virtual Socket Interface Alliance (2000a). Intellectual Property Protection White Paper: Schemes, Alternatives and Discussion Version 1.0. September 2000

Virtual Socket Interface Alliance (2000b). Virtual Component Identification Physical Tagging Standard (IPP 1 1.0). 2000


**7**

**Using Digital Watermarking for Copyright Protection**

Charlie Obimbo and Behzad Salami

*University of Guelph*

*Canada*

#### **1. Introduction**

Without a doubt, the Internet has revolutionized the way we access information and share our ideas via tools such as Facebook, Twitter, email, forums, blogs and instant messaging. The Internet is also an excellent distribution system for digital media: it is inexpensive, eliminates warehousing and delivery, and is almost instantaneous. Together with the advances of compression techniques such as JPEG, MP3 and MPEG, the Internet has made it even faster, easier and more cost-effective to distribute digital media such as audio, video, images and documents over the World Wide Web.

In addition to existing web sites and shared networks, the recent development of peer-to-peer (P2P) file distribution tools such as Kazaa, Limewire, Exceem or eMule enables a copious number of web users to easily access and share terabytes of digital media across the globe. These technologies also significantly reduce the effort required for pirates to illegally record, sell, copy and distribute copyright-protected material without compensating the legal copyright owners.

Today, content owners are eagerly seeking technologies that promise to protect their rights, secure their content from piracy and unauthorized usage, and enable the tracking and conviction of media pirates. Cryptography is probably the most common method of protecting digital content [Koch & Zhao, 1995], where the content is encrypted prior to delivery and a decryption key is provided to those who have purchased legitimate copies. However, cryptography cannot help content providers monitor their goods after the decryption process; a pirate could easily purchase a legitimate copy and then re-sell it or distribute it for free over a shared network.

It is therefore important to find a way to protect digital media with a more stringent method, one that would give vendors and artists / photographers / directors confidence in placing and distributing their material over the Internet. Watermarking could be such a vehicle.

#### **2. Overview**

Digital watermarking refers to the process of embedding digital data directly onto multimedia objects such that it can be detected or extracted later.